Building an AI-Powered Telegram News Channel

I am a news and politics junkie. I love staying updated, but I eventually found myself subscribed to about 20 different Telegram channels, spending up to an hour and a half every single day just scrolling through messages to catch up on current events.

I wanted to create something simpler for myself and my friends: a single daily digest. The goal was to take the channels I’ve curated and learned to trust over the years, strip away the channel owners’ political biases and subjective opinions, and just leave the objective facts - the absolute truth.

But doing this manually meant opening 20+ channels, scanning hundreds of messages, mentally deduplicating the news, rewriting everything objectively in Hebrew, formatting it, and publishing. Every single day. It was exhausting. So, I built a tool to do it in minutes.

This post covers how my channel, החדשות בדקה (“The News in a Minute”, @TLDR_IL), came to be, the technical decisions behind its architecture, and the surprisingly deep rabbit hole of prompt engineering for Hebrew news summarization.

The Architecture: Building the Pipeline

To make this vision a reality, I needed a system that could seamlessly read my trusted sources, process the raw information, and let me review the output before hitting publish.

I built a Next.js app with a simple four-step pipeline:

  1. Fetch - Pull messages from monitored Telegram channels for a configurable time window (usually 24 hours).
  2. Summarize - Send messages to GPT (back then it was GPT-4o, today it’s GPT-5.4) with structured output, and get back categorized, objective Hebrew bullets.
  3. Edit - Use a drag-and-drop editor to curate, reorder, and refine the AI output.
  4. Publish - Send the final summary directly to my Telegram channel.

The tech stack keeping this together: Next.js 15 (App Router), TypeScript, GramJS for Telegram, OpenAI API, Prisma + PostgreSQL on Neon, TanStack Query, and shadcn/ui.
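The four steps above can be sketched as a thin orchestration layer. This is an illustrative skeleton, not the app's actual code - the type and function names are my own, and in the real app each step is wired to GramJS, the OpenAI API, and the editor UI:

```typescript
// Hypothetical types for the pipeline; names are illustrative.
interface RawMessage {
  channelId: string;
  messageId: number;
  text: string;
  date: Date;
}

interface NewsItem {
  text: string;
  sourceMessageIds: string[]; // "channelId:messageId" pairs
}

interface Digest {
  titles: { category: string; items: NewsItem[] }[];
}

// Each step is injected, so the pipeline itself stays small and testable.
async function runPipeline(
  fetchMessages: (sinceHours: number) => Promise<RawMessage[]>,
  summarize: (messages: RawMessage[]) => Promise<Digest>,
  edit: (draft: Digest) => Promise<Digest>, // human-in-the-loop step
  publish: (final: Digest) => Promise<void>,
  windowHours = 24,
): Promise<Digest> {
  const messages = await fetchMessages(windowHours); // 1. Fetch
  const draft = await summarize(messages);           // 2. Summarize
  const final = await edit(draft);                   // 3. Edit
  await publish(final);                              // 4. Publish
  return final;
}
```

Keeping the edit step as an injected async function is what lets a human review sit in the middle of an otherwise automated pipeline.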

Why GramJS instead of the standard Bot API?

Getting the data was step one, and it required a crucial early decision. Telegram’s standard Bot API can only access channels where the bot is an admin. But I wanted to read from channels I’m subscribed to as a regular user, including private ones.

GramJS implements Telegram’s MTProto protocol, which lets you authenticate as a personal account. This means my app can read from any channel I’m a member of, exactly like opening the Telegram app on my phone. The tradeoff is that authentication is slightly more complex: you need to go through a phone number + code flow once to generate a session string, which then lives securely in your environment variables.

Making sense of the data: Why structured output matters

Once the plumbing was in place, I thought the hard part was over. Early versions of the app simply asked GPT for a text summary. The results were unpredictable: sometimes markdown, sometimes plain text, inconsistent categories, and absolutely no way to programmatically edit individual news items.

The breakthrough was switching to OpenAI’s response_format: { type: 'json_schema' } with strict: true. The model now returns a typed JSON structure:

{
  "titles": [
    {
      "category": "ביטחון",
      "items": [
        {
          "text": "צה\"ל תקף מספר מטרות בדרום לבנון",
          "sourceMessageIds": ["chan123:456"]
        }
      ]
    }
  ]
}

Every single news item has source attribution mapping back to the original Telegram messages (the example above is a single “Security” item reporting IDF strikes in southern Lebanon). This structured format is what made it possible to build a proper editing UI - allowing me to drag items between sections, rename categories, or quickly delete and reword individual bullets before publishing.
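The request payload that produces this shape looks roughly like the following - the schema here is trimmed to the fields shown above, and the real one has more constraints. Alongside it, a small guard that validates the model's reply before it reaches the editor:

```typescript
// Sketch of the json_schema response_format payload for the
// Chat Completions API (schema trimmed to the fields shown above).
export const responseFormat = {
  type: "json_schema",
  json_schema: {
    name: "news_digest",
    strict: true,
    schema: {
      type: "object",
      additionalProperties: false,
      required: ["titles"],
      properties: {
        titles: {
          type: "array",
          items: {
            type: "object",
            additionalProperties: false,
            required: ["category", "items"],
            properties: {
              category: { type: "string" },
              items: {
                type: "array",
                items: {
                  type: "object",
                  additionalProperties: false,
                  required: ["text", "sourceMessageIds"],
                  properties: {
                    text: { type: "string" },
                    sourceMessageIds: { type: "array", items: { type: "string" } },
                  },
                },
              },
            },
          },
        },
      },
    },
  },
} as const;

export interface Digest {
  titles: {
    category: string;
    items: { text: string; sourceMessageIds: string[] }[];
  }[];
}

// Even with strict mode, parse defensively before the editor touches it.
export function parseDigest(raw: string): Digest {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed?.titles)) throw new Error("malformed digest");
  return parsed as Digest;
}
```

With `strict: true`, the API guarantees the reply conforms to the schema, so the parse guard mostly protects against wiring mistakes on my side.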

The Prompt Engineering Rabbit Hole

With the data flowing and structured perfectly, I hit the next wall: summarizing Hebrew news objectively is much harder than it sounds.

The first few prompts produced output that was technically correct but felt unnatural or overly detailed. I had to write explicit rules to shape the AI into a neutral, big-picture journalist. Some specific issues I had to solve included:

Geographic context for casual readers

News channels often mention highly specific names of small villages (like Al-Bureij - who knows where it is?). For someone who doesn’t follow the Middle East map religiously, this is just noise. I had to instruct the model to generalize obscure locations into broader, recognizable regions.

For example, converting specific village names into “the northern Gaza Strip,” “the central Strip,” or “a village near Hebron.”

Event consolidation

Without strict instructions, the model would treat every minor update as a separate breaking news bullet. I needed it to see the bigger picture. I added consolidation rules: instead of listing “2 rockets intercepted over Sderot” and then “3 rockets intercepted over Ashkelon” as separate items, the AI learns to merge them into “5 rockets intercepted across southern Israel.” This transformed the output from a chaotic feed of notifications into a readable summary.

Maintaining objectivity and removing channel bias

Remember the main goal? Leaving only the objective truth. That meant completely removing the specific phrasing, spin, and “flavor” of the original Telegram channels. I also didn’t want the summary directly naming specific channels (they are my sources, not the reader’s). I prompted the model to adopt a neutral, journalistic tone, using hedging language like “לפי דיווחים” (according to reports) or “נמסר כי” (it was reported that). This acts as a filter, wiping away the original authors’ opinions and leaving just the core facts.

Category flexibility

I defined common categories (security, politics, economy), but I also taught the model to dynamically create new ones when a massive story dominates the news cycle. During a major event, you don’t want a generic “Security” header - you want “The Operation in Lebanon” as its own dedicated section.
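Concretely, all four of these rules end up as explicit instructions in the system prompt. A condensed English paraphrase of what that looks like - the real prompt is in Hebrew, far longer, and worded differently:

```typescript
// Condensed, English paraphrase of the editorial rules; illustrative only.
export const EDITORIAL_RULES = `
You are a neutral news editor writing a daily digest in Hebrew.

1. Geography: replace obscure village names with broader, recognizable
   regions (e.g. "the northern Gaza Strip", "a village near Hebron").
2. Consolidation: merge minor updates on the same event into one item
   ("2 rockets over Sderot" + "3 over Ashkelon" ->
    "5 rockets intercepted across southern Israel").
3. Objectivity: strip the source channels' phrasing, spin, and opinions;
   never name a source channel; hedge with "according to reports".
4. Categories: use Security / Politics / Economy by default, but create
   a dedicated category when one story dominates the news cycle.
`.trim();
```

Keeping the rules as a single versioned constant made it easy to iterate: every odd bullet in the published digest usually traced back to one of these rules being too loose.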

What I learned

This project fundamentally changed how I think about AI in production. Calling the model is the easy part. The hard part is everything around it: getting the input data clean, shaping the prompt to produce consistently objective output, building an editing layer that makes the AI output truly useful, and handling all the edge cases that only show up when you use a tool daily.

The next two posts in this series will cover how the editing UX evolved from “just publish what GPT says” to a multi-step editorial workflow, and how I built a system for handling adversarial news sources with two-tier summarization.


If you read Hebrew and want to see the output of this system in action, subscribe to the channel: החדשות בדקה on Telegram.