windkh/node-red-contrib-telegrambot V17.4.4


V17.4.4 — fix 409 Conflict loop on redeploy/restart after a polling failure

Direct fix for the symptom @petermeter69 newly reported in #442:

"on pressing the deploy button of Node Red since v.17.3.0 (but not before) when resuming after a failure I experience a periodic series of errors: TelegramError: ETELEGRAM: 409 Conflict: terminated by other getUpdates request; make sure that only one bot instance is running. This disappears when I restart the whole Node Red (node-red-restart)."

What was happening

A 409 Conflict from Telegram means the server saw a second getUpdates for the same bot token while a previous one was still in flight. Two things in the V17.3.0 → V17.4.3 code path combined to keep this state alive:

  1. The polling_error handler treated 409 like a transient network error: it called stopPolling() followed by restartPolling() (the local 3 s setTimeout that triggers startPolling({restart:true})). The library's own polling loop also retries on the next interval, so our restart on top of that raced yet another getUpdates request, and each 409 spawned the conditions for the next.
  2. V17.3.0's df46aa0 dropped the explicit _polling teardown that previous versions performed and trusted the documented startPolling({restart:true}) soft-restart. That keeps the library's internal polling state across restarts, which is normally fine, but it means a new getUpdates carries over enough state to look like a continuation of the previous request, and Telegram's server-side deduplication sees two simultaneous polls for the same token.

node-red-restart worked around it because it rebuilds the whole process; the Telegram-side conflict cleared on its own during the few seconds the bot was absent.

What changes in V17.4.4

Two small changes in bot-node.js:

  1. The polling_error handler now detects 409 specifically (error.message.indexOf('ETELEGRAM: 409 Conflict') === 0) and skips the stopPolling + restartPolling chain. The library's own polling loop will naturally retry on the next interval; piling our restart on top of that is precisely what perpetuated the conflict. In verbose mode the log reads: 409 Conflict — another getUpdates still in flight server-side; letting it clear naturally.
  2. restartPolling now resets self.telegramBot._polling = null before startPolling({restart:true}), so the library treats the next poll as a fresh boot with no internal state carried over from the previous polling session. This deliberately reaches into the library's private API; the documented soft-restart alone wasn't sufficient.

What V17.4.4 does NOT fix

Two distinct concerns from #442 are still open and being worked on separately:

  • Auto-resume after a temporary network outage: petermeter69's hypothesis is that the bot's keep-alive socket pool gets wedged and is not actually rebuilt on restart, even though scheduleRestart calls abortBot + recreate. Inspection of V17.4.4 confirms this.request (and the agentOptions it carries) is constructed once per config node and reused across rebuilds, so the agent pool is not genuinely fresh after scheduleRestart. A fix is in flight.
  • The underlying connect ETIMEDOUT / EAI_AGAIN failures themselves are network-layer issues outside the plugin's reach.

Tests

215 passing, unchanged from V17.4.3. The 409 handling is exercised by manual reproduction on petermeter69's setup; we don't yet have an automated integration test that can simulate the server-side conflict shape against the mock Telegram API.
