Feasibility & cost estimation: Local LLM (LM Studio) + Telegram Bot with multi-persona architecture (Option C approach)
Hi devs,
I’m validating the feasibility and cost of a local LLM + Telegram bot architecture before hiring a developer.
I’m running a model via LM Studio and want to connect it to a single Telegram bot that supports multiple personas.
LM Studio only allows a single system prompt.
While I’ve improved internal hierarchy and state separation, I still experience minor hierarchy conflicts and prompt drift under certain conditions.
Previously I used two bots (onboarding + main bot), but I’m now consolidating into a cleaner backend-managed architecture (Option C in the linked doc).
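For what it's worth, the "single system prompt" limit applies to LM Studio's chat UI, not its local server: LM Studio exposes an OpenAI-compatible endpoint (by default at `http://localhost:1234/v1/chat/completions`), and each request carries its own `messages` array, so a backend can inject a different system prompt per persona on every call. A minimal sketch of that Option C idea, using only the Python standard library — the persona names, prompts, and model name here are illustrative assumptions, not part of the linked doc:

```python
# Sketch of Option C: one backend, one bot, persona chosen per request.
# Assumes LM Studio's OpenAI-compatible server on http://localhost:1234
# (its default). Persona names/prompts are illustrative placeholders.
import json
import urllib.request

PERSONAS = {
    "onboarding": "You are the onboarding assistant. Greet new users "
                  "and collect their preferences.",
    "main": "You are the main assistant. Answer questions using the "
            "stored user profile.",
}

def build_messages(persona: str, history: list, user_text: str) -> list:
    """Prepend the chosen persona's system prompt to this request.

    Because the system prompt travels with each API call, the backend
    (not LM Studio) owns the persona hierarchy and per-user state.
    """
    system = {"role": "system", "content": PERSONAS[persona]}
    return [system, *history, {"role": "user", "content": user_text}]

def chat(persona: str, history: list, user_text: str) -> str:
    """Send one chat-completion request to the local LM Studio server."""
    body = json.dumps({
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": build_messages(persona, history, user_text),
    }).encode()
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The Telegram side would then map each chat (or onboarding state) to a persona key and call `chat()` — the bot stays a thin transport layer, and all prompt hierarchy lives in the backend, which is what makes the two-bot setup unnecessary.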
Full technical breakdown here: https://closed-enthusiasm-856.notion.site/BEST-solution-for-Prompt-Engineering-LM-Telegram-IT-need-2f98a5f457ac80ec93bbffb65697b960
My main questions are about feasibility and cost. One hard constraint: I’m avoiding OpenAI’s hosted APIs due to moderation constraints, so everything must run locally.