Stop answering the same question five times a week.
Slack-native RAG bot grounded in your docs. Source citations, permissions respected, freshness monitored. Delivered on a monthly retainer.
Who this is for
Ops or IT lead at a 100+ person company where Notion and Confluence search are too weak for the volume of docs, and engineers end up answering the same questions repeatedly.
The pain today
- Notion/Confluence search producing irrelevant results
- Engineers answering duplicate questions in Slack daily
- New hires taking months to navigate the doc landscape
- Policies and procedures known to only a few people
- Document freshness unclear — is this doc still accurate?
The outcome you get
- Slack-native bot answering questions from your docs in seconds
- Source citations on every answer — employees click through to verify
- Permissions respected — doc-level ACLs honored in bot responses
- Freshness signals — stale docs flagged in answers
- Analytics on what's being asked — identifies doc gaps
Slack/Teams integration patterns
Two primary deployment patterns. Slack: bot user with slash command (/askbot) and @mention support in any channel. Threaded responses keep channels clean. DM for private questions. Microsoft Teams: equivalent bot via Teams Bot Framework, @mention in channels or DM. Both: sign-in with SSO so bot knows who's asking (critical for permissions), conversation context preserved within a thread so follow-up questions work naturally. I build with Slack as the primary target (more companies use it) and Teams as a standard second deployment. Works natively, no separate web app required.
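The thread-context piece above can be sketched in a few lines. This is a hypothetical illustration, not the production implementation: the `ThreadContext` class, the 10-turn window, and the prompt format are all assumptions, and the actual retrieval call is left out.

```python
from collections import defaultdict

class ThreadContext:
    """Per-thread conversation memory so follow-up questions carry context.

    Keys are (channel, thread_ts) pairs, matching how Slack identifies
    a thread. The window is bounded so prompts stay small.
    """

    def __init__(self, max_turns=10):
        self.max_turns = max_turns
        self.turns = defaultdict(list)  # (channel, thread_ts) -> [(role, text)]

    def add(self, channel, thread_ts, role, text):
        history = self.turns[(channel, thread_ts)]
        history.append((role, text))
        del history[:-self.max_turns]  # keep only the most recent turns

    def prompt_for(self, channel, thread_ts, question):
        # Replay the thread so the model can resolve "it", "that doc", etc.
        history = self.turns[(channel, thread_ts)]
        lines = [f"{role}: {text}" for role, text in history]
        lines.append(f"user: {question}")
        return "\n".join(lines)

ctx = ThreadContext(max_turns=4)
ctx.add("C01", "171.001", "user", "What is our PTO policy?")
ctx.add("C01", "171.001", "bot", "20 days per year (see HR handbook).")
prompt = ctx.prompt_for("C01", "171.001", "Does it roll over?")
```

The same structure works for Teams; only the thread identifier changes.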
Permissions — doc-level access
Not everyone sees every doc. HR docs for HR team. Engineering docs for engineers. Finance docs for finance. Bot must honor these boundaries or the first question from an intern about executive comp becomes a data incident. Implementation: sync document permissions from source (Notion, Confluence, Google Drive) into the bot's retrieval index, tagged per document. On each query, filter retrieval to documents the asking user has access to. Verified in testing — explicitly probe the bot with questions that should fail for specific users. Audit log captures every question with who asked and what docs were accessed. Permissions are the detail that makes internal bots feel safe instead of terrifying.
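The query-time filtering described above can be sketched as follows. Everything here is illustrative: the group names, the in-memory index, and the audit-log shape are assumptions; a real deployment syncs ACLs from Notion, Confluence, or Drive and resolves the asking user's groups via SSO.

```python
from datetime import datetime, timezone

# Toy index: each doc is tagged with the groups allowed to read it.
INDEX = [
    {"doc_id": "hr-comp-bands", "groups": {"hr"}, "text": "Compensation bands..."},
    {"doc_id": "eng-runbook", "groups": {"engineering"}, "text": "Deploy steps..."},
    {"doc_id": "handbook", "groups": {"everyone"}, "text": "Company policies..."},
]
AUDIT_LOG = []

def user_groups(user_id):
    # Stub: a real bot resolves this via SSO / the source system.
    return {"intern-1": {"everyone"}, "hr-lead": {"everyone", "hr"}}[user_id]

def retrieve(user_id, question):
    groups = user_groups(user_id)
    # Filter BEFORE ranking: docs the user can't read never enter retrieval,
    # so they can't leak into an answer or a citation.
    visible = [d for d in INDEX if d["groups"] & groups]
    words = question.lower().split()
    hits = [d for d in visible if any(w in d["text"].lower() for w in words)]
    AUDIT_LOG.append({
        "who": user_id,
        "question": question,
        "docs": [d["doc_id"] for d in hits],
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return hits
```

The intern's probe about compensation returns nothing, while the HR lead gets the doc, and both queries land in the audit log either way.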
Freshness and reindexing
Docs drift. A policy doc last updated 18 months ago may be wrong. Bot needs to signal freshness. Implementation: ingest docs with last-modified timestamps, surface the timestamp in answers ('Based on a doc last updated 3 months ago'), weight retrieval toward fresher docs when multiple match, flag documents over 12 months old for review. Reindexing: daily for fast-changing docs (engineering runbooks), weekly for stable docs (company policies), on-demand when someone updates a doc. Employees see when their question is being answered from fresh vs stale context and can verify against sources.
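The freshness weighting can be sketched like this. The exponential decay, the 180-day half-life, and the 12-month staleness cutoff are assumptions for illustration; the real tuning depends on how fast each doc source drifts.

```python
from datetime import date

HALF_LIFE_DAYS = 180   # a doc's weight halves every ~6 months (assumed)
STALE_AFTER_DAYS = 365 # flagged for review past 12 months

def freshness_weight(last_modified, today):
    age = (today - last_modified).days
    return 0.5 ** (age / HALF_LIFE_DAYS)  # exponential decay with age

def annotate(similarity, last_modified, today):
    """Combine a retrieval similarity score with freshness."""
    age = (today - last_modified).days
    return {
        "score": similarity * freshness_weight(last_modified, today),
        "age_days": age,                  # surfaced in the answer text
        "stale": age > STALE_AFTER_DAYS,  # triggers the review flag
    }

today = date(2026, 3, 1)
fresh = annotate(0.9, date(2025, 12, 1), today)  # updated 3 months ago
old = annotate(0.9, date(2024, 6, 1), today)     # updated ~21 months ago
```

When two docs match equally well, the fresher one outranks the stale one, and the stale one carries a review flag into the answer.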
Case study: Instill skill library experience
Instill is my self-initiated AI product — a prompt library for AI tools, launched Q1 2026 with 30+ active users, 1,000+ skills saved, 45+ projects. The core mechanic (browse, retrieve, execute context-appropriate content) shares architectural DNA with internal KB bots. The production discipline — evaluation pipelines, hallucination handling, user expectation management, permissions and access control — transfers directly. I know where internal AI assistants fail because I operate one and iterate on it with real users. That experience saves months of learning on client engagements.
Pricing
Internal KB bots fit the AI Automation retainer at $3,000/mo. First-version timeline: 4–5 weeks to ingest core doc sources, wire Slack/Teams, ship permissions. Retainer continues through expansion (new doc sources, freshness tuning, answer quality refinement). 14-day money-back, cancel anytime, Work Made for Hire. LLM API costs typically $100–1,000/mo depending on question volume across the org.
What the analytics reveal
A side benefit: bot question logs show what employees are actually asking. Frequent questions that the bot struggles to answer = doc gap. Frequent questions it answers well but that still get asked repeatedly = training opportunity (maybe this should be in onboarding). Time-sensitive spikes (many questions about Q4 planning in November) reveal operational patterns. I provide a monthly report of top questions, answer quality, and suggested doc updates. The bot doesn't just answer questions — it reveals which docs to write or refresh next.
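The gap-vs-training split described above is simple to mechanize. A hypothetical sketch, assuming each log entry carries a per-answer quality score (0 to 1); the thresholds are illustrative, not the real report's values.

```python
from collections import Counter

def classify(question_log, gap_quality=0.5, repeat_threshold=3):
    """Split frequently asked questions into doc gaps vs training items."""
    counts = Counter(q["question"] for q in question_log)
    quality = {}
    for q in question_log:
        quality.setdefault(q["question"], []).append(q["answer_quality"])
    report = {"doc_gaps": [], "training": []}
    for question, n in counts.items():
        if n < repeat_threshold:
            continue  # one-offs don't drive doc work
        avg = sum(quality[question]) / len(quality[question])
        if avg < gap_quality:
            report["doc_gaps"].append(question)   # asked often, answered poorly
        else:
            report["training"].append(question)   # answered well, asked anyway
    return report

log = (
    [{"question": "vpn setup", "answer_quality": 0.2}] * 3
    + [{"question": "pto policy", "answer_quality": 0.9}] * 4
    + [{"question": "one-off", "answer_quality": 0.1}]
)
report = classify(log)
```

"vpn setup" lands in doc gaps (write or fix that doc); "pto policy" lands in training (put it in onboarding); the one-off is ignored.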
Recent proof
A comparable engagement, delivered and documented.
A prompt library that works with every AI tool
A home for your best AI prompts. Save them once, then use them in Claude, Cursor, or any AI tool you work with. No more copy-paste.
Frequently asked questions
The questions prospects ask before they book.
- Which doc sources can you ingest?
- Notion, Confluence, Google Docs, SharePoint, Dropbox Paper, GitHub Wiki, GitBook, internal APIs with HTTP endpoints. Custom sources (PDFs on shared drive, internal wiki) supported with additional ingestion pipeline work. Each source scoped in week 1 based on where your knowledge actually lives.
- How does it handle confidential or sensitive info?
- Doc-level permissions from source systems are synced to the retrieval index. Bot can only return information from docs the asking user has access to. Additional redaction rules (auto-redact SSN patterns, financial data) configurable per compliance requirements. Enterprise auth (SSO) required so bot knows who's asking.
- Can it answer policy questions with HR-grade accuracy?
- Within the quality of the underlying docs. Bot cites sources on every answer, and users are trained to verify — the bot is a starting point, not a final arbiter. For critical HR decisions (leave policies, benefits eligibility), bot responses include standard disclaimer and link to HR contact.
- What's the monthly LLM cost?
- Depends on question volume. An organization with 100 employees asking ~5 questions/day averages ~$300–500/mo in LLM fees; 1,000 employees scales to $1k–3k/mo. Questions answered with cached context (repeat questions on stable docs) cost less. I optimize caching and model routing (simpler questions to cheaper models) to keep costs bounded.
- Can engineers query code documentation too?
- Yes — code-aware sources (GitHub, GitBook, internal runbooks) integrate the same way. Bot can answer 'where is the deployment pipeline defined' or 'what's the escalation path for database alerts' from engineering docs. Especially valuable for new hire onboarding and for reducing senior engineer interruption.
Ready to start?
Tell me what you need in 60 seconds. Tailored proposal in your inbox within 6 hours.