Quality Question Bank — The Key to Successful Deployment
How does AI learn our organization's domain expertise? How do we ensure the quality of AI responses?
This is the most important stop on the entire deployment journey.
What Is a Quality Question Bank?
A Quality Question Bank is not a traditional "FAQ list" — it is a structured knowledge curriculum that enables AI to truly understand your business context.
The traditional approach spends a great deal of time on prompt engineering, telling the AI how to respond. The Quality Question Bank approach is different: you prepare good examples, and the AI learns how to respond from them.
"Once you complete the Quality Question Bank, your prompt engineering will be perfect. Opus 4.6 will automatically read your Quality Question Bank and write the prompt engineering for you — generalized and without overfitting." — Bohan (Botrun CTO)
Six-Field Structure
Each entry in the question bank contains six fields:
| Field | Content | Example (NSTC Official Document) |
|---|---|---|
| 1. Motivation | Why is this task needed? | "NSTC approves a professor's research grant" |
| 2. Benefit Assessment | What benefits are achieved upon completion? | "Document production time reduced from 2 hours to 10 minutes" |
| 3. Question (Q) | The user's actual question, including all required parameters | "Please draft a grant approval document. Recipient: Prof. Da-Ming Wang, Amount: 2M TWD, Project: Quantum Computing Research, Institution: NTU, Period: Jan to Dec 2026" |
| 4. Answer (A) | The expected complete output | A complete grant approval document template |
| 5. Source Data | Reference documents | Past approval documents (PDF), related regulations |
| 6. Evaluation Criteria | Review standards | "Correct format, accurate amounts, proper legal citations, appropriate official document language" |
You don't need to fill in every field, but the core principle is: let AI know "why we're doing this," "how to do it," and "what good looks like."
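The six fields above can be captured as a simple record; a minimal Python sketch (the class and field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class QuestionBankEntry:
    """One entry in a Quality Question Bank (field names are illustrative)."""
    motivation: str   # 1. Why is this task needed?
    benefit: str      # 2. What is gained on completion?
    question: str     # 3. The user's question, with ALL required parameters
    answer: str       # 4. The expected complete output
    sources: list[str] = field(default_factory=list)   # 5. Reference documents
    criteria: list[str] = field(default_factory=list)  # 6. Review standards

# Example entry, using the NSTC official-document case from the table above.
entry = QuestionBankEntry(
    motivation="NSTC approves a professor's research grant",
    benefit="Document production time reduced from 2 hours to 10 minutes",
    question=("Please draft a grant approval document. Recipient: Prof. Da-Ming Wang, "
              "Amount: 2M TWD, Project: Quantum Computing Research, "
              "Institution: NTU, Period: Jan to Dec 2026"),
    answer="<complete grant approval document template>",
    sources=["Past approval documents (PDF)", "Related regulations"],
    criteria=["Correct format", "Accurate amounts", "Proper legal citations"],
)
```

Leaving `sources` or `criteria` empty is fine; as the principle above says, not every field is mandatory.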
Quality Question Bank vs Traditional Prompt Engineering
| | Traditional Prompt Engineering | Quality Question Bank |
|---|---|---|
| What you do | Manually craft prompts, repeatedly test and adjust | Prepare good examples and evaluation criteria |
| Expertise required | Need to understand how AI works | Only need to understand your own business |
| Scalability | Rewrite prompts for every new task | Add to the question bank, AI learns automatically |
| Quality stability | Prone to overfitting specific cases | Strong generalization, handles new cases well |
| Who maintains it | Requires AI engineers | Business staff can update the question bank |
How to Build Your First Quality Question Bank
Step 1: Choose a scenario
Start with the most frequent and time-consuming task in your department. For example:
- Official document drafting (grant approvals, personnel changes, procurement)
- Meeting minutes organization
- Document review (applications, environmental impact assessments)
- Data analysis reports
Step 2: Collect 5-10 real examples
Find past work that was done well. These become your "Answer (A)" field.
Key lesson: the question field must include complete parameters. During NSTC document testing, the team found that if a question only says "Help me write a grant approval document," the AI invents professor names and amounts. The correct approach is to include every value to be filled in within the question itself. "Even if it's simulated, you must simulate it to look and feel like the real thing."
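The completeness lesson above lends itself to a mechanical check before an entry enters the bank; a minimal sketch, assuming the grant-approval parameter labels shown earlier (the function and constant names are hypothetical):

```python
import re

# Required parameter labels for a grant-approval question (assumed names).
REQUIRED_PARAMS = ["Recipient", "Amount", "Project", "Institution", "Period"]

def missing_params(question: str) -> list[str]:
    """Return the required parameter labels absent from the question text."""
    return [p for p in REQUIRED_PARAMS if not re.search(rf"{p}\s*:", question)]

# Under-specified question: the AI would have to invent every value.
assert missing_params("Help me write a grant approval document.") == REQUIRED_PARAMS

# Fully parameterized question passes the check.
full_q = ("Please draft a grant approval document. Recipient: Prof. Da-Ming Wang, "
          "Amount: 2M TWD, Project: Quantum Computing Research, "
          "Institution: NTU, Period: Jan to Dec 2026")
assert missing_params(full_q) == []
```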
Step 3: Define evaluation criteria
Have your department's experts (section chiefs, senior staff) define: What counts as "well done"? What items should be checked?
Step 4: Use AI to help expand
Once you have 5-10 real examples, you can use AI to help generate more variations, then have experts review them.
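In practice the expansion itself is done by an AI model, but the scaffold around it can be as simple as substituting simulated parameter sets into a question template and queuing the results for expert review; a sketch with purely illustrative values:

```python
from itertools import product

# Question template with the parameters left as placeholders.
TEMPLATE = ("Please draft a grant approval document. Recipient: {recipient}, "
            "Amount: {amount}, Project: {project}")

# Simulated but realistic-looking parameter sets (values are illustrative).
recipients = ["Prof. Da-Ming Wang", "Prof. Mei-Ling Chen"]
amounts = ["2M TWD", "3.5M TWD"]
projects = ["Quantum Computing Research", "AI Governance Study"]

# Cross all parameter sets to produce candidate question variations.
variations = [
    TEMPLATE.format(recipient=r, amount=a, project=p)
    for r, a, p in product(recipients, amounts, projects)
]
assert len(variations) == 8  # 2 x 2 x 2 candidates, each awaiting expert review
```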
Step 5: Multi-round iterative validation
Let real users try it out and record each round of Q&A. The latest question bank format supports:
- Multiple Sheets: Each sheet is a complete task
- Timestamps: Record the time of each iteration
- Multi-turn conversations: Preserve the complete iteration process, not just the final result
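The record format described above can be sketched as plain Python data, with each top-level key standing in for a sheet (the helper and field names are assumptions, not a fixed schema):

```python
from datetime import datetime, timezone

def turn(role: str, text: str) -> dict:
    """One timestamped turn in an iteration record."""
    return {"ts": datetime.now(timezone.utc).isoformat(), "role": role, "text": text}

# Each key plays the role of a sheet: one complete task per sheet, preserving
# the full multi-turn iteration history rather than only the final result.
question_bank = {
    "Grant Approval": [
        turn("user", "Please draft a grant approval document. Recipient: ..."),
        turn("assistant", "<draft v1>"),
        turn("user", "Fix the legal citation in paragraph 2."),
        turn("assistant", "<draft v2>"),
    ],
}
```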
Real Case: NSTC Official Document Quality Question Bank
Background: Over the past two years, the National Center for High-Performance Computing (NCHC) collaborated with 7 government agencies to apply the TAIDE model to official documents, investing nearly 100 million TWD; all of these projects failed. NCHC then turned to Botrun, adopting an Agentic AI + Quality Question Bank approach.
Approach:
- Collected 100 real NSTC official documents (approvals, retirement cases, procurement, correspondence, announcements, meeting notices)
- Built 5 Quality Question Banks using the new format with multiple sheets, multi-turn conversations, and timestamps
- Each question bank was filled out by business staff using the six-field structure, with AI engineers assisting in structuring
- Built an automated evaluation system scoring on 5 dimensions: document drafting, OCR accuracy, legal citation, response speed, and Traditional Chinese language quality
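The five-dimension scoring can be sketched in a few lines; the source does not specify the weighting, so equal weights and the sample scores below are assumptions:

```python
# The five dimensions from the NSTC evaluation system; equal weighting
# is an assumption -- the source does not state how they are combined.
DIMENSIONS = ["document drafting", "OCR accuracy", "legal citation",
              "response speed", "Traditional Chinese quality"]

def overall_score(scores: dict[str, float]) -> float:
    """Average per-dimension scores (each 0-100) into one overall number."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

baseline = overall_score({d: 50 for d in DIMENSIONS})
tuned = overall_score({"document drafting": 96, "OCR accuracy": 94,
                       "legal citation": 95, "response speed": 95,
                       "Traditional Chinese quality": 95})
assert baseline == 50.0
assert tuned == 95.0  # illustrative per-dimension scores, not the real data
```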
Results:
- TAIDE original model baseline: ~50 points
- Botrun agent model + Quality Question Bank: 95 points
- Document generation cost: 350 TWD/document (vs ~1,500 TWD for manual work, saving 77%)
Real Case: Ministry of Justice National Laws & Regulations Smart Portal
Background: The National Laws & Regulations Database receives 120 million queries per year. The Ministry of Justice wanted AI to explain laws in plain language and guide citizens to the right agency.
Quality Question Bank application:
- Scenario: A citizen asks "What happens if I turn right on a red light?"
- AI not only finds the Road Traffic Management and Penalty Act but also explains in plain language, states the penalty, and reminds about related considerations
- Evaluation criteria: Must not provide legal interpretations (only guidance), must cite correct statutes, must inform about the latest amendments
Pricing model: "4 TWD per query, capped at 480K TWD" — simple enough for any procurement officer to understand immediately.
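The pricing rule is simple enough to state in a few lines of code; a sketch assuming the cap applies to a single billing period (the function name is hypothetical):

```python
def billed_cost_twd(queries: int, per_query: int = 4, cap: int = 480_000) -> int:
    """Per-query pricing with a cap: pay 4 TWD per query, never more than 480K TWD."""
    return min(queries * per_query, cap)

assert billed_cost_twd(100_000) == 400_000      # below the cap: pay per query
assert billed_cost_twd(120_000_000) == 480_000  # heavy usage hits the cap
```

At 4 TWD per query, the cap is reached after 120,000 queries, so high-volume services like the regulations portal effectively pay the flat cap.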
Next Steps
- ← Back to Getting Started
- Ready to go? — See Hackathon Validation
- Want to learn about the launch process? — See Training & Optimization