Hackathon Validation (Half Day to 3 Days)

How do you show leadership and colleagues that AI actually works? How do you avoid building something nobody uses?



Why Hold a Hackathon?

A hackathon isn't just for technical people. In Botrun's deployment process, a hackathon is the fastest way for decision-makers to see results firsthand.

Lesson learned: the team once "spent an 800K TWD budget as if it were 8M TWD," pouring massive resources into work that never aligned with customer needs. The current SOP therefore requires that a decision-maker at director level or above participate in person, bringing real data for hands-on work on-site.


Three Levels of Hackathon

| Level | Duration | Best For | What You Get |
|-------|----------|----------|--------------|
| Experience | Half day (4 hours) | Department heads doing initial evaluation | AI feasibility validation + preliminary cost estimate |
| Workshop | Full day (8 hours) | Director-level teams serious about deployment | A demonstrable POC built with real data |
| Deep Dive | 3 days | Large-scale cross-department deployments | Complete solution + launch plan + contract terms |

How Does a Hackathon Work?

Pre-event preparation (what you need to do):

On-site process:

  1. Botrun mentors explain how to use the AI Play platform
  2. You upload real data and build a Quality Question Bank on-site
  3. AI produces results on the spot (official documents, analysis reports, review opinions, etc.)
  4. You and experts evaluate: Do the results meet the standard?
  5. Iterate and refine until you're satisfied

Post-event follow-up (within 7 days):


Real Case: Ministry of Interior — Department of Civil Affairs Hackathon

Problem: The Department of Civil Affairs reviews a large volume of religious organization applications each year. Data entry and verification of foreign nationals' information was particularly time-consuming.

What was done during the hackathon:

  1. Uploaded real application documents (passport copies, application forms)
  2. AI automatically recognized foreign nationals' names, passport numbers, and dates of birth
  3. AI automatically calculated Taiwan participation ratios (must reach 50%)
  4. On-site verification: AI recognition results were compared against manual cross-checking
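The 50% threshold check in step 3 is straightforward arithmetic over the recognized member records. A minimal sketch of that check, assuming a simple record layout (the field names and nationality codes here are illustrative, not Botrun's actual schema):

```python
# Hypothetical sketch of the Taiwan participation-ratio check (step 3).
# Field names ("name", "nationality") are illustrative assumptions.

def taiwan_participation_ratio(members):
    """Return the fraction of members who are Taiwan nationals."""
    if not members:
        return 0.0
    taiwan_count = sum(1 for m in members if m["nationality"] == "TW")
    return taiwan_count / len(members)

def meets_threshold(members, threshold=0.5):
    """Applications must reach a 50% Taiwan participation ratio."""
    return taiwan_participation_ratio(members) >= threshold

# Example member list as AI recognition might produce it
members = [
    {"name": "Chen",   "nationality": "TW"},
    {"name": "Lin",    "nationality": "TW"},
    {"name": "Smith",  "nationality": "US"},
    {"name": "Tanaka", "nationality": "JP"},
]

print(taiwan_participation_ratio(members))  # 0.5
print(meets_threshold(members))             # True
```

In practice the member records would come from the AI's passport and application-form recognition output rather than being typed by hand; the value of the hackathon step is that staff verify those recognized fields on the spot before the ratio is trusted.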

User satisfaction: 7-10 out of 10. The biggest help was foreign national data recognition, significantly reducing data entry and verification time.

Follow-up: The Department Director personally secured 240K TWD in character pack funding and planned a 4.8M TWD renewal budget for the following fiscal year.


Next Steps