AI Decision Lab

Last updated: 2026-04-22

A standalone one-page lab for learning why AI systems can look accurate and still behave badly. Change the base rate, model quality, and review rate to see when AI should assist humans instead of deciding alone.

Base rates matter: Rare events can make a model feel smart while flooding the system with false alarms.
Humans are limited: Review helps when it is targeted, not when teams drown in alerts.
Policy beats hype: The goal is useful workflow design, not the biggest dashboard number.
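The base-rate point above can be made concrete with a few lines of arithmetic. This is a hedged sketch, not the lab's actual code: the 1% base rate, 90% recall, and 5% false alarm rate are illustrative assumptions, and `confusion_counts` is a hypothetical helper.

```python
# Sketch of how a low base rate degrades precision, assuming a
# hypothetical screening model with 90% recall and a 5% false alarm rate.
def confusion_counts(total, base_rate, recall, false_alarm_rate):
    """Expected confusion-matrix counts for a population of `total` cases."""
    positives = total * base_rate          # truly fraudulent cases
    negatives = total - positives          # truly clean cases
    tp = positives * recall                # fraud the model flags
    fn = positives - tp                    # fraud the model misses
    fp = negatives * false_alarm_rate      # clean cases wrongly flagged
    tn = negatives - fp                    # clean cases passed through
    return tp, fp, fn, tn

tp, fp, fn, tn = confusion_counts(
    10_000, base_rate=0.01, recall=0.90, false_alarm_rate=0.05
)
precision = tp / (tp + fp)
# With only 1% fraud, ~90 true hits drown in ~495 false alarms:
# precision is roughly 15% even though the model catches 90% of fraud.
```

This is the "feels smart" trap: recall looks excellent, yet most of what lands in the review queue is noise.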

Decision Snapshot

Fraud detection
Precision: 0%
Recall: 0%
False Alarm Rate: 0%
Review Load: 0%

Confusion Matrix

Counts out of 10,000
True Positives: 0
False Positives: 0
False Negatives: 0
True Negatives: 0
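The four snapshot metrics all fall out of these four counts. A minimal sketch of the standard definitions follows; the function name and the `review_load` framing (share of all cases sent to the queue) are assumptions about how the lab computes them, not its actual code.

```python
# Standard confusion-matrix metrics, computed from raw counts.
def snapshot(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    flagged = tp + fp                      # everything sent to the review queue
    return {
        "precision": tp / flagged if flagged else 0.0,           # flags that are real
        "recall": tp / (tp + fn) if tp + fn else 0.0,            # real cases caught
        "false_alarm_rate": fp / (fp + tn) if fp + tn else 0.0,  # clean cases flagged
        "review_load": flagged / total if total else 0.0,        # share of cases queued
    }

metrics = snapshot(tp=90, fp=495, fn=10, tn=9405)
```

With the low-base-rate counts above, precision sits near 15% while recall is 90%, which is exactly the gap the lab is built to expose.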

Policy Readout

Recommended mode: Assistive AI
Balanced system (score: 72)

How to read this: Start with the preset, then move only one slider at a time so you can see which tradeoff actually changed the outcome.
Current queue shape: Most cases pass cleanly, while a smaller flagged queue competes for review time.
Recommended operating mode: Use the model as assistive triage unless precision, harm pressure, and review load all stay in a comfortable zone.
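The operating-mode rule can be sketched as a simple decision function. Everything here is illustrative: the threshold values, the `harm_pressure` labels, and the mode names are assumptions, not the lab's actual cutoffs.

```python
# Hedged sketch of the operating-mode readout: automate only when every
# signal is comfortable, otherwise keep a human in the loop.
def operating_mode(precision, review_load, harm_pressure):
    """Map snapshot metrics to a recommended role for the model."""
    if precision >= 0.9 and review_load <= 0.05 and harm_pressure == "low":
        return "automate with spot checks"
    if precision >= 0.3:
        return "assistive triage"          # model ranks, humans decide
    return "human review first"            # model too noisy to lead

operating_mode(precision=0.15, review_load=0.06, harm_pressure="high")
# -> "human review first"
```

The point of the rule shape, not the exact numbers: automation is the narrow case that requires every signal to be comfortable at once, and assistance is the default.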
Panels: Accuracy can mislead · Where humans help · Best next move