AI in Education: Where It Belongs — And More Importantly, Where It Doesn't
We built a revision app for our own kids. Watching them use it taught us where AI belongs in education — and where it doesn't.
The thing that surprised us most
When you build a revision app for your own kids, you expect them to start using it for everything.
They didn't.
No matter what we put in PrepWise — adaptive quizzes, free-text marking, AI-generated explanations, parent dashboards — the boys never once treated it as a replacement for their teachers, or for us asking how revision was going on a Sunday evening.
They used the app for the things only an app can do well. For everything else, they still went to a human.
That observation became the spine of how we think about AI in education.
What we watched them do
Two patterns showed up over and over.
They went to their teacher for the things a teacher does best.
Conceptual confusion. “My teacher explained osmosis but I still don't get it.” They didn't want the app to re-explain in the teacher's voice. They wanted to go back to the teacher with a sharper question.
They came to us for the things parents are best at.
Reassurance. Sanity-checking. “Mum, am I behind?” Sometimes just sitting at the table while they worked. The app could tell them where the gaps were. It couldn't tell them they were going to be okay.
The app's job, they showed us, was the bit in the middle: structured daily practice, ruthless feedback on what they got wrong, a daily plan that took the decision out of their hands. They had a clear instinct about what an app should and shouldn't be asked to do. They were right.
Finding where AI fits — and where it doesn't
We didn't arrive at the principles cleanly. We had to put AI in the wrong place a few times before we found the right place.
We started by letting AI mark the answers.
Made sense in theory — AI can read English, AI knows what a mark scheme is, AI should be able to mark. In practice the same answer scored differently on different runs. The boys noticed within days. So we moved AI's role: scoring is now deterministic, mark point by mark point, built from the actual exam-board rubrics. AI sits alongside, helping us spot patterns and refine the rules. The marking itself stays consistent. Same answer, same score, every time.
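To make "deterministic, mark point by mark point" concrete, here is a minimal sketch of what rubric-based scoring can look like. The names (`MarkPoint`, `score_answer`) and the keyword-matching rule are illustrative assumptions, not PrepWise's actual engine; real rubric rules are richer than keyword checks.

```python
# Sketch of deterministic mark-point scoring: no model call, no randomness.
# MarkPoint, score_answer, and the keyword rule are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class MarkPoint:
    """One awardable point from an exam-board rubric."""
    id: str
    keywords: tuple  # every term must appear for the point to be awarded
    marks: int

def score_answer(answer: str, rubric: list) -> int:
    """Same answer, same score, every run."""
    text = answer.lower()
    return sum(mp.marks for mp in rubric
               if all(kw in text for kw in mp.keywords))

rubric = [
    MarkPoint("MP1", ("partially permeable",), 1),
    MarkPoint("MP2", ("high", "low", "concentration"), 1),
]
score_answer("Water moves across a partially permeable membrane "
             "from high to low water concentration", rubric)  # → 2
```

Because the function is pure, re-running it on Monday, Friday, or after a model release changes nothing; AI's role is confined to suggesting edits to the rubric rules themselves.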
We tried using AI to explain concepts in long text.
The explanations were genuinely good. But teenagers don't read long text — they scan it. So we changed AI's job. It now generates quick mini-games for re-explanations: different angle, different format, sometimes a memory match, sometimes a spot-the-lie. Same AI, much better fit for how teenagers actually engage.
We tried letting AI write parent summaries from scratch.
“Your child made progress in Maths this week.” AI-generated, technically true, completely useless. A parent has no idea what to do with that. So we changed AI's role here too. We give it real mark-point data and let it translate that into something specific: “She understands the mechanism but struggles to write it in exam language.” The data does the work. AI just translates.
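"The data does the work; AI just translates" can be sketched as a two-step pipeline: a deterministic reduction from raw mark-point results to specific facts, and only then a language step. The record shape and `weekly_facts` below are assumptions for illustration, not PrepWise's real schema.

```python
# Step 1 (deterministic): reduce mark-point results to the facts worth telling a parent.
# Step 2 (AI, not shown): turn those facts into one plain-English sentence.
# The record fields ("skill", "awarded") are illustrative assumptions.
def weekly_facts(results: list) -> dict:
    """Count missed mark points per skill, so the summary can be specific."""
    missed = {}
    for r in results:
        if not r["awarded"]:
            missed[r["skill"]] = missed.get(r["skill"], 0) + 1
    return missed

results = [
    {"mark_point": "MP1", "skill": "mechanism", "awarded": True},
    {"mark_point": "MP2", "skill": "exam language", "awarded": False},
    {"mark_point": "MP3", "skill": "exam language", "awarded": False},
]
weekly_facts(results)  # → {"exam language": 2}
```

Handing a model `{"exam language": 2}` alongside the awarded points is what makes "understands the mechanism but struggles to write it in exam language" possible; handing it nothing produces "made progress in Maths this week."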
Each of these taught us roughly the same lesson: AI works brilliantly when its job is narrow and well-defined, and works alongside a human. It struggles when it's asked to be the human.
The seven principles we ended up with
These are the rules we follow internally. We're publishing them because parents and teachers deserve to know what AI is doing inside the apps their children use.
AI doesn't replace teachers.
A teacher knows your child, in your child's class, in your child's school, in this term. We don't.
Our job is to give teachers better data — where the class is consistently struggling, which mark points are being missed, what to plan tomorrow's lesson around. Then we get out of the way.
AI doesn't replace tutors.
A tutor sitting next to a child for an hour sees things AI never will. Where they hesitate. Where they light up. How they handle pressure.
We give tutors better information about where the gaps are. The hour belongs to them.
AI doesn't replace parents.
A weekly summary doesn't substitute for sitting down with your kid and asking how Maths is going.
AI gives you something specific to ask about — “she understands the mechanism but struggles to write it in exam language” — so the conversation has somewhere to go. The conversation is still yours.
AI explains in a second way, not the first.
We teach with proven content first. AI is the assistant who steps in when a student gets stuck on the same question twice — different words, different angle, sometimes a quick game.
The mark scheme stays fixed. Only the teaching varies.
AI ships invisibly first.
We ship AI features in tracks, ordered by risk:
- Internal tools first — AI that improves our content, never visible to students
- Professional-facing next — AI that helps teachers and tutors prioritise
- Student-facing last — AI that touches the child only after the earlier tracks have proven themselves
Each track earns its place by data, not by date.
Until AI is reliably consistent, it's a perspective on marking — not the score.
Scoring in PrepWise is deterministic. Built from exam-board rubrics, mark point by mark point. The same answer scores the same on Monday, on Friday, in six months.
For now, AI sits next to the marker, not in its place. AI helps us spot patterns, flag where a student's answer is unusual, and tell us where our own scoring rules might need updating. The score itself comes from the rubric.
When AI marking becomes genuinely reliable — same answer, same score, every day, every model release — we'll revisit this. We're not against AI marking on principle. We're against unreliable marking on principle.
Every AI feature is killable.
We're not married to any AI feature. We're married to the outcome.
If a feature degrades the product, we turn it off. Every AI surface in PrepWise sits behind a flag we can flip.
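A flag-per-surface kill switch is simple enough to sketch. The flag names and the fallback path here are illustrative assumptions; the key design choice is that an unknown or unset flag defaults to off, so a misconfiguration never exposes an AI feature to students.

```python
# Minimal kill-switch sketch: every AI surface checks a flag before running.
# Flag names and the question shape are hypothetical, for illustration.
FLAGS = {"ai_reexplain_games": False}

def ai_enabled(feature: str) -> bool:
    # Default to off: an unknown flag never turns an AI surface on.
    return FLAGS.get(feature, False)

def reexplain(question: dict) -> str:
    if ai_enabled("ai_reexplain_games"):
        return f"[mini-game for {question['id']}]"  # stand-in for the AI path
    return question["standard_explanation"]         # deterministic fallback

q = {"id": "bio-osmosis-07", "standard_explanation": "Osmosis is..."}
reexplain(q)  # flag off → the proven, non-AI explanation
```

Flipping one boolean reverts the product to its pre-AI behaviour, which is what makes "if a feature degrades the product, we turn it off" an operation rather than a promise.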
What this looks like in PrepWise today
It's easy to write principles. Harder to ship them. Here's how the seven show up in the live product:
| What it does | How it actually works |
|---|---|
| Marks free-text answers | Deterministic scoring engine built from exam-board rubrics. Not AI. |
| Picks what to revise tomorrow | Rule-based adaptive routing. The rules choose the right questions; AI-assisted prioritisation is coming next. |
| Re-explains when stuck twice | AI-generated mini-games, served on demand. Shipping shortly to our beta cohort. The deterministic scorer still does the marking. |
| Translates progress for parents | Mark-point data → plain English. AI does the translation; the data is real. |
| Improves our own content | AI looks across thousands of student answers and tells us where our content needs fixing. Internal use only — students never see this. |
Coming next — AI-helped learning, shipped carefully
Three things we're building, in this order:
1. Re-explanation games when you're stuck twice.
Shipping shortly to our beta cohort. When a student gets the same question wrong twice, AI generates a quick mini-game — different angle, different format, sometimes a memory match, sometimes spot-the-lie. The score still comes from the rubric. The teaching is what varies.
2. Plain-English progress summaries for parents.
A short weekly receipt. AI translates mark-point-level data into one or two sentences a parent can act on, so the Sunday-evening conversation has somewhere to go.
3. Smarter suggestions for what to revise tomorrow.
Today the daily plan is rule-based. Solid, predictable, and it works. Soon AI will sit on top of it, learning from each student's gap pattern and suggesting where to focus next. The rules still pick the questions. AI helps prioritise the order.
Each one ships only after the one before it has earned its place — by data, not by date. Each one sits behind a flag we can flip if it doesn't help.
None of them touch marking.
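The third item's constraint, rules choose the questions and AI may only reorder them, can be sketched in a few lines. Everything here (function names, the question bank, the priority scores standing in for a model's suggestions) is a hypothetical illustration.

```python
# Sketch of "rules pick the questions, AI only reorders them".
# ai_priority scores are stand-ins for a model's suggestions.
def pick_questions(gaps: set, bank: list) -> list:
    # Rule layer: only questions targeting a known gap are eligible.
    return [q for q in bank if q["topic"] in gaps]

def plan(gaps: set, bank: list, ai_priority: dict = None) -> list:
    eligible = pick_questions(gaps, bank)
    if ai_priority:  # AI may reorder, but never add or remove a question
        eligible.sort(key=lambda q: ai_priority.get(q["id"], 0), reverse=True)
    return eligible

bank = [{"id": "q1", "topic": "osmosis"},
        {"id": "q2", "topic": "algebra"},
        {"id": "q3", "topic": "osmosis"}]
plan({"osmosis"}, bank, ai_priority={"q3": 0.9, "q1": 0.4})
# → q3 then q1; q2 is never included, with or without AI
```

Because the AI layer only sorts the list the rules produced, turning it off degrades gracefully to today's rule-based plan, and it can never route a student to an off-gap question.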
Why we're publishing this
Let's be clear. We're not saying AI is bad. We use it every day — to build PrepWise, to power the parts of the product that actually need it, to do the jobs we couldn't do alone. The boys benefit from it.
What we're saying is something different: getting the best out of a kid still takes a teacher who knows them and a parent who's paying attention. An app — AI-powered or not — can help. It can drill, it can track, it can flag the gaps, it can re-explain. It can't substitute for the people in the child's life who are doing the work alongside them.
We built PrepWise to be the tool in the middle. The thing that makes the daily revision work easier so the bigger conversations — the ones at the kitchen table, the ones in the classroom — can be richer.
Not the thing that quietly tries to replace them.
We thought it was worth writing this down because most apps don't.
The line, in one sentence: AI helps students learn, assists the people around them (teachers, tutors, parents) without replacing any of them, and keeps the score with the rubric until AI marking is reliably consistent.
See What We Built
PrepWise is live. Free during alpha. 5,800+ questions across 6 GCSE subjects, deterministic scoring, daily plan, parent dashboard.
If you work in EdTech, education publishing, or ed-adjacent investment — get in touch.
Alfie Crasto is the founder of PrepWise. He built it at his kitchen table for his twin boys Allen and Aaron, who are in Year 10. PrepWise is at prepwise.uk.