Building a More Reliable Sophia
- Byron McClure
- Nov 1
- 4 min read
What we're fixing and where we're headed
Look, I'm not going to sugarcoat it. October was rough.

If you've been using School Psych AI over the past month, you've experienced some frustrating moments. Maybe Sophia gave you an output that didn't quite match what you asked for. Maybe uploading a PDF felt like watching paint dry. Maybe you tried a prompt that used to work perfectly, and suddenly it didn't.
We saw it. We felt it. And we've been working around the clock to fix it.
What Went Wrong (And Why We're Talking About It)
Here's the thing: when you rely on a platform to help you do your job, and that platform starts acting unpredictably, it creates more than inconvenience. It creates uncertainty. It chips away at the trust we've built over the past two years with you.
That's on us. Full stop.
Some of the issues were within our control. Some weren't. But regardless of where the problems originated, as the CEO of School Psych AI, I am responsible for making things right.
You've built School Psych AI into your daily workflow. You've shared it with colleagues. You've trusted us with your most time-sensitive work.
We don't take that lightly.

The Three Things We Fixed
Over the past few weeks, Eduard has been laser-focused on three specific improvements. Here's what has been updated and what it means for you.
Making Sophia More Accurate
When we talk about accuracy, we mean grounding Sophia in the specific knowledge she needs to give you reliable responses. This matters most for assessment and report writing tools, where precision counts.
We've rebuilt how Sophia accesses subject-specific information. Now when you're working with assessments, she has better context for descriptions, qualitative descriptors, and the nuances that matter in your field. She better understands the difference between Average and Low Average. She knows what a 90 means versus an 89 on the WISC.
These distinctions might seem small, but they're everything when you're writing reports.
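To make that concrete, here's a minimal sketch of the kind of lookup involved, using the published Wechsler descriptive classification ranges. The exact bands and logic Sophia uses internally may differ; this just shows why one point can change the descriptor.

```python
# A minimal sketch of descriptor lookup using the standard WISC-V
# descriptive classification ranges. Sophia's internal logic may
# differ; this illustrates why 89 vs. 90 matters in a report.

def wisc_descriptor(standard_score: int) -> str:
    """Map a WISC-V standard score to its qualitative descriptor."""
    if standard_score >= 130:
        return "Extremely High"
    if standard_score >= 120:
        return "Very High"
    if standard_score >= 110:
        return "High Average"
    if standard_score >= 90:
        return "Average"
    if standard_score >= 80:
        return "Low Average"
    if standard_score >= 70:
        return "Very Low"
    return "Extremely Low"

print(wisc_descriptor(90))  # Average
print(wisc_descriptor(89))  # Low Average -- one point, different descriptor
```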
Reducing Wait Times
Remember uploading a PDF and having time to make coffee, drink it, and consider making another cup before the document finished processing? Yeah, we remember that too.
PDFs are notoriously difficult for machines to read accurately, especially forms like the Conners 4. We tested the same documents against every major AI platform (think ChatGPT, Gemini, and Claude), and they all struggled with the format.
So we built a new approach. Now Sophia guides the backend system in understanding what it's looking at, which means she can extract data more efficiently and accurately. We tested a 100-page document recently. It took about two minutes to process completely.
Two minutes for 100 pages, with accurate outputs. That's the standard we're holding ourselves to now.
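For the technically curious, the idea is roughly a two-pass pipeline: first identify what kind of form the document is, then extract against a pattern built for that specific form instead of guessing blindly. The sketch below is illustrative only; `identify_form`, `extract_scores`, and the simplified pattern matching are hypothetical stand-ins, not our actual backend code.

```python
import re

# Illustrative sketch only: hypothetical names and simplified logic,
# not our actual backend. Pass 1 identifies the form; pass 2 extracts
# using a pattern for that specific form instead of guessing blindly.

FORM_PATTERNS = {
    "Conners 4": r"(?P<scale>[A-Za-z]+)\s+T-score\s+(?P<t_score>\d+)",
}

def identify_form(page_text: str) -> str:
    """Pass 1: decide which known rating form this page comes from."""
    for form in FORM_PATTERNS:
        if form.lower() in page_text.lower():
            return form
    return "unknown"

def extract_scores(page_text: str, form: str) -> list:
    """Pass 2: pull scores using the pattern for the identified form."""
    pattern = FORM_PATTERNS.get(form)
    if pattern is None:
        return []
    return [m.groupdict() for m in re.finditer(pattern, page_text)]

page = "Conners 4 Parent Report. Inattention T-score 72. Hyperactivity T-score 65."
print(extract_scores(page, identify_form(page)))
# [{'scale': 'Inattention', 't_score': '72'}, {'scale': 'Hyperactivity', 't_score': '65'}]
```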
Improving Consistency
Accuracy and speed don't matter much if the results vary wildly from one use to the next. So we implemented internal testing protocols to ensure Sophia follows instructions consistently and recalls conversation history reliably.
We test edge cases repeatedly. If you enter a score that sits right on a boundary, we run that scenario 10, 20, sometimes 100 times to see how consistent the output remains. We focus heavily on the report writer tools since those are the most used.
Recent testing on the WISC report writer showed 100% consistent output. That's where we need to be, and that's what we're maintaining going forward.
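As a rough illustration, here's a minimal sketch of what a repeated-run boundary check looks like. `run_report_writer` is a hypothetical stand-in for the real tool call; the point is simply that we run the same input many times and measure how often the output agrees.

```python
from collections import Counter

# Minimal sketch of a repeated-run consistency check. run_report_writer
# is a hypothetical placeholder for the actual report writer call.

def run_report_writer(score: int) -> str:
    # Placeholder: in the real protocol this invokes the report writer.
    return "Average" if score >= 90 else "Low Average"

def consistency_check(score: int, runs: int = 100) -> float:
    """Return the share of runs that produced the most common output."""
    outputs = Counter(run_report_writer(score) for _ in range(runs))
    most_common_count = outputs.most_common(1)[0][1]
    return most_common_count / runs

print(consistency_check(90))  # 1.0 means 100% consistent output
```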
Prompting Matters More Now
Here's something important: with these updates, how you prompt Sophia has become even more critical.
If you've been prompting the same way you always have and getting different results, that's why. The system is now more precise, meaning it responds better to precise instructions.
Think about prompting like giving directions to a colleague or thought partner. If you said "summarize this," would they know what format you want? How much detail? What perspective to take?
Be specific. Tell Sophia whether you want bullets, paragraphs, or a narrative interpretation. Explain what type of response you're looking for. Give her context about what you're trying to accomplish. Instead of "summarize this," try something like: "Summarize this Conners 4 parent rating in two short paragraphs for the eligibility team, and flag any elevated scales."
The clearer your instructions, the better your output will be. And now, that relationship between prompt quality and output quality is more direct than ever.
What's Next: Stability First
We're not planning any major changes for a while. We've learned from October, and our focus now is simple: stability.
You need Sophia to produce consistent, quality outputs. That's what you fell in love with initially, and that's what we're committed to delivering.
We're monitoring everything more closely. The internal testing we mentioned? That's ongoing. We're catching issues proactively now, before you experience them.
We're staying on top of feedback from Liseth, our Customer Success Lead, and her team. We're watching system performance constantly.
Yes, we have ideas for new features and improvements. Those will come when the time is right. But right now, making sure you have a stable, dependable platform takes priority over everything else.
The Bottom Line
You need a platform that works when you need it most. One that gives you accurate results and keeps your workflow moving. That’s exactly what we’re focused on delivering.
Every update, fix, and improvement centers on one goal: reliability.
Your feedback matters. Keep sharing what’s working and what needs attention.
Each comment helps us build something stronger together.
We’re listening. We’re improving. We’re moving forward with you.
Byron M.