
AI Feedback for Math Tutors

  • Writer: Hannah Ngọc-Hân Đào
  • Mar 23
  • 4 min read



Background

Saga Education is a non-profit dedicated to closing the opportunity gap through evidence-based, high-impact tutoring. Currently, Saga generates rich AI insights from its interactive tutoring platform, such as transcript analysis, talk-time metrics, and “Glows and Grows” feedback to support tutor coaching and development.


Our research team partnered with Saga Education to better understand tutors’ workflows and identify opportunities to streamline their responsibilities. By improving platform efficiency, we aim to empower tutors to focus on what matters most: delivering high-quality, personalized instruction.


The Research Team


Serena Lao - Lead UX Researcher

Franklin Chen - UX Researcher

Lela Beal - UX Researcher

Rubab Rizvi - UX Research Assistant

Samantha Tsai - UX Research Assistant


Research Objectives

Our primary goal was to explore how AI-generated tutoring insights can be effectively delivered to tutors on Saga’s existing platform. We centered our study on three overarching research questions:

  • How can AI-generated insights be presented to tutors in a way that feels relevant, actionable, and not overwhelming?

  • What types of AI feedback (e.g., session-level, aggregate patterns, exemplar clips, coaching prompts) are most useful for tutors’ daily practice?

  • In what contexts (before, during, or after a session; weekly coaching cycle; professional development) do tutors prefer to engage with AI insights?


To achieve this, we focused on understanding tutor needs and preferences, validating user journeys, and producing wireframes and mockups for platform integration.


Methodology

We utilized a mixed-methods approach to test three key hypotheses:

H1

Tutors will report higher confidence when they receive personalized AI-driven feedback.

H2

Tutors will prefer summarized or aggregate insights (e.g., weekly highlights) over detailed per-session data.

H3

Tutors who use AI feedback regularly will show measurable improvements in tutoring quality and require less intensive coaching time.


Building on prior discovery work conducted by the Saga team, we crafted interview questions to delve deeper into understanding tutors’ workflows and needs.


Throughout each step of our research, Serena, our Lead UX Researcher, kept the Saga team updated with our progress. After incorporating feedback from the Saga team into our interview guide, we proceeded to conduct 40-minute semi-structured interviews with four Saga tutors recruited by the organization.


We coded interview transcripts to identify recurring themes, which informed our Jobs To Be Done (JTBD) experience map of Saga tutors. The experience map is broken down into four phases: preparing for a session, during the session, after the session, and professional development.


Based on these data-driven insights, we developed a series of wireframes and mockups for a redesigned Tutor Dashboard, optimized for clarity and ease of use.


Jobs To Be Done (JTBD) Scenario Map

Results


How did the research play out?


When we conducted user interviews with Saga tutors, we went in looking for UX data points. What we found instead were incredibly dedicated educators navigating a high-pressure environment where they often felt like they were learning on the job.


Our research played out through semi-structured interviews (n = 4). We did not simply ask the tutors about buttons, menus, or software; instead, we looked at the invisible moments. We looked at the stressful prep time before a call, the awkwardness of correcting a student in a group setting, and the frustration of feeling like feedback is generic. With an understanding of the tutors’ workflow, we sectioned our mockups into before, during, and after the session, with a separate focus on professional development (PD).


AI-Generated Lesson Plans

AI-Tailored Tutor Performances

What are the high-level takeaways?


Our findings revealed opportunities to better connect the data the AI generates with what tutors need to feel confident and supported in their practice. Across our interviews, tutors consistently described wanting more actionable, specific guidance from their feedback. Rather than being told to “be more engaging,” they were hungry for concrete help—such as a step-by-step breakdown of a complex word problem that they can use to guide a struggling student in real-time.


Beyond the feedback content itself, we uncovered a theme around how success metrics are interpreted. Several tutors found the talk-time ratio metric discouraging because it sometimes flagged their instructional work—like reading questions aloud or translating for students—as negative. Without additional context, the data could feel more like a critique than a coaching tool.


Furthermore, tutors expressed a deep, empathetic need for privacy in the classroom. They find it painful to call out a student’s mistake in a group setting and would prefer tools that allow them to offer a helping hand without putting a student on the spot in front of their peers.



Recommendations/Next Steps

Our team developed mockups that serve as concept explorations for a more supportive tutoring platform. Based on our findings, we would advise moving from generic monitoring toward empowered coaching. Specifically, we recommend:

Optional Post-Session Summaries

Tutors can choose to “Review Now” or “Email for Later,” helping prevent burnout between back-to-back sessions.

Practice/Sandbox Mode

Tutors can rehearse and refine their strategies in a low-stakes environment.

Micro-Learning Modules

Replace longer training sessions with brief, targeted tutorials (e.g., 3-minute lessons) that fit into tutors’ daily workflows.

Private Feedback Tools

Tutors can support individual students discreetly during group sessions.



Given the opportunity, the next step would be concept validation testing: the new design concepts we introduced (such as real-time feedback displays and private communication tools) need to be evaluated to ensure they support, rather than distract from, the tutoring experience.


As a team, we are moving forward with a “Human-in-the-Loop” philosophy: AI is most effective when it handles the heavy lifting of data aggregation while leaving instructional judgment and empathy to the tutor. Future designs would focus on reducing cognitive load during the lesson itself.


Impact

The most rewarding part of this project was seeing the shift in how we think about tutor success. Our work may spark a conversation about how AI can be used to celebrate tutor growth, helping tutors feel seen, supported, and adequately equipped to do the work they love. We have provided Saga Education with a research-informed roadmap that prioritizes educator confidence and retention.


Our findings highlight how addressing small friction points in feedback delivery can have a meaningful impact on the tutoring experience. We have opened a door for Saga to think about AI not simply as a reporting tool, but as a bridge to connect tutors and students in a more supportive ecosystem.

