The study developed and formalized the Q³ Framework, a publishable instructional and assessment model designed to improve how learners interact with AI tutors by structuring question formation, question quality control, and metacognitive measurement. The framework conceptualized learning with AI as a disciplined inquiry cycle in which (a) questions were intentionally sequenced along an inquiry ladder, (b) question quality was strengthened through evidence-based prompting moves, and (c) growth was quantified through rubric-scored performance and reflective artifacts. The study synthesized three strands of research: AI tutoring efficacy, including randomized controlled evidence suggesting that AI tutoring can increase learning gains while reducing time on task when prompts and scaffolds are intentionally designed; metacognition and self-regulated learning; and formative assessment practices that support self-monitoring and feedback uptake. In addition, the study incorporated pilot-ready instruments and baseline peer-tutor attitudinal data from a Holotutor-related presurvey to ground an applied assessment plan aligned with Q³. Proposed results indicated that consistent use of Q³ would shift learner-AI interactions away from answer-seeking and toward higher-order, evidence-seeking inquiry, while enabling transparent assessment through a Q³ rubric aligned with Universal Design for Learning (UDL), Bloom-aligned cognitive complexity, and feedback literacy constructs.
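The rubric-scored quantification mentioned above could be operationalized in many ways. As a purely illustrative sketch, a weighted rubric score for a single learner question might be computed as follows; the criterion names, weights, and 0-4 scale here are hypothetical assumptions for illustration, not elements of the published Q³ rubric.

```python
# Hypothetical sketch: combine per-criterion ratings into one rubric score.
# Criterion names, weights, and the 0-4 scale are illustrative assumptions,
# not part of the actual Q³ rubric.

RUBRIC_WEIGHTS = {
    "inquiry_level": 0.40,     # position on the inquiry ladder
    "evidence_seeking": 0.35,  # does the question ask for evidence or reasoning?
    "specificity": 0.25,       # is the question precisely scoped?
}

def rubric_score(ratings: dict[str, int]) -> float:
    """Combine 0-4 criterion ratings into a weighted score, also on a 0-4 scale."""
    if set(ratings) != set(RUBRIC_WEIGHTS):
        raise ValueError("ratings must cover exactly the rubric criteria")
    return sum(RUBRIC_WEIGHTS[c] * r for c, r in ratings.items())

score = rubric_score({"inquiry_level": 3, "evidence_seeking": 4, "specificity": 2})
print(round(score, 2))  # 0.40*3 + 0.35*4 + 0.25*2 = 3.1
```

Because the weights sum to 1.0, the combined score stays on the same 0-4 scale as the individual ratings, which keeps longitudinal comparisons of question quality straightforward.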
Implementation steps and strategic initiatives
The initiative described by Dr. Muhsinah Holmes Morris at Morehouse College provides a strong foundation for a structured implementation plan. The first priority is to establish a faculty-led working group that includes instructional designers, department leadership, and student representatives to formalize the approach described in the abstract. This group should develop a detailed implementation timeline covering the first two semesters, with clear milestones, resource requirements, and accountability structures. The abstract's core insight, that structuring question formation, question quality control, and metacognitive measurement improves how learners interact with AI tutors, should serve as the guiding principle for all implementation decisions.
A pilot phase should be launched in one or two courses or programs, allowing the team to test the approach in a controlled setting before broader rollout. The pilot should include clear entry and exit criteria, a structured feedback loop with participating students and faculty, and a mid-pilot review meeting to address emerging challenges. Resources, including technology subscriptions, faculty release time, and professional development support, should be secured before the pilot begins to avoid disruption. Documentation of the pilot process, covering what worked, what did not, and what was modified, will be essential for scaling the approach.
Following a successful pilot, the institution should develop a scaling plan that extends the approach to additional courses, programs, or student populations. This plan should include a faculty onboarding package, a peer coaching program pairing experienced implementers with new adopters, and a shared resource repository. The abstract's framing of learning with AI as a disciplined inquiry cycle, in which questions are sequenced along an inquiry ladder, strengthened through evidence-based prompting moves, and assessed through rubric-scored performance and reflective artifacts, suggests that scaling will require attention to both technical and cultural dimensions of change. Institutional leadership should signal commitment to the initiative through public recognition of participating faculty and students.
Sustainability requires embedding the approach in institutional planning and accreditation processes. Annual reviews of implementation data should inform continuous improvement, and findings should be shared with peer institutions through professional networks and publications. Partnerships with organizations such as the SMART Global Technology Innovation Center at Tennessee State University will provide ongoing support and amplify the initiative's impact beyond Morehouse College.