Loquat - Machine Learning to Enhance Teaching and Learning Through Balanced Classroom "Talk Time" [CIES 2024 Poster Presentation]

RTI’s latest mobile EdTech product, "Loquat", uses language-agnostic speech activity detection to determine when someone is talking and then passes the detected speech to a voice-type classifier to establish who is currently speaking. This allows the app to be used in different countries and settings, regardless of the instructional language. The mobile app, built for Android devices, is designed to be easy and intuitive to use. It provides timely classroom support for teachers in rural or urban areas who may not have access to a mentor or coach. The app can also be used as part of a standard coaching session in which the coach helps the teacher apply techniques for improving talk time balance in the classroom.

To use Loquat, the teacher records the audio of an entire lesson or chooses a particular activity to record. They then upload the recording through the app to the Loquat server. Once the recording is fully processed and analyzed, several reports become available to the teacher. Each report aims to help the teacher initiate self-reflection, apply immediate corrective actions, and improve talk time balance in the classroom (e.g., the proportion of teacher talk time versus student talk time, and of individual talk time versus group talk time). Loquat is designed as a tool that empowers the teacher to take immediate action, thereby allowing students to grow and improve their learning outcomes.

The app processes the audio using a modern, serverless, scalable cloud architecture. Only the data files used to build the graphs and charts shown in the on-device reports are saved; all classroom recordings are removed from the servers after analysis. There is no voice identification or voice-to-text transcription.

The Loquat application interface and user documentation are available in three languages: English, French, and Spanish. We will present results and a summary of the feedback collected from user testing in Guatemala and Senegal. Three additional pilots are scheduled for 2023, in Ghana, Bangladesh, and Tanzania. Our vision is for future iterations of Loquat to include language detection as well as additional functionality, such as voice identification models or affective models that help assess not only talk time but also time on task, classroom climate, language, and content. We are also interested in exploring other uses of the application, with an emphasis on inclusivity, in parent-teacher meetings and communities of practice.
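The sketches below illustrate, under stated assumptions, the kinds of steps described above; they are not Loquat's actual implementation. The first is a minimal sketch of the two-stage detection pipeline (speech activity detection followed by voice-type classification), assuming 16 kHz, 16-bit mono PCM audio, using the open-source webrtcvad library purely as an illustrative language-agnostic detector, and treating the voice-type classifier as a hypothetical callable (classify_voice_type) that maps audio frames to labels such as "teacher" or "student".

```python
# Illustrative two-stage pipeline: language-agnostic speech activity
# detection, then voice-type classification of the detected speech.
# Assumes 16 kHz, 16-bit mono PCM audio; webrtcvad and the label set
# are assumptions, not the models actually used by Loquat.
import webrtcvad

FRAME_MS = 30           # webrtcvad accepts 10, 20, or 30 ms frames
SAMPLE_RATE = 16000
BYTES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000 * 2   # 16-bit samples

def detect_speech_frames(pcm: bytes, aggressiveness: int = 2):
    """Yield (start_time_seconds, frame_bytes) for frames that contain speech."""
    vad = webrtcvad.Vad(aggressiveness)
    for i in range(0, len(pcm) - BYTES_PER_FRAME + 1, BYTES_PER_FRAME):
        frame = pcm[i:i + BYTES_PER_FRAME]
        if vad.is_speech(frame, SAMPLE_RATE):
            yield i / (SAMPLE_RATE * 2), frame

def label_speakers(pcm: bytes, classify_voice_type) -> list[tuple[float, str]]:
    """Pass each detected speech frame to a voice-type classifier.

    classify_voice_type is a hypothetical stand-in for Loquat's classifier:
    any callable mapping raw frame bytes to a label such as "teacher",
    "student_individual", or "student_group".
    """
    return [(start, classify_voice_type(frame))
            for start, frame in detect_speech_frames(pcm)]
```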
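The upload step from the app to the Loquat server could look roughly like the following sketch; the endpoint URL, request fields, and response schema here are hypothetical, as the abstract does not describe the actual API or authentication scheme.

```python
# Hypothetical sketch of uploading one recorded lesson or activity to the
# Loquat server for processing. The URL and field names are assumptions.
import requests

def upload_lesson_recording(audio_path: str, lesson_id: str) -> str:
    """Send one recording to the server and return its server-side ID."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            "https://loquat.example.org/api/recordings",   # hypothetical endpoint
            data={"lesson_id": lesson_id},
            files={"audio": audio_file},
            timeout=120,
        )
    response.raise_for_status()
    return response.json()["recording_id"]                 # assumed response field
```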
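The talk-time balance reported back to the teacher can be thought of as each speaker category's share of total detected speech time. The following sketch computes that proportion from labeled segments; the labels and data layout are assumptions, not Loquat's actual report schema.

```python
# Illustrative computation of talk time balance from labeled speech segments,
# each given as (label, duration_seconds). Labels are assumed, not Loquat's.
from collections import defaultdict

def talk_time_report(segments: list[tuple[str, float]]) -> dict[str, float]:
    """Return each label's share of total detected speech time."""
    totals: dict[str, float] = defaultdict(float)
    for label, duration in segments:
        totals[label] += duration
    grand_total = sum(totals.values()) or 1.0   # avoid division by zero
    return {label: seconds / grand_total for label, seconds in totals.items()}

# Example: a lesson with 26 minutes of detected speech.
report = talk_time_report([
    ("teacher", 18 * 60),
    ("student_individual", 5 * 60),
    ("student_group", 3 * 60),
])
# report["teacher"] is roughly 0.69, i.e. about 69% teacher talk time
```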
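Finally, the retention policy (keep only the data behind the charts, delete the raw recording after analysis) could be expressed as in the sketch below, assuming an AWS-style serverless handler with S3 object storage; the actual cloud provider, services, and analyze_recording function are assumptions.

```python
# Sketch of the stated retention policy: analyze the uploaded recording,
# store only the aggregated report data, then delete the raw audio.
# The use of S3/boto3 and the key layout are assumptions for illustration.
import json
import boto3

s3 = boto3.client("s3")

def handle_uploaded_recording(bucket: str, key: str, analyze_recording) -> dict:
    """Process one uploaded recording, keep only report data, delete the audio."""
    audio = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    report = analyze_recording(audio)            # e.g. talk-time proportions only
    s3.put_object(
        Bucket=bucket,
        Key=key.replace("recordings/", "reports/") + ".json",
        Body=json.dumps(report).encode("utf-8"),
    )
    s3.delete_object(Bucket=bucket, Key=key)     # no raw audio retained after analysis
    return report
```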