Developing a Virtual Reality System Integrated with Large Language Models for Real-Time Evaluation and Feedback to Improve Public Speaking Skills

Authors

Yuantu Chen¹ and Justin Dang², ¹USA, ²California State Polytechnic University, USA

Abstract

Public speaking is widely regarded as both difficult and anxiety inducing, which, in a society dominated by frequent communication and presentation, can be both problematic and prohibitive [1]. We propose a system that combines VR technology with AI large language models to help users practice their public speaking skills [2][3]. Users deliver a speech in a virtual environment, and an AI model grades the speech and provides notes for improvement. Building the system required overcoming several design challenges, including prompt engineering the LLM and transcribing user speech. To test the LLM, we performed an experiment in which it graded speeches of varying types. Analysis of the data supports the conclusion that the model evaluates user speeches consistently and at the expected quality, although further refinements to the AI model could improve its evaluations.
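The abstract describes prompting an LLM to grade a transcribed speech and return improvement notes. As an illustration only, a minimal sketch of such a grading step might look like the following; the OpenAI Chat Completions client, the model name, and the rubric wording are all assumptions, since the paper does not specify its provider, model, or prompts.

```python
# Hypothetical sketch of LLM-based speech grading.
# Assumes the OpenAI Chat Completions API; the provider, model,
# and rubric below are placeholders, not the authors' actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GRADING_PROMPT = (
    "You are a public speaking coach. Grade the following speech "
    "transcript from 1 to 10 on clarity, structure, and delivery, "
    "then give three concrete notes for improvement."
)

def grade_speech(transcript: str) -> str:
    """Send a speech transcript to the LLM and return its written feedback."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": GRADING_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "Good afternoon everyone. Today I want to talk about recycling..."
    print(grade_speech(sample))
```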

Keywords

Public Speaking, Virtual Reality, AI Feedback, Speech Evaluation

Volume 15, Number 17