Public Speaking VR

Completed

Research-grade VR simulation with custom editor tools and audio processing pipeline

Main view of the VR public speaking environment
VR environment with teleprompter interface
Secondary view of the VR public speaking environment
Simple teleprompter interface design
Alternative teleprompter interface design

Role: Technical Artist / Tools Developer

Built editor tools and a data pipeline for research data collection; also handled VR environment setup and the teleprompter UI implementation.

Overview

VR simulation for studying public speaking performance. Features three teleprompter UI variations, multiple speaking environments, and research-grade data collection. Presented at AHFE 2025 conference.

Editor Tool & Audio Pipeline

Built a custom Unity editor tool and pipeline for processing participant speech data:

Audio Capture

Record player speech as .wav files during VR session
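
The capture itself runs inside Unity (e.g. via the engine's Microphone API), but the on-disk format matters to the downstream steps. A minimal Python sketch of writing 16-bit PCM mono .wav, the kind of file the pipeline hands to Whisper (the sample rate, helper name, and file name are illustrative assumptions):

```python
import math
import struct
import wave

def write_wav(path: str, samples: list, rate: int = 16000) -> None:
    """Write float samples in [-1.0, 1.0] as 16-bit PCM mono .wav."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono speech recording
        w.setsampwidth(2)     # 16-bit samples
        w.setframerate(rate)  # 16 kHz is ample for speech
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

# One second of a 440 Hz test tone standing in for recorded speech.
tone = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
write_wav("session_01.wav", tone)
```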

Whisper AI Integration

Editor sends audio to Python script using Whisper for speech-to-text
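
A sketch of the Python side of that handoff, assuming the open-source `openai-whisper` package (the model size and function name are illustrative):

```python
def transcribe(wav_path: str, model_size: str = "base") -> str:
    """Speech-to-text for one recorded session .wav via Whisper."""
    import whisper  # deferred import: pip install openai-whisper
    model = whisper.load_model(model_size)  # downloads weights on first use
    result = model.transcribe(wav_path)
    return result["text"].strip()
```

The Unity editor tool can invoke a script like this through a subprocess call, passing the recording's path and reading the transcription back from stdout.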

Speech Comparison

Python compares transcription against original script for accuracy
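
One straightforward way to score accuracy is a word-level sequence match between the script and the transcription. The helper below is a hedged sketch using the standard-library `difflib` (the function name and example sentences are assumptions), returning a 0–1 similarity:

```python
import difflib

def word_accuracy(script: str, transcript: str) -> float:
    """Fraction of words matched between the script and the Whisper output."""
    a = script.lower().split()
    b = transcript.lower().split()
    # ratio() = 2 * matched_words / (len(a) + len(b))
    return difflib.SequenceMatcher(None, a, b).ratio()

score = word_accuracy(
    "welcome everyone and thank you for coming",
    "welcome everyone and thank you for calling",
)
# 6 of 7 words match in each direction, so score = 12/14
```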

Excel Export

Results automatically exported to Excel for research analysis
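
A minimal sketch of the export step using the standard-library `csv` module, which Excel opens directly (the actual tool may write native .xlsx; the column names and values here are assumptions):

```python
import csv

# One row per participant session, as produced by the comparison step.
results = [
    {"participant": "P01", "ui": "Teleprompter A", "accuracy": 0.91},
    {"participant": "P02", "ui": "Teleprompter B", "accuracy": 0.87},
]

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant", "ui", "accuracy"])
    writer.writeheader()      # header row for the analysis spreadsheet
    writer.writerows(results)
```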

VR Features

3 Teleprompter UIs

Different designs tested for usability

3 Speaking Scenes

Varied environments for testing

VR Navigation

Movement and teleportation

Research

  • Pilot study with 10 participants
  • Presented at AHFE 2025 (Applied Human Factors and Ergonomics), Orlando
  • Research-grade data collection for academic use

Tools Used

Unity · C# · Python · Whisper AI · Excel · VR