Vibrant Voices

Better Understand the World

Created on 11th February 2024

The problem Vibrant Voices solves

This Arduino-connected device attaches a microphone to the user, records the speech they hear from another person, and interprets the emotion behind it, displaying the result as text on the user's phone. It is designed primarily for autistic or mentally disabled individuals who struggle to understand the emotion behind others' words, especially in a school or professional setting. We call our product Vibrant Voices.

Vibrant Voices is a voice-to-text and text-analysis program that uses an AI model designed to understand the feelings behind human speech to generate a description of the speaker's emotions. We see the need for our product in a target audience of the roughly one in 44 adults and one in 36 children on the autism spectrum who face difficulties due to hindered emotion recognition, in the widespread use of text, which lacks tone indication, and even in training other machine learning models to replicate or be sensitive to emotion in audio and text prompts. To increase accessibility, the program is being prepared for direct integration with an Arduino, a microphone, a speaker, and a React Native application.
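The pipeline described above can be sketched in a few lines of Python. This is only an illustrative stand-in, not the team's actual code: it assumes a transcript has already been produced by a speech-to-text step, and it substitutes a hypothetical keyword-based scorer for the real AI emotion model.

```python
# Illustrative sketch of the Vibrant Voices pipeline: transcript in,
# emotion label and user-facing description out. The keyword table and
# classify_emotion() are hypothetical placeholders for the AI model.

EMOTION_KEYWORDS = {
    "happy": ["great", "wonderful", "thanks", "love"],
    "angry": ["unacceptable", "never", "fault"],
    "sad": ["sorry", "miss", "unfortunately"],
}

def classify_emotion(transcript: str) -> str:
    """Return the emotion whose keywords appear most often, else 'neutral'."""
    words = transcript.lower().split()
    scores = {
        emotion: sum(words.count(kw) for kw in keywords)
        for emotion, keywords in EMOTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def describe(transcript: str) -> str:
    """Produce the text that would be displayed on the user's phone."""
    return f'"{transcript}" sounds {classify_emotion(transcript)}.'

print(describe("that was a wonderful presentation"))
```

In the real product the keyword scorer would be replaced by the trained model, and the returned description would be pushed to the React Native app.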

Challenges we ran into

In developing this project the team overcame many hurdles. While building the AI backend, we struggled to find a proper development environment that could interact with the model. Tools such as Taipy, Android Studio, and React would not work reliably on our computers and often made integration of the frontend and backend difficult. In working with the Arduino, limitations on how we could modify the components forced the group to find alternative approaches: for example, we were unable to solder on a wave shield, and further brainstorming led us to use a music maker shield instead. Additionally, coding the Arduino to work cohesively with the AI backend proved challenging when connecting the recorded audio files to the Python analysis script. Our first attempt strayed from the intended use of the imported Arduino library, which cost a lot of debugging time and left less time to optimize our integration.

Tracks Applied (1)

Generative AI - Presented By Microsoft

This project fits into the Generative AI track due to its use of text generation and voice-to-text AI analysis. These fe...
