ZenFren

A warm hug for your soul

Describe your project

ZenFren aims to be an emotionally intelligent conversational agent, offering voice interaction, guided exercises, journaling, and mood tracking, while prioritising anonymous access and a friendly interface.

In-Scope:
ZenFren provides empathetic mental health support by combining Gemini’s language understanding with a fine-tuned BERT model for real-time emotion detection, giving the assistant emotional awareness and context-aware responses. It includes guided breathing exercises, meditation sessions, and journaling/mood-tracking through Google Docs and Sheets integrations. Accessibility is enhanced with speech-to-text and text-to-speech options. ZenFren also detects crisis situations, such as mentions of self-harm, and advises users to seek professional help, while maintaining an anonymous usage model to safeguard user privacy. The platform is built with NextJS for the frontend and Python for the backend, keeping it scalable and responsive under high user loads.
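As an illustration of the emotion-aware response flow described above (not the exact production code), the sketch below runs each incoming message through a BERT-style emotion classifier and feeds the detected emotion into the prompt sent to Gemini via the google-generativeai SDK. The model checkpoint, prompt wording, and helper function are assumptions for this example.

```python
from transformers import pipeline
import google.generativeai as genai

# Hypothetical checkpoint and key handling; substitute the real fine-tuned model and credentials.
emotion_classifier = pipeline(
    "text-classification",
    model="your-org/zenfren-bert-emotion",  # placeholder for the fine-tuned BERT emotion model
)
genai.configure(api_key="YOUR_GEMINI_API_KEY")
gemini = genai.GenerativeModel("gemini-1.5-flash")

def reply_with_empathy(user_message: str) -> str:
    """Detect the user's emotion, then ask Gemini for an emotion-aware reply."""
    emotion = emotion_classifier(user_message)[0]["label"]  # e.g. "sadness", "joy"
    prompt = (
        "You are ZenFren, a warm and supportive companion.\n"
        f"The user currently seems to be feeling: {emotion}.\n"
        f"User: {user_message}\n"
        "Respond with empathy, and gently suggest a breathing or journaling "
        "exercise if it seems helpful. Never give medical advice."
    )
    return gemini.generate_content(prompt).text
```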

Out-Of-Scope:
ZenFren does not offer direct crisis intervention or connect users to live therapists or emergency services. It lacks advanced cognitive behavioral therapy (CBT) modules, clinical assessments, video-based therapy, and telehealth integrations. Insurance provider integrations and gamified interaction models are not included. The current version does not support features like community-based discussions or reward-based motivation systems for engagement.

Future Opportunities:
Future opportunities include integrating advanced CBT modules, telehealth services, and partnerships with mental health professionals for live support. Personalized mental health recommendations based on user activity patterns and enhanced crisis detection with more advanced models are potential additions. Introducing community-building features, gamified interactions, and real-time mood analytics could further increase engagement.

Challenges we ran into

During the development of ZenFren, we faced significant challenges in managing conversation histories to maintain context across multiple interactions. Storing and processing lengthy histories in memory caused performance bottlenecks, leading to slower response times and a disrupted conversation flow. To resolve this, we integrated LangChain’s session management and memory caching capabilities into our backend, creating a dedicated session for each user so histories could be stored efficiently and retrieved quickly. By caching recent history and archiving it once it reached a defined length, we optimized memory usage while keeping relevant past interactions easily accessible. This reduced the overall memory footprint, allowing ZenFren to handle more concurrent users without compromising performance, and kept responses contextually coherent even during extended conversations, ultimately enhancing the user experience.
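As a rough sketch of that per-session history handling (plain Python standing in for our LangChain-based backend, with the session store, archive location, and length threshold all assumed for illustration):

```python
import json
from collections import defaultdict
from pathlib import Path

MAX_ACTIVE_TURNS = 20                 # assumed threshold before archiving
ARCHIVE_DIR = Path("chat_archive")    # hypothetical archive location
ARCHIVE_DIR.mkdir(exist_ok=True)

# In-memory cache of active conversation histories, keyed by session id.
active_histories: dict[str, list[dict]] = defaultdict(list)

def record_turn(session_id: str, role: str, text: str) -> None:
    """Append a message to the session history, archiving older turns once it grows too long."""
    history = active_histories[session_id]
    history.append({"role": role, "text": text})

    if len(history) > MAX_ACTIVE_TURNS:
        # Move the oldest half of the conversation to disk to keep memory usage bounded.
        split = MAX_ACTIVE_TURNS // 2
        archived, active_histories[session_id] = history[:split], history[split:]
        with (ARCHIVE_DIR / f"{session_id}.jsonl").open("a", encoding="utf-8") as f:
            for turn in archived:
                f.write(json.dumps(turn) + "\n")

def recent_context(session_id: str) -> str:
    """Build the prompt context from the still-cached portion of the history."""
    return "\n".join(f'{t["role"]}: {t["text"]}' for t in active_histories[session_id])
```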

Another challenge arose while integrating speech-to-text (STT) and text-to-speech (TTS) functionality, which initially introduced latency that disrupted the flow of conversations. To fix this, we optimized the API calls, used parallel processing, and reconfigured the audio pipeline to run asynchronously, so ZenFren can provide immediate feedback while STT and TTS continue in the background. This decoupling eliminated noticeable delays and reduced latency, making ZenFren responsive and natural to use for voice-based interactions.
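A minimal sketch of that asynchronous decoupling, using Python's asyncio with placeholder STT/TTS and reply coroutines (the function names and timings are assumptions, not our actual service integrations):

```python
import asyncio

async def transcribe_audio(audio_chunk: bytes) -> str:
    """Placeholder STT call; in ZenFren this would hit the speech-to-text service."""
    await asyncio.sleep(0.2)  # simulated network latency
    return "I've been feeling stressed lately."

async def generate_reply(text: str) -> str:
    """Placeholder for the Gemini-backed reply generation."""
    await asyncio.sleep(0.3)
    return "That sounds tough. Want to try a short breathing exercise together?"

async def synthesize_speech(text: str) -> bytes:
    """Placeholder TTS call; runs in the background while the text reply is already shown."""
    await asyncio.sleep(0.4)
    return b"<audio bytes>"

async def handle_voice_turn(audio_chunk: bytes) -> None:
    transcript = await transcribe_audio(audio_chunk)
    reply = await generate_reply(transcript)
    print("ZenFren (text):", reply)                            # immediate textual feedback
    tts_task = asyncio.create_task(synthesize_speech(reply))   # TTS continues without blocking
    audio = await tts_task                                     # play the audio once it is ready
    print("ZenFren (audio):", len(audio), "bytes ready")

asyncio.run(handle_voice_turn(b"<mic audio>"))
```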

Tracks Applied (1)

20. Gemini-Enhanced AI for Mental Health & Emotional Support

How does this solve the challenge? ZenFren uses Gemini’s language understanding and a BERT emotion classifier for empath...