1v1 Coding Battle Arena
Real-time 1v1 coding battles to boost your DSA skills through fair matchmaking, smart evaluation, and competitive rewards.
Created on 5th April 2025
The problem 1v1 Coding Battle Arena solves
This platform is designed to help students improve their Data Structures and Algorithms (DSA) skills by introducing a competitive yet supportive environment. It simplifies and enhances the traditional coding practice routine by allowing users to participate in real-time 1v1 coding battles. By matching players with similar Elo ratings, it ensures fair competition that aligns with individual skill levels, making the learning process both engaging and balanced.

One of its key features is the ability to submit solutions in pseudocode, which removes language barriers and puts the spotlight on pure problem-solving ability. This is particularly helpful for students who may be more comfortable expressing logic without syntax constraints. A machine learning model evaluates these pseudocode submissions, providing consistent and intelligent feedback.

Additionally, the gamified experience, featuring ranks, rewards, and coins, adds an extra layer of motivation, encouraging regular participation and continuous growth. Ultimately, this platform transforms DSA practice into a dynamic, accessible, and rewarding experience for all learners.
Challenges we ran into
The major challenge we faced during this project was accurately analyzing and evaluating pseudocode with a machine learning model. Initially, the model lacked well-defined evaluation criteria, which led to inconsistent and unreliable assessments of the submitted solutions. Since pseudocode doesn't follow a strict syntax or structure, it was difficult for the model to interpret the logic effectively.

To overcome this, we implemented a workaround: the submitted pseudocode is first converted into equivalent C code, which is then compiled and run against predefined test cases. This approach preserves the language-agnostic nature of the platform while still ensuring accurate evaluation through a structured, testable format.
Technologies used