10/12/2025
Built at HuddleHive's WIT Hackathon #4
What is the problem you are trying to solve? Who does it affect?
Limited access and high cost: Personal trainers are expensive and not readily available everywhere.
Inconsistent quality: The effectiveness of training depends heavily on the individual trainer’s expertise — no single trainer can master every domain.
Lack of personalisation in digital solutions: Fitness apps and video platforms provide standardised, pre-recorded sessions that lack real-time feedback, personalisation, and form correction.
The opportunity: Can emerging technologies help automate and personalise the experience of a human personal trainer?
What is your idea? How does it fix the problem?
Personalised AI Fitness Coach:
A GenAI-powered LLM agent that interacts with users to understand their preferences, fitness level, history, injuries, and goals — and generates a fully tailored training program. (Demo: Personalised plan generation: https://youtu.be/qL_lOWl5SA4)
Real-Time Adaptive Feedback:
Users can provide instant feedback during workouts (e.g., “too hard,” “too easy,” “felt pain”), enabling the AI to adjust exercises, intensity, or provide suggestions like posture corrections or weight adjustments. (Demo: https://youtu.be/qL_lOWl5SA4)
Intelligent Movement Tracking:
Visual body tracking (powered by OpenPose) runs in the background to count reps automatically and monitor performance. (Demo: Rep-counting in action: https://youtu.be/5OpKRXu7tgg)
Posture Correction & Voice Guidance:
A posture-check feature allows users to see a visual overlay of their movement with tracking lines, while voice feedback guides form, counts reps, and provides real-time coaching (Voice Demo: https://youtu.be/icFzZ0wX9z8).
Our system is built with separate components - HTML for structure, Python for backend logic, and React with TypeScript for the frontend - all currently running locally. The frontend communicates with the backend to handle user interactions and motion tracking. At present, AI responses are hardcoded, but in the full implementation, an LLM API will power an intelligent agent that connects to a database of exercises and recommendations to generate personalised training plans based on user goals and context. The motion detection module runs on the backend, continuously tracking movements to count reps; posture visualisation is rendered on-screen only when the user enables the posture correction feature.
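The hardcoded AI layer described above can be sketched roughly as follows. This is an illustration only: the names (`generate_plan`, `handle_feedback`, the lookup tables) and the canned responses are hypothetical, and in the full implementation the tables would be replaced by an LLM API call backed by the exercise database.

```python
# Sketch of a hardcoded AI layer standing in for the future LLM agent.
# All names and canned strings here are illustrative, not the real backend.

HARDCODED_PLANS = {
    "beginner": ["bodyweight squats: 3x10", "wall push-ups: 3x8", "plank: 3x20s"],
    "intermediate": ["goblet squats: 4x10", "push-ups: 4x12", "plank: 3x45s"],
}

FEEDBACK_RULES = {
    "too hard": "Reduce the weight or switch to an easier variation.",
    "too easy": "Increase reps or add resistance next set.",
    "felt pain": "Stop the exercise and consult a professional before continuing.",
}

def generate_plan(fitness_level: str) -> list[str]:
    """Return a canned plan; an LLM agent would generate this dynamically
    from the user's goals, history, and injuries."""
    return HARDCODED_PLANS.get(fitness_level, HARDCODED_PLANS["beginner"])

def handle_feedback(message: str) -> str:
    """Map in-workout feedback to a canned coaching suggestion."""
    return FEEDBACK_RULES.get(message.lower(), "Noted - keep going!")
```

Keeping this layer behind two small functions means the later swap to a live LLM API only touches their bodies, not the frontend that calls them.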
What did you struggle with? How did you overcome it?
Integrating the LLM Agent:
Building an LLM agent that communicates with a custom exercise database proved challenging, as the database itself had to be created and most LLM APIs are paid services. While free options like DeepSeek and LLaMA were available, the team lacked prior experience with them, making setup time-consuming.
Solution: For demonstration purposes, we hardcoded the AI interactions to showcase the concept effectively.
Motion Detection Calibration:
The motion tracking module didn’t function correctly out of the box.
Solution: We manually tuned the angle thresholds until arm-raise detection became accurate.
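The kind of angle-threshold rep counting we had to calibrate can be sketched as below. The keypoint coordinates would come from a pose estimator such as OpenPose; the 150°/60° thresholds and all names here are illustrative stand-ins, not our tuned values.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by points a-b-c
    (e.g. shoulder-elbow-wrist keypoints from a pose estimator)."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

class RepCounter:
    """Two-state machine: a rep counts when the joint angle crosses the
    'up' threshold and then returns below the 'down' threshold.
    The thresholds are exactly the values that needed manual tuning."""

    def __init__(self, up_threshold=150.0, down_threshold=60.0):
        self.up_threshold = up_threshold
        self.down_threshold = down_threshold
        self.stage = "down"
        self.reps = 0

    def update(self, angle: float) -> int:
        if angle > self.up_threshold and self.stage == "down":
            self.stage = "up"
        elif angle < self.down_threshold and self.stage == "up":
            self.stage = "down"
            self.reps += 1
        return self.reps
```

The state machine ignores jitter between the two thresholds, which is why getting the threshold values right matters so much for accurate counting.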
Time Constraints and Debugging:
Limited time prevented full debugging and refinement.
Solution: We focused on demonstrating the working features rather than unfinished ones.
Scope Management:
To stay on schedule, we deprioritised “nice-to-have” features such as specialised coaching (e.g., gender or cycle-based plans), integrated videos and voice-overs, gamification elements, and feasibility or data privacy assessments.
What did you learn? What did you accomplish?
None of us had prior experience with computer vision or machine learning, so we learned how to quickly explore and adapt complex technologies to build a functional demo.
We developed skills in collaboration and task division, coordinating effectively across different components of the project even when full integration wasn’t achieved.
Ultimately, we translated a challenging concept into a working prototype, demonstrating key features and proving the feasibility of our idea within limited time and resources.
What are the next steps for your project? How can you improve it?
Integrate a live LLM API: Replace hardcoded responses with a fully functional AI agent capable of generating personalised training plans and adapting in real time.
Develop and connect a database: Build a structured exercise database with metadata (e.g., difficulty, muscle groups, equipment) for dynamic plan generation.
Enhance motion tracking: Improve accuracy of pose detection and expand to full-body tracking, including automatic form correction and injury prevention insights.
Add multimedia coaching: Incorporate voice guidance, video demonstrations, and feedback overlays for a more immersive user experience.
Include personalisation features: Introduce advanced options such as gender-based training, cycle tracking, or adaptive difficulty levels.
Address feasibility and privacy: Evaluate data protection, risk management, and the ethical use of body-tracking and health-related data.
Improve scalability and reduce cost: Optimise the system for cloud deployment, explore lighter open-source models, and manage API usage efficiently to ensure affordability and smooth performance as the user base grows.
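A rough sketch of what one record in the planned exercise database might look like, and the kind of query a plan-generating agent could run against it. All field and function names here are hypothetical; only the metadata categories (difficulty, muscle groups, equipment) come from the plan above.

```python
from dataclasses import dataclass, field

@dataclass
class Exercise:
    """Hypothetical record in the planned exercise database."""
    name: str
    difficulty: int                 # e.g. 1 (easiest) to 5 (hardest)
    muscle_groups: list[str]
    equipment: list[str] = field(default_factory=list)  # empty = bodyweight

def filter_exercises(catalog, max_difficulty, available_equipment):
    """Select exercises the user can actually perform - the kind of query
    a plan-generating LLM agent could run to build a tailored program."""
    return [
        e for e in catalog
        if e.difficulty <= max_difficulty
        and all(item in available_equipment for item in e.equipment)
    ]
```

For example, a beginner with no equipment would only be offered bodyweight exercises at or below their difficulty level, while the same query with a fuller equipment list would return the wider catalogue.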