10/12/2025
Built At
HuddleHive's WIT Hackathon #4
What is the problem you are trying to solve? Who does it affect?
Many people want to exercise regularly but face real barriers: the cost of personal trainers, limited or inconvenient gym access, time constraints, mobility or health limitations, and social anxiety about training in public. Without live feedback, beginners and even experienced exercisers often use poor form, which reduces effectiveness and increases the risk of injury. This problem reduces motivation and makes it harder for people to build lasting fitness habits.
It affects budget-conscious users who can’t afford in-person coaching, people with busy schedules or limited mobility, beginners who need clear guidance, and anyone who prefers training privately. It also helps those in rural areas or places with limited facilities, plus users recovering from injury who need safer, guided exercise.
What is your idea? How does it fix the problem?
FormaFit provides affordable, on-demand AI coaching that watches your form, counts reps, and gives concise corrective cues in real time so workouts are safer and more effective. It combines pose detection and rep-counting with a conservative feedback layer (e.g., “raise chest,” “straighten back”) and a confidence signal that asks the user to adjust the camera or slow down if detection is uncertain.
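The conservative feedback layer described above can be sketched as a simple gating function: if detection confidence is low, ask the user to fix the setup instead of guessing. The thresholds, cue strings, and input signals below are illustrative assumptions, not FormaFit's actual values.

```python
# Hypothetical sketch of the conservative feedback layer: confidence gating
# first, then simple rule-based corrective cues. Thresholds are illustrative.

def feedback_for_frame(keypoint_conf, back_angle_deg, chest_height_ratio):
    """Return one short cue for a frame, or a camera prompt when uncertain.

    keypoint_conf: mean detection confidence across tracked keypoints (0-1)
    back_angle_deg: deviation of the spine from vertical, in degrees
    chest_height_ratio: chest keypoint height relative to a reference height
    """
    if keypoint_conf < 0.6:
        # Low confidence: prompt the user rather than risk a wrong correction.
        return "Adjust the camera or slow down - tracking is uncertain"
    if back_angle_deg > 25:
        return "straighten back"
    if chest_height_ratio < 0.9:
        return "raise chest"
    return "good form"
```

Gating on confidence before emitting cues is what keeps the feedback conservative: a wrong correction is worse than a request to reposition the camera.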
Beyond live correction, FormaFit helps build habits and motivation: users can create and save personalized routines, track session summaries and form scores over time, and view a weekly scoreboard of reps to encourage friendly competition and accountability. Progressive adjustments to routines, and simple social features (leaderboards, challenges) keep people engaged.
How do all the pieces fit together? Does your frontend make requests to your backend? Where does your database fit in?
The frontend captures the user’s camera feed and runs the fast, per-frame work (or at least extracts keypoints), ideally using MediaPipe/TF.js locally to keep latency low and privacy high. It then sends compact keypoint frames or per-rep events to the backend via REST or a realtime channel (WebSocket/WebRTC). The backend (Flask) receives those keypoints, runs higher-level logic (rep detection, form checks, scoring), persists session metadata to the database, updates routines and leaderboards, and returns any server-side confirmations or aggregated feedback.
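A minimal sketch of the backend side of this flow might look like the following. The route name, payload fields, and in-memory store are hypothetical placeholders (a real deployment would use the project's actual schema and a database), but the shape matches the description above: the frontend posts compact per-rep events, and the server aggregates and responds.

```python
# Hypothetical Flask endpoint accepting per-rep events from the frontend.
# Route name and payload fields are illustrative, not the project's real API.
from flask import Flask, request, jsonify

app = Flask(__name__)
sessions = {}  # session_id -> list of rep events; a real app persists to a DB

@app.route("/api/rep-event", methods=["POST"])
def ingest_rep_event():
    payload = request.get_json()
    event = {
        "exercise": payload["exercise"],
        "reps": payload["reps"],
        "form_score": payload.get("form_score"),  # optional per-rep score
    }
    sessions.setdefault(payload["session_id"], []).append(event)
    # Server-side aggregation the frontend can display (and the leaderboard uses).
    total_reps = sum(e["reps"] for e in sessions[payload["session_id"]])
    return jsonify({"ok": True, "total_reps": total_reps})
```

Keeping only compact events (not video) on this channel is what preserves the latency and privacy benefits of doing pose extraction client-side.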
What did you struggle with? How did you overcome it?
Building a real-time AI trainer with accurate form checks and rep counting felt impossible at first; we had tight time constraints and tricky third-party API wiring to sort out. Instead of letting those blockers stop us, we focused on the hard technical core and got it working: reliable pose/keypoint extraction, robust rep-counting logic, a backend that safely stores sessions and leaderboards, confidence signals so the app knows when detection is uncertain, and server-side sanity checks to keep the scoreboard honest.
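The robust rep-counting logic mentioned above can be sketched as a small state machine over a smoothed joint angle (e.g. the knee angle for squats). The thresholds below are illustrative, not the project's tuned values; the hysteresis gap between "down" and "up" is what keeps noisy frames from double-counting.

```python
# Sketch of rep counting as a two-state machine over a joint angle.
# Thresholds are illustrative; the gap between them provides hysteresis.

class RepCounter:
    def __init__(self, down_thresh=100.0, up_thresh=160.0):
        self.down_thresh = down_thresh  # below this angle counts as "down"
        self.up_thresh = up_thresh      # above this angle counts as "up"
        self.state = "up"
        self.reps = 0

    def update(self, angle_deg):
        """Feed one smoothed joint angle per frame; return the rep count."""
        if self.state == "up" and angle_deg < self.down_thresh:
            self.state = "down"
        elif self.state == "down" and angle_deg > self.up_thresh:
            self.state = "up"
            self.reps += 1  # a full down-then-up cycle completes one rep
        return self.reps
```

Feeding it a descending-then-ascending angle sequence (e.g. 170, 150, 90, 95, 150, 170) yields exactly one rep, even though the angle crosses intermediate values several times.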
What we overcame matters: with that foundation in place, the system can already detect reps, give corrective cues, and persist session summaries. Those are all the pieces a polished UI and chatbot would plug into. While the full front-end screens and a conversational coach remain future polish, we solved the central engineering challenges (latency, detection reliability, privacy, and verification), so finishing the UI and safely integrating the chatbot is now a straightforward, well-scoped next step rather than a research problem.
What did you learn? What did you accomplish?
We successfully built a fully functional real-time exercise trainer that takes webcam input and turns it into live workout feedback. The system detects body pose, classifies exercises like squats, planks, curls, and pushups, counts reps, and even gives corrective cues, all while streaming annotated video back to the user through a simple web interface. Each exercise lives in its own module, making the system clean, modular, and easy to expand.
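The per-exercise module structure described above could look roughly like this. Class and method names are illustrative of the pattern, not the project's exact code: each exercise implements one analysis interface, and a registry maps exercise names to modules so new movements slot in without touching the dispatcher.

```python
# Illustrative sketch of the per-exercise module pattern (names hypothetical).

class Exercise:
    """Base interface every exercise module implements."""
    name = "base"

    def analyze(self, signals):
        """Return (rep_event: bool, cues: list[str]) for one frame's signals."""
        raise NotImplementedError

class Squat(Exercise):
    name = "squat"

    def analyze(self, signals):
        cues = []
        if signals["back_angle"] > 25:
            cues.append("straighten back")
        rep_event = signals["knee_angle"] < 100  # simplistic depth check
        return rep_event, cues

# A registry keyed by name keeps dispatch trivial and extension cheap:
# adding lunges or shoulder presses means adding one class to this tuple.
EXERCISES = {cls.name: cls() for cls in (Squat,)}
```

This is the property that makes adding movements like lunges or shoulder presses "minimal rework", as noted later in the writeup: only the new module changes.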
Through this process, we learned that raw pose data is often unstable, so normalization and smoothing are essential for consistency. We also discovered that a lightweight KNN classifier paired with simple rule-based logic is not only fast but surprisingly reliable and interpretable for a prototype.
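The normalization and smoothing we found essential can be sketched as hip-centred, torso-scaled keypoints followed by a short moving average. Landmark indices and the window size below are illustrative assumptions, not the project's exact choices.

```python
# Sketch of pose normalization (translate to hip origin, scale by torso
# length) and temporal smoothing (moving average). Constants are illustrative.
import numpy as np

def normalize_pose(keypoints, hip_idx=0, shoulder_idx=1):
    """Translate so the hip is the origin and scale by torso length."""
    kp = np.asarray(keypoints, dtype=float)   # shape (num_joints, 2)
    centred = kp - kp[hip_idx]
    torso = np.linalg.norm(kp[shoulder_idx] - kp[hip_idx])
    return centred / max(torso, 1e-6)          # guard against zero torso

def smooth(frames, window=5):
    """Average the last `window` frames of normalized poses."""
    frames = np.asarray(frames, dtype=float)   # shape (frames, joints, 2)
    return frames[-window:].mean(axis=0)
```

Normalizing before classification is what lets the same KNN model work regardless of where the user stands or how far they are from the camera; smoothing then damps the frame-to-frame jitter in the raw detector output.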
Overall, we proved that real-time form tracking and feedback are achievable with accessible tools like MediaPipe, OpenCV, NumPy, scikit-learn, and Flask. With the core pipeline complete, we now have a strong foundation for more advanced features like UI polish, leaderboards, and AI coaching.
What are the next steps for your project? How can you improve it?
The core intelligence of the system is already in place: it can track reps, detect bad form, store progress, and even flag low-confidence predictions. The next step is to layer the user-facing experience on top of that engine. That includes building a polished front-end with clear progress visuals, streaks, and friendly workout summaries, as well as integrating a conversational AI coach so users can ask questions like “How was my form today?” or “What should I work on next?” All of the backend signals are already there; we just need to surface them in a way that feels motivating and human.
Beyond UI polish, we plan to expand exercise variety and improve personalization. Since the system already handles modular exercise logic, we can add more movements like lunges or shoulder presses with minimal rework. We also want to tune difficulty and feedback tone based on user history: gentle encouragement for beginners, stricter accountability for advanced users. Longer term, we’ll explore an offline mode for full privacy, voice-guided feedback, and optional social modes like group challenges. At this point, improvement is less about research and more about execution.