Perfect Form,
Every Rep.
A personalised AI workout assistant that uses computer vision, sequential deep networks, and ensemble models for real-time pose correction and workout classification—so you perform better and reduce long-term injury risk.
Precision Layers
Google MediaPipe pose streams feed low-latency PyTorch models: a Soft Actor-Critic for joint adjustments, a DNN for exercise classification, and a Random Forest for muscle-group activation.
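The per-frame flow of those layers can be sketched as one loop; the stubs below stand in for the three models (all function names and shapes here are illustrative placeholders, not the production API):

```python
# Sketch of the per-frame pipeline. Each stub stands in for one model
# from the stack above; 132 = 33 landmarks x 4 fields per frame.
def classify_exercise(features):   return "squat"       # DNN (stub)
def classify_activation(features): return "legs"        # Random Forest (stub)
def correct_joints(features):      return [0.0] * 12    # SAC policy (stub)

def on_frame(pose_features):
    """Run all three heads on one frame's pose features."""
    return {
        "exercise": classify_exercise(pose_features),
        "muscles": classify_activation(pose_features),
        "corrections": correct_joints(pose_features),
    }

result = on_frame([0.0] * 132)
```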
Workout Classification
A deep neural network classifies 22 workout types in real time from sequential pose features—with 98% macro F1—so sessions log themselves with minimal friction.
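A minimal sketch of such a classifier head, in NumPy for brevity: a pose window is flattened into a feature vector and mapped to 22 class probabilities. The layer sizes, window length, and random weights are assumptions for illustration only; the 22-class count comes from the text.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
# Assumed shapes: 33 landmarks x 4 fields x 8 frames per window.
N_FEATURES, N_HIDDEN, N_CLASSES = 33 * 4 * 8, 64, 22
W1 = rng.standard_normal((N_FEATURES, N_HIDDEN)) * 0.01
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal((N_HIDDEN, N_CLASSES)) * 0.01
b2 = np.zeros(N_CLASSES)

def classify(pose_window):
    """Feedforward pass: pose window -> 22 workout-class probabilities."""
    h = np.maximum(0.0, pose_window @ W1 + b1)   # ReLU hidden layer
    return softmax(h @ W2 + b2)

probs = classify(rng.standard_normal(N_FEATURES))
```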
Muscle Activation
A Random Forest ensemble classifies seven muscle-group activation patterns in real time (90% F1), surfacing which groups are driving each rep.
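A toy version of that ensemble, trained here on synthetic stand-in data (the group names and the 12-feature shape are assumptions; only the seven-group count comes from the text):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GROUPS = ["chest", "back", "shoulders", "biceps", "triceps", "legs", "core"]

rng = np.random.default_rng(42)
X = rng.standard_normal((700, 12))             # synthetic joint features
y = rng.integers(0, len(GROUPS), size=700)     # synthetic group labels

# Fit the forest and classify a few reps.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X[:5])
names = [GROUPS[int(i)] for i in pred]
```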
Real-time Form Correction
A Soft Actor-Critic sequential policy adjusts twelve joint targets on the fly (~5% mean absolute error), with sub-millisecond inference across the trained regressors and classifiers.
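The actor side of such a policy can be sketched as a Gaussian head over twelve joint deltas, tanh-squashed so corrections stay bounded; the state dimension, weight scales, and clipping range below are assumptions, while the twelve-joint output comes from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, N_JOINTS = 33 * 3, 12            # assumed pose-state layout
W_mu = rng.standard_normal((STATE_DIM, N_JOINTS)) * 0.01
W_ls = rng.standard_normal((STATE_DIM, N_JOINTS)) * 0.01

def correction(state, deterministic=True):
    """SAC-style actor head: state -> bounded joint-angle deltas."""
    mu = state @ W_mu
    log_std = np.clip(state @ W_ls, -5.0, 2.0)
    a = mu if deterministic else mu + np.exp(log_std) * rng.standard_normal(N_JOINTS)
    return np.tanh(a)                        # squash into (-1, 1)

deltas = correction(rng.standard_normal(STATE_DIM))
```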
ML CI/CD
A modular pipeline continuously integrates and ships new deep models so pose, classification, and activation stacks improve without destabilising production.
Model pipeline
The Kinetic Protocol
Three steps toward movement perfection.
Pose Stream
MediaPipe extracts full-body pose landmarks in real time, giving the stack stable joints for downstream regression and classification.
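MediaPipe Pose emits 33 landmarks per frame, each carrying x, y, z, and visibility fields; a minimal sketch of flattening one frame of that stream into the feature vector downstream models consume (the `Landmark` class here mirrors that published output format, but is a stand-in, not the MediaPipe API):

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """Mirrors MediaPipe Pose's per-landmark fields."""
    x: float
    y: float
    z: float
    visibility: float

def flatten(landmarks):
    """Turn 33 pose landmarks into one flat per-frame feature vector."""
    assert len(landmarks) == 33, "MediaPipe Pose emits 33 landmarks"
    out = []
    for lm in landmarks:
        out.extend((lm.x, lm.y, lm.z, lm.visibility))
    return out

frame = [Landmark(0.5, 0.5, 0.0, 1.0)] * 33
vec = flatten(frame)            # 33 landmarks x 4 fields = 132 features
```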
Sequence Encode
Sequential deep nets and the SAC policy read the pose timeline to score form, predict adjustments for twelve joints, and keep corrections aligned with how you actually move.
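Reading the pose timeline typically means windowing per-frame vectors into overlapping sequences before they reach the sequential models; a sketch, where the window size and stride are illustrative assumptions:

```python
def sliding_windows(frames, size=8, stride=4):
    """Group per-frame feature vectors into overlapping windows
    for a sequential model (size/stride values are assumptions)."""
    return [frames[i:i + size]
            for i in range(0, len(frames) - size + 1, stride)]

# 20 synthetic frames of 132 features each.
frames = [[float(t)] * 132 for t in range(20)]
wins = sliding_windows(frames)
```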
Form Sync
Ensemble outputs fuse exercise class, muscle activation, and joint corrections so feedback lands in under a millisecond of model time—before bad reps add up.
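The fusion step above can be sketched as combining the three model outputs into one feedback payload per rep; the field names, probability values, and "largest correction wins" heuristic are illustrative assumptions:

```python
def fuse(exercise_probs, muscle_probs, joint_deltas):
    """Merge the three heads into a single feedback record."""
    top_ex = max(exercise_probs, key=exercise_probs.get)
    top_mg = max(muscle_probs, key=muscle_probs.get)
    worst = max(joint_deltas, key=lambda j: abs(joint_deltas[j]))
    return {"exercise": top_ex,
            "muscle_group": top_mg,
            "largest_correction": (worst, joint_deltas[worst])}

feedback = fuse({"squat": 0.9, "lunge": 0.1},
                {"legs": 0.8, "core": 0.2},
                {"left_knee": -0.12, "right_hip": 0.03})
```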
Ready to evolve?
We are also exploring rehabilitation and physiotherapy flows on the same vision stack.