
Case Study

Intervals Pro

The brainchild of Max Revitt: an AI-first augmentation for Intervals.icu, built over the Christmas break and reaching 2,000 customers in the first two months of 2026.

The Problem

Endurance athletes — runners, cyclists, triathletes — train with structure. They follow plans. But the plans they follow are generic. A 16-week marathon block pulled from a training book does not know that the athlete slept four hours last night, flew across three time zones, or just recovered from a hamstring strain. The gap between how athletes actually train and how training plans are written is enormous.

Coaches fill that gap, but individual coaching is expensive and does not scale. Most athletes cannot afford it. Those who can still face bottlenecks: a coach might take 48 hours to adjust a plan, by which point the athlete has already completed the session or skipped it entirely. The feedback loop is too slow.

Intervals Pro set out to close that gap with AI. The vision was straightforward: build an AI-first augmentation layer on top of Intervals.icu so athletes could talk to their training data, run daily wellness check-ins, receive practical recommendations, and build consistency through streaks. The product was the brainchild of Max Revitt (CEO of Revitt), built over the Christmas break; the open question was whether it could ship fast enough to capture the market window before incumbents caught up.

Why Revitt

Max Revitt had a clear product vision and needed engineering execution that matched the ambition. There was no appetite for a six-month discovery phase, and no interest in pausing for endless workshops. The goal was a production system real athletes could use within weeks.

Revitt was the right fit because we build, not advise. We write production code from day one. There is no handoff between "strategy" and "delivery" because the same team does both. For a product that needed to move from concept to paying customers in under two months, that matters.

We joined the project as the core engineering team — responsible for architecture, implementation, infrastructure, and deployment. The scope was full-stack: backend services, frontend application, AI integration, data pipelines, and operational tooling.

Technical Approach

Tight MVP Scope

The first decision was what not to build. AI coaching could mean a hundred different things — nutrition planning, race prediction, biomechanics analysis, social features, marketplace integrations. We stripped the MVP down to one clear use case: an AI copilot for Intervals.icu that could answer training questions in chat, run daily wellness checks, and generate specific recommendations athletes could act on immediately.

That discipline was critical. Every feature adds surface area — more code, more edge cases, more support burden. A startup with zero users does not need a feature-rich platform. It needs a focused product that solves one problem well enough that athletes tell other athletes about it.

Architecture

We designed Intervals Pro around a service-oriented backend with clear boundaries between concerns. The core services were:

  • AI Coach Service — Handles conversational coaching, wellness-aware guidance, and recommendation generation based on athlete context from Intervals.icu. This is where the AI integration lives. We used large language models but wrapped them in deterministic guardrails so recommendations remained realistic and safe.

  • Intervals.icu Integration Service — Pulls and normalises training context from Intervals.icu so the assistant can reason from real activity history, load, and progression.

  • Wellness and Streak Service — Records daily check-ins, tracks adherence streaks, and feeds state into recommendation logic so the guidance reflects current readiness and consistency patterns.

  • Frontend Application — A responsive app built with TypeScript and React, focused on clear chat UX, lightweight daily check-ins, recommendation visibility, and streak momentum.
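The "deterministic guardrails" around the AI Coach Service can be illustrated with a small sketch. Everything here is hypothetical — the interface, the bound values, and the injury rule are illustrative stand-ins, not the production logic — but it shows the pattern: the model proposes, and hard-coded rules clamp the proposal before it reaches the athlete.

```typescript
// Hypothetical sketch: clamp an LLM-suggested training-load change
// against deterministic safety rules before it reaches the athlete.

interface Recommendation {
  weeklyLoadChangePct: number; // model-suggested change in weekly training load
  note: string;
}

// Hard rules applied after the model, regardless of what it generated.
const MAX_WEEKLY_INCREASE_PCT = 10; // illustrative ramp-rate ceiling
const MAX_WEEKLY_DECREASE_PCT = 50;

function applyGuardrails(raw: Recommendation, injured: boolean): Recommendation {
  let change = raw.weeklyLoadChangePct;
  if (injured && change > 0) change = 0; // never ramp up an injured athlete
  change = Math.min(change, MAX_WEEKLY_INCREASE_PCT);
  change = Math.max(change, -MAX_WEEKLY_DECREASE_PCT);
  return { ...raw, weeklyLoadChangePct: change };
}

const suggestion: Recommendation = { weeklyLoadChangePct: 25, note: "Big build week" };
console.log(applyGuardrails(suggestion, false).weeklyLoadChangePct); // clamped to 10
```

The point of the pattern is that safety properties hold even when the model hallucinates: no prompt can produce a recommendation outside the deterministic bounds.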

AI-Assisted Development

We used AI throughout our own development process, not just in the product itself. Code generation, test scaffolding, documentation, and boilerplate were all accelerated with AI tooling. This is part of how we shipped a production-grade platform in weeks rather than months.

The key is knowing where AI helps and where it does not. AI is excellent at generating repetitive code patterns, writing test cases from specifications, and drafting documentation. It is not reliable for architecture decisions, security-critical logic, or nuanced business rules. We used it as an accelerator, not a replacement for engineering judgement.

Infrastructure and Deployment

The platform runs on containerised infrastructure with automated deployments. We set up CI/CD pipelines early — not as an afterthought, but as a prerequisite for the speed we needed. Every merge to main triggers automated tests, builds a container image, and deploys to staging. Production deployments are a single approval step away.

Observability was built in from the start. Structured logging, application metrics, and health checks mean the team can see exactly what the system is doing at any moment. When you are growing from zero to thousands of users, you need to know immediately if something breaks.
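"Structured logging" here means each log line is a single machine-parseable object rather than free text, so logs can be filtered and aggregated by field. A minimal sketch of the idea (the event names and fields are illustrative, not Intervals Pro's actual schema):

```typescript
// Minimal structured-logging sketch: every log line is one JSON object,
// so it can be queried by field rather than grepped as free text.

type Level = "info" | "warn" | "error";

function log(level: Level, event: string, fields: Record<string, unknown> = {}): string {
  const line = JSON.stringify({ ts: new Date().toISOString(), level, event, ...fields });
  console.log(line);
  return line;
}

log("info", "checkin.recorded", { athleteId: 123, streakDays: 14 });
log("error", "plan.generation_failed", { athleteId: 456, reason: "timeout" });
```

Because every line carries the same top-level keys, a log aggregator can answer questions like "error rate per event type" without any parsing heuristics.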

Growth

Intervals Pro launched and hit 2,000 customers within the first two months of 2026. That growth was almost entirely organic — athletes found the product, used it, and told their training partners, club members, and online communities about it.

The growth curve validated the core thesis: athletes want personalised training plans, they are willing to pay for them, and AI can deliver the personalisation at scale. But rapid growth also created engineering challenges.

Scaling Under Load

Going from a handful of beta users to thousands of active athletes changes the engineering constraints. Plan generation is computationally expensive — each plan requires multiple AI inference calls, validation passes, and data lookups. When hundreds of athletes trigger re-planning simultaneously (Monday mornings, after weekend races), the system needs to handle the load without degrading response times.

We addressed this with queue-based processing and intelligent caching. Plan generation requests are queued and processed asynchronously, with athletes receiving their updated plans within minutes rather than waiting for synchronous responses. Frequently accessed data — athlete profiles, training history summaries — is cached to reduce database load during peak periods.
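The queue-plus-cache pattern described above can be sketched in a few lines. This is an in-memory toy, not the production implementation — real deployments would use a durable queue and a shared cache — but the shape is the same: enqueue returns immediately, expensive work happens in a drain loop, and hot reads hit a TTL cache instead of the database.

```typescript
// Illustrative sketch of queue-based processing plus TTL caching.
// In-memory stand-ins; production would use durable infrastructure.

type PlanRequest = { athleteId: number };

const queue: PlanRequest[] = [];
const plans = new Map<number, string>();

function enqueue(req: PlanRequest): void {
  queue.push(req); // caller returns immediately; work happens later
}

function drainQueue(): void {
  while (queue.length > 0) {
    const req = queue.shift()!;
    // stand-in for the expensive AI inference and validation passes
    plans.set(req.athleteId, `plan for athlete ${req.athleteId}`);
  }
}

// Tiny TTL cache for hot reads such as athlete profiles.
const cache = new Map<string, { value: string; expires: number }>();

function cached(key: string, ttlMs: number, load: () => string): string {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // serve from cache
  const value = load(); // cache miss: hit the slow path once
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```

The design choice is latency shaping: a Monday-morning spike of re-planning requests fills the queue instead of saturating the inference path, and repeated profile reads during the spike cost one database lookup per TTL window instead of one per request.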

Iterating on Feedback

Real users surface real problems. Athletes reported edge cases we had not anticipated: multi-sport athletes switching between disciplines mid-week, athletes training for back-to-back events, athletes with medical conditions that constrain specific training zones. Each of these required updates to the plan generation logic and the constraint framework.

We ran tight iteration cycles — shipping updates multiple times per week based on user feedback. The CI/CD infrastructure we built early paid for itself here. A bug report in the morning could be fixed, tested, and deployed by the afternoon.

Results

  • 2,000 customers in the first two months of 2026 — organic growth driven by product quality and athlete community dynamics.
  • Production-grade from day one — no "beta" excuses, no "we'll fix it later" technical debt.
  • Rapid iteration — multiple deployments per week, driven by real user feedback.
  • Scalable architecture — handled 10x user growth without re-architecture.
  • AI that works — not a demo, not a chatbot wrapper, but a production AI system that delivers real value to real users every day.

Public Proof

Intervals Pro publishes live platform statistics at https://intervals.pro/api/v1/stats. The current athlete_count from that endpoint is used on the Revitt site as live proof of traction. This is not a screenshot of a dashboard or a number we typed into a slide deck. It is a real API returning real data from a real production system.
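Consuming that endpoint safely is a one-function job. The sketch below assumes only what the case study states — a JSON response containing an athlete_count field — and fails closed on anything else, so the site shows nothing rather than a wrong number:

```typescript
// Sketch of consuming the live stats payload. The only field the case
// study documents is athlete_count; the rest of the shape is assumed.

function parseAthleteCount(body: string): number | null {
  try {
    const data = JSON.parse(body) as { athlete_count?: unknown };
    return typeof data.athlete_count === "number" ? data.athlete_count : null;
  } catch {
    return null; // fail closed: show nothing rather than a wrong number
  }
}

// In the browser this would be fed by
// fetch("https://intervals.pro/api/v1/stats").then(r => r.text())
console.log(parseAthleteCount('{"athlete_count": 2000}')); // 2000
```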

Takeaway

Intervals Pro demonstrates what happens when speed, technical depth, and AI-native execution come together on a single project. The platform went from concept to thousands of paying customers in weeks, not months. It did so because every decision — from MVP scoping to architecture to deployment — was made with velocity in mind, without sacrificing production quality.

This is how Revitt works. We do not produce roadmaps and hand them to someone else to build. We write the code, deploy the infrastructure, and ship the product. When AI, engineering rigour, and urgency all matter at once, that is where we operate.

If you are building an AI-native product and need engineering execution that matches your ambition, get in touch.