16 Feb 2026

From Idea to 2,000 Customers in 2 Months: The Intervals Pro Story

How an AI-native delivery approach turned a concept into a real product with 2,000 customers. The full story behind Intervals Pro.

Two months. That is how long it took to go from a blank repository to a product with 2,000 paying customers. No venture funding. No growth team. No marketing department. Just sharp engineering decisions, AI-accelerated development, and a relentless focus on shipping.

Intervals Pro is an AI coaching platform for endurance athletes. It analyses training data, generates personalised coaching recommendations, and helps athletes train smarter without needing a human coach. It is the product we are most proud of at Revitt, not because it is the most complex thing we have built, but because it proves something we believe deeply: speed and quality are not trade-offs — the team that refuses to choose between them wins.

This is the full story of how we built it.

The problem we saw

Athletes generate enormous amounts of training data. Heart rate, pace, power, cadence, elevation, recovery metrics. Modern watches and sensors capture everything. But the tools available to make sense of it are fragmented and narrow.

Existing platforms solve running, or cycling, or triathlon — but rarely all of them well. Strength training is massively neglected. And nothing gives you a comprehensive picture that includes walking, injury management, wellness, and everything else that actually affects how you train. Athletes are forced to stitch together multiple apps and still end up with blind spots.

Human coaching solves the holistic problem, but it is expensive. A decent endurance coach costs hundreds of pounds per month. Most recreational athletes, the ones who would benefit most from structured guidance, cannot justify the cost. They end up following generic training plans or, worse, making it up as they go.

The gap was clear: athletes had more data than ever, tools that only saw part of the picture, and less actionable guidance than ever. AI could close that gap. Not by replacing human coaches entirely, but by making personalised, data-driven coaching accessible to everyone — across every discipline, not just one.

Why existing AI tools fall short

The building blocks are out there. There is already an MCP server for intervals.icu. There are custom GPTs built around training data. On the surface, it looks like you could stitch together an AI coaching solution from existing parts.

But you cannot. Not one that actually works.

These tools can answer a one-off question about your last run. They cannot build a deep understanding of your training over weeks and months, spot patterns in how your body responds to load, or know that your left knee flares up when you increase volume too quickly. The long-term data analysis, the kind that makes coaching genuinely useful, requires purpose-built intelligence that general-purpose AI wrappers simply do not have.

We built different prompting strategies for different use cases: structured planning prompts for building training blocks, analytical prompts for reviewing performance trends, and lightweight prompts for quick queries like "how should I adjust today's session?" A single chatbot prompt cannot do all three well. Getting the right answer depends on asking the right question in the right way, and that requires engineering, not just an API call.
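The routing idea above can be sketched in a few lines of Go. The type and prompt text here are illustrative stand-ins, not the real Intervals Pro identifiers; the point is that each query kind gets its own purpose-built system prompt.

```go
package main

import "fmt"

// QueryKind classifies an incoming athlete request. The three kinds mirror
// the three prompting strategies described above.
type QueryKind int

const (
	Planning QueryKind = iota // building a training block
	Analysis                  // reviewing performance trends
	Quick                     // "how should I adjust today's session?"
)

// systemPrompt returns a purpose-built system prompt for each query kind,
// rather than one generic chatbot prompt. The wording is illustrative.
func systemPrompt(kind QueryKind) string {
	switch kind {
	case Planning:
		return "You are a coach building a structured multi-week training block..."
	case Analysis:
		return "You are an analyst reviewing long-term performance trends..."
	case Quick:
		return "Answer briefly, adjusting only today's session..."
	}
	return ""
}

func main() {
	fmt.Println(systemPrompt(Quick))
}
```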

Then there is the proactive layer. Intervals Pro runs daily scheduled checks against your training data: flagging when fatigue is accumulating, adjusting upcoming workouts based on how yesterday's session went, alerting you when something looks off. This is impossible on platforms like ChatGPT or a GPT wrapper. They cannot reach into your data on a schedule, run analysis autonomously, and push adjustments back into your training plan. They are reactive by design. Coaching needs to be proactive.
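One of those daily checks can be sketched as a pure function. The acute:chronic workload ratio below is a common sports-science heuristic used here purely for illustration; it is an assumption, not the actual Intervals Pro analysis engine.

```go
package main

import "fmt"

// FatigueFlag reports whether recent training load looks risky using a
// simple acute:chronic workload ratio (last 7 days vs last 28 days).
// This heuristic and the 1.5 threshold are illustrative assumptions.
// dailyLoads holds one load value per day, most recent last.
func FatigueFlag(dailyLoads []float64) (ratio float64, flagged bool) {
	n := len(dailyLoads)
	if n < 28 {
		return 0, false // not enough history to judge
	}
	acute := mean(dailyLoads[n-7:])    // last 7 days
	chronic := mean(dailyLoads[n-28:]) // last 28 days
	if chronic == 0 {
		return 0, false
	}
	ratio = acute / chronic
	// Ratios well above 1.0 mean load is spiking faster than the body adapts.
	return ratio, ratio > 1.5
}

func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

func main() {
	loads := make([]float64, 28)
	for i := range loads {
		loads[i] = 50 // three steady weeks at 50 units/day
	}
	for i := 21; i < 28; i++ {
		loads[i] = 100 // then a doubled final week
	}
	r, flagged := FatigueFlag(loads)
	fmt.Printf("ratio=%.2f flagged=%v\n", r, flagged) // prints "ratio=1.60 flagged=true"
}
```

A scheduler runs this kind of check against every athlete's data each day; the chatbot-wrapper model has no equivalent place to put it.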

The difference is not AI versus no AI. It is a product built around the problem versus a generic tool pointed at it.

Why we built it ourselves

We could have pitched this as a consulting engagement. Find a sports tech company, propose a project, build it for them. But we wanted to prove something specific: that our approach to AI delivery works not just for client projects, but for building a product from zero to real customers.

Building Intervals Pro was a deliberate bet. If we could take a concept, ship it to production, and attract real customers quickly, that would be the strongest possible proof point for how we work. Stronger than any case study. Stronger than any pitch deck.

So we built it.

Architecture decisions that mattered

Speed was the priority, but not at the expense of quality. We made a few architectural decisions early that shaped everything:

Go for the backend

We chose Go for the API and core processing layer. Go gives us three things that matter when you are moving fast: fast compilation, straightforward concurrency, and minimal runtime overhead. When you are processing training data for thousands of athletes, you need a backend that handles load without drama. Go handles load without drama.

The explicitness of Go also helped. No magic frameworks, no hidden behaviour. When something broke, the error was right there in the code. When we needed to add a feature, we could trace the entire request path in minutes. This matters when you are iterating daily.
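The "straightforward concurrency" point is easy to show. This is a minimal worker-pool sketch with illustrative types, not our production code: fan activities out over a channel, aggregate per-athlete load under a mutex, wait for the pool to drain.

```go
package main

import (
	"fmt"
	"sync"
)

// Activity is a simplified stand-in for a recorded training session.
type Activity struct {
	AthleteID int
	Load      float64
}

// WeeklyLoad sums training load per athlete, fanning the work out across
// a fixed pool of goroutines reading from one channel.
func WeeklyLoad(activities []Activity, workers int) map[int]float64 {
	jobs := make(chan Activity)
	totals := make(map[int]float64)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for a := range jobs {
				mu.Lock()
				totals[a.AthleteID] += a.Load
				mu.Unlock()
			}
		}()
	}
	for _, a := range activities {
		jobs <- a
	}
	close(jobs) // lets the range loops in the workers finish
	wg.Wait()
	return totals
}

func main() {
	acts := []Activity{{1, 55}, {2, 40}, {1, 30}, {2, 60}}
	fmt.Println(WeeklyLoad(acts, 4))
}
```

No framework, no hidden behaviour: the entire concurrency story is visible in thirty lines.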

PostgreSQL for everything

We used PostgreSQL as the single source of truth: training data, user profiles, AI-generated recommendations, subscription state, all in Postgres. No separate analytics database. No Redis cache layer (initially). No event bus.

This sounds limiting, but it was deliberate. Every additional data store is another thing to deploy, monitor, back up, and reason about. PostgreSQL handled our scale comfortably, and keeping everything in one place meant we could query across any dimension without building complex ETL pipelines.

We added targeted caching later when specific queries needed it. But starting simple meant we shipped faster and had fewer things to debug.

React and TypeScript for the frontend

The athlete-facing interface needed to be responsive, data-rich, and work well on mobile. React with TypeScript gave us component reuse, type safety across the stack, and a massive ecosystem of charting and visualisation libraries.

We used strict TypeScript configuration from day one. No any types. No implicit returns. This added a small amount of friction to initial development but saved us from entire categories of bugs. When you are iterating quickly, type safety is not a luxury. It is what lets you refactor with confidence.
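The spirit of that configuration can be sketched as a tsconfig.json fragment. Our exact settings are not published, but these are real TypeScript compiler options that enforce the rules described above:

```json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "noImplicitReturns": true,
    "noUncheckedIndexedAccess": true
  }
}
```

The last option is stricter than plain strict mode: indexing into an array or record yields a possibly-undefined type, which forces explicit handling of missing data, exactly the situation athlete metrics put you in constantly.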

Docker for deployment

Every environment (development, staging, production) ran the same Docker containers. No "works on my machine" problems. No environment-specific configuration leaking into application code. When we deployed, we knew exactly what was running because it was the same thing we had tested locally.

We set up CI/CD from the first week. Every push to main ran tests, built containers, and deployed to staging automatically. Promotion to production was a single command. This meant we could ship multiple times per day without ceremony.

How AI accelerated development

This is where things get interesting. We did not just build AI into the product. We used AI to build the product faster.

AI-assisted code generation

For well-defined, repetitive patterns (API endpoints, database migrations, test scaffolding) we used AI coding tools to generate first drafts. An engineer would review, adjust, and merge. This cut the time for boilerplate work by roughly 60%.

The key was knowing when AI code generation helped and when it did not. For business logic, AI-generated code was usually wrong in subtle ways. For structural code that followed established patterns, it was genuinely fast.

Rapid prototyping of AI features

The coaching engine itself, the part that analyses training data and generates recommendations, went through several iterations. We could prototype a new approach, test it against a sample dataset, and evaluate results in hours rather than days. The iteration speed meant we could try more approaches and converge on something good faster.

Documentation and testing

AI tools were excellent for generating test cases from specifications and for writing documentation that stayed in sync with the code. These are tasks that engineers typically deprioritise under time pressure. Having AI assistance meant we maintained quality in areas that usually suffer during rapid development.

The growth trajectory

We launched Intervals Pro with zero marketing budget. The growth came from three sources:

1. Product quality in a noisy market

The endurance training app market is crowded, but most products are either basic trackers or expensive coaching platforms. Intervals Pro sat in a gap: AI-powered coaching at an accessible price. Athletes tried it, found it genuinely useful, and kept using it.

2. Standing on the shoulders of intervals.icu

None of this would have been possible without intervals.icu. The platform already does the heavy lifting of integrating with Strava, Garmin Connect, and every other data source athletes use. It normalises the data, provides powerful analytics, and has built a thriving community around it. Intervals Pro builds on top of that foundation rather than reinventing it.

This meant athletes could start using Intervals Pro without changing their workflow — their data was already in intervals.icu. Low friction adoption drove early growth. If you are an endurance athlete and you are not already using intervals.icu, it is genuinely one of the best platforms out there and a project worth supporting.

Within the first month, we crossed 1,000 customers. By the end of month two, we were at 2,000. The infrastructure handled it without issues because we had built for production quality from day one.

Challenges we faced

It was not all smooth. Building fast creates its own problems, and we hit several:

Data quality and inconsistency

Athletes' training data is messy. GPS glitches create impossible pace spikes. Heart rate monitors drop out mid-run. Different devices report metrics in subtly different formats. Our AI coaching engine needed to handle all of this gracefully.

We spent significant time building data normalisation and validation layers. This was not exciting work, but it was essential. Bad data in meant bad recommendations out, and bad recommendations meant lost trust and lost customers.
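One small piece of that validation layer can be sketched as a pure function. The 12.5 m/s cap is an illustrative threshold (faster than any sustained human pace), not our real validation rules:

```go
package main

import "fmt"

// CleanSpeeds drops GPS-derived speed samples (m/s) that are physically
// implausible for a runner: a glitch, not a reading. The cap is an
// illustrative assumption, not the production threshold.
func CleanSpeeds(speeds []float64) (clean []float64, dropped int) {
	const maxHumanSpeed = 12.5 // m/s
	for _, s := range speeds {
		if s < 0 || s > maxHumanSpeed {
			dropped++
			continue // discard the glitch rather than let it skew pace stats
		}
		clean = append(clean, s)
	}
	return clean, dropped
}

func main() {
	raw := []float64{3.2, 3.4, 87.0 /* GPS spike */, 3.3, -1.0}
	clean, dropped := CleanSpeeds(raw)
	fmt.Println(len(clean), dropped) // prints "3 2"
}
```

The real layer does far more (sensor dropout interpolation, unit normalisation across devices), but the principle is the same: reject or repair bad samples before they ever reach the coaching engine.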

Prompt and data model calibration across athlete types

A training recommendation that works for an elite marathon runner is actively harmful for a beginner. The AI needed to understand not just what the data said, but who the athlete was: their fitness level, their goals, their training history, their recovery capacity.

We did not train or fine-tune any models. The intelligence lives in our prompts and our data model. We built specialised prompting strategies (different system prompts for planning, analysis, and quick queries) and a rich data model that captures training philosophy, injury history, availability, and athlete preferences. Getting this calibration right required extensive testing with real athletes across different levels. We built feedback loops so that athletes could flag when recommendations felt wrong, and we used that feedback to refine our prompts and data structures continuously.
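The shape of that data model can be sketched in Go. Every field name here is illustrative (the real model is considerably richer), but it shows the mechanism: the profile is rendered into the system prompt, so a beginner and an elite never receive the same framing.

```go
package main

import "fmt"

// AthleteProfile sketches the kind of rich data model the prompts draw on.
// Field names are illustrative assumptions, not the production schema.
type AthleteProfile struct {
	FitnessLevel  string   // e.g. "beginner", "intermediate", "elite"
	Goals         []string // e.g. "first half marathon"
	InjuryHistory []string // e.g. "left knee flares with rapid volume increases"
	WeeklyHours   float64  // training availability
	Philosophy    string   // e.g. "polarised", "pyramidal"
}

// PromptContext renders the profile into the system prompt so a
// recommendation is always calibrated to who the athlete actually is.
func PromptContext(p AthleteProfile) string {
	return fmt.Sprintf(
		"Athlete: %s level, %.0f h/week available, philosophy: %s. Goals: %v. Injury history: %v.",
		p.FitnessLevel, p.WeeklyHours, p.Philosophy, p.Goals, p.InjuryHistory,
	)
}

func main() {
	p := AthleteProfile{
		FitnessLevel:  "beginner",
		Goals:         []string{"first half marathon"},
		InjuryHistory: []string{"left knee"},
		WeeklyHours:   5,
		Philosophy:    "polarised",
	}
	fmt.Println(PromptContext(p))
}
```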

Scaling customer support

When you go from zero to 2,000 customers in two months, support volume scales with it. Athletes had questions about their recommendations, issues with data sync, and feature requests. We were a small team, and support load was a real constraint.

We addressed this by investing in clear in-app guidance, comprehensive FAQ content, and automated responses for common issues. We also prioritised fixing the root causes of support tickets over answering them individually. Every repeated question was a product bug to be fixed, not a support ticket to be closed.

Balancing speed with technical debt

Moving fast inevitably creates some technical debt. We were disciplined about this: we tracked debt explicitly, allocated time each week to address the highest-impact items, and never let debt accumulate to the point where it slowed feature development.

The trick was distinguishing between debt that mattered and debt that did not. A slightly inelegant database query that runs once a day? Not worth fixing now. A brittle integration that fails under load? Fix it immediately.

Testing that gives confidence to ship

One of the most important investments we made was in testing. Not as an afterthought, but as a core part of how we ship.

The backend has comprehensive Go tests covering billing logic, account lifecycle, tool execution, and edge cases around data handling. The frontend uses Vitest and React Testing Library for component-level coverage. On top of that, we run end-to-end tests with Playwright against a mock AI server and headless browser, simulating real user flows from login through to workout creation.

Every push to main runs the full test suite automatically. If anything fails, it does not deploy. This sounds obvious, but the discipline of maintaining it under time pressure is what makes the difference. When you are shipping multiple times a day, you need absolute confidence that the thing you are deploying works. Our test suite gives us that confidence.

The result is pain-free deployments. New features go out with minimal bugs, and when something does slip through, the feedback loop is tight enough that we catch it fast. This is not a nice-to-have. It is what makes rapid iteration sustainable rather than reckless.

Lessons learned

Building Intervals Pro taught us things that now inform every project we take on:

Scope aggressively

The version of Intervals Pro we launched had roughly a third of the features we had on our initial list. We cut everything that was not essential to the core value proposition: upload your data, get AI coaching recommendations. Everything else (social features, advanced analytics, training plan generators) came later, after we had validated the core product with real users.

Deploy infrastructure first

Before we wrote a single line of application code, we had CI/CD, monitoring, logging, and deployment automation in place. This felt slow at the start but made everything faster after. When you can deploy with confidence multiple times a day, your iteration speed is fundamentally different.

Measure from day one

We instrumented everything from the beginning. Not just application metrics, but business metrics: sign-ups, activation rates, retention, recommendation acceptance rates. This data drove every decision we made. Without it, we would have been guessing.

AI is a tool, not a strategy

The AI in Intervals Pro is genuinely core to the product. But the success was not because we used AI. It was because we solved a real problem for real people, and AI happened to be the best tool for solving it. The engineering discipline, the rapid shipping, the production quality: those mattered more than the specific technology.

What this proves

Intervals Pro is not just a product. It is a proof point for an approach to building software.

You do not need six months and a large team to build something real. You need:

  • Clear problem definition — know exactly who you are building for and why
  • Aggressive scope management — ship the core, nothing more
  • Production discipline from day one — infrastructure, testing, monitoring
  • AI where it creates genuine leverage — not for the sake of using AI
  • Speed without sloppiness — move fast and do not break things

This is how we work on every engagement at Revitt, whether we are building a new product, automating operations, or integrating systems under deadline pressure.

What is next for Intervals Pro

The product continues to grow and evolve. Right now we are focused on two areas: improving our global payment infrastructure to better serve athletes worldwide, and fine-tuning the workout planning and creation engine to produce even more precise, personalised training sessions.

The foundation we built in those first two months (solid architecture, clean code, comprehensive testing) means we can keep iterating quickly without the slowdowns that plague products built on shaky foundations.

Build something real

If this story resonates, whether you are a founder with a product idea, an operator looking to automate a workflow, or a team that needs to ship AI fast, we should talk.

We offer the same approach we used on Intervals Pro as a service: tight scope, fast delivery, production quality, AI where it helps. We are builders, not consultants.

Got an idea that needs to ship? Let's talk.