What the scan tells you

For every commitment your team has made — a client deadline, a launch date, a board promise — the scan gives you a probability. Not a guess. Not a traffic-light status. A number based on how your team actually delivers.

Example output

You told the client June 30. The simulation says there’s a 28% chance of making it. The most likely delivery date is October 21 — 16 weeks late. The main drivers: seven items stuck for over two weeks, and scope that’s grown 69% since kickoff.

That’s what we work with when we sit down. Not “are we behind?” — we already know. The question is what to do about it.

[Image: executive summary — portfolio overview with on-time probabilities]
[Image: initiative breakdown with forecast distribution curve]

Download the full example report (PDF)

How it works

Step one: I learn how you track your work

Every team uses their tools differently. “In Progress” means something different in your team than it does in mine. Before I touch a number, I build a clean model of how work actually moves through your pipeline — what the stages mean, where handoffs happen, where things stall. The raw data is useless without this.

If your Jira is a mess, that’s expected. The mess isn’t a blocker — it’s where I start.

Step two: I map what you’ve promised versus what’s left

Every open commitment, every piece of remaining work, and — this is usually the uncomfortable part — everything that got quietly added after you agreed on the scope.

Step three: I run the math

I take your team’s actual completion times — the full range, not an average — and use that to project forward. The fast weeks and the slow weeks. Then I simulate each commitment 10,000 times. Each run picks different completion times from that range and plays the remaining work forward.

Some runs finish early. Some finish late. The pattern across all those runs tells you something a gut estimate never can: how likely you are to land on the date you promised.

The output isn’t a single date. It’s a probability curve. A narrow curve means a predictable team; a wide curve means a volatile one. Both are worth knowing.
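The mechanics can be sketched in a few lines. This is an illustrative toy, not the actual tooling: it assumes items are worked one at a time, and the history and counts below are invented.

```python
import random

def simulate_delivery(historical_days, items_remaining, runs=10_000):
    """Monte Carlo forecast. Each run draws a completion time per remaining
    item from the team's observed history (the full range, not an average)
    and sums them. Simplification: items are assumed to be worked serially."""
    totals = []
    for _ in range(runs):
        totals.append(sum(random.choice(historical_days)
                          for _ in range(items_remaining)))
    return sorted(totals)

def on_time_probability(totals, budget_days):
    """Fraction of simulated runs that land within the promised budget."""
    return sum(t <= budget_days for t in totals) / len(totals)

# Hypothetical history: per-item completion times in days, fast weeks and slow.
history = [2, 3, 3, 4, 5, 5, 6, 8, 13, 21]
totals = simulate_delivery(history, items_remaining=12)
median_days = totals[len(totals) // 2]   # centre of the curve
p_on_time = on_time_probability(totals, budget_days=60)
```

The spread of `totals` is the probability curve itself: tightly clustered runs mean a predictable team, a long tail means volatile delivery.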

What the data captures, and what it doesn’t

The simulation uses the full range of delivery speeds your team has actually experienced. That includes the weeks where reviews dragged, where someone was out, where a dependency didn’t land. It captures the patterns your team already lives with.

What it doesn’t do is predict things that have never happened. If your lead architect gets poached next Tuesday, no model saw that coming. But that’s not what catches most teams. Most teams get caught by the slow, ordinary accumulation of normal variation — the kind everyone dismisses as “just one of those weeks” until there have been eight of them.

What data I need (and what I never see)

I built an open-source exporter for Jira Cloud. It pulls one kind of data: when work moved from one status to another, plus the workflow metadata around those moves. That’s it.

No ticket descriptions. No comments. No attachments. No code. No client names.

You install it, you run it, you review the output before you send anything. Nothing leaves your environment without you choosing to share it.

Item workflow state transitions
Blocker flag history and duration
Epic and initiative grouping
Creation and resolution dates
Backlog addition timestamps
Sprint assignments and boundaries
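To make the boundary concrete, a single exported record might look like the sketch below. The field names are illustrative, not the exporter’s actual schema; the point is that only workflow metadata crosses the line.

```python
# One exported status transition: workflow metadata only, never ticket content.
transition = {
    "item_id": "ABC-123",               # opaque key; no title, no client name
    "from_status": "In Progress",
    "to_status": "In Review",
    "timestamp": "2024-03-14T09:12:00Z",
    "blocked": False,                   # blocker flag at transition time
    "epic_id": "ABC-42",                # grouping only
    "sprint": "Sprint 18",
}

# Fields that never appear in the export.
content_fields = {"summary", "description", "comments", "attachments", "code"}
assert content_fields.isdisjoint(transition)
```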

If your team uses Azure DevOps, I can set up a comparable export — same principle, same boundaries.

If your security or legal team wants to review the process first, I’ll walk them through it. It usually takes ten minutes.

What you get

The scan covers up to five active initiatives. This is deliberate. More than five dilutes the analysis and leads to surface-level reporting. The goal is depth over breadth: a clear, honest read on the initiatives that matter most.

For each initiative, you get the on-time probability, the most likely delivery date, the key drivers of risk, and concrete recommendations. Renegotiate the deadline. Cut what’s not essential. Reassign people. Whatever fits your situation.

The scan ends with a working session. Not a presentation. We take the most dangerous commitment and decide what to do about it. You leave knowing exactly what changes on Monday morning.

Four weeks later, we check in to see what moved.

Questions I hear about the scan

My dev lead could probably build something like this with AI in a week.
The report, maybe. The thinking behind it, no. I spend the first days understanding how your workflow really works and cleaning the data before I run a single number. Without that step, you get a confident answer based on the wrong inputs. That's worse than no answer. And you don't audit your own books. An outside perspective with no stake in any specific date just sees things differently.

What if our Jira is a mess?
That's expected. I don't take your tracker's data at face value. I build a clean model of how work actually flows before I run any numbers. The mess isn't a blocker — it's where I start.

Why not just use ActionableAgile, LinearB, or another flow metrics tool?
You can. They're good at what they do: giving engineering teams dashboards to track their own flow. That's a different problem. The scan takes your actual commitments — the ones you've already made to clients — and tells you how likely you are to land each one. The output isn't a dashboard your team logs into. It's a specific probability attached to a specific promise, and a conversation about what to do with that number. Most founders who come to me already have metrics somewhere. What they don't have is someone willing to say "this has a 22% chance of landing" to the person who needs to hear it.

What if the scan shows problems but we're not ready to act?
That's fine. The scan is designed to be valuable on its own. You walk out with a decision made on your most dangerous commitment, a prioritised list of what to change, and a clear picture of which changes your team can make without me.

How much of my time does this actually take?
An hour with someone who knows how the team works, plus short conversations with two or three people close to planning and development, plus the walkthrough session. Total time commitment on your side is roughly half a day spread across two weeks.

Want to know where you really stand?

30 minutes. No pitch. You tell me what you’ve committed to and I’ll tell you whether a scan would actually help — or if it’s not the right move.

Book a free 30-minute call