Jonathan Blanchet


The missing piece of the AI Assisted Coding puzzle



This post is the first in a series addressing versioning in the age of AI-assisted coding.

The Limits of Today’s Version Control

Over more than twenty years in engineering, I’ve seen version control become the backbone of software teams. I’ve used filename-based versioning, SVN, Bazaar, Git… (I luckily managed to avoid CVS.) As a VP of Engineering today, and a former CTO, I rely on Git and the platforms built on top of it to track the what and the when of code changes with remarkable precision: I know who made a change in our team, when it happened, and what the diff looks like. But the why behind a change is usually much harder to recover.

Commit messages attempt to address this gap, but in practice they often read like: “fix bug”, “update styles”, or “refactor auth”. Even with conventions such as Conventional Commits, or by linking issues from Linear or GitHub, and even in really well-written commit messages, we still mostly track the outcome, not the reasoning process that led to it.

Looking back, I can think of countless times when teams, including my own, lost time chasing down context because the motivation for a design choice lived only in the memory of someone who may no longer be part of the team, or in a buried conversation. Today, much of our development work is shaped by conversations: design discussions in issue trackers, debates in pull requests, architectural notes in Notion or Linear, and now, more importantly, back-and-forths with LLMs in coding agents.

AI-assisted coding is probably the biggest shift in how developers work that I will witness in my career. I’ve never been convinced that most of us will be replaced by AI assistants (and I still don’t think so), but it is definitely reshaping our work and what I’ll look for when recruiting future engineers.

These conversations we now have (especially with LLMs) represent the decision-making process that leads to the final code, but they get lost the minute the code is pushed.


Why This Gap Matters More Now

Historically, this gap was frustrating but tolerable. Engineers could ask colleagues, search Stack Overflow, dig through Slack, or hunt for documentation. But as teams move faster, codebases evolve more rapidly, and AI tools contribute directly to code, the lack of reasoning history is becoming a significant blocker.

LLMs are getting amazingly good at generating code (in the last two days alone, Google released Gemini 3 and OpenAI released GPT-5.1-Codex-Max), but without this context they still struggle to align with past decisions. Imagine onboarding a new team member, human or AI, who can see what the code does but not why it was built that way. They risk repeating mistakes, undoing deliberate trade-offs, or introducing inconsistencies. And this is not only a problem for new engineers: LLMs themselves struggle to reason about code they wrote in another session.

To make both LLMs and humans truly effective collaborators, we need a better way to preserve reasoning as part of the development record.

Toward Reasoning-Aware Versioning

What if version control evolved beyond code diffs to include structured reasoning? Imagine being able to answer questions like:

  • Why did we choose Redis over Postgres for session storage?
  • What risks did we consider when enabling this feature flag?
  • Which trade-offs guided the design of this API last year?

Some of this context exists today in issues, PRs, or documentation, but none of it is guaranteed to live alongside the code itself. A reasoning-aware version control system would treat these answers as first-class citizens, directly linked to the history of the project.


The Shift Is Already Underway

We can already see a shift toward richer context in modern workflows:

  • Documentation now often lives inside the codebase.
  • Conventional commits aim to make history more searchable.
  • PR templates ask contributors for motivation, risks, and testing notes.
  • Issue trackers encourage structured specs and acceptance criteria.

These are steps in the right direction, but they remain fragmented. What’s missing is a unified way to tie reasoning directly to the same artifacts that Git already manages so effectively.
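As a hedged illustration of what “tying reasoning to Git’s own artifacts” could look like today, Git already ships a notes mechanism that attaches text to a commit without changing its hash. The key/value record below is purely invented for illustration (git notes imposes no structure), and the Redis rationale is a made-up example echoing the questions above.

```shell
# Minimal sketch: git notes (a real Git feature) carrying a reasoning
# record alongside a commit. Run in a throwaway repo.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "switch session store to Redis"

# Attach a lightweight "why" record; the commit hash is unchanged.
git notes add -m "decision: Redis over Postgres for session storage
reason: sessions are ephemeral and read-heavy; durability not required
alternatives-considered: Postgres (kept for relational data)"

# The reasoning now travels with the history.
git notes show HEAD
```

Notes live in their own ref (`refs/notes/commits`), so they can be pushed and fetched like branches, though teams must opt in to syncing them, which is part of why they remain underused.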


Looking Ahead

Reasoning-aware versioning doesn’t require bloated commits or forcing developers to write essays. It’s about capturing the essence of decisions in a structured, lightweight way, enough for future humans and LLMs to understand why the code is the way it is.
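To sketch just how lightweight such a record could be, consider commit-message trailers, a convention Git can already parse mechanically. The `Decision:`, `Why:`, and `Tradeoff:` keys here are hypothetical, not a proposed standard.

```shell
# Sketch: a decision recorded as commit trailers. The trailer keys
# below are invented for illustration; only the trailer convention
# and the interpret-trailers command are real Git features.
msg='refactor auth: extract token validation

Decision: keep token validation synchronous
Why: the async variant introduced retries we could not make safe
Tradeoff: slightly higher p99 latency on login'

# git interpret-trailers extracts these fields mechanically, so
# tooling (or an LLM) could index them later.
printf '%s\n' "$msg" | git interpret-trailers --parse
```

Because trailers live in the commit message itself, they survive rebases and show up in any Git client; free-text keys like these would, of course, need team conventions to stay consistent.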

The first step is acknowledging that code alone is no longer the whole story. The next step is figuring out how to bring reasoning into the history we already rely on every day.

In the next article, we’ll explore what such reasoning records could look like, and how they might integrate with Git without disrupting existing workflows.