AI-written lines
Which code lines were created or changed by a MergeLoom AI run.
MergeLoom records line-level audit evidence for AI work routed through the platform, so teams can inspect which code MergeLoom wrote, who requested it, why it ran, and when it happened. Each run's record answers the questions below; a sketch of what such a record might hold follows them.
Which code lines were created or changed by a MergeLoom AI run.
Which job produced the change and where it ran.
Which approved work item authorized the code change.
Which user requested the AI implementation.
What checks ran before the review request was opened.
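Purely as an illustration, here is a minimal Python sketch of what a line-level audit record of this kind might contain. The class and every field name are assumptions made for this example, not MergeLoom's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Tuple

# Hypothetical sketch only: the class and field names are illustrative,
# not MergeLoom's actual audit schema.
@dataclass
class AILineAuditRecord:
    repository: str                    # repository the change landed in
    file_path: str                     # file containing the AI-written lines
    line_range: Tuple[int, int]        # lines created or changed by the run
    run_id: str                        # MergeLoom run that produced the change
    job: str                           # job that produced the change and where it ran
    work_item: str                     # approved ticket or issue that authorized the change
    requested_by: str                  # user who requested the AI implementation
    checks: List[str] = field(default_factory=list)  # checks run before the review request opened
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```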
For work routed through MergeLoom, teams can trace AI-written lines back to the requester, ticket, date, validation, provider, and run.
See which lines were created or changed by MergeLoom AI runs, then drill from repository metrics down to the code itself.
Tie the AI-written code back to the requester, ticket or issue, date, validation result, provider, worker, and PR or MR.
Avoid commits that look purely human-authored when AI was involved, giving security and compliance teams a clearer record of AI-assisted code.
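As a sketch of the kind of lookup this traceability enables, the function below assumes the hypothetical AILineAuditRecord example above and traces one line back to its requester, work item, validation, and run. The function name and arguments are illustrative, not a MergeLoom API.

```python
from typing import Iterable, Optional

# Illustrative only: walks a collection of the hypothetical AILineAuditRecord
# objects sketched above and returns the evidence behind one line, if any.
def trace_line(records: Iterable["AILineAuditRecord"],
               repository: str, file_path: str, line: int) -> Optional[dict]:
    for record in records:
        start, end = record.line_range
        if (record.repository == repository
                and record.file_path == file_path
                and start <= line <= end):
            return {
                "requester": record.requested_by,
                "work_item": record.work_item,
                "date": record.recorded_at.isoformat(),
                "validation": record.checks,
                "run": record.run_id,
            }
    return None  # the line is not attributed to a recorded MergeLoom run
```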
MergeLoom can only provide strong attribution for work routed through its workflow.
That is why the product is designed as a required path from approved ticket to PR or MR, rather than as another unmanaged local assistant.
The ticket, issue, or task remains attached to the AI code change.
The run is connected to the user who asked MergeLoom to perform the work.
Where the run executed and which provider configuration it used are part of the evidence trail.
Checks, review output, and the produced diff become part of the run record.
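To make the shape of that run-level trail concrete, here is a small sketch of how those pieces might sit together in one record; again, the class and field names are assumptions for illustration, not MergeLoom's data model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical run-level evidence record; names are illustrative assumptions.
@dataclass
class RunEvidence:
    run_id: str                                   # the MergeLoom run
    work_item: str                                # ticket, issue, or task attached to the change
    requested_by: str                             # user who asked MergeLoom to do the work
    execution_target: str                         # where the run executed
    provider_config: str                          # provider configuration the run used
    checks: List[str] = field(default_factory=list)           # checks that ran
    review_findings: List[str] = field(default_factory=list)  # self-review output
    produced_diff: str = ""                       # the diff the run produced
```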
Connect tickets, PRs or MRs, agent logs, validation, self-review, diff guard, and generated code in one audit history.
Move from the original ticket or issue to the PR or MR, branch, validation result, review output, and produced diff.
Inspect what the agent did, which review items it found, whether validation passed, and whether diff guard warned.
Start with repository-level AI code metrics, then drill into the exact MergeLoom run and code change behind them.
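As an illustration of that drill-down, the sketch below aggregates AI-written line counts per repository from the hypothetical records above and then lists the runs behind a single repository; none of these functions are MergeLoom APIs.

```python
from collections import Counter
from typing import Iterable, List

# Illustrative aggregation over the hypothetical AILineAuditRecord sketch above.
def ai_lines_by_repository(records: Iterable["AILineAuditRecord"]) -> Counter:
    totals: Counter = Counter()
    for record in records:
        start, end = record.line_range
        totals[record.repository] += end - start + 1   # repository-level metric
    return totals

def runs_behind(records: Iterable["AILineAuditRecord"], repository: str) -> List[str]:
    # Drill from the repository-level metric into the exact runs behind it.
    return sorted({r.run_id for r in records if r.repository == repository})
```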
Connect more of your stack, improve context, validate output, and keep audit evidence across every AI coding run.
Start with one tracker, one repository, and one validation path before rolling AI coding out across the team.