Essay · 04 · Field notes · library/on-the-fork-workflow.md

The fork workflow

How customer engagements begin with a fork of the Modelsmith repository, and why that boundary matters.

Engagements begin with a fork of the Agentsia-owned Modelsmith repository, created under your organisation. The fork is where your deployment-specific configuration, private connectors, and local operating changes live. It is also the contract.

How the boundary works

Platform improvements flow upstream through pull requests: bugfixes, integration adapters, runbooks, API extensions, UI changes, documentation, agent-operating patches. These are reusable, so Agentsia takes them back into the shared platform, where other customers benefit.
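The remote topology behind this can be sketched as a small local simulation. The repository names (modelsmith-upstream, modelsmith-fork) and branch names are illustrative, not the real engagement layout:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Canonical platform repo (stands in for Agentsia's Modelsmith).
git init -q --bare modelsmith-upstream.git

# The customer's fork (stands in for your-org/modelsmith).
git clone -q --bare modelsmith-upstream.git modelsmith-fork.git

# A working clone of the fork, wired to both remotes:
# origin = your fork, upstream = the Agentsia repo.
git clone -q modelsmith-fork.git work
cd work
git config user.email "dev@example.com"
git config user.name "Dev"
git remote add upstream "$tmp/modelsmith-upstream.git"

git commit -q --allow-empty -m "platform baseline"
main=$(git symbolic-ref --short HEAD)

# Deployment-specific work stays on the fork...
git checkout -q -b deploy/site-config
git commit -q --allow-empty -m "site: private connector config"
git push -q origin deploy/site-config

# ...while a generalisable fix gets its own branch, pushed to the
# fork and then proposed as a pull request against upstream.
git checkout -q "$main"
git checkout -q -b fix/failure-classification
git commit -q --allow-empty -m "classify timeouts as retryable"
git push -q origin fix/failure-classification
git remote -v
```

Because the fix branch starts from the shared baseline rather than from the deployment branch, the upstream pull request carries none of the site-specific history.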

Proprietary datasets, eval evidence, specialist weights, secrets, and deployment decisions stay in your environment unless a separate, explicit agreement says otherwise. The contribution boundary is not a legal formality we worked out in a rush. It is a design decision about what is safe to share, what compounds across deployments, and what should never leave the customer's boundary.
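One way such a boundary can be made mechanical rather than aspirational is a pre-PR guard that refuses to propose a branch upstream if it touches protected paths. A minimal sketch, in which the path list (`secrets/`, `weights/`, `datasets/`, `evals/`) and the demo layout are hypothetical:

```shell
set -e

# Illustrative guard: before proposing branch $2 upstream (against base $1),
# verify it touches no boundary-protected paths. The path list is hypothetical.
guard_upstream_pr() {
  if git diff --name-only "$1...$2" | grep -Eq '^(secrets/|weights/|datasets/|evals/)'; then
    echo "refusing: $2 touches boundary-protected paths" >&2
    return 1
  fi
  echo "ok: $2 is safe to propose upstream"
}

# Demo repository with one safe and one unsafe branch.
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "init"
main=$(git symbolic-ref --short HEAD)

git checkout -q -b fix/docs
mkdir docs && echo "typo fix" > docs/runbook.md
git add . && git commit -q -m "docs: runbook fix"

git checkout -q "$main"
git checkout -q -b deploy/site
mkdir secrets && echo "key" > secrets/api_key
git add . && git commit -q -m "site: credentials"

guard_upstream_pr "$main" fix/docs            # accepted
guard_upstream_pr "$main" deploy/site || true # refused
```

The three-dot `git diff` compares the branch against its merge base with the base branch, so only changes actually introduced on the branch are inspected.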

Why the fork is not cosmetic

You could imagine Agentsia as a hosted service where customers log in, configure a tenant, and run the loop. We deliberately did not build that as the primary surface. The fork model does three things the hosted model cannot:

  • It keeps the autonomous loop running inside your boundary, against your data, with no round-trip to us.
  • It gives your security and compliance reviewers something to audit rather than something to trust.
  • It creates a natural cadence for contribution: the parts of the platform that generalise can flow upstream without negotiating per-commit NDAs.

The principle behind the fork

The platform should be improvable by the customers using it, without losing the property that their data is theirs alone.

What customers actually contribute back

In practice, the upstream contributions we receive are the kind of work that is useful to everyone and sensitive to nobody. A better failure-classification rule. A cleaner compose file. A runbook for onboarding a new model. A documentation fix. The domain specifics — your rubrics, your golden standards, your weights, your production state — never make that journey.