About

The specialisation control plane for enterprise model fleets.

Who we are, what we build, and how we engage.

Most enterprise AI companies introduce themselves with a product demo. We introduce ourselves with a thesis. The thesis informs the product, the product informs the engagement, and the engagement leaves your data where it started.

Agentsia builds the operating system that turns your proprietary data, your operational failures, and your institutional judgement into specialist small language models you can trust in production. The product is called Modelsmith. The deployment model is a fork of the Modelsmith repository that runs inside your environment from day one, not a hosted multi-tenant service.

The first proving ground is adtech. The long-term category is the specialisation layer above the inference substrate, independent of any one serving vendor or hardware story.

What we do

  • Build specialist small language models for narrow, high-value operational workflows.
  • Provide the iteration loop, eval framework, promotion machinery, and fleet intelligence.
  • Run training and evaluation inside the customer’s environment.
  • Leave the serving substrate and the production artefact in the customer’s hands.
  • Contribute reusable platform improvements back to the shared Modelsmith repository.

What we do not

  • Run a hosted multi-tenant SaaS.
  • Receive or retain customer training data on our infrastructure.
  • Sell inference compute or resell an existing runtime.
  • Replace a frontier model API for every workflow. RAG, frontier APIs, and trained weights are complementary.
  • Optimise for demos. We optimise for production, promotion, and rollback discipline.

Read

The chapters and library are where the substance lives.