Engagement · 07contact.md

Start an engagement.

We take three engagements a quarter. We are deliberate about who we work with and why.

The best first conversation is an honest one. Below: who we are looking for, how we engage, and what we ask in return. If any of it resonates, write to us.

Who we work with

Agentsia is strongest when you have all of the following:

  • Proprietary operational data unavailable to the major labs.
  • A workflow where latency, accuracy, and governance all matter.
  • Enough internal engineering maturity to operate agent systems.
  • Frustration with generic-model parity in one or more high-value workflows.
  • Willingness to invest in a repeatable model-specialisation capability rather than one-off demos.

First verticals

Our primary target market is adtech-adjacent: demand-side and supply-side platforms (DSPs and SSPs), ad verification platforms, attribution vendors, and measurement companies. These share the profile that makes adtech the best proving ground: high-frequency operational decisions, proprietary data, unforgiving latency budgets, and strong incentives to reduce frontier-lab API costs.

Adjacent verticals with the same structural profile include fintech (credit decisioning, fraud detection), legal tech (contract analysis), and healthtech (clinical triage).

Your first sponsor

Your first sponsor is rarely a data science lead acting alone. More likely:

  • CTO or Chief Architect with a mandate to operationalise agents safely.
  • VP Engineering or platform lead responsible for enterprise agent infrastructure.
  • Head of AI or applied AI leader moving from pilots to production systems.
  • Product or operations leader sponsoring a narrow workflow where current agents are too generic, too slow, or too brittle.

How an engagement begins

Every engagement starts the same way: a fork of the Modelsmith repository, staged in your organisation, running inside your approved development environment from day one.

  1. Fork the Modelsmith repo into your organisation.
  2. Install dependencies in your approved environment and run baseline onboarding checks before connecting any data.
  3. Configure private environment settings: secrets, training-host details, inference substrate.
  4. Stage proprietary datasets, operational failure modes, and approved document sources.
  5. Define the first wedge: one commercially important workflow, clear success criteria, and frontier-model baselines.
  6. Run onboarding validation: a smoke test, schema checks, and a small eval run (steps 3 and 6 are sketched after this list).
  7. Run the first specialisation loop. Review the evidence through the web app or an agent workflow.
  8. Open pull requests upstream for reusable platform improvements.
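
For concreteness, here is what steps 3 and 6 can reduce to in practice. This is a minimal Python sketch, not Modelsmith's actual API: the EngagementSettings class, the MS_* environment variables, run_onboarding_validation, and the hostnames and vault path are all illustrative stand-ins for whatever your approved environment provides.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class EngagementSettings:
    """Step 3: private environment settings.

    Field names are illustrative, not Modelsmith's configuration schema.
    """
    training_host: str        # where specialisation runs execute
    inference_substrate: str  # where specialised models are served
    secrets_path: str         # pointer into your secret store, never a literal key

    @classmethod
    def from_env(cls) -> "EngagementSettings":
        # Read everything from the approved environment; raise KeyError if unset.
        return cls(
            training_host=os.environ["MS_TRAINING_HOST"],
            inference_substrate=os.environ["MS_INFERENCE_SUBSTRATE"],
            secrets_path=os.environ["MS_SECRETS_PATH"],
        )


def run_onboarding_validation(settings: EngagementSettings, sample_rows: list[dict]) -> None:
    """Step 6: gate the first loop on cheap checks before any training spend."""
    # Smoke test: the configured endpoints are at least present.
    assert settings.training_host and settings.inference_substrate, "incomplete settings"
    # Schema check: every staged row carries the fields an eval harness would expect.
    required = {"input", "expected", "workflow"}
    for i, row in enumerate(sample_rows):
        missing = required - row.keys()
        assert not missing, f"row {i} missing fields: {missing}"
    print(f"onboarding validation passed on {len(sample_rows)} sample rows")


if __name__ == "__main__":
    # In a real environment you would call EngagementSettings.from_env();
    # literal values here keep the sketch runnable standalone.
    settings = EngagementSettings(
        training_host="train.internal.example",
        inference_substrate="serving.internal.example",
        secrets_path="vault://example/first-wedge",
    )
    run_onboarding_validation(settings, sample_rows=[
        {"input": "...", "expected": "...", "workflow": "first-wedge"},
    ])
```

Note that the secrets field holds a pointer into your own store rather than a literal credential; keeping it that way preserves the governance boundary described below even in the smallest smoke test.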

Platform improvements flow upstream. Customer-owned domain artefacts — datasets, evidence, weights, deployment state — stay in your environment unless a separate explicit agreement says otherwise.

We prefer to compete on substance. Everything you need to evaluate us is published.