Linguistic Agents Ltd

A new architecture for real multi-agent intelligence

Current multi-agent systems are too often built as message-passing between weakly modeled endpoints. A more serious design begins with structured environments, explicit agent models, and graph-based representation.

Environment = graph structure + small neural networks
Agent = environment + LLM
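The two definitions above can be sketched as plain data structures. This is an illustrative sketch only; every name here (Environment, Agent, the field layout) is an assumption made for the example, not an existing API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Environment:
    """Environment = graph structure + small neural networks."""
    nodes: Dict[str, dict]                     # entity id -> attributes
    edges: List[Tuple[str, str, str]]          # (source, relation, target)
    models: Dict[str, Callable] = field(default_factory=dict)  # small learned components

@dataclass
class Agent:
    """Agent = environment + LLM."""
    environment: Environment
    llm: Callable[[str], str]                  # any text-in, text-out model

# A toy instance: one task entity, one document entity, one relation.
env = Environment(
    nodes={"task": {"status": "open"}, "doc": {"kind": "spec"}},
    edges=[("task", "references", "doc")],
)
agent = Agent(environment=env, llm=lambda prompt: "stub reply")
```

The point of the split is visible even in the toy: the graph and the small learned components live in the environment, and the language model is attached on top rather than carrying the whole representation.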

This distinction is the basis of the architecture described here. The top-level language-capable model should not be forced to do everything: the smartest model orchestrates, while narrower learned systems do narrower work inside structured environments.

What is wrong today

Many systems called multi-agent are not multi-agent in any serious architectural sense. They are collections of endpoints exchanging messages: several prompts, several roles, several model calls, several tool interfaces. None of this, on its own, produces real agency.

Real multi-agent design begins only when agents are modeled as actual agents. An actual agent must carry a structured environment, a representation of the task, and explicit models of other agents. Without this, coordination remains shallow and fragile.

Current AI builders often multiply agents before defining agency.

What changes here

In this architecture, graph structure is not an implementation detail. It is part of the representational substrate itself. It makes it possible to represent entities, relations, task states, environments, and agent models explicitly rather than leaving everything dissolved inside raw text.
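What "explicit rather than dissolved in raw text" buys is queryability. A minimal sketch, assuming a triple-store view of the graph; the entity names and the query helper are illustrative, not part of any published system.

```python
from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]

# Entities, relations, and task states as explicit triples.
triples: List[Triple] = [
    ("report", "depends_on", "dataset"),
    ("dataset", "owned_by", "agent_B"),
    ("report", "state", "draft"),
]

def query(triples: List[Triple],
          subj: Optional[str] = None,
          rel: Optional[str] = None,
          obj: Optional[str] = None) -> List[Triple]:
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subj is None or t[0] == subj)
            and (rel is None or t[1] == rel)
            and (obj is None or t[2] == obj)]

# Direct structural question, no text parsing needed:
deps = query(triples, subj="report", rel="depends_on")
```

The same question asked of a paragraph of prose would require parsing and would fail silently on rewording; asked of the graph, it is a one-line pattern match.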

One practical engineering principle is central here: each agent should track not only the task, but also what other agents know and what they believe their fellow agents know about the task. Better coordination is the immediate gain; stronger verification is a later, equally important one.
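The two levels of tracking named above (what others know, and what others think others know) can be sketched as a small belief store. All names here are assumptions made for the example; the `knowledge_gap` helper shows the coordination payoff: it tells an agent what it still needs to communicate.

```python
class BeliefStore:
    """Hypothetical per-agent record of first- and second-order beliefs."""

    def __init__(self):
        self.facts = set()     # what this agent knows about the task
        self.beliefs = {}      # agent -> facts this agent thinks they know
        self.nested = {}       # agent -> {other -> facts this agent thinks
                               #           they attribute to the other}

    def observe(self, fact):
        self.facts.add(fact)

    def attribute(self, agent, fact):
        self.beliefs.setdefault(agent, set()).add(fact)

    def attribute_nested(self, agent, other, fact):
        self.nested.setdefault(agent, {}).setdefault(other, set()).add(fact)

    def knowledge_gap(self, agent):
        """Facts I know with no evidence that the given agent knows them."""
        return self.facts - self.beliefs.get(agent, set())

b = BeliefStore()
b.observe("deadline=Friday")
b.attribute("planner", "deadline=Friday")   # the planner was told this
b.observe("budget=low")                     # the planner was not told this
gap = b.knowledge_gap("planner")
```

Here `gap` contains exactly the facts worth sending to the planner next, which is the coordination gain; comparing `beliefs` against reported claims is the later verification gain.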

Reinforcement learning is also one of the pillars of this direction, including reinforcement learning applied to graph neural networks. If environments are structured, then the systems learning those environments must also be treated as structured learners, not only as text predictors.
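A structured learner, at its core, updates node states along graph edges rather than over flat token sequences. Below is a minimal message-passing step in plain Python: each node's state becomes the mean of its own state and its neighbours' states. This is only the structural skeleton a graph neural network learns weights around, not a trainable model, and the function name is an assumption for the sketch.

```python
def message_pass(states, edges):
    """One mean-aggregation step.

    states: node id -> state vector (list of floats)
    edges:  list of (src, dst) pairs; messages flow src -> dst
    """
    # Every node starts with a copy of its own state as a self-message.
    incoming = {node: [vec[:]] for node, vec in states.items()}
    for src, dst in edges:
        incoming[dst].append(states[src])
    # New state = element-wise mean of all incoming messages.
    new_states = {}
    for node, msgs in incoming.items():
        dim = len(msgs[0])
        new_states[node] = [sum(m[d] for m in msgs) / len(msgs)
                            for d in range(dim)]
    return new_states

states = {"a": [1.0], "b": [3.0]}
out = message_pass(states, [("a", "b")])   # b averages its own and a's state
```

A reinforcement-learning setup in this direction would wrap such a step in a trainable aggregation and use the resulting node states as the policy's input; the update rule over the graph stays the structural core.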
