What is Required from Enterprise Architecture in the Age of AI Agents?
So, you are planning to build a bunch of AI Agents that can operate independently. How do you really make them collaborate and support more complex workflows? The answer is simple: you need agent orchestration!
In Pekka’s article on AI Agents, we introduced the key concepts and ideas behind agents. There’s hype around the topic, but it’s not all hot air: LLM-based AI Agents solve the problem of service orchestration. It’s a piece of the Enterprise Architecture jigsaw puzzle that has long lacked an effective approach.
Orchestration refers to the coordinated management and automation of processes
Orchestration is required when complexity exceeds a certain threshold. More information flowing through the enterprise means more pressure for automation. Complex end-to-end processes require workflows that are supported by various services across IT and business systems. Orchestration aims to ensure that applications, data, infrastructure, and various functions work together efficiently.
Automation is great until you reach a high level of variety
Happy cases are just that. Happy. As the volume grows, the complexity grows with it. As the number of transactions goes up, the number of non-standard cases follows. In addition, organisations have a tendency to accumulate complexity as they grow. Expanding the offering with additional products or services adds branches to processes and requires new specialisation. The number of cases that do not breeze through the process network without human intervention keeps growing.
Automation requires system interoperability
Traditionally, interoperability has required systems to have compatible data models. It might be feasible to integrate two systems and match their data models, but in a more complex systems landscape, keeping everything in line with a canonical data model becomes an endless black hole.
Ideally, there is at least a loose linkage between the key identifiers in different systems. Often, even that is not too solid – especially if you have systems with overlapping functionalities or responsibilities. At best, you have an interlinked, distributed data model with its own quirks and gaps.
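To make the idea of loose linkage concrete, here is a minimal sketch in Python. The entity, system names and identifiers are made up for illustration; the point is simply that cross-references between systems exist, and that gaps are a normal part of the landscape.

```python
# A minimal sketch of a loosely interlinked data model: a cross-reference of
# key identifiers per business entity across systems. The system names and
# identifiers are hypothetical; None marks the gaps a real landscape accumulates.
customer_xref = {
    "customer:acme-oy": {
        "crm_id": "CRM-10482",
        "erp_id": "ERP-77-1031",
        "billing_id": None,  # gap: this customer was never migrated to billing
    },
}

def resolve(entity: str, system_key: str) -> str | None:
    """Follow the loose linkage; a None result is a gap, not necessarily an error."""
    return customer_xref.get(entity, {}).get(system_key)
```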
A chain of process steps from one system to another can easily be broken by incompatible or faulty data. If the landscape changes in an unmanaged way or there is a disturbance, may the Unholy Gods of PowerPoint have mercy on your SLAs.
The role of AI Agents is first to enable and then to orchestrate
This leads to the first point. The promise of LLMs lies in the way they handle poorly structured or completely unstructured data. Interconnections between systems no longer need to be as solid as they used to be. An AI-enabled integration component that “knows” the data models of both the source and target systems can act as a flexible adapter between them. Guardrails are naturally needed. We do not want these adapters to get too creative.
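What could such a flexible adapter look like? The sketch below is a simplified illustration, not a reference implementation: the target schema is invented, and call_llm stands in for whatever LLM client the organisation actually uses. The guardrail here is a hard schema check on the model’s output.

```python
import json

# Hypothetical target data model; in practice this would be the schema of the
# receiving system. call_llm stands in for whatever LLM client is actually used.
TARGET_SCHEMA = {"order_id": str, "customer_ref": str, "total_eur": float}

def call_llm(prompt: str) -> str:
    """Placeholder for the organisation's LLM client of choice."""
    raise NotImplementedError

def adapt(source_payload: dict, source_schema: dict) -> dict:
    """AI-enabled adapter: map a source record onto the target data model."""
    prompt = (
        "Map the source record to the target schema. Return JSON only.\n"
        f"Source schema: {source_schema}\n"
        f"Target fields: {list(TARGET_SCHEMA)}\n"
        f"Source record: {json.dumps(source_payload)}"
    )
    candidate = json.loads(call_llm(prompt))

    # Guardrails: the adapter is not allowed to get creative.
    unknown = set(candidate) - set(TARGET_SCHEMA)
    if unknown:
        raise ValueError(f"adapter invented fields: {unknown}")
    for name, expected_type in TARGET_SCHEMA.items():
        if name not in candidate or not isinstance(candidate[name], expected_type):
            raise ValueError(f"missing or mistyped field: {name}")
    return candidate
```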
The second point is the orchestration itself. It can be implemented with an AI Agent. This agent is responsible for passing requests between systems and following how the end-to-end transaction progresses.
For this, the orchestration agent must have information on the processes that execute the transaction and how to reach systems that support the processes. Think process descriptions and service catalogues. It must also have information on how the transaction has been handled so far. We can call it state transparency.
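As a rough illustration of those three ingredients, here is a sketch with a made-up order-to-cash process, a service catalogue of hypothetical endpoints, and a transaction state record. Real process descriptions and catalogues would of course live outside the agent’s code.

```python
from dataclasses import dataclass, field

# Hypothetical inputs for the orchestration agent: a process description,
# a service catalogue, and the state of the transaction handled so far.
PROCESS = {
    "order-to-cash": ["capture_order", "check_credit", "reserve_stock", "invoice"],
}

SERVICE_CATALOGUE = {
    "capture_order": "https://crm.example.com/api/orders",
    "check_credit": "https://erp.example.com/api/credit-checks",
    "reserve_stock": "https://wms.example.com/api/reservations",
    "invoice": "https://billing.example.com/api/invoices",
}

@dataclass
class TransactionState:
    """State transparency: what has happened to this transaction so far."""
    transaction_id: str
    process: str
    completed_steps: list[str] = field(default_factory=list)
    payload: dict = field(default_factory=dict)

    def next_step(self) -> str | None:
        remaining = [s for s in PROCESS[self.process] if s not in self.completed_steps]
        return remaining[0] if remaining else None
```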
Orchestration without state transparency risks excessive resource usage and magical AI loops
An orchestrator agent can keep separate bookkeeping on the process steps taken so far. This has two benefits. First, it provides visibility into how things really run. Second, it makes it possible to recognise endless loops between systems. As we know, LLMs are not infallible. Think of something like a hop count for process steps: too many steps, and the agent knows things are starting to smell funky.
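A hedged sketch of what that bookkeeping and hop count could look like is below; execute_step and the limit of twenty hops are assumptions made for illustration.

```python
from typing import Callable

MAX_HOPS = 20  # assumption: a sensible upper bound for one end-to-end transaction

def run_transaction(
    transaction_id: str,
    first_step: str,
    execute_step: Callable[[str], str | None],
) -> list[str]:
    """Drive a transaction step by step while keeping a journal of every hop.

    execute_step runs one step and returns the name of the next step,
    or None when the transaction is complete. It is a hypothetical callable.
    """
    journal: list[str] = []  # separate bookkeeping: what ran, and in what order
    step: str | None = first_step
    while step is not None:
        if len(journal) >= MAX_HOPS:
            # Hop count exceeded: too many steps, things start to smell funky.
            raise RuntimeError(f"{transaction_id}: possible loop, journal so far: {journal}")
        journal.append(step)
        step = execute_step(step)
    return journal
```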
As the orchestrator follows the execution through the enterprise landscape, it can adapt to different situations dynamically. If there is a problem with the data, it can try to fix it with the help of other systems. It can also communicate with human counterparts to clarify things. It’s very much how a human being would approach the situation.
Agents can be implemented on top of the existing landscape
The beauty of an orchestrator agent is that it can request processing from another AI agent just as well as from something far less glamorous, like a legacy system. No specialised smart adapter available for an ERP system from the stone age? No problem. The orchestrator can emulate one to some extent. More exotic interfaces and integration methods naturally benefit from specialised connectors that know exactly what to do and how to do it.
Systems that offer services like this typically have their own data storage or access to shared data services that act as connective tissue. That’s all good. The orchestrator is primarily interested in the control flows related to end-to-end process execution, not in the detailed behaviour of individual services. The aim should not be to break existing solutions or put additional pressure on changing them; the overall aim should be to add flexibility and tolerance for problems.
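One way to picture this is a thin connector layer: specialised connectors where they exist, and a generic fallback the orchestrator can use to emulate an adapter for the rest. Everything in the sketch below is illustrative; the registry and function names do not come from any particular product.

```python
from typing import Callable

# Specialised connectors where they exist; the orchestrator falls back to a
# generic, LLM-assisted connector for everything else. Names are illustrative.
SPECIALISED_CONNECTORS: dict[str, Callable[[dict], dict]] = {
    # "modern_wms": call_wms_api,  # plug in purpose-built adapters here
}

def generic_connector(system: str, request: dict) -> dict:
    """Fallback: emulate an adapter for a system without one, for example by
    driving its interface through an AI-enabled integration component."""
    raise NotImplementedError(f"no connector implemented for {system!r} in this sketch")

def call_system(system: str, request: dict) -> dict:
    """The orchestrator only cares about the control flow: send the request,
    get a response, decide the next step. The system's internals stay its own."""
    connector = SPECIALISED_CONNECTORS.get(system)
    if connector is not None:
        return connector(request)
    return generic_connector(system, request)
```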
If an execution step fails, a descriptive error message helps the orchestrator decide the next step. If that is not available, even a stack trace (i.e., very technical output from a failed attempt) might help the agent retry with better odds. If the situation is dire, a support ticket can be opened, or the situation can be escalated to a service manager. Wash, rinse, and repeat.
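Sketched below under the assumption of hypothetical execute, open_ticket and escalate callables: feed whatever diagnostics are available into a bounded retry, then open a ticket and escalate if the odds do not improve.

```python
import traceback
from typing import Callable

MAX_RETRIES = 2  # assumption: retry a failed step a couple of times before escalating

def run_step_with_recovery(
    step: str,
    execute: Callable[..., dict],
    open_ticket: Callable[[str, str], str],
    escalate: Callable[[str, str], None],
) -> dict | None:
    """Try a step, feed available diagnostics into each retry, escalate if needed.

    execute, open_ticket and escalate are hypothetical callables supplied by
    the surrounding agent implementation.
    """
    diagnostics = ""
    for _ in range(1 + MAX_RETRIES):
        try:
            return execute(step, hint=diagnostics)
        except Exception as exc:
            # Prefer a descriptive error message; fall back to the raw stack trace.
            diagnostics = str(exc) or traceback.format_exc()
    ticket_id = open_ticket(step, diagnostics)  # the situation is dire: open a ticket
    escalate(step, ticket_id)                   # ...and hand it to a service manager
    return None
```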
The requirements for an effective AI Agent architecture
It’s not the easiest of topics, and the viewpoint in this article is very EA-focused. But to summarise it in a handful of bullet points, the requirements are as follows:
Very loose coupling enabled by smart adapters that handle data conversions
State transparency for dynamic execution of processes and guardrails
Robust error handling and escalation mechanisms
Incremental deployment on top of existing systems
Are you ready to explore how AI Agent orchestration can transform your enterprise?