This is the first post in a three-part blog series on Autonomous Agents and Multi-Agent Systems. There is some really exciting work happening in these areas at present, across multiple sectors, and this series provides a gentle introduction to the topics.
Let’s get started with Autonomous Agents and use that to lay the foundation for Multi-Agent Systems in Part 2. Then, in Part 3, we will reflect on how the presence of Agents might affect the behaviour of a system, and focus on a potential future use case for Multi-Agent Systems.
Introduction
Autonomous Agents
Autonomous Agents and Multi-Agent Systems (MAS) are becoming more prevalent than ever before[1]. Everyone who browses the internet or is a passenger in a self-driving car is in some way having their experience influenced by Autonomous Agents. But what exactly are Agents? Conceptually speaking, an Agent is a computer system that takes action on behalf of its creator. One would be forgiven for asking: ‘isn’t that just an algorithm?’.

That question is exactly why we must be careful when assigning such definitions to Agents. An Agent is indeed capable of taking action on behalf of its creator, but that alone falls short of describing its true essence. Algorithms are procedures bound by strict rules that determine what to do, whereas Agents independently determine which action to take by considering the objectives of their creator. Furthermore, Agents are capable of learning from their environment with a view to improving their decision-making.
The above is a layman’s definition of Agents. To go further, we must delve into the detail. Ready?!
Autonomous Agents – Explored
General View
So far we have a conceptual notion of what an Agent is. This is somewhat useful, but falls short of a precise description. The issue is that there really is no common ground on a textbook definition of agents[2]; there would even appear to be some debate on the matter[3]. Fortunately, we have in our repertoire some near-concrete definitions from some of the best researchers in the field. For example, according to Franklin and Graesser, Agents are “reactive, autonomous, goal-oriented, temporally continuous, communicative and adaptive”[4, p. 6]. These attributes are specific enough to be useful in arriving at an accurate yet general description of an Agent.
Computer-Science View
From a computer science standpoint, Jennings et al. state [1, p. 2] that “an agent is a computer system, situated in some environment, that is capable of flexible autonomous action in order to meet its design objectives”. Great, so we have a computer system operating in an environment. So far, nothing distinguishes this from, say, an algorithm or any other piece of software. What stands out is the element of autonomy: the part about fulfilling its objectives with autonomous action. Let’s explore that a little further.
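To make Jennings et al.’s description a little more concrete, here is a minimal sketch in Python of the agent-environment picture it paints. The class and method names (Environment, Agent, observe, apply, decide) are purely illustrative rather than an established API: the point is simply that the agent is situated in an environment and chooses its own actions in pursuit of its design objectives.

```python
from abc import ABC, abstractmethod

class Environment(ABC):
    """The world the agent is situated in."""

    @abstractmethod
    def observe(self):
        """Return the current state of the environment."""

    @abstractmethod
    def apply(self, action):
        """Apply the agent's action and return the resulting state."""

class Agent(ABC):
    """A system that autonomously selects actions to meet its design objectives."""

    @abstractmethod
    def decide(self, observation):
        """Choose an action given what the agent currently perceives."""

def run(agent: Agent, env: Environment, steps: int = 100):
    """A basic sense-decide-act loop: the agent is never told what to do
    at each step, only what objective its decide() method should serve."""
    state = env.observe()
    for _ in range(steps):
        action = agent.decide(state)   # autonomous choice, not a fixed rule
        state = env.apply(action)
```

Nothing in run() dictates which action to take at each step; that choice lives entirely inside decide(), which is where the autonomy resides.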
Agent Autonomy
With major advances in technology over the last decade, there is an increasing demand for autonomous computation in various domains[5]. Monostori et al. suggest that agents, with their agility and ability to react to change, are the solution to this conundrum[6]. This trait of autonomy is common to a great many definitions of agents. For example, Spencer Jr. et al. view an agent as “a hardware or software-based computer system that enjoys the properties of autonomy, social ability, reactivity, and pro-activeness”[7, p. 364]. So although there is no single agreed-upon definition of agents, we can accept that autonomy is at the core of how agents operate.
Autonomous Learning Synergies
In terms of learning, we know that Agents are capable of learning from experience in their environment. This sounds remarkably similar to humans[8], and not by chance. Agents, like humans, fulfill objectives by undergoing a series of steps. Not just any steps, but steps which see them explore their environment and take actions (including random ones) that lead them closer to their goal. They then accrue rewards or penalties for those actions, and make subsequent decisions using the experience and knowledge gained. In Artificial Intelligence, the notion of Agents learning in this manner is known as Reinforcement Learning.
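As a rough illustration of that loop, below is a minimal sketch of tabular Q-learning with epsilon-greedy exploration, one of the simplest Reinforcement Learning methods. The environment interface (reset(), step(), actions) is assumed for the sake of the example rather than taken from any particular library: the agent occasionally acts at random to explore, accrues rewards or penalties, and updates its value estimates so that later decisions draw on accumulated experience.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning sketch. `env` is assumed to expose:
    reset() -> state, step(action) -> (next_state, reward, done),
    and a list of discrete `actions`."""
    q = defaultdict(float)  # q[(state, action)] -> estimated long-term value

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore: occasionally take a random action ...
            if random.random() < epsilon:
                action = random.choice(env.actions)
            # ... otherwise exploit what experience suggests is best.
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)  # reward or penalty accrues here

            # Learn from experience: nudge the estimate towards
            # the immediate reward plus discounted future value.
            best_next = 0.0 if done else max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

Over many episodes the table q comes to encode which actions tend to pay off in each state, which is precisely the experience and knowledge referred to above.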

Objectives for Who?
To wrap up the definition of an Agent, let’s focus on an aspect of their very nature: fulfilling their design objectives. Firstly, who do the objectives belong to? And what if the environment changes so much that we would like the Agent to focus on different objectives? These questions are somewhat rhetorical, but they highlight an important point: Agents are designed to achieve the objectives they have been set. Any subsequent moving of the goalposts by the human creates a disconnect between the human’s objectives and the Agent’s objectives.
Accounting For Human-Like AI
The above requires that we add a caveat to the Agent’s trait of autonomy. To the extent that Agents operate autonomously, their actions are guided by the objectives configured at instantiation. Those objectives do not reflect any changes the creator later desires, unless the creator intervenes. As it turns out, this is not a unique problem: AI as a whole is grappling with this issue, and Agents are no different. As Russell put it when suggesting a new definition for machine intelligence, “Machines are beneficial to the extent that their actions can be expected to achieve our objectives”[9, p. 13]. Perhaps one day Agents will be able to achieve our dynamic goals without intervention. Or maybe we just need to set better goals?
If you enjoyed this post, please leave a comment below. For more content and news, why not follow me on Twitter or subscribe to my YouTube channel? For direct contact, feel free to use the contact form.
References
- N. R. Jennings, K. Sycara, and M. Wooldridge, “A roadmap of agent research and development,” Autonomous Agents and Multi-Agent Systems, vol. 1, no. 1, pp. 7–38, 1998.
- C. M. Macal and M. J. North, “Tutorial on agent-based modeling and simulation,” in Proceedings of the Winter Simulation Conference, IEEE, 2005, 14 pp.
- N. R. Jennings, “On agent-based software engineering,” Artificial Intelligence, vol. 117, no. 2, pp. 277–296, 2000.
- S. Franklin and A. Graesser, “Is it an agent, or just a program?: A taxonomy for autonomous agents,” in International Workshop on Agent Theories, Architectures, and Languages, Springer, 1996, pp. 21–35.
- L. Busoniu, R. Babuska, and B. De Schutter, “A comprehensive survey of multiagent reinforcement learning,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 38, no. 2, pp. 156–172, 2008.
- L. Monostori, J. Váncza, and S. R. Kumara, “Agent-based systems for manufacturing,” CIRP Annals, vol. 55, no. 2, pp. 697–720, 2006.
- B. Spencer Jr, M. E. Ruiz-Sandoval, and N. Kurata, “Smart sensing technology: Opportunities and challenges,” Structural Control and Health Monitoring, vol. 11, no. 4, pp. 349–368, 2004.
- A. M. Graybiel, “Habits, rituals, and the evaluative brain,” Annu. Rev. Neurosci., vol. 31, pp. 359–387, 2008.
- S. Russell, Human compatible: Artificial intelligence and the problem of control. Penguin, 2019.