Some recent work at UWS was inspired by the real or imagined activities of a colony of foraging bees. Things got awkward when the model that was being used for experiments turned out to be different from the way bees actually forage. The supervisors said “all models are wrong, but some are useful” and of course wanted to explore the proposed approach, but the student largely lost interest. To my surprise, some readers felt that the study of unnatural systems was intrinsically repugnant, and that the story illustrated the need for science and religion to work hand in hand.
Suppose that a multi-agent system, with the task of looking for a particular sort of cluster in a large data set, observes a potential sub-cluster. We can imagine an automated step of spawning a new agent trained to look for further evidence of such a cluster. However, it is a bit fanciful to think of this new agent as a specially trained infant bee: bees may learn the habits of the nest, but they do not seem to receive the sort of individual instruction found in species with nuclear families. Other work at UWS examined the development of language in interacting groups of automata, and introducing a new word in that experiment is not unlike introducing a new agent in this one, since a new word implies a new subset of individuals who use it.
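The spawning step can be sketched in a few lines of Python. This is only an illustration: the `Agent` class, the fixed evidence threshold, and the one-dimensional "data set" are all hypothetical choices, not part of any system described here.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Region of the data set this agent watches (hypothetical parameters).
    centre: float
    radius: float
    evidence: list = field(default_factory=list)

    def observe(self, point: float):
        """Record a nearby point; if enough accumulate, spawn a child agent
        specialised to look for further evidence of the tighter sub-cluster."""
        if abs(point - self.centre) <= self.radius:
            self.evidence.append(point)
        # Crude trigger: five in-range points suggest a potential sub-cluster.
        if len(self.evidence) >= 5:
            mean = sum(self.evidence) / len(self.evidence)
            child = Agent(centre=mean, radius=self.radius / 2)
            self.evidence.clear()
            return child
        return None

# Usage: a parent agent scanning a stream of values.
parent = Agent(centre=0.0, radius=10.0)
children = []
for x in [1.0, 2.0, 1.5, 2.5, 1.8, 30.0, 2.2]:
    spawned = parent.observe(x)
    if spawned:
        children.append(spawned)
print(len(children))  # → 1: one child spawned after five in-range observations
```

The child inherits a narrower search region than its parent, which is the sense in which it is "trained" for the sub-cluster rather than individually instructed.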
Leaving aside the biological inspiration, could we imagine a commercial system with similar properties? Such a system might run in the data centre of a large supermarket or bank. If new agents can be spawned in this way, there would undoubtedly be issues of monitoring and control. A novel data cluster might trigger the generation of a huge number of new agents, appearing as unexpected additional activity in the system, and in a commercial data centre such an event could well lead to suspicions of an intrusion or a system fault.
If the autonomous agents are required to do a lot of status reporting to explain what they are up to, the additional monitoring traffic might create so many external messages as to call into question the wisdom of using agents at all. On the other hand, if the reporting traffic were cleverly aggregated within the swarm, a coherent report could be made to a monitor that a particular observation led to the deployment of 123,000 agents to investigate the possible existence of a new cluster, and that this activity had now ended. Some computing systems build this sort of observable surface over chaotic, Brownian internal motion, just as the apparently random behaviour of autonomous bees produces a regular-shaped nest. For example, network management systems aggregate event reports that have a shared cause, and (doubtless) Microsoft’s performance reporting systems do something similar.
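The aggregation idea can be sketched as follows: collapse per-agent chatter into one line per shared cause, reporting how many agents were deployed and whether the activity has ended. The message format, cause labels, and agent counts below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical raw status messages from individual agents: (agent_id, cause, event).
raw = (
    [(i, "possible new cluster #7", "deployed") for i in range(123_000)]
    + [(i, "possible new cluster #7", "finished") for i in range(123_000)]
)

def aggregate(reports):
    """Collapse per-agent status messages into one coherent line per shared cause,
    so the monitor sees a summary rather than a flood of external messages."""
    deployed = defaultdict(set)
    finished = defaultdict(set)
    for agent_id, cause, event in reports:
        (deployed if event == "deployed" else finished)[cause].add(agent_id)
    lines = []
    for cause, agents in deployed.items():
        # Activity has ended once every deployed agent has reported back.
        state = "ended" if agents <= finished[cause] else "ongoing"
        lines.append(f"{len(agents)} agents deployed to investigate {cause}; activity {state}")
    return lines

for line in aggregate(raw):
    print(line)
# → 123000 agents deployed to investigate possible new cluster #7; activity ended
```

In a real swarm the aggregation would itself be distributed (partial counts merged on the way to the monitor), but the observable surface is the same: one summary in place of hundreds of thousands of individual reports.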
In this way, investigating the circumstances and rules for the creation of a new agent leads to a new and interesting control problem: that of explaining the situation that has arisen, in terms that make sense to those who have not been tracking all the details…