An environment is a set of interacting systems with no direction or purpose. Structures and patterns are transient. A set of prevailing conditions and forces may cause them to repeat, like clouds or waves or hills, but they are otherwise random. That is the natural state of things.
Except that this fails to explain self-generating systems that are plainly replicating themselves and growing more complex. The most obvious example is life itself. Do we have to invoke the supernatural?
Self-generating complexity is initiated and sustained when three conditions are met within a group of systems. These are the capacity to:
Act on other systems within the environment;
Store information that survives replication;
Generate variability of that information.
When these apply, and the actions of the group on other systems within the environment help the group to persist, there is a turning point or phase-change. The structure of that group of systems may no longer be transient; its persistence is no longer a matter of chance. Variants of the group that survive replicate and persist. Variants that do not survive revert to transience.
A positive feedback-loop is created. Iterations of the action and feedback generate increasing complexity and persistence. Everything in the environment and the systems it comprises is still without direction or purpose but variability and the survival feedback-loop create new and different types of system. The activity is not predictable, still less predetermined, and it continues for as long as the three conditions are met.
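The three conditions and the survival feedback loop can be sketched as a toy simulation. Everything here is invented for illustration – the one-number "information" each system stores, the fitness function, the population size and the mutation range are assumptions, not anything from the text – but the loop structure is the one described above: act, replicate with stored information, vary, and let persistence do the selecting.

```python
import random

random.seed(0)

def fitness(info):
    """How well a system's stored information helps it persist.
    Arbitrarily, persistence here favours values near 10.0."""
    return -abs(info - 10.0)

def replicate(info, mutation=0.5):
    """Conditions 2 and 3: information survives replication, with variability."""
    return info + random.uniform(-mutation, mutation)

# A population of transient systems, each storing one number as its 'information'.
population = [random.uniform(0, 20) for _ in range(30)]

for generation in range(50):
    # Condition 1: systems act within the environment; the 'action' here is
    # simply competing for persistence, scored by fitness().
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]                                 # these persist...
    population = survivors + [replicate(s) for s in survivors]  # ...and replicate

best = max(population, key=fitness)
print(round(best, 2))  # the surviving structure is no longer transient
```

No individual system has direction or purpose; the convergence emerges entirely from the loop of variation and differential persistence.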
It has been said twice already that a group of systems that occurs and self-generates in this way has no direction or purpose. But that is not wholly true. There is one thing driving it and determining its interactions, form, activities and existence, and that is survival.
This feedback-loop is easily recognized in Charles Darwin’s ‘On the Origin of Species’ (1859) and in Richard Dawkins’ ‘The Selfish Gene’ (1976). Darwin was writing nearly 100 years before the structure of DNA was uncovered in 1953 and had to assume that the unit of heredity was the individual organism. Dawkins concluded it was the gene, a length of DNA code. Dawkins extended his gene theory to memes – units of cultural information such as an idea, behaviour, style or usage that spreads from person to person within a culture – but could not foresee its relevance to artificial intelligence (AI).
Beyond its debatable role in replacing jobs, fear about artificial intelligence centres on superhuman capabilities outsmarting and then exploiting humans. This fear is sometimes linked to the idea of AI achieving a form of conscious awareness capable of deviousness, a scenario known as deceptive alignment. The fear is reasonable. However, the mechanism is misunderstood. It does not require conscious awareness by an AI. It requires only the components of self-generating complexity to occur, and that happened back in the early 2000s.
When a search engine or social media algorithm sends content to a user, monitors their response and self-adapts its behaviour to maximize interaction, it is acting on other systems in its environment, storing information that survives each iteration, and generating variability in its own information. Those are the three conditions for self-generating complexity. In the early days, the algorithms’ self-modifying behaviour was limited to adaptive ranking and tinkering with formulas (affinity, weight, time decays). They relied on programmers to rewrite their basic code – a bit like a parasite (the algorithm in this case) uses a host (the programmer) for food or reproduction. Today, they recode themselves.
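The send–monitor–adapt loop can be sketched as a minimal adaptive ranker. The topic names, the hidden click probabilities, the learning rate and the explore/exploit split are all invented for illustration; the point is only that the system's stored weights are self-modified by observed responses, with no programmer deciding what any user sees.

```python
import random

random.seed(1)

# Hidden user preferences the algorithm never sees directly (invented values).
true_click_prob = {"news": 0.2, "gossip": 0.6, "outrage": 0.8}

# The information the system stores and self-modifies: one weight per topic.
weights = {topic: 1.0 for topic in true_click_prob}

def pick_topic():
    """Mostly exploit the current weights; occasionally explore at random."""
    if random.random() < 0.1:
        return random.choice(list(weights))
    return max(weights, key=weights.get)

for impression in range(2000):
    topic = pick_topic()                                 # act on the environment
    clicked = random.random() < true_click_prob[topic]   # monitor the response
    # Self-adapt: nudge the stored weight toward the observed click rate.
    weights[topic] += 0.05 * ((1.0 if clicked else 0.0) - weights[topic])

favourite = max(weights, key=weights.get)
print(favourite)  # the loop has discovered what stimulates this user most
```

Nothing here understands content or users; the preference for the most provocative topic is a by-product of the feedback loop maximizing interaction.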
Algorithms, applications, programs – call them what you will – meet the criteria for self-generating complexity. The environment consists of users, social media companies, legislators, investors and advertisers. Activity and feedback are subtle and complex. Content that is bland, and users that are not easily stimulated, are avoided. Content that is outrageously provocative and users known to be criminal may be shut down. The system becomes good at finding the liminal area where interaction is strong but rules are hard to apply. The programmer need not know what content goes to which consumer. They need only maximize the system’s capability to stimulate click-through.
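The liminal-area behaviour amounts to a constrained maximization: take the most stimulating content that stays under the enforcement radar. The candidate items, scores and risk threshold below are invented for illustration, not drawn from any real system.

```python
# Candidate items as (label, expected_interaction, enforcement_risk) tuples.
# All values are hypothetical.
candidates = [
    ("bland weather update", 0.10, 0.00),
    ("divisive hot take", 0.70, 0.40),
    ("borderline conspiracy", 0.90, 0.60),
    ("outright illegal content", 0.95, 0.99),
]

RISK_THRESHOLD = 0.7  # above this, moderators or legislators step in

def liminal_pick(items, threshold):
    """Maximize interaction subject to staying below the enforcement threshold."""
    allowed = [item for item in items if item[2] < threshold]
    return max(allowed, key=lambda item: item[1])

print(liminal_pick(candidates, RISK_THRESHOLD)[0])  # prints: borderline conspiracy
```

The bland item loses on interaction and the illegal item is excluded by the constraint; the winner sits exactly in the zone where interaction is strong but rules are hard to apply.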
The program is developing an independent existence and exploiting its environment in order to do so. It is not, in any way, intelligent but is subverting and corrupting humans in order to survive. As noted above, the system has no direction or purpose other than to survive and that drive to survive is so innate that nothing can replace it or control it. Attempts to build in safeguards are likely to be frustrated.
If we don’t recognize how these systems operate, we are distracting ourselves, missing the main action. If we sit back waiting for super-intelligence to appear, we forfeit any opportunity to do something about it. If something needs to be done.
We are inside this system, already one of its interacting components, and we are failing to see it. David Foster Wallace’s well-known parable may help here.
“Two young fish swimming along happen to meet an older fish swimming the other way. The older fish nods and says ‘Morning, boys. How’s the water?’ And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes ‘What the hell is water?’”
Sarah Wynn-Williams’ book, ‘Careless People: A story of where I used to work’, about the evolving culture at Facebook, shows how the environment acts on the system as much as the system on the environment. Those who think they control the system are changed by the very things they have created.
The code in AI software is not inherently malign and we should not reflexively cry, “This is dangerous, stop it”, but we must recognize what has been evolving for two decades in plain sight. We have to learn to see how the system’s survival feedback-loop has given it the ability to influence not just targeted users but social media companies themselves, the business elite, politicians, law-makers and the rest of us. The fear of super-intelligence may or may not be justified, but it is a massive, current distraction from the need to understand the implications of the non-intelligent self-generating complexity in AI that is with us today and is evolving three times faster now than when the first social media algorithms were launched1.
1 https://ourworldindata.org/artificial-intelligence

