A (Cybernetic) Musing: Control, Variety and Addiction


Ranulph Glanville



Medicine is a science of control and should, one might imagine, be a subject that is particularly open to cybernetic investigation and enlightenment, for cybernetics, according to the subtitle in Wiener's (1948) eponymous book, is concerned, at its heart, with control and communication.

In contrast, I have reported, on several occasions, on cases that originate in the cybernetic literature and with the grand old men (yes, I'm afraid, men) of cybernetics, who have discussed systems that are in essence uncontrollable. And there is, in my opinion, a medical predicament that is best dealt with not by controlling it but by giving up all attempts at control–which is the new element I introduce in this paper.

I will recapitulate the argument about the uncontrollable here (before discussing the medical condition), for it is powerful and interesting, and perhaps not as well known as it should be, giving, as it does, a tremendous sense of scale. I have explored it more fully in "A (Cybernetic) Musing: Variety and Creativity" (Glanville 1998), a paper published in Cybernetics and Human Knowing, an associated journal of the ASC, where the officially published version of this paper may also be found.

The reason to recapitulate is firstly to help those who are not familiar with the argument (whether made by earlier scholars, or with the twist that I add); secondly, the hope that a restatement, especially a brief one, will bring some greater clarity and terseness that may make the argument more memorable to those who have already read it in more extended form; and thirdly to more accurately reflect the keynote I delivered at the ASC meeting in Toronto, August 2004 (the slides of which are reproduced–without animations–on the ASC web site)!


Variety and Controllability

Late in Ross Ashby's life, he and Roger Conant made the point that "Every Good Regulator of a System must be a Model of that System" (Ashby and Conant, 1970). This is a point so obvious that you wonder why it needs to be made: indeed, Gordon Pask's response when I first mentioned the paper to him was a dismissive "Of course!"

Nevertheless, the point should be made. To regulate (control) a system well, the regulator must work through a model of that system. And a model of a system must model every salient aspect of that system one-to-one, otherwise it is not an adequate model (a model of something that leaves out the features you are interested in is useless).

The paper's title is, in effect, a restatement of Ashby's Law of Requisite Variety, the one law–according to the Principia Cybernetica web site–that everyone concerned with cybernetics and systems seems able to agree is both valid and important. This law, in Ashby's words (also quoted by Beer), states that:

"Only variety can destroy variety." (Introduction to Cybernetics 11/7)

This rather strange statement can be restated to help us better understand the arcane language, as in the Principia Cybernetica site, which quotes Umpleby defining Ashby's Law as follows:

"(1) the amount of appropriate selection that can be performed is limited by the amount of information available. (2) for appropriate regulation the variety in the regulator must be equal to or greater than the variety in the system being regulated. Or, the greater the variety within a system, the greater its ability to reduce variety in its environment through regulation. Only variety (in the regulator) can destroy variety (in the system being regulated)."

I have a version of my own that I believe is even more obvious, and hence clearer:

"For effective (regulatory) control, a controlling element (in a system) must have at least as many states as the element it is to control. Otherwise, any control exercised will be restrictive. "

So, for a system to remain regulated (controlled) it is vital that the controlling element (the model of the Ashby/Conant paper) has as much variety–whatever that may mean, and we'll explore that in just a moment–as the element it is to control. I don't think there's much that anyone would want to argue with there.

This leaves us with two questions. The first concerns what variety is, and the second, what we mean by control.

Let's start with the second. I have divided control, as I understand it, into two types. The first is the sort of control that allows us to stay upright when skiing, stable in the face of perturbations. The excellent example of skiing is due to Maturana. This form of control is associated, in my mind, with the notion of regulation. The second I call Hitler control: it's when control is used to enforce restriction, the reducing of choices, which is essentially the currency of fascism. These two are frequently confused, which often leads to misunderstandings. The ambiguity is almost always unfortunate. In this paper, I mean skiing control, when I use the word control, unless I specifically indicate Hitler control.

So what is variety? Since the term was coined by Ashby, let's go back to him.

The word variety, in relation to a set of distinguishable elements, will be used to mean either (i) the number of distinct elements or (ii) the logarithm to the base 2 of the number, the context indicating the sense used. (Introduction to Cybernetics 7/7)

Again, I have my own version:

a measure of the number of states a system might occupy or attain: this depends on how the observer describes the system.

This measure has often been likened to Shannon and Weaver's measure, "Information."

We may now conclude this section by stating the conditions under which a system is controllable:

A system is controllable when the variety of the controlling element is at least equal to the variety of the controlled element.
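This condition can be sketched in a few lines of code. The sketch below is a minimal illustration, not anything from the cybernetic literature: it assumes we measure variety simply as a count of distinguishable states (Ashby's first sense), with the logarithmic form included for completeness.

```python
from math import log2

def variety(n_states: int) -> float:
    """Ashby's variety in its logarithmic sense: the log to base 2 of
    the number of distinguishable states, i.e. variety in bits."""
    return log2(n_states)

def is_controllable(controller_states: int, controlled_states: int) -> bool:
    """The Law of Requisite Variety as a condition: a system is
    controllable only when the controlling element has at least as
    many states as the element it is to control."""
    return controller_states >= controlled_states
```

A controller with 8 states (3 bits of variety) can regulate an element with 8 states, but not one with 9: any control it exercised over the larger element would be restrictive.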


Variety and Uncontrollability

If systems have to satisfy certain quite stringent conditions in order to be considered controllable, it seems that there is a good chance they may also be uncontrollable.

Such a situation would, according to the Law of Requisite Variety, occur whenever the variety in the element to be controlled exceeded the variety in the controlling element.

It might seem that this should not be a problem, because, in the situation we find ourselves in nowadays when trying to build controlling elements, we can usually just increase the variety by making more. We have become used, as one example, to the ever increasing power of computing systems which seem to follow "Moore's Law." We might then think that it would be both easy and natural to increase the variety, either by adding on some more or by waiting a little for the generally available power to increase.

This assumption only holds true if Moore's Law continues to operate (and there is no reason to believe it will), and if the variety of the element to be controlled is finite.

But what does finite mean, under these circumstances? Is it a mathematical absolute, or is it an operational condition?

Ashby (1964) was very fond of a calculation produced by Hans J Bremermann (as was Beer (cited in Ashby 1964)). Bremermann (1962) realised that any unit of matter has a finite computing capacity while bound by the Laws of Physics. He determined the computational capacity of a gram of matter to be 10^47 bits/second (Bremermann's constant). Given that we know the mass of the earth, and its lifetime, we can thus calculate the computational capacity of our home planet, which turns out to be 10^73 bits (Beer DDDD). By extension, we can determine the computational capacity of the universe (where the mean temperature tells us how many hydrogen atoms there are in each cubic meter). Ashby shows us this is 10^100 bits (Ashby 1964). Ashby finalises this progression: "Everything material stops at 10^100."

It is possible to question these numbers but as Ashby again shows, the actual numbers make very little difference to the outcome when you are dealing with combinatorics and exponentials that are so big. And anyhow, the point is not the numbers given, but that there are such numbers, for these numbers show us that in the physical world there are limits: it is finite, and we have a sense of scale of its finitude.

There are many, many examples of systems that have the potential of a vastly greater number of states (variety) than such a universal computer (!) might have computed. Even a board containing a small array of only 20 x 20 lights, all of which can be either on or off, has a variety of 10^120 (i.e. the base-10 equivalent of the 2^400 pictures it could show), which is enormously greater than the computations of our universal computer. It has been calculated that the possible combinations of chemical elements would lead, theoretically, to a total of 10^20. Then a room with more than 5 materials would exceed the computations that our universal computer could have made, having a theoretical variety of 10^100.
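The arithmetic of the light board is easy to verify, since modern languages handle arbitrarily large integers. A quick sketch:

```python
# A 20 x 20 board of on/off lights can show 2^400 distinct pictures.
board_states = 2 ** (20 * 20)

# Ashby's upper bound on everything material: 10^100 bits.
universal_limit = 10 ** 100

# The little light board alone exceeds what the universe, acting as a
# computer over all time, could have computed: it is transcomputable.
print(board_states > universal_limit)   # True

# Written in base 10, the number of pictures has 121 digits,
# i.e. it is of the order of 10^120.
print(len(str(board_states)))           # 121
```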

These vast numbers that exceed the physical possibility of even theoretical computation are called "transcomputable." The point here is not to argue for exact numbers but to introduce a class of numbers: the transcomputables, numbers that lie beyond the realm of physical computation and are therefore not computable in the physical universe (as we understand it).

Thus, even some quite easily generated finite numbers cannot be computed and therefore cannot be embodied in a physical form. And that means that, if the variety in the element of a system to be controlled is transcomputable, we cannot make a controlling element that has the capacity to (skiing) control it, for the variety of the controlling element must be less than the variety of the controlled, and the Law of Requisite Variety is transgressed.

However, Hitler control remains possible.

Mike Robinson's outstanding paper (1979) applies this argument to illuminate the nature of behaviour in the classroom. He shows how the variety in the teacher is vastly smaller than that in the class as a whole, and how Victorian teachers used to manage this by restricting the variety of the individual class members and thus of the class as a whole (Hitler control). I hope that the outline I have given here shows the nature and substance of this whole area of argument: those who need more are referred to Glanville 1998. The important lesson is that many systems are simply not (skiing) controllable (at least without redefinition), and that even the advances of technology will never make them so.


Uncontrollable Systems

In this paper, my main interest lies in uncontrollability and its effect on our understanding of one particular medical condition–addiction. It may seem rather strange for a cybernetician, writing in a cybernetics journal, to write about the uncontrollable. After all, cybernetics is explicitly concerned with control. But to understand the range of a concept is as important as to accept and use it. And if I appear to preach a sort of anti-cybernetics, then so be it! God needs the Devil.

Before I approach addiction, however, let me recapitulate. In an earlier column (Glanville 1998) I wrote about one advantage of a system that was in principle uncontrollable (where the variety exceeded Ashby's delimitation of things material, based on Bremermann's constant). This advantage derives, as always, not from the system per se (what on earth is that?) but from the way we use this understanding to approach the system, imagining a benefit we can gain from a way of looking that derives from the analysis of uncontrollability (unmanageability). That is, from the situation.

The situation I have in mind is the following. When I am confronted by everything else, whatever that may be, I am confronted by a system of vastly larger variety than I can ever possibly hope to have, making it (by Ashby's "Law of Requisite Variety") uncontrollable. The simplest consideration shows this to be so: for I only need the smallest number of items to make up (my) everything else to reach a number that is already what Bremermann called "transcomputable"–it is beyond even the furthest flung reaches of what we can conceive might have been computed by the universe acting as a computer over all time. To be successful in my attempts to control "the universe" or indeed just a reasonable chunk of it (such as a country or even, as Robinson (1979) has so convincingly shown us, a classroom) requires that I switch from my skiing control to my Hitler control. What I do is dismiss variety: I rule it out, either by how I look at it (eg, using statistics, or an updated version of Lord Nelson's tactical blindness) or by restricting what is possible, so that variety is destroyed. At some point (possibly augmented by computers, henchmen or whatever), I may become able to control my universe.

What amazing destruction! Yet we have in the last century seen many individuals manage this at national scales–and it appears that some societies still exist that are restricted in this manner.


But there is an alternative: we can give up trying to control this universe. And the question then becomes, what do we gain from this? (Actually, the question might as well become why do we want to control: but that question is not for here.)

The answer to this question is a "variety sump." If we give up trying to control, we gain access to enormous amounts of variety which vastly exceed any variety we might ever aspire to. This gives us the richest pastures in which to graze: an endless source of variety offering us insights and understandings we would never have otherwise had. This is a source of (individual) novelty and renewal, and hence is a potential source of creativity. In this view, one way of increasing creativity is by stopping trying to control, to manage, and enjoying the insights that this rich store of variety can offer us.


The Uncontrollable

I summarise the position implicit in the arguments developed through Bremermann's constant etc., as follows.

1) there is a law concerning controllability: Ashby's Law of Requisite Variety

2) there are limits in principle to the controllable and many systems exceed these limits and so are beyond the controllable

3) being uncontrollable (unmanageable) may have positive features

But in much of what we do, we take no cognisance of this limitation, and still less of the possibility that the limitation may offer advantages. Perhaps cybernetics, itself, is in this respect first amongst equals, for it espouses control (though I have my doubts about this). However, a very close second, if indeed it is second, is medicine. The whole language in which (traditional western) medicine is discussed is the language of control. I will not argue this here, stating it as given for this paper.

Then we may ask if there are medical conditions that are, in principle, uncontrollable. And the answer is that there is at least one such major condition: addiction. Using cybernetic arguments about the controllable, I shall explain why, and how this is so.



What is involved in addiction? At one level, this is obvious. Addicts obsessively take part in behaviours that both leave them unsatisfied (always wanting more) and are judged to be damaging both to them and to others. The judgement is either social (as Ruesch and Bateson (1951) insisted all judgements of insanity are), or made by the individual him/herself (as therapy requires).

What is addiction? I imagine every reader knows this, but for clarity I characterise it thus:

Addiction is a condition in which the compulsion to repeat a damaging behaviour (usually at progressively more frequent intervals) consistently exceeds the ability to stop doing so.

It is a condition associated with control (amongst other things, for instance moral degeneration, spiritual poverty, physical deterioration, mental delusion).

The substance (or action) of choice is open: the effect and the mechanism are essentially the same.

In this paper, addiction in the form of alcoholism will be discussed. Alcoholism is probably the most common form of addiction in the West, and is taken, here, to represent all forms of addiction.

There are many studies and accounts of addiction. It is not my intention in this column to discuss their merits, nor do I claim the expertise to do so. This is a journal of cybernetics, and what I wish to do is to discuss part of the phenomenon known as addiction within a cybernetic framework–particularly in terms of the cybernetics of that central concept, control.

Of course, there have been cybernetic studies of addiction, too. Of these Bateson's (1971) is the best known. Bateson was deeply impressed by the success, and therefore the way of acting, of the Fellowship of Alcoholics Anonymous (AA), which he thought had created a cybernetic epistemology, and which remains to this day the one consistently successful route to release from addictive behaviour. He was particularly concerned with the concept of pride exploited by AA, as he understood it. Another is the work of Graham Barnes, who discusses alcoholism in his book "Justice, Love and Wisdom" (Barnes 1994) and in revised form in Barnes (1998), but whose greater contribution lies, I believe, in his recent doctoral thesis "Psychopathology of Psychotherapy (a Cybernetic Study)" (2002), in which he discusses (amongst other things) the problem of theorising without regard to the actuality: in the case of the work of Eric Berne, this led to the insistence that drunks who were sober should return to drinking. Barnes' general position is that a theory of psychotherapy produces a psychopathology, and not (as we have unquestioningly believed) the other way round. Other examinations have also claimed an origin and source of insight in cybernetics, including some of the stranger cults that make pretence to solving problems.


Addiction and Control

My approach here, however, comes from a different position and at a different angle. I am interested in how the notion of control can be seen actually to sustain, rather than cure, active addiction.

By far the best explanation of this process I have heard came from an alcoholic at an AA meeting in Helsinki. In my translation (and organisation) here is what this recovering alcoholic had to say:

1) If I am addicted to a substance or action, I cannot control my use of that substance (by definition).

2) Therefore, to get better (conquer my addiction), I must learn to–and practise–control.

3) The way to know I can now control my use of my addictive substance is to test my ability by trying to use alcohol in a controlled way.

4) But if I am an addict, I cannot (by definition) control my addiction: therefore I fail the test.

5) Back to step 2.
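The circularity of these five steps can be made vivid as a loop. The sketch below is purely illustrative (my own construction, not anything from AA or the literature): step 4 guarantees that, for an addict, every pass through the loop fails and falls back to step 2, however many attempts are made.

```python
def try_to_control(is_addict: bool, attempts: int = 5) -> bool:
    """The 'logic of addiction' as a control loop. Returns True if
    controlled use is ever demonstrated; for an addict, it never is."""
    for _ in range(attempts):
        # Step 2: learn and practise control.
        # Step 3: test the ability by attempting controlled use.
        if not is_addict:       # Step 4: an addict fails, by definition.
            return True
        # Step 5: failure; back to step 2.
    return False                # No number of attempts changes the outcome.
```

Increasing `attempts` changes nothing for an addict: the loop has no exit, which is the double bind described below.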

This account of what I think of as the logic of addiction makes the process extremely clear. The addict tries to control his/her habit. In order to prove that they have it under control, they must try to use their substance/behave in a controlled manner. However, if they are addicts, they cannot do this, and the act of trying just leads to further failure and increased despair. Belief in the controllability of (or the need to control) the addiction leads inevitably to a pathological, repetitive and deeply destructive behaviour. The continuing attempt to control the uncontrollable is not, as is commonly believed, a sign of weakness of mind, but of enormous courage and determination. The insanity of the addict comes from beating their head against the brick wall, and going on doing so: there is no cure, their experience tells them they fail whenever they try to demonstrate control, yet against all the odds they continue to try to control their behaviour. This is recursive circularity at its most vicious, a double bind.


Controlling Addiction

Above, I have shown how addiction traps the addict in terms of control: the logic of addiction. As Alcoholics Anonymous has long pronounced, if you are an alcoholic (addict) there is no such thing as controlled drinking (use). The programme they have designed–the Twelve Step Programme–which has been successfully adapted to so many different addictive behaviours, offers recovery from active involvement in the practice of alcoholism, but not cure: the alcoholic remains an alcoholic, but gives up the fight with alcohol by "surrendering," that is, by ceasing to try to control. The addict ceases to try to control his/her use of the chosen substance or behaviour of addiction, accepting that he/she has no power over this substance/behaviour, the use of which has made his/her life unmanageable. They cannot control their use of the substance, or behaviour. (To control it requires using it and testing their ability to control this use, which they have not been able to do, often for decades.) Their addiction is uncontrollable, and the way out is to give up the attempt to control, with its implied test. (Contrarily, this giving up of what hurts them is what most worries practising addicts. But that is another story.) This approach entirely contradicts that taken by medical doctors, psychiatrists and psychotherapists, and may help explain why AA has had such success in helping addicts (alcoholics) while the medical professions seem not to have.

There are, of course, arguments concerning the source of addiction: is addiction a result of nature or of nurture? There are arguments, too, about other approaches. For instance, can it be chemically controlled? (Antabuse is a drug which, when mixed with alcohol, leads to extreme nausea: but drinkers will take the pill, drink and throw up, leaving them ready to start serious (proper) drinking again.) There is the hope that therapy may release the sufferer from this condition, but results so far have been at best poor. Some addicts spontaneously give up their substance of choice (apparently this is quite common amongst heroin addicts). In one case, Eric Berne's Transactional Analysis, it has been proposed that the addict plays a game: the cure is to change the game played. However, Berne's advocacy of this course of action led to many thousands of TA followers who had found sobriety through AA going back to drinking, as a result of which many died: a triumph of theory over evidence, and a result that could be seen to place Berne at the head of the table of mass murderers. (See Barnes 2002 for a sober and very serious account.)

There is talk of gene splicing, but the attempt to look for the philosopher's stone of addiction, which would turn all addicts immediately into social users, has not yet been successful. Some believe it will be found eventually, but many addicts do not really want this. Their addictive nature, when understood and accepted, is too central to who they are, central to their personality and, in a surprisingly large proportion, their creativity and charm. What they might like is that some of those qualities that they lack in abundance (the ability to take responsibility rather than lie, blame and excuse, for instance) were more valued in our society than they are. Sadly, it seems the mood of society is quite the opposite, as we blame (and sue) more and more, though I hope I have managed to convince readers of this column that the ethical implications of second order cybernetics imply that we should be more willing to accept responsibility, and blame less (Glanville 1995). In this year of Gregory Bateson's centenary, it is also important to talk of trust: as he showed in "From Versailles to Cybernetics" (1972), politics became defiled when the deceits of war transported themselves, at Versailles, into the lies of the peace process. Since then, politicians have not held trust and honesty high, and have, in turn, not been trusted or seen as honest, themselves.


To Control or not to Control

This account of the controllable and the uncontrollable makes, at least, two points:

There are at least two types of control: skiing control and Hitler control, as they have been referred to here. I have indicated the difference, but not explored it in this paper.

Systems easily reach such levels of (potential) variety that they cannot be effectively controlled unless they are redefined, or (to their detriment) forced into moulds that pervert them.

Previously, I have reported two examples of uncontrollability. Following Robinson (1979), the classroom was presented as an example of a quite straightforward situation in which the variety of the controlling system (the teacher) cannot equal that of what is to be controlled (the classroom full of scholars). To control such a system requires either the sort of restrictive control (Hitler control) that removes the individual and the possibility of the individual making a contribution, or a completely different way of looking at things (such as small group work with the teacher as facilitating monitor).

I also suggested that it is possible to give up the notion of control so that excessive variety becomes a source of renewal and creativity: a way for the artist in us to thrive.

In the keynote I gave at the ASC, I discussed both these cases which I had previously introduced in this journal. Here, I have discussed at some length the previously undiscussed case of addiction, which is made worse by our attempts to control, and where gaining control does not lead to freedom. In the case of addiction (as well as that of the renewal of creativity), Hitler control is not an option because addiction is "overcome" by giving up control. Hitler control operates where we insist on retaining control even where to do so is absurd. In the case of addiction, attempts to retain control make the disease worse; in the case of the search for creativity, controlling removes at least one potential source of (personal) novelty. But in all these three cases, the result is that we limit the richness of whatever worlds we place ourselves in, albeit that, in the classroom, restriction is applied by an outside agency, whereas in the other cases it is applied by us, ourselves.

For cybernetics this may seem a strange outcome. In cybernetics, we are used to seeking to apply (skiing) control. Yet, as we have seen, this is not always possible or desirable. In this column I hope I have reminded us of some of the limits of the controllable, of when to control and when not to control. And I hope to have thrown a cybernetic light, perhaps more than just incidentally, on addiction, for it is a badly misunderstood condition, and if we can learn to better understand it we may not only treat it–and those who suffer from it–better, we may also learn something of life without control, and of the limits of controlling, which will help us lead richer lives and to face and understand some of the many things that remain elusive.

It is crucial that we know when (and when not) to use techniques and ideas. Control is not an idea that should be used everywhere, and we need to know when we should not use it. And, while the benefits of control are well known and widely accepted, the benefits of non-control are not. In this column I have tried to look at control not as something that is universally desirable, but through understanding its limits. I hope I may have thrown some light elsewhere, particularly on the nature of addiction, which turns out, in the way that AA handles it, to be like a Buddhist suffering and best handled that way. And I have attempted to show examples of how we may on occasion benefit from not trying to exercise control.


Ashby, WR (1956) An Introduction to Cybernetics, London, Chapman and Hall

Ashby, WR (1964) Introductory Remarks at a Panel Discussion in Mesarovic, M (ed) Views in General Systems Theory Chichester, John Wiley and Sons

Ashby, WR and Conant, R (1970) Every Good Regulator of a System Must be a Model of that System, Int J Syst Sc Vol 1

Barnes, G (1994). Justice, Love and Wisdom. Linking Psychotherapy to Second-Order Cybernetics, Zagreb, Medicinska Naklada.

Barnes, G (1998) Hypnosis of Alcoholism. HYPNOS, Vol xxv no 4

Barnes, G (2002) Psychopathology of Psychotherapy (a Cybernetic Study), unpublished PhD Thesis, Melbourne, RMIT University

Bateson, G (1971) The Cybernetics of "Self": a Theory of Alcoholism, Psychiatry vol 34 no 1

Bateson, G (1972) From Versailles to Cybernetics, in Bateson, G (1972) Steps to an Ecology of Mind, New York, Bantam

Beer, S (cited in Ashby 1964)

Bremermann, H (1962), Optimisation Through Evolution and Re-Combination in Yovits, M, Sawbi, G and Goldstein, G (eds) Self-Organising Systems Washington DC, Spartan Books

Glanville, R (1995) Chasing the Blame in Lasker, G (ed.) "Research on Progress–Advances in Interdisciplinary Studies on Systems Research and Cybernetics" Vol. 11, IIASSRC, Windsor, Ontario

Glanville, R (1998) A (Cybernetic) Musing: Variety and Creativity, Cybernetics and Human Knowing vol. 5 no 3

Glanville, R (2003) Behaving Well, in Smit, I, Wendell, W and Lasker, G (eds) "Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI," International Institute for Advanced Studies in Systems Research and Cybernetics, Windsor, Ontario

Glanville, R (2004) Control, Imagination and Addiction, keynote delivered at the American Society for Cybernetics Annual Meeting, Toronto, August: slides (without transitions) may be accessed at http://www.asc-cybernetics.org/2004/Glanville.pdf

Robinson, M (1979) Classroom Control: Some Cybernetic Comments on the Possible and the Impossible, Instructional Science vol 8

Ruesch, J and Bateson, G (1951). Communication. The Social Matrix of Psychiatry, New York, W.W. Norton

Wiener, N (1948) Cybernetics, Cambridge, MIT Press