
Second-Order Cybernetic Applications for Ethical Artificial Intelligence

The dream of artificial intelligence has been a goal of the Cybernetic Sciences since virtually the dawn of the Computer Age. A number of key approaches, such as brain modeling through neural networks, have been attempted, although scarcely enough detailed information exists about the brain to warrant so serious an endeavor. In actuality, the key to developing convincing artificial intelligence lies in an innate understanding of human communication in general. The preeminent test for AI devised by Alan Turing abstains from any direct measure of consciousness or perception, strictly targeting the communicative factors underlying general human language. If the symbolic attributes of human language can be convincingly simulated on the computer, then many decades of needless effort could potentially be cut from the neural-net or perceptual approaches. Indeed, precisely such a technical innovation has been devised, based upon the symbolic attributes underlying affective (or emotionally charged) language. Clear precedents already exist with respect to chess-playing computers, which prove particularly effective at modeling the symbolisms underlying such an abstract gaming format. The symbolic attributes of the English language tradition prove similarly comprehensive in scope, although several orders of magnitude more abstract. Certainly the economic focus of human society is mediated primarily through the symbolisms of human communication, specifying language as the most rational focal point for ongoing research.
Fortunately, a convenient shortcut through the daunting complexity of a direct language simulation has recently been proposed: focusing directly upon the motivational (or emotionally charged) aspects of language as its guiding principle, with the remaining bulk of value-neutral language filling in an accessory role. Indeed, as Robert Penn Warren once insightfully wrote: “What is man but his passions?” Along similar lines, most neuroscientists consider the mind/brain complex a vast motivational analyzer that enables the individual to flourish in harmony with the environment through the principles of instrumental conditioning. The current proposal establishes precisely such a foundation within conditioning theory; in this case, appetite in anticipation of rewards, or aversion in expectation of lenient treatment. Furthermore, when more abstract forms of affective language are viewed in terms of an ascending interactive hierarchy of meta-perspectives, as initially outlined by G. Bateson (1951), then the overall complement of the traditional groupings of virtues and values jumps neatly into focus.

Through a primary focus upon the affective aspects of human language, an economically feasible shortcut to the AI simulation of human communication finally appears within reach. A particular form of rational inquiry, traditionally known as inductive reasoning, gathers together the best available evidence, inferring the most probable conclusion from the sum total of facts. The conclusions achieved through inductive reasoning are never absolutely certain, for there always remains the nagging doubt that the verdict was made in error. Indeed, the uncertainties of the natural world give inductive reasoning the clear advantage in such a problem-solving mode. According to this inductive paradigm, each of us builds a mental model of our environment over a lifetime, forming a master template for our current experiences. When our expectations match our surroundings, we achieve a general sense of security. A mismatch, however, leads to a surprise reaction followed by investigative behavior.
In terms of AI, the computer would similarly be programmed with its own formal map of reality, employed in an analogous detection-and-matching mode. Any final conclusions would necessarily rely upon probability, although statistics is one of the computer’s computational strong points. It is here that the logistics of the power hierarchy rightfully enter the picture, serving as the elementary foundation for the first inductive system dealing with motivational logic. The logical attributes of the power hierarchy are programmed directly into the computer, providing a formal model of motivational behavior in general. The computer then employs this programming to infer the precise power level at issue within a given verbal interchange. On the basis of this initial determination, the computer further calculates its own power counter-maneuver, simulating motivation within the verbal interaction.
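The inference step just described can be sketched in Python. Everything below is hypothetical scaffolding: the level names, keyword sets, and scoring rule are illustrative stand-ins, not the actual schematic matrix proposed in the text.

```python
import re

# Hypothetical sketch of the power-level inference step: score an
# utterance against keyword templates for each level and return the
# most probable match. Levels and keywords are illustrative only.
POWER_LEVELS = {
    "guilt": {"sorry", "fault", "wrong"},
    "blame": {"accuse", "your", "misdeed"},
    "honor": {"respect", "duty", "pledge"},
}

def infer_power_level(utterance):
    """Return (best_level, match_probability) for an utterance."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {level: len(words & kw) / len(kw)
              for level, kw in POWER_LEVELS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

The fraction of template keywords present serves here as the crude "relative probability" of a match; a full system would weigh grammatical features as well.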
The systematic organization of the power hierarchy allows the construction of what are termed the schematic definitions. This crucial innovation spells out (in longhand) the precise location of each virtue or value within the linguistic matrix, while preserving the correct status of the respective authority and follower roles. Each definition is formally constructed along the lines of a two-stage sequential format: namely, (A) the formal recognition of the preliminary power maneuver, and (B) the current counter-maneuver now being employed and, hence, labeled. Take, for example, the representative schematic definition of justice reproduced below:

Previously, I (as your group authority) have honorably
acted in a guilty fashion towards you: countering
your (as PF) blameful treatment of me.

But now, you (as group representative) will
justly-blame me: overruling my (as GA)
honorable sense of guilt.

Here, the honorable sense of guilt expressed by the group authority represents the preliminary power maneuver, countered by the just-blaming strategy initiated by the group representative. The preliminary power perspective represents the one-down power maneuver, while the immediate power perspective is designated the one-up variety. Power leverage is accordingly achieved by rising to the one-up power status; namely, ascending to the next higher meta-perspectival level. The complete four-part listing of schematic definitions for the virtuous mode appears in Figs. 1B, 1C, 1D, and 1E. The instinctual terminology of operant conditioning dominates at the initial levels, replaced in due fashion by the virtues, values, and ideals of the higher levels. At each succeeding level, a new term is introduced (representing the power maneuver currently under consideration). Beginning with the group authority level, the initial terms begin to drop out of the definitions, freeing up space for the current terms under consideration (thereby maintaining a stable buffer of terms within the definitions). The respective authority and follower roles remain fixed throughout the entire span of the power hierarchy, systematically abbreviated roughly half of the time for the sake of brevity in non-critical (redundant) positions. Accordingly, PA stands for personal authority, PF for personal follower, GA for group authority, etc.
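One way to encode the two-stage format in software is a small record per definition. The field names below, and the abbreviation GR for group representative, are assumptions introduced for illustration rather than notation from the text.

```python
from dataclasses import dataclass

# Illustrative encoding of a schematic definition's two-stage format.
# Role abbreviations follow the text (PA, PF, GA); GR for the group
# representative is an assumed extension of the same scheme.

@dataclass
class Maneuver:
    role: str        # abbreviated authority/follower role
    term: str        # the virtue or value term being enacted
    overrules: str   # the preceding term this maneuver counters

@dataclass
class SchematicDefinition:
    name: str
    preliminary: Maneuver   # stage A: the one-down power maneuver
    current: Maneuver       # stage B: the one-up counter-maneuver

# The justice example, paraphrased from the definition quoted above.
justice = SchematicDefinition(
    name="justice",
    preliminary=Maneuver(role="GA", term="guilt", overrules="blame"),
    current=Maneuver(role="GR", term="just blame", overrules="guilt"),
)
```

Because each level's stage B counters the previous level's stage A, a full table of such records can be generated level-by-level, mirroring the "stable buffer of terms" described above.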
The systematic organization of the schematic definitions permits extreme efficiency in programming, each more advanced level building directly upon the one it supersedes (eliminating much of the associated redundancy). Through an elaborate matching procedure against the schematic definitions, the precise motivational level of a communication can accurately be determined (defined as the passive monitoring mode). This basic determination, in turn, serves as the basis for the production of a response repertoire tailored specifically to the computer (the true AI simulation mode).


All aspects considered, the most basic unit of input for the AI computer must necessarily be the sentence, for the schematic definitions are similarly given in the form of a dual sentence structure. The AI computer then employs parallel processing to determine the precise degree of correlation between the inputted (target) sentence and each respective schematic-definition template. This matching procedure directly scrutinizes each of the grammatical elements within a given sentence, attempting a statistical correlation with the specifics of a given schematic definition. For instance, the tense of the verb, the plurality or person of the noun/pronoun, etc., would all be scrutinized according to a preset diagnostic formula. Each processor would then determine the sum total of correct matches, ultimately yielding the relative probability of a match with a particular schematic definition. The processor yielding the highest overall rating is singled out as the best match by the master control unit (MCU). The MCU achieves this result through the aid of a feedback loop, the priority of the individual microprocessors reciprocally weighted on the basis of preceding determinations. Each schematic definition is composed of both past and present design components, establishing context as yet a further consideration in the matching procedure. A suitably advanced AI program would retain in long-term storage virtually every relevant conversation with a given person or context. On this contextual basis, the master control unit then selectively “weights” the individual processors according to a preset formula, taking full advantage of both past and present conversational dynamics. Furthermore, the computer would be exquisitely sensitive to variations in human personality (just as humans instinctively are), satisfying yet a further condition of Turing’s Test. This overall process and its basic flow chart schematic are reproduced in Fig. 1F, permitting an indication of the formal dynamics at issue. In concert with the comprehensive listing of schematic definitions comprising the heart of the matching procedure, a cursory overview of the mode of operation becomes increasingly apparent.
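A minimal sketch of that feedback weighting follows, assuming each processor reports a raw match probability that the MCU scales by a context weight updated after every determination. The boost and decay constants are arbitrary illustrative choices, not values from the text.

```python
# MCU feedback-loop sketch: weight raw processor scores by context,
# pick the winner, then reinforce the winner and decay the rest so
# preceding determinations bias future matching. Constants are
# illustrative assumptions.

def select_best_match(raw_scores, weights):
    """Return the definition with the highest weighted score."""
    weighted = {d: s * weights.get(d, 1.0) for d, s in raw_scores.items()}
    return max(weighted, key=weighted.get)

def update_weights(weights, winner, boost=1.2, decay=0.95):
    """Feedback: favor the level just detected, decay the others."""
    return {d: w * (boost if d == winner else decay)
            for d, w in weights.items()}
```

Persisting the weights between sessions would play the role of the long-term conversational storage described above.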


The ultimate implementation of ethical AI should rightfully be phased in through several distinct generations of development. The first-generation AI computer would excel in mostly routine types of monitoring applications: namely, security guard, night watchman, babysitter, etc., where a simple “sound-the-alarm” response would be sufficient. A standard stock repertoire would undoubtedly suffice, featuring brief inquiries such as who, what, when, where, why, elaborate further, etc. Situations requiring a more creative response repertoire would further necessitate the implementation of a true AI simulation mode aimed at permitting original sentence synthesis. The MCU would necessarily assume such a critical function, employing its determination of the current level of communication (presupposition) in order to activate the processor at the next higher level (entailment). This basic determination (along with the particulars of the interaction) is subsequently routed to a general-purpose sentence generator, fully equipped with the formal rules governing grammar, syntax, and phraseology. Because there is a broad range of strategies for expressing a given sentence meaning, a large number of potential sentences would necessarily be generated – not all equally suited to the task. Accordingly, each would be slated for subsequent feedback through the detection process, rated for its ability to best express the desired shade of meaning. Only the sentence with the highest overall rating would ultimately be selected for delivery to the speech output unit, allowing for a convincing simulation of motivational language in general. The flow chart depicting the operation of this process, as well as the supportive hardware, appears in multi-use Fig. 1F.
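The generate-and-rank cycle can be sketched as follows. The trivial paraphrase templates and the brevity-based rating are placeholder assumptions standing in for the real sentence generator and detection feedback.

```python
# Generate-and-rank sketch: produce many candidate sentences, feed
# each back through a rating step, and deliver only the top-rated
# one. Templates and the rating rule are placeholder assumptions.

def generate_candidates(meaning):
    return [
        f"I believe that {meaning}.",
        f"It seems to me that {meaning}.",
        f"{meaning[0].upper()}{meaning[1:]}.",
    ]

def rate_candidate(sentence):
    # Placeholder rating: prefer the most concise phrasing.
    return 1.0 / len(sentence)

def best_response(meaning):
    candidates = generate_candidates(meaning)
    return max(candidates, key=rate_candidate)
```

In the full proposal, the rating step would be the same schematic-definition matching procedure, closing the loop between detection and production.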
The sequence of steps comprising the operation is depicted using consecutively numbered arrows, each numeral specifying a step in the procedure depicted in the box to which the respective arrow points. This format was chosen (rather than numbering the individual boxes) because some of the boxes are assigned differing functions in the AI simulation mode.
The true AI simulation employs a more sophisticated style of response repertoire through the use of a general-purpose sentence generator. A large number of sentences are necessarily generated, ensuring that at least one is judged suitable following feedback through the matching procedure. The true AI agent effectively simulates an identity of its own, permitting a more natural style of interaction. Fig. 1F fully illustrates this most elaborate version, representing an enhanced modification of the basic passive monitoring mode through the addition of a sentence generator and associated pathways. The passive monitoring mode runs concurrently with the AI mode, the latter overruling the former only when a computer-generated response is called for. In the passive monitoring mode, the MCU predicts the next most probable response in an ongoing interaction, passing this information on to the matching procedure in order to increase monitoring accuracy. This information, in turn, can be used to synthesize responses identified as originating from the AI agent, a simulation encompassing the realm of affective language (an ethically speaking computer). A simulation of different modes of temperament is further feasible, particularly the most compatible personalities.
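The dual-mode arbitration can be sketched in a few lines: monitoring always runs, while the simulation mode overrules it only when a response is called for. The classification rule and the trigger condition are illustrative assumptions, not mechanisms specified in the text.

```python
# Dual-mode arbitration sketch: passive monitoring always classifies
# the utterance; the AI simulation mode supplies a reply only when a
# computer-generated response is called for. Rules are illustrative.

def monitor(utterance):
    """Passive mode: classify the utterance (placeholder rule)."""
    return "question" if utterance.rstrip().endswith("?") else "statement"

def arbitrate(utterance, respond):
    """Monitoring runs unconditionally; respond=True engages AI mode."""
    level = monitor(utterance)
    if respond and level == "question":
        return level, f"Could you elaborate on that {level}?"
    return level, None
```

Returning `None` when no reply is warranted keeps the two modes concurrent, with the monitoring result always available to the MCU.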
In conclusion, the general AI agent is technically defined as a recurrently structured matching procedure based upon the schematic definitions, a process dependent upon both the content and the context of the verbal interaction. In longer narratives (such as storytelling) the meaning is spread out over an extended sentence sequence, a circumstance not always correctly comprehended by the computer. This design shortcoming is remedied through the addition of supplementary expert systems attuned to such narrative complexities.
Attention span is a further factor sure to be enhanced within the modified AI format. The typical human mind accommodates only several tasks at a time, reminiscent of the von Neumann bottleneck. The parallel processing capabilities of the AI agent, however, certainly surpass such sequential limitations, reaching unheard-of degrees of versatility. Indeed, a suitably advanced AI computer could theoretically process numerous conversations simultaneously, maximizing available circuitry by making use of the lulls naturally occurring within general conversation. Here, multiple accounts could be accommodated, rated in terms of increasing urgency. Conversations requiring real-time parameters are assigned the highest priority, whereas more leisurely response rates are processed during free periods. This further entails a centralized CPU complex that connects end users through a standard user interface or the Internet. The bulk of processing would be transferred directly to the considerable resources of the Internet.
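The urgency scheduling described above behaves like a priority queue; a minimal sketch follows, assuming three illustrative urgency classes (the class names and values are not from the text).

```python
import heapq

# Urgency-based scheduling sketch: real-time conversations are served
# first; leisurely ones wait for free periods. The three urgency
# classes below are assumed for illustration.
REAL_TIME, INTERACTIVE, BATCH = 0, 1, 2

def service_order(conversations):
    """conversations: list of (urgency, conversation_id) pairs.
    Returns the ids in the order they would be serviced."""
    heap = list(conversations)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

A production system would service the queue continuously rather than draining it, but the ordering principle is the same.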
In terms of this speculative scenario, the comprehensive knowledge bases of the AI agent are distributed as open-source code over an extensive network of broadband servers. The end-user computer need only run a stripped-down version of the AI-MCU program, where the inference engine interfaces remotely with the web knowledge base on a real-time basis. The basic groundwork for this standardized database is already in the works with respect to the recently proposed Semantic Web. The brainchild of Tim Berners-Lee (the original innovator of the World Wide Web), the Semantic Web proposes to bypass the conceptual limitations of the human–web interface, aiming instead to implement a machine-to-machine version through standardizing the wealth of network information. In conjunction with further provisions for a built-in AI interface, the futuristic AI assistant could eventually become a feasible, ethical cybernetic reality. Of course, all of this proves equally applicable to a purely human sphere of influence as well.


Author is eligible for HvF Prize (aged under 35): NO
