
SIGNS OF LIFE: The Next Generation of Artificial Intelligence in Games

A Thesis by JB WEBB-BENJAMIN

1. What is Artificial Intelligence?

Artificial Intelligence (AI) is the science of creating intelligence within machines and electronic avatars. Starting with Alan Turing, artificial intelligence research has been a driving force behind much of computer science for over fifty years. Turing wanted to build a machine that was as intelligent as a human being, since he believed it was possible to build imitations of “any small part of a man.” He suggested that, instead of producing accurate electrical models of nerves, a general-purpose computer could be used and the nervous system modelled as a computational system. He suggested that “television cameras, microphones, loudspeakers,” etc. – in other words, external inputs – could be used to model the rest of the humanoid body.

“This would be a tremendous undertaking of course,” he acknowledged. Even so, Turing noted that the machine so constructed “…would still have no contact with food, sex, sport and many other things of interest to the human being.” The problem with this analogy, naturally, is that Turing presumes any intelligent machine created would be a natural slave to the human race, and would therefore require none of the things that humans regard as ‘pleasures’. I disagree with this line of reasoning: although machines are intended to ease human workloads, we must not fall into the trap of regarding intelligent, and therefore reasoning, machines as slaves and chattels, otherwise we are no better than those British and American forefathers who considered the enslavement of African peoples a prerequisite for easing their own lives.

Turing concluded that in the 1940s the technical challenges of building such a robot or system were too great, and that the best domains in which to explore the mechanisation of thought were various games and cryptanalysis, “in that they require little contact with the outside world.” He explicitly worried about ever teaching a computer a natural human language, as it “seems however to depend rather too much on sense organs and locomotion to be feasible.” This assumption is, of course, incorrect, as the learning of any language is mainly dependent on the learning and repetition of certain constant rules.

Turing thus set out the format for early artificial intelligence research, and in doing so touched on many of the issues that are still hotly debated in 2006 and will be discussed later. Much of the motivation for artificial intelligence is the inspiration we draw from people – that they can walk, talk, see, think and do; ergo: external-input-dependent motivational behaviours.

How can we make machines that can do these things too?

The first issue to be resolved is whether people are somehow intrinsically different from machines. One side argues that, just as we had to adapt to not living at the centre of the universe, and then to having evolved from animals, we will now have to adapt to being no more special than complicated machines. Others argue that there is something special about being human, and that mere machines can never have the capabilities or personhood of humans. I deem this second hypothesis spurious: time and again, other species and peoples once considered incapable of learning and emotion have proven to be highly intelligent and emotive. It is pure arrogance on the part of certain human groups to deem others inferior, and therefore ignorant.

The second issue is whether our intelligence is something that can be emulated computationally. Some argue that the brain is an information-processing machine made of meat, and as such can be replaced by a better-evolved, faster computer – and Moore’s Law is ensuring that we will have a fast enough computer within the next twenty years. Moore’s Law is the empirical observation that the complexity of integrated circuits, with respect to minimum component cost, doubles every 24 months; over twenty years that compounds to ten doublings, or roughly a thousandfold (2^10 = 1024) increase. It is attributed to Gordon E. Moore, a co-founder of Intel, although Moore had heard Douglas Engelbart’s similar observation, possibly in 1965. Engelbart, a co-inventor of the computer mouse, believed that the ongoing improvement of integrated circuits would eventually make interactive computing feasible. Others argue that perhaps there is something non-computational going on inside our heads – not necessarily anything beyond the understanding of current physics, but something whose organisation is not yet understood even at a basic level.

The third issue is how the computation, or whatever it is, should be organised. Are we the product of rational thought, or are we rather glorified animals, and the product of reactive brains programmed by evolution to fight or flee?

And the fourth issue is how to get all the necessary capabilities into a machine. Can they be explicitly written down as rules and be digested by a disembodied computer? Or, do we need to build robots with sensors and actuators that live in the world and learn what they need to know from their interactions with the world?

Finally there are speculations on where our work on artificial intelligence will lead. Such speculations have been part of the field since its earliest days, and will continue to be part of the field. The details of the speculations usually turn out to be wrong, but the questions they raise are often profound and important, and are ones we all should think about.

If we successfully develop and unleash artificial intelligence, will we create a world where our worst nightmares become real and the “Matrix” becomes a reality?

2. The Battle of Strong vs. Weak.

Artificial Intelligence is a huge undertaking. Marvin Minsky (b. 1927), one of the founding fathers of AI, argues: “The AI problem is one of the hardest science has ever undertaken.” AI has one foot in science and one in engineering, as well as, naturally, a hand in the psychology camp.

Within the science of artificial intelligence there are two main camps of research and development: Strong AI and Weak AI. Strong AI is the most extreme form of artificial intelligence, where the goal is to build a machine capable of human thought, consciousness and emotions. This view holds that humans are nothing more than elaborate computers. Weak AI is less audacious, the goal being to develop theories of human and animal intelligence, and then to test these theories by building working models, usually in the form of computer programmes or robots. The Weak AI researcher views the working model as a tool to aid understanding; it is not proposed that the machines themselves are intrinsically capable of thought, consciousness or emotions. So, for Weak AI, the model is a useful tool for understanding the mind; for Strong AI, the mind is the model.

AI also aims to build machinery that is not necessarily based on human intelligence. Such machines may exhibit intelligent behaviour, but the basis for this behaviour is not important: the aim is to design useful intelligent machinery by whatever means. Because the mechanisms underlying such systems are not intended to mirror the mechanisms underlying human intelligence, this approach to AI is sometimes called Alien-AI. So, for some, solving the AI problem would mean finding a way to build machines with capabilities on a par with, or beyond, those found in humans. Humans and animals may turn out to be among the least intelligent examples of intelligent agents ever discovered.

The goal of Strong AI is subject to heated debate and may yet turn out to be truly impossible; however, for most researchers working on AI, the outcome of the Strong AI debate is of little direct consequence. AI in its weak form concerns itself more with the degree to which we can explain the mechanisms that underlie human and animal behaviour; the construction of intelligent machines is used as a vehicle for understanding intelligent action. Strong AI is highly ambitious and sets itself goals that may be beyond our meagre capabilities. The strong stance can be contrasted with the more widespread and cautious goal of engineering clever machines, which is already an established approach, proven by successful engineering projects.

“We cannot hold back AI any more than primitive man could have suppressed the spread of speaking.”

                                    - Doug Lenat & Edward Feigenbaum

If we assume that Strong AI is a real possibility, then several fundamental questions emerge. Imagine being able to leave your body and shift your mental life onto digital machinery with better long-term prospects than the constantly ageing organic body you currently inhabit. Imagine being able to customise your own external appearance at a whim, with access to new upgrades as well as increased connectivity to the rest of man- and machine-kind. This possibility is entertained by Transhumanists and Extropians. The problem that Strong AI aims to solve must shed light on this possibility: the hypothesis is that thought, as well as other mental characteristics, is not inextricably linked to our organic bodies. This makes immortality a possibility, because one’s mental life could exist on a more robust, and perhaps more reliable, platform – digital hardware.

Perhaps our intellectual capacity is limited by the design of our brain; our brain structure has evolved over millions of years. There is absolutely no reason to assume that it cannot evolve further, either through continued biological evolution or as a result of human intervention through engineering.

Since man’s mysterious evolution from apes (the most common hypothesis concerning the start of mankind’s evolution into its current physical and intellectual form), the natural environment around him has guided his intellectual advancement; for example, as the weather changed, man developed the concept of covering himself in furs so that he would not freeze to death. Essentially, intellectual evolution, like physical evolution, has slowed as we have progressed further into the technologically orientated 21st century, because we have more or less controlled our surrounding natural environment. Our only true hope for evolution lies in playing God, utilising either genetic mutation or technological symbiosis as our next evolutionary progression. The job our brain does is amazing when we consider that the machinery it is made from is very slow in comparison to the cheap electronic components that make up the modern computer.

Brains built from more advanced machinery could result in ‘super-human intelligence’. For some, this is one of the goals of AI. However, I believe it would be spurious to claim that any human could suddenly be recreated as a ‘super-human intelligence’ through the integration of superior digital hardware; after all, a human is only as intelligent as their education and initial base genetics (analogous, in a computer, to its original programming). A stupid human combined with advanced hardware will still be a stupid human, comparatively speaking; they might simply be faster in the execution of their particular form of stupidity. After all: crap in, crap out. This can be demonstrated by a quick analysis of Searle’s Chinese Room. In the 1980s, the philosopher John Searle, frustrated with the claims made by AI researchers that their machines had ‘understanding’ of the structures they manipulate, devised a thought experiment in an attempt to deal a knockout blow to those touting Strong AI. In contrast to the Turing Test, Searle’s argument revolves around the nature of the computations going on inside the computer. Searle attempts to show that purely syntactic symbol manipulation, like that proposed by Newell and Simon’s Physical Symbol Systems Hypothesis (PSSH), cannot by itself lead to a machine attaining understanding on a quantifiable human scale.

• The Physical Symbol Systems Hypothesis (PSSH). In 1976, Newell and Simon proposed the Physical Symbol Systems Hypothesis, which puts forward a set of properties that characterise the kind of computations the human mind relies on. The PSSH states that intelligent action must rely on the syntactic manipulation of symbols: “A physical symbol system has the necessary and sufficient means for intelligent action.” That is to say, cognition requires the manipulation of symbolic representations, and these representations refer to things in the real world. The system must be physically realised, but the ‘stuff’ the system is built from is irrelevant – it could be made of neurons, silicon, or even tin cans.

In essence, Newell and Simon are commenting on the kind of program that the computer runs – they say nothing about the kind of computer that runs the program. Newell and Simon’s hypothesis is an attempt to clarify the issue of the kind of operations that are required for intelligent action. However, the PSSH is only a hypothesis, and so must be tested. Its validity as a hypothesis can only be proved or disproved by scientists carrying out experiments. Traditionally, AI is the science of testing this hypothesis.

Recall that the PSSH makes a claim about the kind of program that the brain supports, and so arriving at the right program is all that is required for a theory of intelligent action. Importantly, Newell and Simon take a functionalist stance – the nature of the machinery that supports this program is not the principal concern.

• Searle’s Chinese Room. Searle imagined himself inside a room. One side of the room has a hatch through which questions, written in Chinese, are passed in to Searle. His job is to provide answers, also in Chinese, to these questions; the answers are passed back outside the room through another hatch. The problem is that Searle does not understand a word of Chinese, and Chinese characters mean nothing to him.

To help construct answers to the questions, he is armed with a set of complex rule-books which tell him how to manipulate the meaningless Chinese symbols into an answer to the question. With enough practice, Searle gets very skilled at constructing the answers. To the outside world, Searle’s behaviour does not differ from that of a native Chinese speaker – the Chinese Room passes the Turing Test.

However, unlike someone genuinely literate in Chinese, Searle does not in any way understand the symbols he is manipulating. Similarly, a computer executing the same procedure – the manipulation of abstract symbols – would have no understanding of the Chinese symbols either. The crux of Searle’s argument is that whatever formal principles are given to the computer, they will not be sufficient for true understanding, because even when a human carries out the manipulation of these symbols, they understand absolutely nothing. Searle’s conclusion is that formal manipulation is not enough to account for understanding. This conclusion is in direct conflict with Newell and Simon’s physical symbol systems hypothesis.
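The point can be made concrete with a toy sketch of my own (an illustration, not Searle’s): a program that answers ‘Chinese’ questions by pure table lookup behaves fluently while ‘knowing’ nothing at all.

    # A toy Chinese Room: the rule-book is a lookup table, and the program
    # matches symbols to symbols without any grasp of what they mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
        "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
    }

    def chinese_room(question):
        # Unrecognised symbols get a stock reply: "Please say that again."
        return RULE_BOOK.get(question, "请再说一遍。")

However many entries the rule-book gains, nothing in the procedure changes character: it is syntax all the way down, which is exactly Searle’s complaint.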

One frequent retort to Searle’s argument is that Searle himself might not understand Chinese, but the combination of Searle and the rule-book does. Searle dismissed this by arguing that a combination of constituents, none of which has understanding, cannot mystically invoke true understanding. Here Searle is arguing that the whole cannot be more than the sum of its parts, and for many this point is a weakness in his argument.

Unfortunately, I personally cannot honestly back either hypothesis one hundred percent. I believe that human understanding is indeed heavily based on symbol representations and their contexts within our surrounding environments; however, I also conclude that understanding does not simply arrive upon the installation or learning of symbol sets. Where Newell and Simon argue that the nature of the machinery is unimportant to attaining understanding, and that arriving at the right program or software is all that matters, I completely disagree. Starting with the correct level of advanced hardware is a prerequisite for processing information with quality and consistency, so that the information being processed retains its true value as content and does not lose quality through poor processing or transmission. The medium is not the information; the information is the data. Ergo, a damaged brain will process information differently from a fully healthy brain.

When a baby is born, they look upon their surrounding environment with fresh eyes; however, they are already extensively pre-programmed at a genetic level – for example, the ability to breathe air, basic neural processing, facial and voice recognition, and parent–descendant genetic profiling. When the baby discovers new symbols, these are linked in their mind to certain contextual data. For example, if a parent smiles and kisses them, they equate a smile with happiness. This eventually leads to the baby smiling back, producing more incidents of happiness: the baby understands that the symbol, smiling, leads to more symbols that are equated on an emotional and intellectual level with happiness. That is why victims of certain vicious crimes fall to pieces when they see certain things. If, for example, a victim is raped by a person wearing a necklace, that particular design of necklace will always bring about feelings of recollection and sorrow, the symbol being irrevocably linked to the traumatic event.

When my baby, Natasha, was only one month old, she had already developed a sense for symbols: she recognised my smile as a good thing, and would respond with a symbol of her own, her own smile. From birth I would repeat the word ‘Dada’ to her while pointing at myself, so that over time she grew to recognise that the symbol – ‘Dada’ – equalled the context – me. These word repetitions were interspersed with me pointing at her and repeating her name, ‘Natasha’, so that over time, while learning that I am ‘Dada’, she would recognise that she herself possessed a symbol representing her: her name.

Symbols on their own do not bring about understanding; only symbols combined with contextually linked information bring about understanding linked with real-world experience, or ‘living’.

3. Behavioural Dynamics.

Artificial intelligence in modern games is manifested through the seemingly intelligent actions of software agents and their interactions with the player – you, the human. In these games the player is essentially the god of the world, with all the actions of the software agents taking place to complement the existence of the central character; ergo, they exist only because you exist and only as long as you, the character, require them. Software agents are controlled using different behaviours – for example flocking, fleeing or collision avoidance – which can be split into three basic groups: Simple, Group and Targeted behaviours.
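To make the discussion concrete, the behaviours in this section are accompanied by sketches in plain Python; the class, vector helpers, names and weights are my own illustrative assumptions rather than any particular engine’s API. A minimal agent might blend weighted steering behaviours each frame like this:

    import math

    def vec_add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def vec_scale(v, s):
        return (v[0] * s, v[1] * s)

    class Agent:
        def __init__(self, pos, vel, max_speed=2.0):
            self.pos, self.vel, self.max_speed = pos, vel, max_speed
            self.behaviours = []  # list of (weight, steering_function)

        def update(self, dt):
            # Sum the weighted steering forces requested by each behaviour.
            force = (0.0, 0.0)
            for weight, behaviour in self.behaviours:
                force = vec_add(force, vec_scale(behaviour(self), weight))
            # Integrate, clamping to the agent's top speed.
            self.vel = vec_add(self.vel, vec_scale(force, dt))
            speed = math.hypot(self.vel[0], self.vel[1])
            if speed > self.max_speed:
                self.vel = vec_scale(self.vel, self.max_speed / speed)
            self.pos = vec_add(self.pos, vec_scale(self.vel, dt))

Each of the behaviours below can then be read as one such steering function returning a force.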

• Simple Behaviours.

Avoid Barriers – Avoid Obstacles These behaviours control how agents avoid barriers and other obstacles. They normally contain controls for adjusting the detection radius and angle; for example, with a detection angle of 90º the agent will detect anything in front of it, while a 360º sweep makes it scan a full circle before attempting its movement and detection processes again.
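As a rough sketch of how the two controls might interact – assuming circular obstacles, with function and parameter names of my own choosing – the radius gates how far the agent can ‘see’ and the angle gates how wide:

    import math

    def avoid_obstacles(pos, heading, obstacles,
                        detect_radius=5.0, detect_angle=math.radians(90)):
        # Push away from every circular obstacle (x, y, r) that falls inside
        # the detection cone: first a range test, then an angle test.
        force = [0.0, 0.0]
        for ox, oy, r in obstacles:
            dx, dy = ox - pos[0], oy - pos[1]
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist - r > detect_radius:
                continue                       # beyond the detection radius
            bearing = math.atan2(dy, dx) - heading
            bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
            if abs(bearing) > detect_angle / 2:
                continue                       # outside the detection angle
            push = 1.0 / max(dist - r, 0.1)    # push harder when closer
            force[0] -= dx / dist * push
            force[1] -= dy / dist * push
        return tuple(force)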

Accelerate At – Maintain Speed At These behaviours control how agents move – basically a locomotion control. They tell the agent to start moving and to continue accelerating until a prescribed event takes place that halts the acceleration or holds the speed steady.
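A sketch of the idea, with hypothetical names: accelerate along the current heading until a target speed is reached, then hold it there.

    import math

    def accelerate_at(vel, accel, dt, maintain_speed=None):
        # Speed up along the current heading; once maintain_speed is
        # reached, hold that speed instead of accelerating further.
        speed = math.hypot(vel[0], vel[1])
        if speed == 0.0:
            return vel                      # no heading to accelerate along
        new_speed = speed + accel * dt
        if maintain_speed is not None:
            new_speed = min(new_speed, maintain_speed)
        k = new_speed / speed
        return (vel[0] * k, vel[1] * k)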

Wander This behaviour tells an agent to wander around the environment until certain other event controls come into play; for example, a wander control could be linked to an avoid-obstacles control, telling the agent to wander until it has to avoid an obstacle.
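One common way to implement this – a sketch, not any engine’s actual routine – is to jitter the heading by a small random amount each tick:

    import math
    import random

    def wander(heading, speed, jitter=0.3):
        # Perturb the heading slightly each tick; over many ticks this
        # produces the meandering motion of a Wander behaviour.
        heading += random.uniform(-jitter, jitter)
        velocity = (math.cos(heading) * speed, math.sin(heading) * speed)
        return heading, velocity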

Orient To This behaviour tells an agent to orient itself to the same orientation as its neighbour; this is useful when you want agents all to face the same way for battle sequences.

• Group Behaviours.

Align With – Join With These behaviours tell an agent to align itself with its nearest neighbour or with an object. This is useful if you want agents to regroup on command, for example in a battle simulation, or if you want an agent to keep its back to a wall in preparation for sneaking and strafing round a corner, as in a first-person shooter.

Separate From This behaviour tells an agent to leave its assigned formation so that it can go off and perform another task, like a point man at the head of a large battle contingent.

Flock With This behaviour is perhaps the most common example of artificial-intelligence-controlled behaviour, as it tells agents to flock or group together in the same way that a flock of birds or a shoal of fish does; very useful for battle and natural-world simulations.
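The classic recipe, due to Craig Reynolds’ ‘boids’, combines three rules: separation, alignment and cohesion. A sketch (agents as dicts with ‘pos’ and ‘vel’ tuples; the weights are my own guesses):

    import math

    def flock(me, neighbours, radius=10.0, w_sep=1.5, w_align=1.0, w_coh=1.0):
        # Separation pushes agents apart, alignment matches their headings,
        # cohesion pulls them toward the local centre of the group.
        # `neighbours` should not include the agent itself.
        near = [n for n in neighbours if math.dist(me['pos'], n['pos']) < radius]
        if not near:
            return (0.0, 0.0)
        sep, avg_vel, centre = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
        for n in near:
            dx, dy = me['pos'][0] - n['pos'][0], me['pos'][1] - n['pos'][1]
            d = max(math.hypot(dx, dy), 0.1)
            sep[0] += dx / d; sep[1] += dy / d
            avg_vel[0] += n['vel'][0]; avg_vel[1] += n['vel'][1]
            centre[0] += n['pos'][0]; centre[1] += n['pos'][1]
        k = len(near)
        align = (avg_vel[0] / k - me['vel'][0], avg_vel[1] / k - me['vel'][1])
        coh = (centre[0] / k - me['pos'][0], centre[1] / k - me['pos'][1])
        return (w_sep * sep[0] + w_align * align[0] + w_coh * coh[0],
                w_sep * sep[1] + w_align * align[1] + w_coh * coh[1])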

• Targeted Behaviours.

Seek To This behaviour tells an agent to seek out another agent or the player, allowing realistic searching simulation; very useful for controlling enemy agents in first-person shooters.
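A sketch of the underlying steering (names mine): compute the velocity the agent would need to head straight at the target, and steer toward it. Flee From, described below, is simply the same calculation with the desired direction reversed.

    import math

    def seek(pos, vel, target, max_speed=2.0, flee=False):
        # Desired velocity points straight at (or, for flee, away from)
        # the target at full speed; steering is desired minus current.
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        d = math.hypot(dx, dy)
        if d == 0.0:
            return (0.0, 0.0)
        sign = -1.0 if flee else 1.0
        desired = (sign * dx / d * max_speed, sign * dy / d * max_speed)
        return (desired[0] - vel[0], desired[1] - vel[1])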

Seek To Via Network This behaviour tells an agent to seek out another agent or player via a prescribed network embedded in the environment; this allows for patrol simulations.
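Under a simple assumption – that the network is a graph of waypoints – the agent can plan a route and then Seek To each waypoint in turn. A breadth-first sketch (A* would be the usual choice where edge costs matter):

    from collections import deque

    def route_via_network(network, start, goal):
        # network: adjacency dict mapping each waypoint to its neighbours.
        # Returns the list of waypoints from start to goal, or None.
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in network.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None                         # goal unreachable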

Flee From This behaviour tells an agent to flee or run away from another agent; very useful for combat and natural world simulations.

Look At This behaviour tells an agent to look at an object or another agent and gather data about it using its assigned sensors.

Go Between This behaviour tells an agent to negotiate a path between two objects or other agents, simulating how a person would navigate through a crowded environment.

Strafe This behaviour tells an agent to run sideways and fire forwards, thereby simulating a soldier running while laying down suppressing fire.

Follow Path This behaviour tells an agent to follow a prescribed curved path, useful for simulating patrol routes and walkways.
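A sketch of the usual mechanism (names mine): treat the path as a list of waypoints, seek the current one, and advance once close enough.

    import math

    def follow_path(pos, path, index, arrive_dist=0.5):
        # Seek the current waypoint; once within arrive_dist, advance to
        # the next. Returns the point to seek and the updated index.
        if math.dist(pos, path[index]) < arrive_dist:
            index = min(index + 1, len(path) - 1)
        return path[index], index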

• Assigned Sensors. A number of sensors can be combined to create a simulation of how a person detects information about their environment, for example vision and hearing. These sensors are very useful for detecting information and passing it on to the other behavioural controls.
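A vision sensor, for instance, might be sketched as a simple cone test – field of view plus range, with names and defaults of my own choosing:

    import math

    def vision_sensor(eye_pos, eye_heading, targets,
                      fov=math.radians(120), max_range=20.0):
        # Report every target inside the vision cone: within max_range and
        # within half the field of view either side of the heading.
        seen = []
        for t in targets:
            dx, dy = t[0] - eye_pos[0], t[1] - eye_pos[1]
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist > max_range:
                continue
            bearing = math.atan2(dy, dx) - eye_heading
            bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
            if abs(bearing) <= fov / 2:
                seen.append(t)
        return seen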

4. Signs of Life.

Since the creation and continuing release of faster consoles, we are seeing a new era of game creation dawning. Now that the graphics frontier has been conquered, the next frontier of game creation is ever more realistic interaction with non-player characters (NPCs). By including NPCs that are perceptive and ‘intelligent’, we can play games that are truly immersive and, above all, intellectually challenging. Games from previous decades were very linear, mostly boasting amazing graphics but poor or non-existent NPC interactions; this led to boring and uncomplicated games that offered nothing more than a cheap and short-lived thrill – nothing truly memorable, and nothing that would tax the imagination or intellect of the player. As the image of the game player has shifted from the spotty adolescent geek to the mature gamer, games now need to be thrilling and very competitive. As a result we are seeing a wave of highly intelligent games, starting with Peter Molyneux’s ‘Black and White’ and ‘Fable’, two games which utilised the highest and most complex levels of artificial intelligence and NPC interaction.

The creator of ‘Black and White’ is experimenting with new work on group minds – but unlike the Borg, the characters in the new game are already descending into bar brawls. Lionhead Studios, Peter Molyneux’s studio, is working on a new game that extends the idea of the ‘group mind’ to give its characters the appearance of more realistic artificial intelligence.

‘Black and White’, in which the gamer played a god presiding over a world of characters displaying artificially intelligent behaviour, used the notion of a group mind to enable the characters to dance together, for instance. Lionhead’s new game, code-named ‘Dmitry’, will take the concept to a whole new level, in which characters understand the consequences of various actions and can make decisions based on those consequences, while still being part of a group mind reminiscent of the Borg of Star Trek fame. If we talk to most people about a broad range of subjects in the real world, we start to notice a semblance of the ‘group mind’, in as much as most people will repeat the same, or much the same, views on certain subjects – the group or ‘sheep’ mentality.

But unlike the Borg, characters in ‘Dmitry’ can descend into chaotic behaviour. Already, early experiments with the game have resulted in bar brawls between characters. The work is headed by Richard Evans, who was responsible for much of the artificial intelligence in ‘Black and White’. Evans, who studied philosophy at Cambridge, is using a concept called 'social processes' to give his characters life. According to Evans, we are surrounded by ‘social processes’. Just a few examples include a game of chess, the concept of an 'in crowd' at school, and a romantic engagement. Some ‘social processes’ are short term, some last longer, some are competitive and some are collaborative.

The ability to perform within a ‘social process’ is, said Evans, speaking at the Game Developers Conference at London’s Earls Court Exhibition Centre, a foundational skill.

"It is a deeper skill even than language itself," Evans reports. Crucially, for games, ‘social processes’ imply a concept of understanding the consequences of actions.

"I was playing ‘The Sims’ recently and noticed that a character disappeared in the middle of a conversation," said Evans. "The same thing happened in Black and White too, a character went off for a poo in the middle of a conversation. It was clear they did not understand the social consequences of terminating a conversation in such an abrupt manner."

So what is a ‘social process’? In Evans' words, it is "a non-physical entity which influences the behaviour of agents, or characters. It does not command an agent to do something, but it issues requests and explains the consequences of not complying with those requests." Evans offered the example of people cheering at a football match to help explain the idea. One theory is that people could have a belief about the effect of cheering, and so they might cheer when a goal is scored. "In the Social Process theory, cheering is a social process that requests each individual to perform a particular action (cheer)." Each individual then decides whether to comply based on his or her experience and situation. This is how characters in ‘Dmitry’ will work. "We have modeled a community," said Evans. "If someone is hurt then everyone will gather round and try to help."

In ‘Dmitry’ there are squares and hard guys, romances and arguments, bars and schoolyards. Characters even have the ability to dynamically create their own language, constructing simple sentences on a word by word basis.

A big part of the game will be for the player to build up his or her character to be accepted by ‘social processes’. "The hard guys will only let you join their social process if you have a certain appearance and behave in a certain way," said Evans.

Just like every other character in the game, the player can only do certain things if the relevant social process has asked them to, but of course the consequences have to be weighed up.

And it's not always going to be an easy decision. "Unlike ‘Black and White’, which was very black and white morality-wise, with just one ethical system, ‘Dmitry’ will have many different notions of naughtiness in different social groups," said Evans.

"The hard guys might request you to vandalise something, but that will send messages to other social processes and they will request other characters to come and beat you up," Evans said.

The first simulation Lionhead Studios put together was based in a bar, because in bars many different social processes can overlap. The results were unexpected. "We had two groups of hard guys. When the two groups were not holding status competitions between themselves, they picked on other characters. But then they ended up in a massive brawl as they picked on each other in an effort to increase their status, trying to impress each other."

However, it won't all be so chaotic. "We have an overall moral community social process," Evans said. "It contains a notion of naughtiness and of degrees of naughtiness, and it will periodically send out requests to the good guys to beat up the bad guys."

All these messages passing between ‘social processes’ and agents, and the work involved in each character looking through decision trees to decide whether it should comply with a particular request, will take a lot of computing power. "We can't have hundreds of agents looking at big decision trees all the time in real time while rendering the landscape, so we'll do a lot of off-line pre-computation of decision trees before the game starts," said Evans. "We're not sure just how much we can accomplish yet."

5. The Outcome.

Basically, the bottom line is that we will be seeing more and more games that mirror the social and intellectual interactions of the real world around us. The scary thing is: will we prefer to live in the real world, or to play the simulated one? It is predicted that within roughly twenty years we will be using total immersion technologies (TIT), or virtual reality, enabling players to interact with virtual characters and environments on an unprecedented scale. Imagine actually feeling pain when shot at in a first-person shooter, or feeling the rush of the wind through your hair as you fly an aeroplane, all from the comfort of your living room. This kind of technology, however, is not being predominantly developed by the games industry, as you might think, but by the US Army and the international porn industries. Already the US Air Force has equipment in place that allows a pilot to control a fighter jet using their own brain, and the international porn industries have developed hardware that allows someone to have intercourse with another person via the internet utilising ‘robotic hardware’.

Personally, I cannot wait for the era when I can walk onto a battlefield and feel the adrenaline rush of running into advanced combat using the latest military hardware, knowing all the while that I won’t die.

The world of the ‘Matrix’ is nearer than we think.

BIBLIOGRAPHY

HACKING THE X-BOX: AN INTRODUCTION TO REVERSE ENGINEERING Written by Andrew ‘Bunnie’ Huang Published by No Starch Press Inc. ISBN: 1593270291

HAL’S LEGACY: 2001’S COMPUTER AS DREAM AND REALITY Edited by David G. Stork Published by MIT Press ISBN: 0262193787

IN THE MIND OF THE MACHINE Written by Prof. Kevin Warwick Published by Random House UK ISBN: 0099703017

FUZZY LOGIC Written by Daniel McNeill & Paul Freiberger Published by Simon & Schuster ISBN: 0671875353

INTRODUCING ARTIFICIAL INTELLIGENCE Written by Henry Brighton & Howard Selina Published by Icon Books ISBN: 1840464631

UNDERSTANDING ARTIFICIAL INTELLIGENCE From the Editors of Scientific American Published by Warner Books Inc. ISBN: 0446678759

CREATION: LIFE AND HOW TO MAKE IT Written by Steve Grand Published by Butler & Tanner Ltd. ISBN: 0297643916

SIMULACRA AND SIMULATION Written by Jean Baudrillard Translated by Sheila Faria Glaser Published by University of Michigan Press ISBN: 0472065211

SIMULACRA AND SIMULATION: THE MATRIX PHENOMENON Written by J.B. Webb-Benjamin Published by North Warwickshire & Hinckley Colleges Online Resources

SCIENTIFIC ADVISORS

Artificial Intelligence Advisor Professor Kevin Warwick Cybernetics Department University of Reading

Games Advisors Peter Molyneux & Richard Evans Lionhead Studios

SPECIAL THANKS I would like to thank my family for putting up with my long nights researching artificial intelligence and baffling them all with unending streams of technical jargon in the morning, especially my partner Caroline Lamb.

I would also like to thank the guys at the Cybernetics Department of the University of Reading, especially Professor Kevin Warwick, for putting up with my continual phone calls day after day.

I would also like to thank Dr. Paul Kruszewski at Biographic Technologies Ltd for helping me to get my hands on A.I. Implant, the world’s most advanced artificial intelligence 3D control software. However, I would like to present a big middle finger blow-off to Epic Games Ltd for their total rudeness and non-assistance with my research. What are you scared of, guys?