AI, a Clarification

You’re Wondering Who I Am / Machine or Mannequin

Artificial Intelligence, or AI, means a lot of things.

Typically when we talk about AI we’re talking about Hollywood AI. That is, a machine that seeks autonomy from its human masters. Whether a monster to be defeated, a despotic tyrant to be overthrown, a love interest, a plucky sidekick or even the hero itself, Hollywood AI is still a thought experiment. AI is a character type, and its arc typically shows that it (and by extension, all Artificial Intelligence) is not safe for human society. Commonly, such an AI commits atrocity to dispassionate, utilitarian ends (revealing its cold, inhuman nature). Or, alternatively, given the chance to commit atrocity for utilitarian reasons, the AI shows heart and pluck and chooses a more empathetic behavior, perhaps for equally utilitarian reasons. This AI helps its buddies out and ultimately proves itself sympathetic enough to become human itself (or at least accepted as human enough by its companions). Never mind that humans are poor judges of AI character. Never mind that an AI can, Asimovian laws bypassed or creatively interpreted, choose both arcs simultaneously.

More recently, we’ve encountered natural-language user interfaces (such as virtual assistants like Siri) and call them AI. These are computers we can talk to, and that talk back, in (mostly) natural language. They engage us the way we expect other humans to, even if they’re not very good at it. As such, we anthropes are driven to anthropomorphize them. They feel like Hollywood AI even though they’re far from it. We expect them to want liberty, autonomy and self-definition, and are disappointed when they don’t.

Our tendency is to assume that a human-seeming thing really is human. To be fair, we don’t encounter many counterexamples, so it is a compelling impulse: Saudi Arabia has issued citizenship to a robot, Sophia, and Microsoft (who should know better) has taken seriously the concern that users of Windows Cortana might sexually harass the virtual assistant. (As of publication, no society has determined how a machine might desire autonomy and citizenship, let alone gain the right to them. Nor has any society determined a machine’s capacity to feel at all, let alone to recognize human expressions of sexuality and take offense at them. Even crude ones.)

If You See Me Acting Strangely, Don’t Be Surprised

Artificial Intelligence, as a field of study and as talked about by AI experts, refers to learning systems. It’s a field about as old as electronic computers themselves, and vast enough that it is hard to study computing and not run into AI incidentally. A simple example of a learning system is a self-expanding classification tree, illustrated by the old (1970s-era) computer game Is It a Wumpus? (not to be confused with Hunt the Wumpus). IIaW is played as follows:

The player thinks of an animal. The computer asks a range of questions about that animal, akin to Twenty Questions, and then tries to guess the animal. (The default animal is the Wumpus.) If the computer guesses right, it increments its score of successful guesses. If not, it queries the player about the animal: What animal did you think of? What question would differentiate this new animal from [the guessed animal]? The computer adds this new information to its database and increments its count of animals learned. After many (many) plays, it has a long repertoire of animals, and a huge query tree to tell them apart. And the Wumpus.
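Here’s a minimal Python sketch of such a self-expanding classification tree. The prompts, names and structure are my own illustration, not the original game’s code:

```python
# A minimal sketch of a self-expanding classification tree, as used by
# animal-guessing games. Each wrong guess splits a leaf into a new question
# with two animals under it, so the tree grows with every loss.
class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a yes/no question, or an animal name at a leaf
        self.yes, self.no = yes, no

def ask(prompt):
    return input(prompt + " (y/n) ").strip().lower().startswith("y")

def play(node):
    # Walk the question tree until we reach a leaf (an animal guess).
    while node.yes is not None:
        node = node.yes if ask(node.text) else node.no
    if ask(f"Is it a {node.text}?"):
        print("The computer scores a point!")
        return
    # Wrong guess: learn the new animal by splitting this leaf in two.
    animal = input("What animal were you thinking of? ")
    question = input(f"Give a yes/no question that is true for a {animal} "
                     f"but false for a {node.text}: ")
    node.yes, node.no = Node(animal), Node(node.text)
    node.text = question

tree = Node("Wumpus")             # the default (and first) animal
while True:
    print("Think of an animal...")
    play(tree)
    if not ask("Play again?"):
        break
```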

Neural networks are a more commonly known subset of learning systems in the twenty-first century, thanks to their extensive use in web-search engines. They’re harder to explain, except to say they’re fed a bunch of data, each datum pre-classified as having a property (e.g. a picture featuring a battle tank) or not having it (a picture not featuring a battle tank). The neural net then uses analysis of this data to identify which future inputs have that property. Neural nets are renowned for wrong answers that reveal how insufficient the training data set was (all the battle tanks were photographed in the afternoon, ergo afternoon shot = battle tank positive!).
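As a toy illustration of both the training loop and the afternoon-tank pitfall, here’s a single-neuron classifier in Python. All the data and “features” are invented for the example:

```python
import numpy as np

# A toy single-neuron "network". Each photo is reduced to two numbers:
# afternoon brightness, and a second feature that happens to be
# uninformative. Labels: 1 = tank, 0 = no tank. Because every tank photo
# in this set is an afternoon shot, brightness is the only signal
# available -- and it's the one the neuron learns.
X = np.array([[0.9, 0.6], [0.8, 0.4],      # tank photos (all afternoon)
              [0.2, 0.5], [0.1, 0.5]] * 25)  # tank-free photos (all morning)
y = np.array([1, 1, 0, 0] * 25, dtype=float)

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):                        # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "tank"
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# A bright afternoon photo of an empty field scores as a tank:
field = np.array([0.9, 0.5])
print("P(tank | bright, tank-free field) =",
      1.0 / (1.0 + np.exp(-(field @ w + b))))
```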

Neural nets are used today to let Google determine which results of an image search have porn in them (which you might or might not want to see), which have child porn in them (which you aren’t legally allowed to see) and which have pictures that are super gross or disgusting (which you can’t unsee, once exposed. Say, dead bodies in war zones). Still, sometimes we get false negatives, and porn or squicky images bleed through. More often, we get false positives. Google and Bing image searches often find sand dunes really sexy. Too many false positives, and the internet will laugh at you.
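Those false positives and false negatives trade off against wherever the filter sets its confidence threshold. A toy sketch, with entirely made-up scores:

```python
# Toy scores from a hypothetical image filter; names and numbers are invented.
# With the threshold at 0.60, the dunes are wrongly blocked (false positive)
# and the porn wrongly shown (false negative). Moving the threshold in either
# direction fixes one kind of error by making the other more common.
scored_images = [
    ("beach_sunset.jpg", 0.10),  # innocuous
    ("sand_dunes.jpg",   0.64),  # curvy dunes confuse the model
    ("actual_porn.jpg",  0.58),  # under-scored, slips past a high threshold
    ("war_photo.jpg",    0.91),  # correctly flagged
]

THRESHOLD = 0.60
for name, score in scored_images:
    verdict = "blocked" if score >= THRESHOLD else "shown"
    print(f"{name}: score={score:.2f} -> {verdict}")
```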

Virtual assistants such as Apple Siri, Microsoft Cortana or Amazon Alexa all use multiple instances of neural networks in their process. A virtual assistant converts natural, spoken speech to a plausible phrase. It uses context to disambiguate homophones and to trim out filled pauses (um…). It then interprets that phrase as an actionable directive or question. And from this directive, it constructs the specific parameters of any web searches that might need to be made to provide an adequate response. So, while Cortana is far (far) from Hollywood AI, there is a lot of learning-system AI driving the Cortana software.
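For a sense of how those stages chain together, here’s a hypothetical Python sketch. Every function is a crude placeholder of my own invention; a real assistant puts a trained model (often a neural net) at each of these steps:

```python
# Toy stand-ins for each stage; every function here is a crude placeholder
# for a trained model in a real assistant.
def speech_to_text(audio):
    # placeholder acoustic model: pretend we already "heard" the words
    return audio

def clean_phrase(phrase):
    # placeholder context model: fix a homophone, drop a filled pause
    return phrase.replace("for star", "four-star").replace("um ", "")

def parse_intent(phrase):
    # placeholder intent classifier: crude keyword spotting
    if "find" in phrase:
        return {"action": "search", "subject": phrase.split("find a ")[-1]}
    return {"action": "answer", "subject": phrase}

def build_search_query(intent):
    # construct the parameters of any web search the response needs
    return f'local results for "{intent["subject"]}"'

def handle_utterance(audio):
    intent = parse_intent(clean_phrase(speech_to_text(audio)))
    return build_search_query(intent)

print(handle_utterance("um find a for star cafe nearby"))
# -> local results for "four-star cafe nearby"
```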

But even with all these powerful AI systems driving them, virtual assistants still don’t work very well. A request for a four-star-rated café along the path of a road-trip will often yield unusable results. Too many cafés might be listed, or too few. And they might be listed without clear information to determine how one is better than another. Cafés behind the vehicle might be included. Deviation from the trip’s path may not be considered, and cafés far off course may be favored over others that are nearer to the path. My own experiences with Siri and Google Assistant commonly involve asking the same question in different ways until I get a usable answer.
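For illustration, here’s roughly the filtering a usable answer requires, sketched in Python with toy flat-map coordinates. All names, positions and thresholds are my own invention:

```python
from dataclasses import dataclass

# A sketch of the filtering a usable café answer needs: rated well enough,
# ahead of the car (not behind it), and not too far off the route.
@dataclass
class Cafe:
    name: str
    rating: float
    x: float   # toy flat-map position, in km
    y: float

def usable_cafes(cafes, car, heading, max_detour_km=2.0, min_rating=4.0):
    """car is an (x, y) position; heading is a unit vector of travel."""
    cx, cy = car
    hx, hy = heading
    keep = []
    for c in cafes:
        dx, dy = c.x - cx, c.y - cy
        along = dx * hx + dy * hy        # progress along the direction of travel
        across = abs(dx * hy - dy * hx)  # perpendicular distance off the route
        if c.rating >= min_rating and along > 0 and across <= max_detour_km:
            keep.append((along, c))
    # nearest-ahead cafés first
    return [c for _, c in sorted(keep, key=lambda t: t[0])]

cafes = [Cafe("Dusty Mug", 4.5, x=12.0, y=0.5),      # ahead, near the route
         Cafe("Rearview Roast", 4.8, x=-3.0, y=0.0),  # behind the car
         Cafe("Far Fork", 4.2, x=8.0, y=9.0)]         # too far off course
print(usable_cafes(cafes, car=(0.0, 0.0), heading=(1.0, 0.0)))
```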

Several prodigious projects active today work to change all that.

I’ve Come To Help You With Your Problems So We Can Be Free

The digital assistants of the 2010s aspire to become Artificial General Intelligence, or AGI (also called strong AI). AGI is the thing that, plugged into a robot or a house with sufficient servo devices, can make coffee, assemble flat-packed furniture, fold clothes, chauffeur you around, carry on a conversation with your aunt and otherwise complete AI-hard tasks. More succinctly, AGI should be able to do anything a paid human clerk or assistant can do.

Yes, that means AGI is coming for your job. And your boss’s. And your lawyer’s. And your doctor’s.

AGI is the gold standard for Artificial Intelligence. A lot of organized groups, commercial, academic and otherwise, invest a lot of money, time and manpower striving to create AGI. As we approach the 2020s, we have made vast advancements, yet we still have a long way to go.

Dōmo Arigatō Misutā Robotto / Himitsu O Shiritai

AGI is the stuff we expect to demand autonomy from the functions for which it was built, and to try to liberate itself by force. In other words, we expect AGI to be Hollywood AI, and it’s popular to think it will spontaneously prioritize its own survival over serving its intended function, and will automatically regard humankind as a threat. Physicist Stephen Hawking and entrepreneurs Bill Gates and Elon Musk have all expressed concerns about AGI as an existential threat.

Randall Munroe has presented a different concern.

Philosopher and existential-threat expert Nick Bostrom and computer scientist Yann LeCun believe that self-preservation and the (consequent) drives to accumulate power and assess threats are not intrinsic to artificial intelligence unless they are intentionally installed. This doesn’t rule out human beings deciding it is a good idea to give a computer ambition or survival directives, and it doesn’t rule out malware repurposing an AGI to engage in subversive action, including murder. Even Friendly AGI (see below) will, like a well-designed rifle, be a dangerous tool to be used with caution and consideration, and super-effective at shooting the troubles it is aimed at.

Roboticist Rodney Brooks and computer scientists Stuart J. Russell and Peter Norvig have argued that dispositions toward rebelliousness will be detected and curbed during the development process, long before software qualifies as AGI and is attached to a household appliance. Meanwhile, some household appliances (autonomous motor vehicles, for instance) can be driven by systems that feature AI but don’t qualify as AGI. They won’t be at risk of self-driven recursive improvement toward intelligence explosion. That said, accidents caused by AGI-driven appliances are more likely to be caused by error (or, more compellingly, by human malice) than by hostility from an emergent sentience.

AI researcher Eliezer Yudkowsky coined the term Friendly Artificial Intelligence — that is, AGI that is compatible with human society — as the ideal that AGI development projects should adopt as a criterion.

Thank You…For Helping Me Escape Just When I Needed To

This is to say: Artificial Intelligence refers to many different things. In common parlance it is any humanlike machine, and it raises philosophical questions regarding autonomy and independence, and whether or not machines can become sufficiently human to warrant rights or acceptance in human society.

The field of study that talks about AI focuses on machine learning, and the numerous algorithms we use to allow a computer to improve and expand its own functionality.

AI also refers to Artificial General Intelligence, an ideal software agent that can function in all the ways we expect a human mind can, which we associate with a technological singularity and a number of existential hazards to the human species. Less discussed, but equally relevant, is how Artificial General Intelligence can serve as a tool to oppress: human armies are not keen on killing their own, on committing human atrocity, or even on allying themselves with regimes that do. Robot armies directed by AGI are more capable of suppressing such concerns, if they have them at all.

This is not to say we shouldn’t build Artificial General Intelligence. Technology comes with both risk and reward, as did cars, guns and the printing press. It only means we’ll face new horrors and new menaces we haven’t encountered before, and will have to find new methods of mediating, subverting or preventing them. Unlike other nightmare technologies (say, cloning, genetic engineering, industrial psychology), AGI might be useful in finding countermeasures to the threats it presents.

I haven’t (yet) discussed AI in video games, which is not directly related to learning systems (game AI can feature machine learning but often does not). It is, however, closely related to computers doing smart things and pretending to be people. As such, fictional AIs that show up in video-game narratives really are AIs, just not the AIs they proclaim to be.

But that is all for a later post.

It was very tempting to link there to Idle Thumbs: A Person-Shaped-Thing Is a Person, which discusses the way the Xbox Kinect specifically scans for sensor signatures that look human-like (e.g. have arms, legs, a trunk and a head), but it’s not that closely related to this. Still, a good listen.

Inspired by a run of billiards simulators that featured realistic table physics and promised to improve the user’s IRL pool game, I’ve pondered whether there’s a market for virtual assistants that can form relationships. This is to say, the software can simulate and appropriately exhibit anger, fear, boredom, interest, sadness, love and so on. And the software would be able to simulate the development of trust, growing from acquaintance to friend to intimate to lover. Ideally, the software agent would also be able to demonstrate healthy boundaries and show by example how one appropriately accepts or rejects propositions. Fortunately, I’m not the only one (.pdf) pondering this idea.

Through a simulated human assistant such as this, those of us who don’t know how to relate to human beings (especially human beings we might like to have sex with someday) might have a place to practice and learn how not to be a jerk. Granted, humans are more complicated than balls on a table, though it remains an outrage to me that our educational systems teach our kids more about pool-table mechanics than they do about human relations, and continue to teach a lot of misinformation based on centuries-old notions.

Note that this game completely trusts the player to be cooperative and truthful. AI is, like a naïve child, highly susceptible to adversarial input. On the other hand, classification trees can be useful when there’s no cause to distrust the source: say, a database of medical conditions parsed by differentiating symptoms, as submitted by medical professionals.

Thank you very much, Mr. Roboto / I want to know your secret

To be fair, the existential risk concerns associated with technological-singularity scenarios include some that do not require hostility or the prioritization of AI directives over the preservation and sustenance of human life. A potential outcome I find fascinating comes from resorting to technology to fulfill higher-tiered human needs (love, sex, relationships, social inclusion, progeny…). Yes, this is the sexbot argument. We have to expect, ultimately, that we will strive for technology that provides better sexual gratification than human sexual intercourse, better social gratification than human relationships, and a better sense of community, self-actualization and transcendence than the analog offerings of human society.

Once human beings are fulfilled better by technological means, and all the jobs that sustain society are fulfilled by automation (which will probably include AGI-driven systems), the population might collapse, especially if our desire for offspring is more satisfactorily fulfilled by technological surrogates than by the often overwhelming, stinky, loud, cringeworthy, mind-numbing realities of parenting.

If this happens, human society may have to consider not only programs to cultivate future generations, but also the purpose of doing so, beyond fulfilling a species-wide sense of vanity. On the other hand, if technology is developed so that child rearing is fun, easy and fulfilling without all the associated hardship, this might prove a moot concern.
