Lab 22 - I, robot?

It hasn't taken very long for computers to make great strides in the realm of artificial intelligence, or AI. In just a few decades, electronic computing machinery has gone from calculating answers to complex equations to powering robots that grow more lifelike every day. A look at the projects of the Massachusetts Institute of Technology's Personal Robots Group, or at Honda's ASIMO robot, shows that today's robot builders are putting increasingly sophisticated AI software into human-like robot packages, also called androids.

These two parts of an android, the software and the hardware, often don't reach the same level of sophistication in the same package. The software attempts to replicate our human judgments and responses, like IBM's Watson or Deep Blue. The hardware, on the other hand, attempts to replicate our bodies - either by focusing on mobility and functionality, like ASIMO, or by modeling our expressive features, like Hiroshi Ishiguro Laboratory's Geminoid F.

To consider:

Exploring the relationships between people and computers

In this twenty-second lab, we'll visit a web-based version of ELIZA and perform our own version of a Turing test, then compare existing androids to imagined ones, taking particular note of a phenomenon called the "uncanny valley". As you explore, review your answers from the To consider: section above. Have you changed your mind about any of them?

Learning more

  1. The interface is plain and the underlying script isn't very sophisticated, but the ELIZA we are about to visit lets us interact with it through a web browser. First, go to http://www-ai.ijs.si/eliza/eliza.html. You should see a simple form:

    • Taking your cues from ELIZA's message text, type your response into the text box below it and click on the Submit button.
    • Do you think you can make ELIZA upset or angry? What do you think would happen if you typed in "You stink!"? Try it.
    • Human beings often say the same thing multiple times in the same conversation. Sometimes we phrase things differently, but sometimes we say them exactly the same way. What happens when you put the same response in more than once?
    • Remember that ELIZA is meant to behave a little bit like a psychoanalyst, encouraging you to answer your own questions by asking more questions. Can you name particular characteristics of her responses that match that behavior, or give specific examples of that type of response?
    • Without knowing how the code for ELIZA is written, can you think of some ways you might be able to improve the responses the program gives? Do you think writing this kind of code would be easy or difficult to do? (A rough sketch of what this kind of pattern-matching code can look like appears after this list.)
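
    To make that last question more concrete, here is a minimal sketch, in Python, of the kind of keyword-and-pattern rules a program like ELIZA can be built from. The specific patterns and replies below are invented for illustration; they are not taken from ELIZA's actual script.

      import random
      import re

      # A minimal, illustrative sketch of ELIZA-style pattern matching.
      # The rules below are invented examples, not ELIZA's real script.
      RULES = [
          (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
          (r"i am (.*)",   ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
          (r"you (.*)",    ["Why do you think I {0}?", "We were talking about you, not me."]),
          (r"(.*)",        ["Please tell me more.", "How does that make you feel?"]),
      ]

      # Swap first- and second-person words so the echoed text reads naturally.
      REFLECTIONS = {"i": "you", "me": "you", "my": "your",
                     "you": "I", "your": "my", "am": "are"}

      def reflect(text):
          return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

      def respond(sentence):
          for pattern, replies in RULES:
              match = re.match(pattern, sentence.lower().strip())
              if match:
                  reply = random.choice(replies)
                  return reply.format(*(reflect(group) for group in match.groups()))
          return "Please go on."

      print(respond("I feel tired of my homework"))
      # Might print: "Why do you feel tired of your homework?"

    Notice that each pattern offers more than one possible reply and one is chosen at random - one simple way a program can avoid answering the exact same input with the exact same words every time. Adding more patterns, or better ways of reusing the user's own words, is one direction an "improved" ELIZA could take.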

  2. You might have more fun with a chatterbot named Cleverbot. Be aware that Cleverbot learns its responses from the people interacting with it, who may or may not be as polite as you are (a toy sketch of that learning idea appears after the questions below). Now that you know what to expect, why not try talking to Cleverbot?

    • In the Cleverbot interface, you'll click on the Think About It! button to enter your responses. Alternatively, you can have Cleverbot talk to itself by clicking on the Think For Me! button without entering anything in the text box. Use the Thoughts So Far button to see a detailed log of the whole interaction or to email it to yourself or someone else. Make a note of the first thing you enter into the text box.
    • How is talking to Cleverbot different from talking to ELIZA? Give specific examples. Which one do you think is more "lifelike"?
    • Why do you think Cleverbot's developers built Cleverbot?
    • Start a new conversation with Cleverbot, using the same thing you entered into the text box the first time. Did Cleverbot's response change? Do you think it would change if you tried the same thing a week (or a month) from now? Why or why not?
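
    Cleverbot's actual code isn't public, so we can't say exactly how it works, but the "learn from the people you talk to" idea can be sketched in a few lines. The toy bot below simply remembers what people said in reply to each of its own lines and reuses those replies later; the class name and behavior are invented for illustration only.

      import random
      from collections import defaultdict

      # Toy sketch of a chatterbot that learns replies from its users.
      # This only illustrates the general idea, not how Cleverbot really works.
      class LearningBot:
          def __init__(self):
              # Maps a line of conversation to replies people have given to it.
              self.learned = defaultdict(list)
              self.last_bot_line = None

          def talk(self, user_line):
              user_line = user_line.strip().lower()
              # Whatever the user just said becomes a possible future reply
              # to whatever the bot said last.
              if self.last_bot_line is not None:
                  self.learned[self.last_bot_line].append(user_line)
              replies = self.learned.get(user_line)
              reply = random.choice(replies) if replies else "what do you mean?"
              self.last_bot_line = reply
              return reply

      bot = LearningBot()
      print(bot.talk("hello"))     # Nothing learned yet, so: "what do you mean?"
      print(bot.talk("hi there"))  # Same fallback reply, but "hi there" is now
                                   # stored as a future reply to "what do you mean?"

    Because the pool of learned replies keeps growing as more people chat, the same opening line can draw a different answer next week than it does today - something worth keeping in mind for the last question above.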

  3. Time to visit the uncanny valley, a place where imitation is not the sincerest form of flattery. The uncanny valley gets its name from the dip in the graph you see below, from Wikimedia Commons user Smurrayinchester:

    One of the more recent and well-known examples of this phenomenon has nothing to do with androids, but it clearly illustrates the idea. The 2004 computer-animated film The Polar Express drew some bad reviews for its less-than-human animated cast. Watch The Polar Express' trailer and see for yourself:

    What do you think? Is there something "missing" from the animation? What would you change to make the animation less "uncanny"?

  4. Now let's take a look at some androids making a similar attempt at natural human interactions and expressions. First, here's Geminoid F in her acting debut:

    This is the HRP-4C, a "humanoid robot" from Japan's National Institute of Advanced Industrial Science and Technology, in her singing debut:

    Finally, here's the Geminoid DK, created by Professor Henrik Scharfe of Aalborg University, being put through his mechanical paces to form a variety of expressions in quick succession:

    After reviewing the videos, do you have a better sense of where the uncanny valley is for you? Are you able to put your finger on what things keep particular animated characters or humanoid robots in that valley? Which android do you think was the least uncanny? The most?

Moving on

If looking at these robots and androids has interested you in trying out robotics for yourself, you can start small by making your own bristlebot, thanks to the Bristlebot tutorial from Evil Mad Scientist Laboratories. If you'd like a more advanced look at robotics, check out some of the resources at letsmakerobots.com, an online community of robot builders with videos and tutorials to help you get started.