![photo 1](/uploads/posts/book/407344/Images/bck_fig_003.jpg)
About the Author
MARK STEPHEN MEADOWS IS AN AMERICAN AUTHOR, illustrator, inventor, and public speaker. He has also designed digital humans, built virtual worlds, founded three companies relating to artificial intelligence or virtual worlds, and is the coinventor of nearly a dozen applications or patents related to such technologies. This is his fourth book.
![photo 3](/uploads/posts/book/407344/Images/bck_fig_002.jpg)
Acknowledgments
THANKS TO DAVID FUGATE, OF LAUNCHBOOKS Literary Agency, who helped me get this book going (after it had been brewing for nearly ten years), and to Keith Wallman, editor at Lyons Press, for his excellent editorial help, patience, detailed comments, and for going far across enemy lines to help with rights acquisitions.
IN TOKYO, SINCERE THANKS TO EDO-SAN, OF PINK Tentacle, for his advice, many photos, and help in hunting robots. To Mariko Aoki, Hiro Hirukawa, and Kazuhito Yokoi of AIST, for their multiple demonstrations, kindness, goodwill, and generous attentions. To Hiroshi Ishiguro, for his tour of the Geminoid Project, his time, photos, and marvelous insights. Thanks for allowing me to throw the book at you. Thanks to Ilona Straub of ATR (especially for waiting for me at the train station), and to Masako Hayakawa and Masae Nakamura, for helping to arrange all of that. Thanks to R. Steven Rainwater, for his references and great advice; to Craig Mod, for hours of helpful discussions, and for helping me get found, again, in Tokyo; to Nami Katagiri, for her many lucky pointers and very broad surveys of Tokyo's information landscapes. Thank you to Jack Sagara, Kazu Okabe, and the entire Motoman team; to David Marx of neomarxisme.com, for Tokyo insights in Piss Alley (I owe you a drink, David); to Nemer Velazquez and Faisal Yazadi, for their tireless help in lining up the Cyberglove tour; to Yoshiyuki Sankai and Fumi Takeuchi of Cyberdyne, for their most excellent presentation, patience, photos, and diligence; and to Gen Kanai of Mozilla for his marvelous advice, understanding of the machine, and sushi tour.
IN PARIS, MANY THANKS TO ÉTIENNE AMATO for his research help and broad perspectives (here's another one for your shelf). Thanks to Bruno Maisonnier, Bastien Parent, Natanel Dukan, and Catherine Cebe for the happiness that was Aldebaran; to Stéphane Doncieux, for his tour of virtual flocking behaviors and learning systems; to Peter Ford Dominey, for his generous time on the phone. Thanks to Véronique Perdereau and Pierre-Yves Oudeyer, Jean-Arcady Meyer, and others in the Paris area for their significant advice, contributions, and the occasional glass of red; to Thierry Chaminade, for his great notes on the Uncanny Valley and marvelous perspectives; and to Marie-Françoise, for the desk space when it was most needed! Finally, a big thank-you to Amélie for her help with research, schedules, and, most of all, her valuable reflections on the questions surrounding morality, technology, safety, and quality of life.
IN LOS ANGELES, THANKS GO TO A. J. PERALTA FOR the magic time in the Magic Kingdom, and for becoming one of ASIMO's biggest fans with me. Thanks to Anne Balsamo, the Good Doctor and author of Technologies of the Gendered Body: Reading Cyborg Women, for her input, read-throughs, and contextual references. Thanks to Julian Bleecker (designer at Nokia and nearfuturelaboratory.com) for his helpful kickoff notes; to Souris Hong-Porretta, of hustlerofculture.com, for dialing me into alternate realities and letting me play with her Roomba; and to Carlos Battilana, for transitory lodging and steaks to keep me on my road (and for letting me run Souris's Roomba under his sofa).
IN THE INTERNETS, THANKS TO KIRSTY BOYLE OF karakuri.info, who was a ton of help on a ton of topics, in a tonly manner, and to David Levy for his feedback, ongoing dialogue, and opinions on our futures. Thanks to the entire Fried DNA Crew, for their ongoing ribbing and pointers to great robots; to Rich Walker, Jean-Baptiste Moreau, and Marina Levina of Boston Dynamics, for photos, interviews, and advice. Thanks to Karl F. MacDorman, for many hours of work together, and for being nice enough to field my belligerent questions; Dom Savage, for the book and hours of attempted resurrections; Phil Hall, for interviews with his chatbots; and Sandro Mussa-Ivaldi, for information on hybrots and emerging research. Thanks to John Nolan and Kevin Warwick, for insights, photos, and references; to William Kowalski, for structural advice; to James Auger, for the tour of his carnivorous robot zoo; and to the many dozens of other people whom I inadvertently neglected to list here.
![photo 4](/uploads/posts/book/407344/Images/bck_fig_001.jpg)
Appendix
Continued from chapter 7, on AI, language processing, and semantic scraping:
BUILDING AND USING A LANGUAGE ENGINE IS complicated work. The simplest versions of these tools (about as complex as two tin cans connected with a string) are chat engines. These have traditionally created linguistic interfaces by taking a phrase that's expected and telling the system to use a ready-made response. The rule goes, "If someone asks you Question A, give them Answer A."
Although these rules come in handy, they also break easily. What happens is that someone will ask the chatbot a question that wasn't anticipated, and the system just shrugs and says it doesn't understand. So chatbots don't work well because they're inflexible, have poor memory, and can't keep track of where the conversation is headed. They're brittle, prejudiced, and simplistic, and reflect their authors' subconscious (meaning they tend to have a suppressed and subconscious hunger for pizza and Catherine Zeta-Jones).
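A traditional chat engine of this sort can be sketched in a few lines. The phrases and responses below are invented for illustration; this is a minimal sketch of the pattern, not any particular product's code.

```python
# A minimal rule-based chat engine: each expected phrase maps to a
# ready-made response, and anything unanticipated gets a shrug.
RULES = {
    "hello": "Hi there!",
    "what is your name?": "I'm a chatbot.",
    "how are you?": "I'm fine, thanks.",
}

def respond(question: str) -> str:
    # Exact-match lookup: Question A in, Answer A out.
    return RULES.get(question.strip().lower(), "Sorry, I don't understand.")
```

Ask it `respond("hello")` and it answers happily; ask it `respond("What's your name?")` and it shrugs, because the wording differs ever so slightly from the stored rule. That is the brittleness in action.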
The best way to improve on the traditional chatbot approach is to loosen the reins so that the system isn't working with specific phrases, but with general concepts that have redundant cues in them to help build specific understandings. Language engines allow the system to determine the best response via semantic lookups, or means of linking a question with a response, and tying that question-response cycle into a larger context, both within the conversation and within a larger worldview of common sense as well.
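The looser, concept-based idea can be sketched as well. The concepts, cue words, and responses below are invented for illustration; the point is only that each response is tied to several redundant cues, and the response whose cues best overlap the question wins.

```python
import re

# A looser matcher: instead of exact phrases, each response is tied to a
# concept defined by several redundant cue words.
CONCEPTS = {
    "greeting": ({"hello", "hi", "hey", "morning"}, "Hi there!"),
    "identity": ({"name", "who", "called", "you"}, "I'm a chatbot."),
    "weather":  ({"weather", "rain", "sunny", "cold"}, "I can't see outside."),
}

def respond(question: str) -> str:
    # Tokenize crudely, ignoring punctuation and case.
    words = set(re.findall(r"[a-z']+", question.lower()))
    best, score = None, 0
    for cues, answer in CONCEPTS.values():
        # Score each concept by how many of its redundant cues appear.
        overlap = len(words & cues)
        if overlap > score:
            best, score = answer, overlap
    return best or "Sorry, I don't understand."
```

Now "Hey, who are you?" lands on the identity concept even though that exact phrase was never written down; the redundant cues do the work the rigid rule could not.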
The basic method (now a bit more complex than a telephone system) can be summed up in five steps.
First, you need tools to isolate grammar, parts of speech, word patterns, phrases, grammatical mood, turn-taking opportunities in the dialogue, and tools that look for repeating words in the text. Unfortunately, this makes the writing rather ugly to read.
(S (NP Sentences)
(VP get
(VP parsed and broken
(PP into
(NP (NP parts)
(PP of
(NP speech))))))
.)
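A toy version of the tagging behind a bracketed parse like the one above can be sketched with a small hand-built lexicon. Real parsers (such as those trained on the Penn Treebank) assign these labels statistically; the lexicon here is invented and covers only the example sentence.

```python
# A toy part-of-speech tagger: label each word by lookup in a tiny
# hand-built lexicon; unknown words get "?".
LEXICON = {
    "sentences": "NP", "get": "VP", "parsed": "VP", "broken": "VP",
    "into": "PP", "parts": "NP", "of": "PP", "speech": "NP",
}

def tag(sentence: str):
    # Strip the trailing period, lowercase, and look each word up.
    words = sentence.rstrip(".").lower().split()
    return [(w, LEXICON.get(w, "?")) for w in words]

print(tag("Sentences get parsed and broken into parts of speech."))
# e.g. [('sentences', 'NP'), ('get', 'VP'), ('parsed', 'VP'), ...]
```

Even this sketch shows why the output is ugly to read, and why unknown words (here, "and") are the first place such a system stumbles.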
Semantic-analysis tools review these bodies of text and catalog text strings, frequencies, and the probability (or likelihood) of words that appear, get reworded, and reappear. They also look for more general recurring patterns and try to build a context for it all.
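The frequency-and-probability side of this cataloging is simple to sketch: count each word's occurrences in a body of text and divide by the total. The sample sentence is invented.

```python
from collections import Counter

# Count word occurrences in a body of text and turn the counts
# into probabilities (relative frequencies).
def word_probabilities(text: str) -> dict:
    words = text.lower().split()
    counts = Counter(words)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

probs = word_probabilities("the robot saw the robot see the robot")
# Frequent words ("the", "robot") get high probabilities; rare ones, low.
```

A real semantic-analysis pipeline layers far more on top of this (rewordings, recurring patterns, context), but raw frequencies are where the cataloging starts.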
Second, we then take a large set of data (the bigger the better, in fact) and something that someone has written, hopefully in the first person. This gets scraped and then analyzed by these semantic tools to connect various text strings and generate patterns of ideas. What we're looking for is material that's specific to this individual: words that only they would use, or peculiar phrases that crop up from time to time. Something we've mentioned before as an author's fingerprint.
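One simple way to sketch such a fingerprint: compare the author's word frequencies against a large background corpus and keep the words the author uses unusually often. The texts, the ratio threshold, and the smoothing choice below are all illustrative assumptions, not a description of any particular engine.

```python
from collections import Counter

# Find words an author uses far more often than the background corpus does.
def fingerprint(author_text: str, background_text: str, ratio: float = 2.0):
    author = Counter(author_text.lower().split())
    background = Counter(background_text.lower().split())
    a_total = sum(author.values())
    b_total = sum(background.values())
    distinctive = {}
    for word, count in author.items():
        a_freq = count / a_total
        # Add-one smoothing so words absent from the background
        # don't divide by zero.
        b_freq = (background[word] + 1) / (b_total + len(background))
        if a_freq / b_freq >= ratio:
            distinctive[word] = a_freq / b_freq
    return distinctive
```

Words everyone uses score near 1.0 and drop out; the peculiar words that only this author favors score high and survive as the fingerprint.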