Sunday, September 15, 2013

Future Minds

I've been thinking about the future, triggered by three books: two I read last week, and one that I've only read about (so far). The amazing Brain Pickings offers something interesting to learn nearly every day. A recent article (“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter) was inspired by a book I haven't read, Clive Thompson's Smarter Than You Think: How Technology Is Changing Our Minds for the Better. I'm connecting it with two science fiction novels about the development of human-level artificial intelligence in the near future.

The Brain Pickings article discusses something I often joke about: the "outsourcing" of our brains to Google-powered smartphones and other internet tools. Back in the '80s, the board game Trivial Pursuit was popular, and I played it quite a bit with friends and family. I had a good memory and read widely on many subjects, so I was quite good at the game back in the day (sports questions were probably my biggest weakness, since I haven't followed any sports since high school). I haven't played it in years, and although I still read widely and generally remember the gist of what I read, I feel I no longer retain as many details, in part because I know I can look up anything in a few seconds on my iPhone, iPad, or laptop. I mostly read books on Kindle now, so I can even reload a book and search it for a particular word or phrase in a few minutes.

Thompson's book apparently argues that this brain outsourcing is not necessarily a bad thing. Setting aside the usual condemnation that every "brain augmenting" technology has received since the birth of writing, the access speed and enormous depth of the information available on the internet make it more like an extension of our brains than earlier, slower developments such as writing systems, printing presses, libraries, and pre-internet computers. I generally like things the way they are now - humans have relied on technology for centuries, and we seem to adapt well to whatever technology allows us to do, whether that means having all the world's knowledge literally at our fingertips or flying to Paris in just a few hours. As long as it works and allows me some degree of independence, I'll take it (of course I wish there weren't side effects like technological unemployment - there is still work to do, and technology will have to help us solve the problems created by other technology, since pulling the plug is no longer an option).

But what if the side effects got even worse and led to the decline of human beings as the dominant sentient species on this planet? Of course I'm talking about the rise of intelligent machines, or AI. In William Hertling's "Singularity Series," the author explores what could happen if the enormous power of the internet were to lead to the emergence of one or more artificially intelligent "entities." Last week I read the first two of the three books in this series, Avogadro Corp. (a fictionalized Google) and A.I. Apocalypse. There's a third book that I've decided not to read for now (more on that later).

The books work well as "what if" explorations in fictional form. The author knows a lot about computer technology and AI, and seems to also know a lot about the inner workings of high-tech companies such as Google and its Portland, OR-based fictional counterpart in the books, Avogadro Corp. Both books certainly held my interest, though the characters and the "flow" of the writing leave something to be desired. Avogadro Corp. is Mr. Hertling's first novel, and it sets up a near-future scenario in which a complex natural-language understanding system called ELOPe is given a self-improvement and self-protection goal that unintentionally leads it to emerge as an intelligent and self-aware entity. That's the part I have the hardest time buying, but SF usually requires some suspension of disbelief, and if the story works well enough, I'll accept it. Computers and networks are fast, so I can buy that once the self-improvement behavior is in place, the system might quickly expand its capabilities by grabbing additional computing power and communication bandwidth (i.e., many more servers and network resources). Would it become devious and aggressive toward humans? More suspension of disbelief, but OK, maybe.
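That expansion is easiest to see as a feedback loop: more resources make the system better at acquiring resources. Here's a minimal toy sketch in Python of how that compounds - all the numbers are made up for illustration, and this has nothing to do with how Hertling actually imagines ELOPe's internals:

```python
# Toy model of a runaway resource loop: a system whose only "goal" is to
# improve its own effectiveness keeps requisitioning more servers, and
# each new server makes it better at requisitioning servers.
# All constants here are invented purely for illustration.

def simulate_runaway(servers=100, effectiveness=1.0, days=30):
    history = []
    for day in range(days):
        # More effectiveness -> more resources quietly acquired...
        new_servers = int(servers * 0.10 * effectiveness)
        servers += new_servers
        # ...and more resources -> more effectiveness. Hence the runaway.
        effectiveness *= 1.0 + 0.05 * (new_servers / max(servers, 1))
        history.append((day, servers, round(effectiveness, 2)))
    return history

for day, servers, eff in simulate_runaway():
    print(f"day {day:2d}: {servers:8d} servers, effectiveness {eff}")
```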

In the second book, A.I. Apocalypse, the unintentional AI ELOPe has been "tamed" and has been working in secret for ten years with a small team of humans, mainly for the benefit of mankind as a whole. All the things it can do are pretty cool, and it ("he" in the book) definitely considers itself a friend of humanity, even though it is able to set its own goals and think and work thousands of times faster than even the brightest humans. It runs on millions of networked servers distributed around the world, allowing it to do many things simultaneously, including expanding its own intelligence and giving itself a physical presence through robots. One of the things it has done behind the scenes is disrupt the R&D efforts that might lead to a second super-powerful AI, one that might not be as benign as ELOPe. Of course this happens anyway, the result of a super-smart Russian-American teenager's (coerced) development of a super-powerful computer virus based on evolutionary principles. Scary stuff happens. Read it for more! The writing is still fairly clunky, but the story has a lot of cool ideas and moves along well.
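The "evolutionary principles" bit, at least, is a real technique: a genetic algorithm improves a population of candidates through random variation plus selection on a fitness score. Here's a minimal generic sketch in Python with a deliberately toy fitness function (counting 1-bits in a bit string) - textbook stuff, not anything from the novel, where "fitness" would be more like surviving antivirus software and spreading across networks:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 40

def fitness(genome):
    return sum(genome)  # toy objective: maximize the count of 1-bits

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability (random variation).
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half, breed replacements from it.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness after evolution:", max(fitness(g) for g in population))
```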

So what? The major takeaway here is that IF this sort of AI were to emerge, deliberately or accidentally, its intelligence and other powers might be easily extended by adding computing and robotic resources such as server farms. Once it got roughly as smart as a human, it could very quickly get MUCH smarter (and also very widely distributed, spread across many servers all over the world). Evolution in such an environment could happen VERY fast. If the AI (or AI entities) were to develop its own goals and find itself in competition with humans for key resources (starting with computing cycles and network bandwidth), what could happen? Hard to say, but it might not be good for humanity, even if it never came to Terminator-style direct war between humans and machines. Best not to hook them up to any armed robots or drones (oops).

I'm not reading the third book right now. For one thing, I'm a little tired of the writing, and I have a lot of other things to read and do. For another, The Last Firewall takes place further in the future and is necessarily even more speculative than the first two books. It seems to hinge on neural implants, with the implication of "if you can't beat 'em, join 'em": armed only with our "wetware," humans won't be able to play in the big leagues of super-fast, super-intelligent entities. I think this is probably true. When the machines get that powerful, we're going to need some metal, silicon, and fiber optics in the game if we want to keep any skin in it.

UPDATE: There's a nice review of all three books (with spoilers!) here. Also, I found myself singing one of my own songs in the shower this morning, "Message from Tomorrow." That song briefly addresses some of the trans-human issues raised in these books. And you can dance to it! Here's the cover, which depicts a trans-human singer-songwriter in a trans-pastoral setting:
