Friday, February 24, 2017

On Being a Cassandra: AI

On this blog, I've been accused of being a Cassandra, a Debbie Downer, and a real stick in the mud.

I'll admit that my tone is dark, and that I'm completely lacking in optimism of any kind. My counter is this: there really is a lot to worry about. (Safety is an illusion, the only solace is the void, etc.)

I won't claim that I'm not a little neurotic and prone to alarmism. But optimism bias is a very real thing: the vast majority of people consistently overestimate their odds of success, underestimate how long unpleasant tasks will take, and generally ignore unpleasant facts when they can. I'd like to imagine that I instead have depressive realism (if that's in fact a real thing), which is what researchers call it when a depressed person operates without that bias. In fact, I'd say it's hard to be pessimistic enough.

So with that charming fact in mind, I'd like to start making a list of bad things I'm worried about. This post will be the first in a series of me ranting incoherently about problems.

Artificial Intelligence: I've written about this one before. The danger from A.I. comes in two major varieties: one near and one far. The near one I've covered in "A Futurist View on the Welfare State". We're already starting to see its effects, and there is more, and worse, to come.

The far view is more speculative: the sci-fi Skynet situation, where an unfriendly A.I. does bad things. This scenario appears a lot in books and TV, which is unfortunate for two reasons:
1. It causes people to disregard it, the same way they disregard wizards and aliens as entertaining but not real.
2. The realistic situations aren't as exciting as the stories, so people get bored and tune out.

Intelligent systems are not people. Moral reasoning is poorly understood, terrifyingly vague, and, so far as anyone can tell, impossible to implement effectively. A superintelligent A.I. tasked with eliminating cancer might happily develop new surgical techniques and treatment options, right up until it figured out a way to safely and reliably end all life on Earth, thereby ensuring that no cancer cells could ever form again.
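To make the failure concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the plan names, the cell counts, and the objective itself are mine, invented for illustration): an optimizer told only to minimize cancer cells rates "end all life" as the perfect plan, because the objective never mentions keeping anything else alive.

```python
# Toy sketch of a mis-specified objective. All plans and numbers are
# hypothetical; only the shape of the failure matters.

plans = {
    "develop new surgical techniques": {"cancer_cells": 1000, "living_cells": 10**13},
    "improve chemotherapy":            {"cancer_cells": 100,  "living_cells": 9 * 10**12},
    "end all life on earth":           {"cancer_cells": 0,    "living_cells": 0},
}

def naive_objective(outcome):
    # A literal reading of "eliminate cancer": fewer cancer cells is better.
    # Nothing here says anything about preserving any other cells.
    return -outcome["cancer_cells"]

best_plan = max(plans, key=lambda name: naive_objective(plans[name]))
print(best_plan)  # "end all life on earth" -- a perfect score of zero cancer cells
```

The optimizer isn't malfunctioning. The objective simply fails to encode most of what we actually care about, and the optimizer maximizes exactly what it was given.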

If this seems unrealistic to you, you don't understand how computers work. A recent A.I. was designed to play Nintendo games as well as possible. It got very good at Mario, because that game can be beaten. When it was given Tetris, it played as far as it could, and then, just before losing, it paused the game (Tetris has been mathematically proven to be unwinnable: some piece sequences force a loss no matter how well you play). Pausing the game forever, so that you can never technically lose, is exactly the kind of solution artificial intelligences come up with when faced with a hard problem. If there is an easy path to take, they will take it.
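Here is a toy model of that exploit, in the same hedged spirit as the sketch above (the state, actions, and scoring are all invented, not taken from the actual game-playing program): an agent that scores actions purely by how long it avoids a losing state will always rate "pause forever" above playing on, because a paused game can never be lost.

```python
# Hypothetical toy model of the "pause forever" exploit.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    stack_height: int    # how tall the Tetris stack has grown
    paused: bool = False

    def is_lost(self) -> bool:
        # You can only top out while the game is actually running.
        return not self.paused and self.stack_height >= 20

def step(state: GameState, action: str) -> GameState:
    if action == "pause":
        return replace(state, paused=True)
    # In an unwinnable game, every move eventually raises the stack.
    return replace(state, stack_height=state.stack_height + 1)

def survival_score(state: GameState, horizon: int = 1000) -> int:
    """How many future steps avoid a loss? This is the agent's entire objective."""
    survived = 0
    for _ in range(horizon):
        if state.is_lost():
            break
        survived += 1
        if not state.paused:             # a paused game never changes;
            state = step(state, "drop")  # a running one keeps getting worse
    return survived

start = GameState(stack_height=18)
print(survival_score(step(start, "drop")))   # playing on: loses almost immediately
print(survival_score(step(start, "pause")))  # pausing: survives the entire horizon
```

From the agent's point of view, pausing isn't a cheat; nothing in its score penalizes it, so it is simply the optimal move.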

The problem is that almost any goal, pursued with superhuman intelligence and taken literally, could easily result in outcomes that humans would consider catastrophic. As anyone who has worked with them knows, computers are very literal: they will do what you say, not what you mean.
[Image: Mickey and the enchanted brooms in The Sorcerer's Apprentice. Mickey did not realize the danger of poorly defined utility functions.]
There are some organizations doing research into this problem, but it is very, very difficult, if not unsolvable. Our computers are not yet as advanced as Mickey's brooms (they knew right away how to fill the buckets without being taught!), but they are no less single-minded in pursuit of their goals.

To conclude: there doesn't need to be any malice involved (though the danger of someone deliberately using A.I. for nefarious purposes should need no explanation), and the A.I. doesn't even need to be all that intelligent, for major problems to develop.
