I’m kind of harping on this A.I. thing, but I read an article that helped put my Neuralink
post (On the Connections Between Brains and Computers) in perspective — and not in a good way. The self-selected summary is
as follows:
- Artificial Intelligence cannot be stopped.
- Initiatives (cl)aiming to "stop AI" will either fail to slow or actively hasten it.
- Attempting to subtly influence the development of AI is a waste of time.
- Other people have already figured out 1, 2, & 3 and [have] chosen not to tell anyone.
Obviously,
not everyone in the field is as concerned with destructive A.I. scenarios, but
this isn’t the place to split hairs; a wise man cautioned us to “avoid
world-ending assumptions”. While the reasoning flies a bit closer to Pascal’s wager,
then I’d prefer, it’s still good advice. Many of the optimistic assumptions and
predictions about A.I. are the kind of assumption that if it’s wrong (or wrong
in the right sort of way), that’s it. Near-instant extinction for planet earth.
Or at least death to Sarah Connor |
It’s an embarrassing lapse, but I did not think much about
how the very people who already know all the stuff I’m learning would behave. I
wasn’t thinking enough steps ahead. Seen in this context, Neuralink isn’t an
exciting new tech venture so much as a desperate hope to mitigate an
unavoidable disaster.
I don’t really have anywhere fancy I’m going with this, but controlling
A.I. is extremely difficult. They are, in many ways, faster, smarter, and more
resourceful than we are, and the disparity will only grow with time. Even the
simple A.I. we have now are more than capable of surprising their creators with
unforeseen loopholes and undesirable outcomes (like the Tetris-playing A.I.
that paused the game in order to not lose). Adding to this complication is the
fact that neural networks learn in ways that are difficult to track. They’re
complicated. Not as complicated as brains, but they're still black boxes — not even the people operating Alpha-Go know exactly how it decides on the
strategies it uses.
This is fine for Go, but less fine when A.I. is running
everything in society. Much like editing, it is nearly impossible to find every
mistake. Humans are imperfect; there is a crack in everything. We’re still
quite a ways off from an Artificial General Intelligence, but humans are
error-prone. Eventually, someone will slip up, and if it’s bad enough, that’s
it (again). We've already outsourced prison sentencing to A.I. and the fact that these systems are likely to make better predictions than the people doesn't mean there aren't unaccountable errors. While the first errors are likely going to be more benign (potentially unjust prison sentencing), the later ones will be far worse.
But circling back to number 4, this isn’t conjecture. For
instance, Elon Musk has stated publicly that the existential threat from A.I.
is one of the main motivators for SpaceX. Likewise, Neuralink is an awful,
awkward, messy fix to the problem of A.I., but it’s at least theoretically
sound. In this case, capitulation, resignation, and surrender look like spending
a billion dollars to computerize our brains before the computers don’t need us
anymore.
Postscript: I don’t want this to be interpreted as a ringing
endorsement of Mr. Musk, he is certainly a man with many faults (as is readily
evidenced by SpaceX’s labor lawsuits). Certainly, some of Silicon Valley’s more
reprehensible practices are hard to justify, though I imagine it’s easier to
sleep at night if you imagine yourself as working tirelessly to save humanity (Another
checkmark in his favor is his recent resignation from Trump’s advisory council,
since Trump’s withdrawal from the Paris accord made it clear that there’s no
point trying to steer that ship).
Edit: Added a few more links. Also of note: optimism towards the future of AI seems to be limited to non-experts.
Edit: Added a few more links. Also of note: optimism towards the future of AI seems to be limited to non-experts.
No comments:
Post a Comment