Thought Series: The Other Side of Fear

NUMBERS & NERVES

Thought Series provides actionable ideas and anchors for reflection on your life or your work.

Technology is meant to complement us, not dominate us. If automation is where the people developing technology can take us and what they want to accomplish, it strikes me that we as human beings need to lean into our humanity, what psychologist Carl Rogers referred to as our ‘human-beingness’, even more. In order to remain as relevant as possible, we need to develop the skills that robots cannot simulate, pole dancing aside.

Critical thinking, ethics, and policy will be essential to our future. We need to regain some of the knowledge we have lost in our pursuit of becoming one with The Machine of the Industrial Revolution. The basis of how we understand ourselves, others, and the universe therefore lies in the anatomy of the brain and its capacity for complex human processes such as intelligence, thinking, and learning.

Master wood turner Eric Hollenbeck put it this way: “It’s like the Train of Society, going down the track, is scooping up more and more information—scooping more, scooping more, scooping more—at an impressive speed. For some reason, it can only hold so much. This forces the person in the caboose to start throwing information off, as fast as he can, making room for the information coming on in the front. The problem is, we are throwing off the information it took us twenty-five thousand years to glean.”

Rather than being completely replaced, jobs are going to be reinvented. Our jobs are merely bundles of different tasks, and some (or many) of those tasks will be automated. But just as we evolved from the typewriter to the computer, and from the library to the search engine, technology will redefine the kinds of things we do and how we do them. (Librarians, by the way, are still better than a search engine, because they are better at forming good questions.)

At some point in the future, we will learn that even something as human as creativity is actually fairly mechanical. There will be, I believe, an algorithm for creativity. But robots are going to be creative in a different way than humans are. A robot’s attempt at comedy or dance, for instance, will be different from a human’s. Robots will never intrinsically understand what it means to “be human” in the way that we do. Even with such deep intelligence at their disposal, they will never do things exactly as we do them, and there is tremendous value in this difference of perspective, of skill, and of execution.

The convergence between man and machine has been embraced by people from every walk of life, from the poorest farmer to the richest billionaire. The relationship we have with machines has spread widely, been adopted quickly, and evolved to an unprecedented level of intimacy. No longer the province of the super-curious dancing on the fringe of early adoption, the web, the internet, and the computer are now part of mainstream society.

Is that panic or excitement you are feeling?

THE FUTURE ISN’T DYSTOPIAN

Since the robots are here, and here to stay, we shouldn’t be fighting them. If we do, we’ll lose (if our national math/science scores are any indication). We should be figuring out how to work with them.

In 1997, the first big challenge to human exceptionalism came when IBM’s Deep Blue beat the reigning world chess champion, Garry Kasparov. When Kasparov lost, some thought it was the end of chess: who would play competitively when computers were always going to win? But that didn’t happen.

Playing against computers actually made chess more popular. And, on average, the best players got better by playing against artificial minds; technology raised their game. Even Kasparov, who lost, questioned the fairness of being matched against a database with access to every chess move ever played. So he invented a new, freestyle chess league where you can play any way you want: as an AI, as a human, or as a team of AI and humans.

For the past couple of years, the best chess player on the planet has been neither an AI nor a human. It’s the team Kasparov refers to as centaurs: a team of humans and AI. They are complementary, because AIs and humans think differently. The same pattern shows up in other disciplines. The world’s best medical diagnostician is not Watson, and it is not a human doctor; it’s the team of Watson plus a doctor.

This idea of teaming up, of collaborating with something that can be creative, make decisions, and develop a consciousness different from ours, requires us to develop more self-awareness, increase our autonomy, and make better decisions. Humans and machines run on different substrates, and it’s not a zero-sum game.

There is an inherent beauty in this symmetry between machines and humans: if humans gain more awareness of themselves and mastery of something uniquely their own, we can work with machines to tackle something even greater.

At its essence, artificial intelligence is math and data, and math and data have rules. What is difficult about the problems that need to be solved today is that deep neural networks, loosely modeled on the brain, operate in a multidimensional space where there is no obvious “sense” to be made. Or, at least, we are still unable to make sense of the rules at play. At a certain point, we just don’t know what it all means (yet).
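To make that concrete, here is a minimal sketch in Python, with made-up numbers and tied to no particular system, of why a network can be “just math and data” and still resist interpretation: the knowledge lives in a grid of numbers, and no single number means anything a person can point to.

```python
# A minimal, hypothetical sketch: one neural-network layer is nothing more
# than numbers (data) pushed through arithmetic (math).
import numpy as np

rng = np.random.default_rng(0)

# The "knowledge" here is just a 4x3 grid of numbers plus a small offset.
weights = rng.normal(size=(4, 3))
bias = rng.normal(size=3)

def layer(x):
    """One processing step: multiply, add, squash.
    The rules are fully specified, yet no individual weight has a
    meaning we can read off directly."""
    return np.tanh(x @ weights + bias)

# Four input signals in, three output signals out.
print(layer(np.array([1.0, 0.5, -0.2, 0.3])))
```

Stack a few hundred of these layers and the rules are still all there, written in plain arithmetic; we simply cannot yet say what they mean.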

If you relate to the metaphor of the brain as a computer that receives, processes, and stores information, you can appreciate that incoming information is acted upon by a series of processing systems. Each of these systems accepts, rejects, or transforms the information in some way, resulting in some form of response.

Where the computer and the brain differ is in the kind of processing each is capable of. A conventional computer works through information one step at a time before moving on to the next, whereas the brain engages a multitude of signals simultaneously. There is also the matter of predictability: a computer reacts to the same input in exactly the same manner every time, whereas the brain is subject to emotional and environmental pressures that change how it reacts.
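As a toy illustration of the computer half of that metaphor, here is a strictly sequential pipeline of processing systems; the stage names are invented for this sketch and stand in for whatever the real systems might be. Each stage accepts, rejects, or transforms the signal, and the same input always produces the same response.

```python
# A hypothetical pipeline: information flows through a series of
# processing systems, one step at a time.

def accept(signal):
    # Accept only text; anything else is rejected outright.
    return signal if isinstance(signal, str) else None

def transform(signal):
    # Reshape the information in some way (here: tidy and normalize it).
    return signal.strip().lower() if signal else None

def respond(signal):
    # Produce some form of response from whatever survived the pipeline.
    return f"processed: {signal}" if signal else "rejected"

def process(signal):
    # Strictly sequential and perfectly repeatable: no mood, no context,
    # no environmental pressure ever changes the outcome.
    for stage in (accept, transform, respond):
        signal = stage(signal)
    return signal

print(process("  Hello, World  "))  # -> processed: hello, world
print(process(42))                  # -> rejected
```

Run it a thousand times and the answers never change; that unwavering sameness is exactly where the metaphor stops fitting the brain.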

In short, we just don’t know the rules of the human brain. Therein lies both the great fear and the great opportunity for humankind: learning, guiding, or controlling those rules.