It’s pretty hard not to have heard the recent news about Google’s AlphaGo AI beating Lee Sedol at the ancient Chinese board game Go. It’s a huge step forward in AI research because Go is not a game that can be cracked through pure calculating power. Unlike chess, where the branching of possible moves is narrow enough for a computer to search deeply by brute force, Go is incredibly complex, with a baffling number of possible scenarios available at any given moment. This is such an important achievement because it seems like AlphaGo used something previously thought to be unique to humans: intuition.
It’s easy to get scared by the implications of this AI achievement and start conjuring up possible futures wherein humanity is rendered obsolete and AIs take over the world, but there are important positives as well. Not only is there great potential for AI to improve our world and to help us do things that we couldn’t do before, we humans can actually benefit by learning from this AI success.
In any sort of vaguely strategic game, the players face a series of decisions that usually results in one of those players achieving the goals of the game and winning. This is true for Go, chess, Monopoly, and so on. Even in games that have an element of luck, like Monopoly, the best players are the ones who recognise that the decisions they make throughout the game are the difference between winning and losing.
What has this got to do with anything? Bear with me. One thing that analysts have noted from the games in which AlphaGo emerged victorious is that it was consistently making decisions that opened up more options in the following turns.
It turns out there’s something profound in this that has applications beyond gaming. Alex Wissner-Gross, a computer scientist and physicist, said that intelligence more generally “is a force that acts so as to maximise future freedom of action […] in short, intelligence doesn’t like to get trapped”, and he goes on to call it “a physical process that resists future confinement” (you can find his full TED Talk here).
This is something that we can apply to our own lives. Decisions don’t come around exclusively in games; every day all of us make decisions that lead to either positive or negative outcomes. If we try to optimise those decisions, it logically follows that we’re going to be better off. In following AlphaGo’s lead, we can theoretically not just improve at gaming, but improve in the real life decisions that we make.
So what does that look like? The basic approach to take is to choose the option that keeps the most options open further down the line. Don’t back yourself into a corner.
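To make that rule concrete, here’s a toy sketch of the idea in Python: from each candidate move, count how many legal moves would remain afterwards, and pick the move that maximises that count. (This one-step “mobility” heuristic is my own minimal illustration of keeping options open — AlphaGo itself relies on neural networks and tree search, and the grid layout below is invented for the example.)

```python
# Toy grid world: a token moves one step at a time on a 4x4 board.
# Some cells are walls. The "keep your options open" heuristic picks
# the move whose destination leaves the most follow-up moves.

SIZE = 4
BLOCKED = {(1, 0), (1, 1), (1, 2)}  # hypothetical wall cells for illustration

def legal_moves(pos):
    """Return all in-bounds, unblocked cells one step from pos."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates
            if 0 <= cx < SIZE and 0 <= cy < SIZE and (cx, cy) not in BLOCKED]

def best_move(pos):
    """Greedy mobility heuristic: maximise the number of next-turn options."""
    return max(legal_moves(pos), key=lambda nxt: len(legal_moves(nxt)))
```

For instance, from a cell with two possible destinations, one leading into a dead end and one into open space, `best_move` chooses the open space — it never knowingly backs itself into a corner, even though it only looks one step ahead.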
Let’s say that you’re looking for a new job. The easy route could be to look at the kind of roles that you’re comfortable with in companies that are familiar to you, in the hope that, at best, you’ll get a more interesting role than before, with a bit more responsibility and a slightly higher salary. But what if you use AlphaGo’s approach? Rather than taking the safe road, why not start researching and applying for positions outside of your comfort zone? Look for roles in start-ups, or companies that you’re less familiar with. Maybe even look at unfamiliar roles! If you do this, then regardless of whether you get those jobs or not, you’ll learn new information and skills in the process of applying that you would never have learned otherwise, and you’ll be better positioned for your future career.
This doesn’t just apply to job-seeking. Any area of life that you would normally approach with some degree of logic can theoretically be improved. Think about it – there can be no overall downside to making the best decision in any given situation.
I might be taking this too far, but I think it’s something worth considering. It’s easy to be all ‘doom and gloom’ when it comes to AI, but I think there are tremendous benefits to be gained from it in the present.
For now, I think we have to be humble enough to say that even in this early stage of its development, we can learn a lot from artificial intelligence.