Now that cars drive themselves, computers buy and sell stocks on their own, and machines build other machines, the logical next step, artificial intelligence, is on a lot of people's minds. Whether artificial intelligence (AI) should be classified as friend or foe has been hotly debated, and honestly no one knows for sure. Let us explore some of these issues.

First of all, the threshold for true AI is not well defined. Many software companies advertise that their smart products include "AI," but in most cases they are simply using AI as a buzzword and really mean "complex algorithm." For many, AI means true consciousness, and the "Turing Test" is often used as the defining line. Alan Turing, who is often credited with laying the groundwork for the modern computer, proposed a simple yet hard-to-meet standard in 1950. The Turing Test says, in essence, that for a machine to have true AI, a human conversing with it should not be able to tell whether they are interacting with a machine or another human. Other definitions of AI include self-awareness, emotions, hopes, dreams, motivations and everything else we hold dear as "being human."

Regardless of your definition, the advantages of true AI are fairly obvious. Wouldn't it be great if we could put an army of robots together to cure cancer? What about growing, preparing and serving food for all the hungry in the world? An AI machine would have no need for sleep or rest. It could be an independent machine, like a robot, or a massive interconnected network of sensors and parts. Crime, fires and medical emergencies could be handled or even prevented in fractions of a second, without waiting for a human to react and make decisions.

So far it sounds like a great utopia we are heading toward, with AI providing us with leisure, health and peace. However, some don't see things that way. Stephen Hawking has recently been quoted as saying AI could be the "worst event in the history of our civilization." Elon Musk stated on Twitter that "Competition for AI superiority at national level most likely cause of WW3," and he has even invested a significant amount of money in studying the issue. Why all the fear over AI?

The doomsayers point out that for the utopian scenario to work, we have to assume artificial intelligence will want to help us. What if an AI being decides we are not worth the effort? Or worse, what if it decides we are a nuisance that needs to be exterminated? There are plenty of sci-fi stories about machines taking over and casting aside humanity. If you boil it down, all intelligence, from humans to insects, revolves around self-interest. What would a robot get out of curing cancer? After all, wouldn't we be creating a hugely intelligent, immensely powerful slave that could seize all of our power at a moment's notice?

I get the concerns; however, I tend to be on the optimistic side. Here is why: cooperation is the ultimate self-interest. Humans have struggled with cooperation not only because interests conflict, but also because of a lack of trust. As we have culturally evolved, zero-sum thinking, the strategy of us or them, has diminished. We have the power to destroy the world many times over, but we haven't. I would like to think that part of being intelligent is not only self-preservation, but also the preservation of others, even if they are a very different form of "life."

Brian Boyer is the managing partner of Web Pyro (http://www.webpyro.com) located in Wooster.