Google, Siri, and JARVIS: What do these three things have in common?

By: Kimi Gerstner

“Hang on, let me Google it.” “Siri, what’s the score of the Oregon State game?” “Look it up on your smartphone!” What do all these phrases have in common? They all rely on a form of technology known as Artificial Intelligence. AI encompasses a lot more than just that robo-maid you dream will eventually clean your house for you. Siri, Google, and smartphones all use types of AI, just a little different from what science fiction imagined.

The image that pops into the public’s mind when they hear the term “AI” is an evil robot that will take over humanity. Even before technology became this widespread, people worried about “AIs taking over the world.” Science fiction has countless stories and books about what will happen if robots are given brains to think for themselves. I, Robot, by Isaac Asimov, is one example (Asimov). The Terminator movies and Avengers: Age of Ultron offer wicked portrayals of AI, while JARVIS from Iron Man is the respectable and beneficial version of artificial intelligence.

Today’s AI is less exciting than what the movies show. According to the article “Autonomous Driver Based on an Intelligent System of Decision-Making,” there are many different types of robots, including wheeled, walking, and crawling robots (Czubenko et al.). The purpose of AI is to improve the lives of humans and assist in dangerous situations (Czubenko et al.).

So what is AI? Artificial Intelligence is defined as machines, even small ones like phones, having the ability to think like humans and, in some cases, act like humans. Honda in Japan recently unveiled an advanced robot known as ASIMO. ASIMO, which stands for Advanced Step in Innovative Mobility, is a four-foot-tall humanoid who can walk, talk, run, detect moving objects, and measure distance and direction. ASIMO can hear, interpret, respond, and recognize up to 10 different people. This is a huge step in technology: going from Google inferring and searching results based on key words to putting that same technology into a walking, talking being. ASIMO is vastly different from the image of Artificial Intelligence that science fiction movies and books have created (Obringer).

Several of the concerns regarding AI stem from unrealistic depictions in science fiction. Teresa Heffernan, author of the article "The Post-Apocalyptic Imaginary: Science, Fiction, and the Death Drive," states that “science fiction is quickly becoming science fact,” and that people are starting to trust science fiction and simply expect it to come true, fueling widespread fear. However, one major limitation of AI, even with ASIMO, is that no robot has passed the Turing test, which is designed to determine whether a machine can convincingly imitate human intelligence.

In the Turing test, a human judge asks a series of questions and decides whether the answers came from a human or a machine. The test is not entirely accurate because humans make that judgment: in fact, computers have never been mistaken for humans, but humans are often mistaken for computers based on their answers (Heffernan). Additionally, the Turing test does not give much information on how to improve artificial intelligence to make it more similar to humans. It is a yes-or-no test (yes being human, no being computer). Because the same questions are asked repeatedly, computers can search databases for better answers; they “learn” to memorize answers instead of thinking, similar to standardized testing. Passing the Turing test is treated as equivalent to thinking independently. Artificial Intelligence’s limitation is that it cannot “think” independently; it can only follow orders. Solving that problem will create “true” AI (Proudfoot).
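To picture the structure of the test, here is a minimal sketch in Python of a single Turing-style judging round. The judge and respondent functions are hypothetical stand-ins, not any real system; the point is that the outcome is a single right-or-wrong guess rather than guidance on how to build a smarter machine.

```python
import random

def turing_round(judge, human_respondent, machine_respondent, questions):
    """One round of a Turing-style imitation game (illustrative only).

    The judge sees only the text answers from an unlabeled respondent
    and must guess "human" or "machine". The result is a yes/no verdict;
    it says nothing about how to make the machine more human-like.
    """
    # Randomly pick which respondent the judge will interrogate,
    # hiding the label from the judge.
    label, respondent = random.choice(
        [("human", human_respondent), ("machine", machine_respondent)]
    )
    answers = [respondent(q) for q in questions]
    verdict = judge(questions, answers)   # judge returns "human" or "machine"
    return verdict == label               # True if the judge guessed correctly
```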

Conspiracists say technology and robots will be the end of humanity, and I don’t agree with them. I believe technology is necessary and can assist and improve today’s economy if used correctly. We have already crossed the no-turning-back point concerning the use of technology, so humanity can either embrace that fact and continue making improvements to the technology we already have, or we can deny it and live in the past. Is AI the next step in technological advancements? “Hang on, I’ll ask Google.”

  1. Asimov, I. (2004). I, robot (Bantam hardcover ed.). New York: Bantam Books.
  2. Czubenko, Michał, Zdzisław Kowalczuk, and Andrew Ordys. "Autonomous Driver Based on an Intelligent System of Decision-Making." Cognitive Computation 7.5 (2015): 569-81. Oregon State Library. Web. 3 Nov. 2015. <https://access.library.oregonstate.edu/pdf/857380.pdf>.
  3. Heffernan, Teresa. "The Post-Apocalyptic Imaginary: Science, Fiction, and the Death Drive." English Studies in Africa 58.2 (2015): 66-79. Oregon State Library. Web. 3 Nov. 2015. <https://access.library.oregonstate.edu/pdf/857379.pdf>.
  4. Obringer, L., & Strickland, J. (2015). How ASIMO Works. Retrieved November 23, 2015, from <http://science.howstuffworks.com/asimo.htm>.
  5. Proudfoot, D. (2011, January 21). Anthropomorphism and AI: Turingʼs much misunderstood ... Retrieved November 23, 2015, from <http://www.sciencedirect.com/science/article/pii/S000437021100018X>.

Artificial intelligence is defined as the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Examples of artificial intelligence include but are not limited to: iRobot, Siri, self-driving cars, drones, and self-repairing hardware. Artificial intelligence is emerging rapidly in our ever-advancing world. People’s opinions and perceptions of artificial intelligence are often narrow-minded and not fully developed due to the influence of the media and common entertainment such as movies and TV shows.

AI could potentially shape the next generation of our world and generations to come. Although artificial intelligence is not expected to take full effect until around 2025, it will most likely take over the work of many blue- and white-collar workers, leaving a lot of people out of work (Heimlich 1). The Nature editorial “Anticipating Artificial Intelligence” states, “It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about, and to study possible political and economic reforms that will allow those usurped by machinery to contribute to society.” That being said, it will be very important to account for the jobs lost to AI and find other areas for people to find work.

AI, particularly in robots, can be intrinsically curious. This means that the robots seek out how to act by going through experiences with humans and using the reactions and outcomes to better shape themselves, much like humans do (Lehman 1). AI is modeled on human intelligence, and if people aren’t afraid of other people taking over the world, then we shouldn’t be afraid that AI will.

1. Lehman, Joel, Jeff Clune, and Sebastian Risi. "An Anarchy of Methods: Current Trends in How Intelligence Is Abstracted in AI." IEEE Xplore. IEEE, 11 Dec. 2014. Web. 22 Oct. 2016.

2. "Anticipating Artificial Intelligence." Nature. N.p., 26 Apr. 2016. Web. 22 Oct. 2016. 3. Heimlich, Russell. "Search." Pew Research Center. N.p., 16 Dec. 2012. Web. 05 Dec. 2016.


AI Advancement: Pros/Cons, Are We There Yet?

By: Benjamin Hamilton

Artificial intelligence, the concept of creating a machine with general intelligence, that is, the ability to learn and come up with new outcomes without the crutch of human programming, could be right around the corner.

We have all seen the movies that depict a robot gaining sentience and destroying modern civilization as we know it, but luckily for us, that will have to wait for now. In the past few years we have seen a massive surge in AI research and advancement, as seen with the primitive AIs in our smartphones, in self-driving cars, and in the many small machine-intelligence ventures that Google is currently working on. With all these impressive but fairly benign programs, however, it becomes easy to forget the big picture of AI advancement. In the article “How AI might affect urban life in 2030,” the Stanford School of Engineering tells us that “We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life.” The prediction is that robot companions will make our lives much easier by cutting down on boring menial tasks such as cooking and cleaning, and will improve our economy by mechanizing parts of the workforce, with robot workers cutting down on marginal costs such as human error and working around the clock without breaks. These same advancements also have large drawbacks we will have to work around, however, such as what we will then do with our time when robots cater to our every need, and what the job market will look like for us humans once robots have taken over many different jobs, whether skilled or unskilled.

This is not the only concern we have to watch for as we advance toward the wonders of artificial intelligence, however. In an article entitled “Why We Should Think About the Threat of Artificial Intelligence,” we see a comparison between the drive for self-preservation shared by all beings with general intelligence on Earth and the possibility of AI inheriting it: “the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence.” This quote suggests that it would only make sense for a sufficiently intelligent AI to develop its own sense of self-preservation, which could be dangerous for us humans. After all, such a machine fires its circuits at a much faster rate than our biological synapses and doesn’t sleep, giving it much more time to think about what might happen when you decide to turn it off.

As noted above, however, we are fairly far from this, and it looks like we will have some time to figure it out. Scientists are still struggling to work out many different forms of general intelligence that are essential to a thinking mind, for example, creativity. When computer scientists put AI through tests of creativity, we see large advances in exploratory creativity, such as AIs tasked with creating music, but not so much in other forms of creativity, such as combinational or transformational creativity. This shows us that, in essence, AI originality is only just beginning.

Some scientists argue that how we set up and model these experiments could be wrong, as explained in the scientific article “Evolving Artificial Intelligence.” In it, Fogel argues that “The majority of research in artificial intelligence has been devoted to modeling the symptoms of intelligent behavior as we observe them in ourselves. Investigation into the causative factors of intelligence have been passed over in order to more rapidly obtain the immediate consequences of intelligence,” basically showing that we have favored experiments that produce computer programs with outstanding performance but with very limited ways to apply it. It might be more beneficial to let such AI develop by adapting to many different environments, as all other forms of general intelligence on Earth have, through processes that are evolutionary in nature.
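As a rough illustration of that evolutionary idea, here is a toy sketch in Python, with an invented bit-string “genome” and a made-up fitness function standing in for a real environment. It is not Fogel’s actual method, just the general loop of random variation plus selection on performance.

```python
import random

def evolve(fitness, genome_length=20, population_size=30, generations=100,
           mutation_rate=0.05):
    """Toy evolutionary loop: random variation plus selection on behavior.

    Candidates are scored only by how well they perform (their fitness);
    the best performers seed the next generation with small mutations,
    rather than having 'intelligent' behavior hand-coded into them.
    """
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: population_size // 2]          # keep the top half
        population = []
        for parent in parents:
            for _ in range(2):                            # two mutated offspring each
                child = [1 - gene if random.random() < mutation_rate else gene
                         for gene in parent]
                population.append(child)
    return max(population, key=fitness)

# Hypothetical "environment": fitness is simply the number of 1s in the genome.
best = evolve(fitness=sum)
print(sum(best), "out of", len(best))
```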

I am excited for what we will see in the future regarding artificial intelligence, whether good or bad, and I think it will be very interesting to see how it shapes our lives, for this technology, now that we know it is achievable, will inevitably come in time. I hope we find our way around the cons of AI advancement before this technology actually reaches us, but only time will tell.

References

Scientific articles:

Boden, Margaret A. "Creativity and artificial intelligence." Artificial Intelligence 103.1 (1998): 347-356.

Fogel, David Bruce. "Evolving artificial intelligence." (1992).

Popular Science articles:

Marcus, Gary. "Why We Should Think About the Threat of Artificial Intelligence." The New Yorker. N.p., 16 July 2014. Web. 9 Dec. 2016.

Stanford School of Engineering. "How AI might affect urban life in 2030." ScienceDaily. ScienceDaily, 1 September 2016. <www.sciencedaily.com/releases/2016/09/160901092825.htm>.

Universidad Carlos III de Madrid - Oficina de Información Científica. "Artificial Intelligence for improving data processing." ScienceDaily. ScienceDaily, 11 April 2011. <www.sciencedaily.com/releases/2011/04/110411083750.htm>.

Wladawsky-Berger, Irving. "How Will AI Impact How We Work, Live and Play?" The Wall Street Journal. Dow Jones & Company, 09 Dec. 2016. Web. 10 Dec. 2016.


Artificial intelligence is a topic that has been shrouded in misconception for years. For many people the idea of artificial intelligence is intertwined with apocalyptic scenarios, futuristic spaceships, and danger. But the reality is that artificial intelligence already surrounds us. It’s incorporated into our phones, our computers, our cars, and our homes. And while it’s unlikely to attack us anytime soon, AI does pose some danger to our everyday lives.

To understand AI, you first must understand how it functions. Contrary to popular belief, artificial intelligence can exist without artificial sentience. Artificial intelligence is simply a computer program designed to do things that are usually thought of as requiring human intelligence. So a computer can be artificially intelligent without feeling or thinking as a human might. The most basic difference between any computer program and artificial intelligence is in the programming. A basic computer program, such as Microsoft Word, has a very defined path from Input -> Computation -> Output. I press the “P” key, the computer understands that I pressed the “P” key, and the program displays the pixels that represent the letter “P” on the screen. However, an artificially intelligent Microsoft Word might be able to hear me saying the letter “P”, but it would then have to run a series of logic- and pattern-based computations on the sound it heard before displaying, and perhaps also saying out loud, the letter “P”. While this is a more complicated approach from a programming perspective, it offers many advantages for the consumer in terms of ease and range of activity.
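To make that contrast concrete, here is a small Python sketch. The key lookup mirrors the fixed Input -> Computation -> Output path, while classify_sound is a made-up stand-in for the pattern-matching step a real speech recognizer would perform; the names and scores are purely illustrative.

```python
# Conventional program: a fixed, fully specified mapping from input to output.
KEY_TO_CHAR = {"KEY_P": "P", "KEY_Q": "Q"}

def handle_keypress(key_code):
    # One defined path: input -> lookup -> output, nothing to interpret.
    return KEY_TO_CHAR[key_code]

# "AI-style" program: the input is ambiguous audio, so the program must
# score candidate interpretations and pick the most likely one.
def classify_sound(audio_features):
    # Hypothetical match scores; a real system would compute these from the audio.
    scores = {"P": 0.91, "B": 0.72, "D": 0.31}
    return max(scores, key=scores.get)

print(handle_keypress("KEY_P"))           # -> "P", a deterministic lookup
print(classify_sound([0.2, 0.7, 0.1]))    # -> "P", the best guess under uncertainty
```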

The iPhone 4s was one of the first phones to offer a “digital assistant” in the form of Siri. This proved to be a popular feature and is now included in most high-end, flagship smartphones and even in computers and car dashboards. These sorts of “digital assistants” are a form of AI. Smart-home devices, such as thermostats that adjust the temperature when you ask them to, are another example. The technology is also being developed, and is already in use, for many commercial and medical purposes. Hospitals are developing smart robots which can interact with autistic, diabetic, or otherwise medically impaired children, with the purpose of treating, educating, or comforting a child about their condition. Cars which can drive themselves are being developed, as are virtual workers for places like McDonald's and Burger King. Artificial intelligence is everywhere, and its use is only going to increase.

One of the most popular misconceptions about AI is that it will “come to life” in the form of a Terminator or Skynet and pose a threat to humans. However, artificial sentience would be required to “motivate” an AI toward nefarious ends, and artificial sentience still seems to be far from a reality. That’s not to say that AI is all good; the use of the technology can be dangerous, too. As automated workers and drivers become a reality, the need for humans in these positions is decreasing. The loss of that many jobs is a very dangerous possibility for the global economy. Another danger is in the use of artificially intelligent weapons. A “smart bomb” or automated drone has the potential to be very deadly, and terrorist groups who got their hands on these technologies could be much more devastating than they are today.

As the prominence of AI grows, the research done on it grows as well. In the scientific community, AI has been used to show how a machine can beat a human in a series of challenges, such as Atari games. If a computer can replace and even improve on what a human could do in simple games of strategy, it could be used for more complex roles. Google has been pioneering research into AI to diagnose medical conditions and suggest possible solutions, and to search for alien planets. More research is also being devoted to how an AI should be programmed to value outcomes and make decisions. For example, if self-driving cars are presented with the trolley problem (where there is the option to allow a trolley to kill five people, or to intervene and kill one person), how should they respond? Some argue that the car should always save the most lives possible, while others argue that the car should always value its passengers first.
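As a thought experiment only, here is a tiny Python sketch of the two competing policies just described, with hypothetical casualty counts as inputs; no real self-driving system reduces its decisions to logic this simple.

```python
def minimize_total_harm(casualties_if_stay, casualties_if_swerve):
    """Policy 1: always choose the action that harms the fewest people."""
    return "swerve" if casualties_if_swerve < casualties_if_stay else "stay"

def passengers_first(casualties_if_stay, casualties_if_swerve,
                     swerving_endangers_passengers):
    """Policy 2: never trade away passenger safety; otherwise minimize harm."""
    if swerving_endangers_passengers:
        return "stay"
    return minimize_total_harm(casualties_if_stay, casualties_if_swerve)

# Classic trolley-style numbers: staying kills five, swerving kills one.
print(minimize_total_harm(5, 1))       # -> "swerve"
print(passengers_first(5, 1, True))    # -> "stay"
```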

While movies and science fiction may make artificial intelligence seem scary, the reality is much different. AI has many uses in everyday life, and may soon be vital in the medical field. The real danger lies in the future of automation, and in how a global economy can deal with the elimination of so many jobs. While the technology has come far, it still has a long way to go and several ethical questions to answer along the way.

Bibliography:

http://time.com/3973500/elon-musk-stephen-hawking-ai-weapons/?iid=sr-link10

http://time.com/4569585/ai-robots-fears/?iid=sr-link4

http://www.popsci.com/google-is-using-deep-mind-to-spot-disease-early

https://www.ncbi.nlm.nih.gov/pubmed/27905083

https://www.ncbi.nlm.nih.gov/pubmed/27881212

https://arxiv.org/abs/1207.4708