"No exceptional circumstances whatsoever, whether a state of war or a threat of war, internal political instability, or any other public emergency, may be invoked as a justification of torture." -- U.N. Convention Against Torture
If you had asked me in high school what artificial intelligence was, I would have answered with a single word: a computer. That was, of course, until I watched the popular '90s movie "The Matrix". In the movie, machines were created to teach themselves, to self-learn, and in a very short time they were able to match and then surpass the intelligence of humans. The machines began using humans as a power source, which drove the rest of humanity underground to wage war on the machines. I loved the movie, but it left me with a question: could this be happening to us right now? We have Artificial Intelligence (AI), but is it the kind of AI that could conquer humankind? Despite warnings from some industry experts, AI is not in a position to overtake humankind, because of its inability to act independently, its lack of sophistication, and its current lack of common sense.
Unlike humans, AI is unable to act independently. Independence matters if we are ever going to have machines that can assess a problem, understand how it can be solved, recognize ways a previous attempt could have been more efficient, and then correct their behavior accordingly. In the article "AI Dangers: Imagined and Real," Devdatt Dubhashi and Shalom Lappin discuss one of the arguments used to warn of machines surpassing human intelligence: Moore's Law (Dubhashi & Lappin, 2017). Moore's Law holds that hardware processing power doubles roughly every two years (the brief sketch below makes the scale of that claim concrete); however, the authors note that this argument neglects the "constraints of space, time, and the laws of physics" (Dubhashi & Lappin, 2017). Dubhashi and Lappin agree that there has been a "dramatic increase in processing power" and that AI systems have shown improvements in areas such as speech recognition and visual object recognition, but this is nowhere close to what independent action would require (Dubhashi & Lappin, 2017). It is within this context that the authors emphasize that even these advances "have been in a supervised setting" in which "there are large amounts of training data" (Dubhashi & Lappin, 2017).

One of the major areas of concern with AI is its military use. For that reason, it is worth hearing from Steven Walker, the director of DARPA. In an interview with Peter Holley, a technology reporter with The Washington Post, Walker notes that agency researchers want to build "human-machine" partnerships, but adds that the defense department does not see "machines doing anything by themselves" (Holley, 2018). Walker made these statements amid a serious controversy over the military's use of AI. Despite such reassurances, Google withdrew from the Pentagon's Project Maven after its employees signed a petition protesting the company's work with the military (Holley, 2018).
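To make the Moore's Law arithmetic concrete, here is a minimal Python sketch of my own; it is an illustration, not taken from either source, and the baseline figure is a made-up assumption. The point Dubhashi and Lappin press is that even this kind of raw growth in processing power is not growth in independent intelligence, and it eventually runs into physical limits.

```python
# Illustrative only: project raw processing power under Moore's Law
# (a doubling roughly every two years). Growth in compute is not the
# same thing as growth in intelligence or independent action.

def moores_law_projection(base_ops_per_sec: float, years: int) -> float:
    """Project processing power assuming a doubling every 2 years."""
    doublings = years / 2
    return base_ops_per_sec * (2 ** doublings)

if __name__ == "__main__":
    base = 1e12  # hypothetical baseline: one trillion operations/second
    for years in (2, 10, 20):
        projected = moores_law_projection(base, years)
        print(f"After {years:2d} years: {projected:.2e} ops/sec")
    # Note: this naive curve ignores the "constraints of space, time,
    # and the laws of physics" that Dubhashi and Lappin say such
    # arguments neglect.
```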
Another shortcoming of present-day AI is its lack of sophistication. Dubhashi and Lappin, as noted earlier, agree that advances have been made, but those "advances made have all been in narrow and specialized tasks" (Dubhashi & Lappin, 2017). The authors go on to state that progress in AI has not been "exponential" or "meaningful"; rather, growth is "irregular" and comes in "unanticipated spurts" (Dubhashi & Lappin, 2017). Dubhashi and Lappin also describe a biology-based approach that uses nanotechnology to map the human brain, which could help build a model of the brain for more sophisticated AI. However, they argue that the ability to build a model of the brain does not mean we understand how it thinks, which is the real concern (Dubhashi & Lappin, 2017).
Our next point seems equally important: common sense. When we say someone lacks "common sense," it is always a negative comment. Why? Because common sense is something we expect of human beings. But does common sense come naturally to machines? That seems to be one of the big issues with AI that has researchers stumped. In their article, Dubhashi and Lappin give an illustration, drawn from a book by a mathematician and a philosopher, of a thought experiment involving paper clips. Follow along with me. A machine is created to make paper clips, and it has only this "narrowly defined goal". The worry is that the machine could develop sub-goals in the name of efficiency, and if it comes to see humans as part of the paper-clip-making equation, it could take control of humans to achieve that sub-goal. Dubhashi and Lappin argue that scenarios like this "betray a striking lack of engagement with the present state of technology in AI" (Dubhashi & Lappin, 2017). Moreover, referring back to the advances that have been made, the authors are adamant that those advances are in "narrow" and "specialized tasks", not in generalized learning, which requires "complex reasoning" (Dubhashi & Lappin, 2017).
Steven Walker, the director of DARPA, also has something to say about AI's lack of common sense. In his interview, he gives the example of a cat and a suitcase: AI might be able to recognize the "image of the cat sitting on a suitcase, but the machine still can't understand that the cat could be placed inside the suitcase", let alone that doing so would be a bad idea (Holley, 2018). Walker goes on to say that giving machines common sense is where DARPA researchers are headed next (Holley, 2018).
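Walker's cat-and-suitcase example can be made concrete with a toy Python sketch of my own (the classify function and its labels are hypothetical stand-ins, not anything from DARPA): an image classifier is, in effect, a mapping from pixels to labels, and nothing in that mapping encodes relations like "the cat could fit inside the suitcase, but shouldn't be put there."

```python
# Toy illustration of the cat-and-suitcase example: a classifier maps
# an image to labels, but common-sense relations between the labeled
# objects live entirely outside that mapping.

def classify(image: bytes) -> list[str]:
    """Hypothetical stand-in for a trained image classifier."""
    return ["cat", "suitcase"]  # labels only; no relations between them

# Common-sense knowledge a human brings for free, which no amount of
# labeling accuracy supplies on its own (hand-written here):
COMMON_SENSE = {
    ("cat", "suitcase"): "the cat could fit inside, but shouldn't be put there",
}

labels = classify(b"...image bytes...")
print("Recognized:", labels)
# The classifier cannot answer this; we have to look it up by hand:
print("Relation:", COMMON_SENSE.get(("cat", "suitcase"), "unknown"))
```

Any "common sense" in this sketch has to be written in by hand, which is precisely the gap Walker says DARPA's researchers want to close.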
With all of this in view, we can be reasonably assured that we won't wake up tomorrow being escorted to prison by machine partners. As Dubhashi and Lappin point out, it would be a "fanciful jump" to go from the kind of learning machines do now to superintelligence (Dubhashi & Lappin, 2017). In Peter Holley's interview with Steven Walker, Holley notes the major warnings issued by well-known figures such as Elon Musk, Bill Gates, and Stephen Hawking, who cautioned that "Humanity is wandering into dangerous territory in its seemingly blind pursuit of AI" (Holley, 2018). Even so, according to Walker, AI remains "unpredictable", is still "a very fragile capability", and needs to become far more "sophisticated" (Holley, 2018). While Peter Holley's article was based on an interview with the director of DARPA, the article by Dubhashi and Lappin carried more weight. They were able to analyze the current concerns about AI and respond based on their own research, whereas Director Walker had to defer to his "researchers" (Holley, 2018). Dubhashi and Lappin had first-hand knowledge of the research, and it came across in the studies they used to support their points and answer the alarmists' arguments; Walker, by contrast, often took a "because they said so" approach. All in all, until machines become more sophisticated and independent and develop common sense, I will sleep a lot easier at night.
References
Dubhashi, D., & Lappin, S. (2017, February). AI Dangers: Imagined and Real. Communications of the ACM, 60(2), 43-45. doi:10.1145/2953876
Holley, P. (2018, December 6). DARPA head on AI dangers: 'It's not one of those things that keeps me up at night'. The Washington Post.