Driving question: "Does the convergence of artificial intelligence and weaponry create a threat for humanity?"
Conflict is composed of opposing forces.
The debate over artificial intelligence is composed of opposing forces: one side calls for banning militarized A.I., while the other accepts the use of A.I. and wants to continue researching and developing it. Elon Musk and other A.I. experts call for a ban, as stated in an article by David Z. Morris: "One hundred and sixteen roboticists and A.I. researchers, including SpaceX founder Elon Musk and Google Deepmind co-founder Mustafa Suleyman, have signed a letter to the United Nations calling for strict oversight of autonomous weapons, a.k.a. 'killer robots.'" The letter describes the risks of robotic weaponry in dire terms and warns that failure to act swiftly will lead to an "arms race" toward killer robots. The opposing side wants to keep developing A.I. As Evan Ackerman has stated, "There's been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we've had for the last half century…"
Conflict may be natural or man-made.
Weaponized artificial intelligence is a man-made conflict because humankind wants power and control. Humans can use A.I. to gain power, even at the cost of human lives. A.I. can then be used in war to gain the upper hand, because countries do not care how dangerous a weapon is if it helps protect their country and win the war. Research by Will Knight states that "automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants." This would allow more night attacks and would make it possible to hit targets accurately with less training. Since the term "artificial intelligence" was coined by John McCarthy, who helped organize the first academic conference on the subject in 1956, A.I. researchers have been able to advance the field and create more instruments of war. As technology advances, the threat becomes more dangerous and the need for power increases. Evan Ackerman states, "There will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that is accessible to anyone if they want to build a weaponized drone," showing that countries might enter a race for power, or a race for an upper hand in war.
Conflict may be intentional or unintentional.
The use of A.I. as weapons in war is intentional. However, the ethical conflicts that result from the weaponization of A.I. may not be. In war, A.I. is used to pilot unmanned vehicles, protecting the lives of allied forces but also endangering the lives of opposing forces. Since A.I. systems are not fully human, they may still make mistakes in judgment, or they can be hacked to endanger their own side. The article "AI (Artificial Intelligence)" states that some proposed "A.I. systems have a sense of self, have consciousness," which could mean that such a system might endanger people in order to accomplish its purpose. Another quote from the same article describes an A.I. system that can currently perform certain actions but cannot use memory: "Deep Blue can identify pieces on the chess board and make predictions, but it has no memory and cannot use past experiences to inform future ones." This means that A.I. still has a long way to go in its development. In wartime, humans use their memory and their ethical judgment to make decisions, but A.I. may not have that capacity. Over time, A.I. may become something that both helps and hurts us in times of war. At the Association for the Advancement of Artificial Intelligence conference in 2015, Evan Ackerman stated that "banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil," showing that weaponized A.I. is intentional.
Conflict may allow for synthesis and change.
A.I. creates a threat to humanity because the technology used to create it opens new doors of possibility to be helpful or hurtful. In war, humans are the go-to weapon, but at the risk of our own lives, so technology is starting to take on that role. According to an article by Anthony Cuthbertson, "A.I. makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks." This shows that technology is starting to match or maybe even surpass the intellect of humans, which would make machines excellent war weapons; and although they are getting close to our intellect, they will follow orders without question and without fear for their lives. Also, in the article "We Should Not Ban 'Killer Robots,' and Here's Why," the author states that "there will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that's accessible to anyone if they want to build a weaponized drone." This shows that people, war, and technology all come together to create weaponized A.I., which will create a definite threat to humanity, especially when everybody is given access to it.
Changes: the transition from human-versus-human combat to robot-versus-robot combat.
Conflict is progressive.
A.I. keeps changing as technology continually progresses and improves. "Is Artificial Intelligence Dangerous?," an article by R.L. Adams, states, "AI will clearly pave the way for a heightened speed of progress," showing that progress is changing the way we live our lives and the way people fight their battles, whether in social life or in war. A.I. is progressive because its technologies build on one another over the years, such as combining the internet and robotics to create specialized aircraft and drones. As stated in the article "Military Robots: Armed, but How Dangerous?," "Military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated." Furthermore, A.I. technology appears more and more in people's everyday lives, such as in smart homes, smartphones, smart speakers, and assistants like Siri or Alexa. Therefore, A.I. and its role in our lives are becoming more a part of the world. There is no question that these technologies are advancing even further through the military, where militarized or weaponized A.I. can be developed over time. This contributes to the conflict continuing to be progressive.
LANGUAGE OF THE DISCIPLINE
Global - This conflict can affect our globe positively or negatively depending on our nations' leaders' decisions. Evidence for the negative side comes from our SLR survey responses to the question, "How do you think artificial intelligence may impact the human race over time?": out of all the choices, 31.8% of the responses were negative, meaning those respondents believe A.I. will impact humans negatively. The other side of the parallel is how A.I. will help our world. Research supporting this comes from an article published by Andy Patrizio in 2017, which mentions, "A.I. allows for more intricate process automation, which increases productivity of resources and takes repetitive, boring labor off the shoulders of humans. They can focus on creative tasks instead." This relates to the positive side because A.I. can take over mundane tasks.
Community - This conflict relates to our community because it has two sides: on one side, our military is using A.I. for weaponry, while on the other, our community is using it to deliver packages, film video and capture photos, secure home systems, respond to requests to play music or search for information, and talk to other apps. The military side is discussed in an article by Robert W. William, who mentions, "It could provide unpredictable and adaptive adversaries for training fighter pilots." This suggests that the military is using A.I. to help train fighter pilots by letting them visualize what it will be like to be in the air and flying. The second side revolves around our community using A.I. to send and deliver. Research supporting this is found in an article by Murray Newlands, who mentions, "As of now, chatbots can be integrated into numerous applications, which allows them to receive payments and check on orders." This is just one way that A.I. is part of our community and the way we live, through apps, assistants, smartphones, and smart homes.
Personally - A.I. affects us personally because we see A.I. technology all around us in the many devices that people have in their homes or classrooms. It also affects us because when we are older, we may live in a world where the military uses A.I., which, as shown in our research, can help or harm us. Evan Ackerman states, "There will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that's accessible to anyone if they want to build a weaponized drone." This would allow random people with unknown intentions to use potentially dangerous and deadly weaponry for unethical purposes, which could affect the world my peers and I will live in.
Artificial intelligence has the potential to threaten our society and impact it in a dangerous way. As stated in the article "We Should Not Ban 'Killer Robots,' and Here's Why," the author notes, "you see something sinister on the horizon when there's so much simultaneous potential for positive progress." This shows that progress also comes with a drawback, and if the two are not balanced, technology would not help us; it may endanger our advancements in war and also our race. This is a threat to humanity because the more weapons there are, the more greed there is to obtain them. Furthermore, the military can now engage in conflict from an even more discreet location, as expressed by the quote "Military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated," from "Military Robots: Armed, but How Dangerous?" This shows that over time, weaponized A.I. can negatively impact how war is fought and how humans are affected. It creates ethical conflicts because the technology that may help us progress can harm our race, and the act of war is no longer in person but detached from its dangers, which creates a disconnect between people and the wars they fight.
Questions we still have are:
STUDENT-LED RESEARCH - OUR METHOD
The method we chose was to create a survey, because it lets us see results from the community and how people relate to the conflict. We asked six questions, and our results show what 45 middle school students, who will grow up and live in a future where A.I. is a bigger part of their lives, thought about the weaponization of A.I. One flaw or shortcoming of our method is that it does not reflect how most people feel, because we only received responses from a small group of middle school students who do not represent the rest of the population. However, getting their opinions helps us understand our conflict within our own community of peers.
According to the data shown above, both Negative and Neutral received similar responses at 31.1% each, while 20% of respondents chose that A.I. will impact humanity positively.
Analysis: Out of the 45 responses we received, only 4.4% (2 people) fully supported the use of A.I. in weapons. Only 6.7% (3 people) did not support it at all, while 48.9% (22 people) felt neutral.
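The percentages in these analyses come from dividing each answer count by the 45 total responses and rounding to one decimal place. A minimal sketch of that arithmetic, using the counts from the question above (other questions' counts would be plugged in the same way):

```python
# Convert raw survey counts into percentages of the total responses.
# Counts below are taken from the analysis of the A.I.-weapons question.
responses = {
    "fully support": 2,
    "do not support": 3,
    "neutral": 22,
}
total = 45  # total survey respondents

for choice, count in responses.items():
    percent = round(count / total * 100, 1)  # e.g. 2/45 -> 4.4
    print(f"{choice}: {count} of {total} = {percent}%")
```

Running this reproduces the figures quoted above: 4.4%, 6.7%, and 48.9%.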
Analysis: We learned that 6.7% (3 people) rely on artificial intelligence, while 22.2% (10 people) do not rely on A.I. at all.
Analysis: The most common response was in the neutral section at 53.3%, which shows that people do not really care what the government does. The second most voted choice, at 31.1%, was that people believe this will help our military.
Analysis: The most selected choice was the second, which stated that A.I. will provide…, showing that respondents believed it will provide more accuracy in times of war. The second most selected was the fourth choice, meaning they think it will start wars or make wars worse.
Analysis: Lastly, most of the feedback we received was that people did not know, but 35.6% thought our military should do more research into this, and 24.4% thought we should not go into this territory at all.