Driving question: "Does the convergence of artificial intelligence and weaponry create a threat for humanity?"
Conflict is composed of opposing forces.
The debate over weaponized artificial intelligence is composed of opposing forces: one side calls for banning militarized A.I., while the other accepts the use of A.I. and wants to continue researching and developing it. Elon Musk and other A.I. experts call for a ban, as stated in an article by David Z. Morris: "One hundred and sixteen roboticists and A.I. researchers, including SpaceX founder Elon Musk and Google Deepmind co-founder Mustafa Suleyman, have signed a letter to the United Nations calling for strict oversight of autonomous weapons, a.k.a. 'killer robots.'" The letter describes the risks of robotic weaponry in dire terms and warns that failure to act swiftly will lead to an "arms race" toward killer robots. The opposing side wants to keep developing A.I. As Evan Ackerman states, "There's been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we've had for the last half century…"
Conflict may be natural or man-made.
Weaponized artificial intelligence is a man-made conflict because humankind wants power and control. Humans can use A.I. to gain power, even at the cost of human lives, and A.I. can be used in war to gain the upper hand, because countries do not care how dangerous a weapon is if it helps protect their country and win the war. Research by Will Knight states that "automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants." This would allow more night attacks and make it possible to hit targets accurately with less training. Since the term artificial intelligence was first coined by John McCarthy in 1956, when he held the first academic conference on the subject, A.I. researchers have been able to advance the field and create more instruments of war. As technology advances, the threat becomes more dangerous and increases the hunger for power. Evan Ackerman states, "There will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that is accessible to anyone if they want to build a weaponized drone," to show that countries might enter a race for power, or a race for the upper hand in war.
Conflict may be intentional or unintentional.
The use of A.I. as a weapon in war is intentional. However, the ethical conflicts that surround the weaponization of A.I. may not be. In war, A.I. is used to pilot unmanned vehicles, protecting the lives of allied forces but also endangering the lives of opposing forces. Since A.I. systems are not fully human, they may still make mistakes in judgment, or they can be hacked to endanger their own side. The article "AI (Artificial Intelligence)" states that "A.I. systems have a sense of self, have consciousness," which could mean that an A.I. might endanger people in order to accomplish its purpose. Another quote from the same article describes an A.I. that can currently perform certain actions but cannot use memory: "Deep Blue can identify pieces on the chess board and make predictions, but it has no memory and cannot use past experiences to inform future ones," which means that A.I. still has further to go in its development. In wartime, humans use their memory and their ethical judgment to make decisions, but A.I. may not have that capacity. Over time, A.I. may become something that both helps and hurts us in times of war. At the Association for the Advancement of Artificial Intelligence conference in 2015, Evan Ackerman stated that "banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil," showing that weaponized A.I. is intentional.
Intentional
Creating reactive machines that analyze possible moves and choose strategic ones.
Integrating self-awareness into A.I.
Giving weapons to A.I.
Using A.I. technology in unmanned vehicles and planes.
Conflict may allow for synthesis and change.
A.I. creates a threat to humanity because the technology used to create it opens new doors of possibility, to help or to hurt. In war, humans have been the go-to weapon, at the risk of our own lives, so technology is starting to take over that role. In an article by Anthony Cuthbertson, he states that "A.I. makes it possible for machines to learn from, adjust to new inputs, and perform human-like tasks." This shows that technology is starting to match or maybe even surpass the intellect of humans, which would make machines excellent weapons of war; and although they are getting close to our intellect, they will follow orders without question and without fear for their lives. In the article "We Should Not Ban 'Killer Robots,' and Here's Why," the author states that "there will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that's accessible to anyone if they want to build a weaponized drone." This shows that people, war, and technology all come together to create weaponized A.I., and this will create a definite threat to humanity, especially when everybody is given access to it.
Changes: the transition from man-to-man combat to robot-versus-robot combat.
Conflict is progressive.
A.I. keeps changing as technology continually progresses and improves. "Is Artificial Intelligence Dangerous?", an article by R.L. Adams, states that "AI will clearly pave the way for a heightened speed of progress," showing that progress is changing the way we live our lives and the way people fight their battles, whether in social life or in war. A.I. is progressive because, as the years pass, its technologies work together, such as using the internet and robotics to create special aircraft and drones. As stated in the article "Military Robots: Armed, but How Dangerous?", "Military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated." Furthermore, A.I. technology appears more and more in people's everyday lives, through smart homes, smartphones, smart speakers, and assistants like Siri or Alexa. Therefore, A.I. and its role in our lives is becoming more a part of the world. There is no question that these technologies are advancing even further through the military, where militarized or weaponized A.I. can be developed over time. This contributes to the conflict continuing to be progressive.
LANGUAGE OF THE DISCIPLINE
Artificial intelligence is when a machine is programmed to think like a human. An article published by Nick Heath on February 12, 2018, defines "artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task."
Weaponry can be defined as weapons or other instruments used to harm people; the dictionary defines it as "weapons regarded collectively."
PARALLELS Global - This conflict can affect our globe positively or negatively, depending on our nations' leaders' decisions. Evidence for the negative side comes from our SLR responses to one of our questions. The question was, "How do you think artificial intelligence may impact the human race over time?" and 31.8% of the responses were negative, meaning those respondents believe A.I. will impact humans negatively. The other side of the parallel is how A.I. will help our world. Research that shows this comes from an article published by Andy Patrizo in 2017, which mentions, "A.I. allows for more intricate process automation, which increases productivity of resources and takes repetitive, boring labor off the shoulders of humans. They can focus on creative tasks instead." This relates to the positive side because A.I. can take over mundane tasks.
Community - This conflict relates to our community because it has two sides: on one side, our military is using A.I. for weaponry, while on the other, our community is using it to deliver packages, film video or capture photos, secure our home systems, listen to requests to play music or search for information, or talk to other apps. The military side is discussed in an article by Robert W. William, who mentions, "It could provide unpredictable and adaptive adversaries for training fighter pilots." In other words, the military is using A.I. to help train fighter pilots by letting them visualize what it will be like to be in the air and flying. The second side revolves around our community using A.I. to send or deliver. Research that supports this is found in an article by Murray Newlands, who mentions, "As of now, chatbots can be integrated into numerous applications, which allows them to receive payments and check on orders." This is just one way that A.I. is part of our community and the way we live, through apps, assistants, smartphones, and smart homes.
Personally - A.I. affects us personally because we see A.I. technology all around us, in the many devices that people have in their homes or classrooms. It also affects us because when we are older, we may live in a world where the military uses A.I., which, as shown in our research, can help or harm us. Evan Ackerman states, "There will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that's accessible to anyone if they want to build a weaponized drone," which would allow random people with unknown intentions to use potentially dangerous and deadly weaponry for unethical purposes that could affect the world my peers and I will live in.
IMPACT Artificial intelligence has the potential to threaten our society and impact it in a dangerous way. As stated in the article "We Should Not Ban 'Killer Robots,' and Here's Why," the author notes, "you see something sinister on the horizon when there's so much simultaneous potential for positive progress." This shows that progress also comes with a drawback, and if the two are not balanced, technology will not help us; it may endanger our advancements in war and also our race. This impacts us and is a threat to humanity because the more weapons there are, the more greed there is to obtain them. Furthermore, the military can now engage in conflict from an even more discreet location, as expressed in "Military Robots: Armed, but How Dangerous?": "Military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated." This shows that, over time, weaponized A.I. can negatively impact how war is fought and how humans are affected. It creates ethical conflicts because the technology that may help us progress can harm our race, and the act of war is no longer in person but detached from its dangers, which creates a disconnect between people and the wars they fight.
Unanswered questions
Questions we still have are:
Is an artificial intelligence arms race inevitable?
How will this impact our future?
Will the reliance on A.I. affect the way humans think?
Will this advance the way humans make decisions about their problems?
STUDENT-LED RESEARCH - OUR METHOD
The method that we chose was a survey, because it lets us see results from our community and how people relate to the conflict. We asked six questions, and our results show how 45 middle school students, who will grow up in a future where A.I. is a bigger part of their lives, thought about the weaponization of A.I. One flaw or shortcoming of our method is that it doesn't reflect how most people feel, because we only received responses from a small group of middle school students who do not represent the rest of the population. However, getting their opinions helps us understand our conflict within our own community of peers.
Analysis: According to the data shown above, both Negative and Neutral received similar responses at 31.1% each, while 20% of respondents chose that A.I. will impact the human race positively.
Analysis: Out of the 45 responses that we received, only 4.4% (2 people) fully supported the use of A.I. in weapons. Only 6.7% (3 people) didn't support it at all, while 48.9% (22 people) felt neutral.
Analysis: We learned that 6.7% (3 people) rely heavily on artificial intelligence, while 22.2% (10 people) don't rely on A.I. at all.
Analysis: The most responses we received were in the neutral section, at 53.3%, which suggests that many people don't have a strong opinion about what the government does. The second most selected choice, at 31.1%, was that A.I. will help our military.
Analysis: The most selected choice was the option "A.I. will provide more accuracy in times of war," showing that respondents believed A.I. will improve accuracy. The second most selected was the option stating that A.I. may possibly start or make wars worse.
Analysis: Lastly, for our final question most respondents chose "I don't know," but 35.6% of people thought that our military should do more research into this, and 24.4% thought we should not go into this territory.
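As a small sketch of how the percentages in the analyses above follow from the raw counts, the snippet below divides each respondent count by the 45-respondent total and rounds to one decimal place. The counts (2, 3, 22, 10) are the ones reported in our analyses; the helper function `pct` is just an illustration, not part of the survey tool itself.

```python
# Sketch: converting raw respondent counts into the percentages
# reported in the analyses above (45 total respondents).

def pct(count, total=45):
    """Return a respondent count as a percentage of the total, rounded to 0.1."""
    return round(count / total * 100, 1)

print(pct(2))   # fully supported A.I. in weapons  -> 4.4
print(pct(3))   # didn't support it at all         -> 6.7
print(pct(22))  # felt neutral                     -> 48.9
print(pct(10))  # don't rely on A.I. at all        -> 22.2
```

Any count can be checked the same way; for example, 14 of 45 respondents gives 31.1%, matching the Negative and Neutral results in the first analysis.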
After we reviewed our responses, we discovered that Negatively and Neutral (31.8% each) received the same number of votes for how A.I. may impact the human race over time.
Our second question was, "How do you feel about the government using technological innovations like artificial intelligence within weapons?" The results were not very decisive, because most respondents (50%) chose neutral.
Our third question was, "How much do you rely on artificial intelligence in your daily life? Artificial intelligence can include Siri, Alexa, Cortana, etc." We learned that 20.5% don't use most of these, while 6.8% rely on A.I. heavily.
Our fourth question was, "Do you think that artificial intelligence can help our military or harm our military?" The most common response was neutral, at 52.3%, with 31.7% of respondents thinking that it will help our military.
Our fifth question was, "What are your concerns about using artificial intelligence in weaponry during times of war?" The most selected response was "A.I. will provide more accuracy in times of war," at 77.3%.
Our sixth and last question was, "Do you believe that our military should invest more into researching and exploring the weaponization of artificial intelligence?" The feedback we received wasn't very conclusive, because 40.9% chose "I don't know," but 34.1% chose that our military should do more research on the weaponization of artificial intelligence.
1. How do you think artificial intelligence may impact the human race over time?
Positively
Negatively
Neutral
I don’t know
2. How do you feel about the government using technological innovations like artificial intelligence within weapons?
(On Google Forms, make the answer choices a “linear scale” with 1 being “Do not support” and 5 being “Fully support.”)
3. How much do you rely on artificial intelligence in your daily life? Artificial intelligence can include Siri, Alexa, Cortana, etc.
(On Google Forms, make the answer choices a “linear scale” with 1 being “I don’t use A.I.” and 5 being “I always rely on A.I.”)
4. Do you think that artificial intelligence can help our military or harm our military?
Help
Harm
Neutral
I don’t know
5. What are your concerns about using artificial intelligence in weaponry during times of war? Check all that apply. (Use checkboxes on Google Forms.)
I do not have any concerns about artificial intelligence in weaponry
A.I. will provide more accuracy in times of war
A.I. will override human empathy during times of war.
A.I. will make better or wiser decisions during times of war.
A.I. may possibly start or make wars worse.
A.I. will not impact the way wars are fought.
6. Do you believe that our military should invest more into researching and exploring the weaponization of artificial intelligence?
Yes, our military should do more research into this.
No, we should not go into this territory.
I don’t know.
Reflection 1: Most of the articles that we have researched are primarily about war and how the conflict relates to being man-made through technology. During my research, I also discovered the new weapons the military is using this year and how they operate. However, we did have to change our driving question to a new one: "Does W. Tech repel against humans with their contribution to advanced weapons?" I hope to find more information on what will be done about the conflict and how it will be resolved. Lastly, in my opinion, our collaboration is helping me ask better questions, and I don't believe anything in our project isn't working; if something isn't working, we will try to fix it.
Reflection 2: We are planning on doing a survey so we can receive more accurate feedback. The easy part was the links and the article tags, but the hard part was getting the driving question approved. One of the few things we still have to do is come up with better survey questions. Overall, I believe that our group is doing fine with our project.
Reflection 3: I learned that there are a lot of weapons, but I mainly learned that A.I. (artificial intelligence) is the capability of a machine to imitate intelligent human behavior. I learned that I can share my ideas for the creative piece and the board design, and I feel more comfortable asking questions. We are planning on doing a camouflage background with a computer screen displaying our driving question.
Reflection 4: Our driving question was changed to a new one: "Does the convergence of artificial intelligence and weaponry create a threat for humanity?" The revisions that we made were mainly on the ISD and using clearer textual evidence. I learned that if I want to present in front of people, I have to have a clear, loud voice. I also learned that two or more people can create better things that require creativity, because you combine your ideas.