
Friday, June 7, 2019

Ethics of Autonomous Drones in the Military

She states that even the best and most highly trained soldiers in the midst of fighting may not always be able to act in accordance with the battlefield rules of engagement set out by the Geneva Convention, because of lapses stemming from normal human emotions such as anger, fear, resentment, and vengefulness. The second major point Dean wants to show in her article, through the views and studies of others, is that with this possible stride in the evolution of military technology we do not want to let the idea fade away. Another major point is that if we do develop this technology, how would we do so, and if not, would we regret not advancing further in this field many years from now. Yet with all of the information Dean uses to present her ideas, there are still major flaws: the majority of these ideas and beliefs are theoretical, they have not been fully tested, there is error in all technologies, and it is unclear where else these advancements in artificial intelligence would lead.

The first argument supporting Dean's major point comes from the research and thinking of Ronald Arkin, a computer scientist at the Georgia Institute of Technology. Arkin is currently under contract with the United States Army to design software for current and possible future battlefield robots. Arkin's research hypothesis is that intelligent autonomous robots can perform far more ethically in the heat of battle than humans currently can. Yet this is only a hypothesis, and while much research has been done toward it, there is still no conclusive evidence that an autonomous robotic drone can in fact perform better than a soldier on the ground or a pilot in a plane. In his hypothesis, Arkin stated that these robots could be designed with no sense of self-preservation. This means that, without one of the strongest human fears, the fear of death, these robots would be able to understand, compute, and react to situations without extraneous emotions.

Although the men and women designing these robot programs may be able to eliminate the psychological problem of scenario fulfillment, in which soldiers absorb unfolding information with a bias toward pre-existing ideas, it is not always the case that this happens to soldiers. One has to realize that from the moment a soldier begins his training he is taught to suppress the sense of self-preservation. There are isolated incidents of soldier error, but they are and will be corrected by superior officers or fellow soldiers.

Another factor that affects Cornelia Dean's arguments is that there are errors in all things, including technology. Throughout history there have been new uses of technology in warfare, but with them come problems and flaws that have caused, and can still cause, more casualties than necessary. With an automated drone, Dean's belief is that it would be able to decide whether or not to launch an attack on a high-priority target even if the target is in a public area, and to decide whether the civilian casualties would be worth it. But what happens if that drone is only identifying the target and the number of civilians surrounding it? It would not be able to factor in what kinds of people are around the target, such as men, women, or children, and any mix of them.
The error in this situation would be the drone deciding that the target is high enough priority and launching a missile while the surrounding civilians were women and children and a school bus was driving by. The casualties would then instantly outweigh the priority of eliminating that particular target, and a human pilot could abort the mission far more easily than the predetermined response of an autonomous robot. Although Ronald Arkin believes situations could arise in which there is no time for a robotic device to relay what is happening back to a human operator and wait for instructions before completing a mission, it may be in that second of delay between the robot and the human operator that the ethical judgment is made.

The fact that many human-operated robots are already widely used to detect mines, dispose of or collect bombs, and clear out buildings to help ensure the safety of our soldiers shows that robots already serve as battlefield assistants, which supports Dean. But all of these machines in the field have moments of failure or error. When the machines fail, it takes a soldier trained for that situation to fix them and put them back into use. If an autonomous drone fails while on a mission, it is completely on its own, with no human operator to fix it. Then arises the problem of enemies realizing they were being monitored; they could gain access to our military technology and eventually use it against us.

Another major point that Cornelia Dean discusses is that with this possible step in the evolution of military technology, we do not want to let the idea fade away. A large part of that is the question of how we would develop this technology if we do, and, if not, how much we would regret, or be affected by, not advancing further in this field many years from now. The argument that if other countries advance faster and further than the United States military we could become less of a world power and be at greater risk of attack and war with greater human fatalities is not necessarily true. This situation matters in the sense of keeping up with the other world powers, but I believe the reward is not worth the risk of the damage and civilian casualties that could result from any number of robotic drones and their possible errors.

There is a possibility, as the technology develops and robots become more and more aware, that they will reach the point where, Arkin believes, they can make decisions at a higher level of technological development. Yet if these autonomous robots truly can think for themselves and make decisions, that brings a whole new set of problems: what if a robot decides something other than what the developers originally programmed? There is also the practical question of whether the government can ethically accept that in the early stages of use, even after extensive testing, there may be accidental casualties. If a robot makes an erroneous decision because of how new and untested it is, any of the possibly terrible results would not be the responsibility of the robot but of the country and government that designed it.
The supporting evidence of this article strongly shows that Cornelia Dean hopes the use of these ethically superior autonomous robots will be a part of our military in the near future, before the United States falls behind the other superpowers of the world. Yet with all of the information Dean uses to present her ideas, there are still major flaws: the majority of these ideas and beliefs are theoretical, they have not been fully tested, and there is error in all technologies. With these major points reinforced by plenty of evidence throughout the article, and with all of the possible negative sides and errors of this argument, it is safe to say that this is, and will remain, a controversial topic of discussion for many governments and all parties involved with this technological advancement.
