Ethics for robots

Watch the video and read the article Morals and the Machine.

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.

As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.


10 thoughts on “Ethics for robots”

  1. Sam B says:

    Assignment 8: Read the article and watch the video. Also read the linked material.

    Give an example of a moral dilemma a computer or robot might face. Say how you think the issue might be resolved.

    Does the question of ethical right and wrong for computers make sense? Why/why not?

    500 words in this comment thread or on our forum. DUE Wed. April 2, 2014 by noon.

    • Jitesh Vyas says:

      Technology is becoming extremely sophisticated, and robots are part of this movement. Traditionally, robots were created to take over simple or dangerous human tasks. Increasingly complex artificial intelligence is now allowing robots to venture outside the controlled environment of the lab and into places with endless variables, like war zones. Sherwin Yu’s article in Yale Scientific presents a situation where a robot caregiver must give medication to an elderly person, but the person refuses. The robot only knows as much as what is programmed into it, and that is based on the knowledge and input of its creators. Not every possible outcome in this situation can be predicted or prepared for through programming. Drawing parallels to human life, we too only know as much as our past experiences and learning tell us. We make decisions based on what we know, and we make assumptions about what we do not know. Humans bring a mix of rationality and emotion to their decision-making. I think decisions should be rational, but reason should also be valued; reason requires a degree of emotional intelligence, and computers or robots cannot necessarily know what comes next or how to act. Sometimes emotions are needed to respond appropriately to situations. Though machine learning is improving too, it cannot improve fast enough to respond in situations like the one Yu mentions.
      Another example of a moral dilemma a computer might face is euthanasia. If a patient in a hospital is suffering and life support needs to be withdrawn, can or should a robot take over the role of the doctor or family in this situation? Euthanasia is a complex topic with many considerations, but ultimately it is a decision. I believe robots would, in theory, be great at rational decision-making, as utilitarian choice models can be programmed. Though robots find it difficult to understand context, they can be programmed to consider and interpret only facts and to make decisions based on those. While this may be useful in certain situations, euthanasia involves emotional considerations that are hard for a computer program to quantify or understand, no matter how sophisticated it is. As Yu writes, to function in a society of humans, robots require emotions, social intelligence, empathy, and consciousness. No matter how lifelike robots get and where they are placed, there are limits to how far mathematics can go toward truly replicating a human’s judgement and responsiveness.
      The question of ethical right and wrong for computers does not make sense, because the limitations are predetermined by the programmers. Robots can only respond to cases they understand, and while some may handle thousands of situations, there are inconceivable cases that would render a robot useless. Ethical right and wrong falls into the hands of the programmers: how they choose to direct the robot in each situation is what the robot will do, so the robot itself does not have much control over the decision.
      There are still so many ambiguous cases in the world where robots would have no place because emotions are an inevitability of working with humans.

  2. Sam Horton says:

    In the article Morals and the Machine in The Economist, and in the article Machine Morality: Computing Right and Wrong by Sherwin Yu, ethical issues are discussed that pose a threat to the advancement of robots. These ethical issues change as technology changes, and society and government will have to work hard to find a universal approach to solving such dilemmas. One moral dilemma discussed in the articles stems from recent advances in driverless cars. Driverless cars have been projected to have the potential to save more than 50% of the lives that car accidents currently claim. The moral dilemma, of course, arises if one of these driverless cars has to choose between two potentially life-threatening actions in the event of an accident. Another moral dilemma surrounds drunk drivers: if the human occupant of a driverless vehicle was intoxicated and an accident occurred, laws on drunk driving and laws governing driverless vehicles would clash. The question would arise as to who was driving the vehicle, the human or the robot. In Morals and the Machine, the proposal of placing a black box within the vehicle could be a viable solution, but a costly and potentially unnecessary one. Such issues could be resolved if a set of laws surrounding AI were implemented before the release of AI products such as driverless vehicles. Without proper laws on AI products worldwide, adapted to countries where cultural beliefs differ, moral dilemmas could cost society millions of dollars and could cause AI products that might otherwise benefit society to stall in their progression. As discussed in the Morals and the Machine article, three rules of robotics are proposed that would begin the process of drafting and implementing such laws.
The first rule is to determine who is at fault in an incident, which would to some degree help in solving the moral dilemma of drunk drivers. The second is that a robot’s judgment must conform to the judgment the majority of the population would make. This second rule can be troublesome, though, if the decision the majority of the population would make is in fact an incorrect action. The third is that the engineers producing such AI must work with ethicists to determine the extent of what these machines may decide. The question of ethical right and wrong for computers therefore does make sense, as we may one day reach a point where computers make most of the decisions that humans make today. As advancements in robotics continue to grow exponentially, we must plan for what may be regarded as impossible today, so that when we do reach that level of AI, we have properly prepared for it.

  3. Spencer Page says:

    With the far-reaching innovations taking place in the robotics world, coupled with the dramatic speed of development, it is not surprising that modern-day robots are becoming so intelligent and lifelike. In fact, it is not difficult to imagine a world where robots are completely autonomous. This realization brings to mind moral and ethical issues that are difficult to reconcile in general terms. For example, the use of robots in warfare has both positive and negative elements. Robots can be used for surveillance and to deliver weapons, making them incredibly useful on many levels, not the least of which is preserving human life when they take the place of humans on dangerous missions (e.g. defusing a bomb). In addition, because robots are equipped with intelligence technology and cameras, the amount of data they send back to their human operators far surpasses the information that can be consumed and then transferred from human to human. Because of this, it can be argued that in some circumstances the robot is in a better position to decide whether or not to fire, since it is capable of weighing so much information and then acting accordingly. However, a robot may encounter a moral dilemma when making life-or-death decisions in unpredictable circumstances, such as sending a bomb to a target that happens to coincide with the location where innocent civilians are hiding. Without the benefit of true human emotion and judgment, and in the absence of ultimate accountability and moral agency, a robot should not be making decisions like this.

    The easiest way to resolve this type of dilemma is to avoid having a robot involved in this type of decision-making, but that may not be in society’s best interest, because there is an ethical case to be made for using robots instead of humans in some situations. Therefore, legal clarity is required, as well as clarity within the robots themselves, via black boxes that record everything that happens so that events can be reconstructed and understood from a moral and ethical perspective after the fact. It is important that robots embody the right ethics (those that a human would have in the same situation) in order to be granted complete autonomy and moral agency.

    As for the question of ethical right and wrong for computers, I do not believe that it makes sense. While there are clearly instances where the opposite can be argued, my feeling is that in the absence of human emotion and good judgment, robots are not equipped to be entirely responsible for their decisions. Robots must be able to respect basic human rights, including privacy, identity, and autonomy, which in practice seems like an impossibility. It goes without saying that, in our rapidly advancing technology industry, robots will inevitably make their way into our everyday lives in a much bigger way. However, putting a robot in the position of making moral or ethical decisions carries too much risk, as I do not believe that robots can ever genuinely act like, and respect, real human beings.

  4. Eric Pattara says:

    Robots, the term used to describe pseudo-conscious computers or machines, have since their invention been used by mankind to increase the efficiency of production lines, monitor computer systems, clean around the house, and so on. In many ways, we (in our current state of being, at least) would be lost without them. Many robots are used to supplement manual labour, proving more accurate in measurement and design, and faster overall in their processes, than any person would normally be. There are, however, branches of the robotics field that go beyond your everyday Roomba and could potentially face difficult ethical dilemmas. It is with these types of robots that we must ask where to draw the line on their responsibility, and who could be held liable for any actions that have negative (especially harmful) consequences.
    One such dilemma that comes to mind is the use of robots in the military. In the military, a robot could have a number of uses, ranging from delivering equipment to mounting a full-on assault across enemy lines. Due to the autonomous nature of these devices, assigning them to sensitive operations could be problematic, as they will act in accordance with their programming, and only successful interference could alter their objectives. For example, if a military robot were programmed to deliver weapons to a location and for some reason was intercepted by locals, this could add significant risk for those living in the area. Or if a computer were driving a tank or some other military vehicle and was required to take a specific route to a destination, it would see getting there by all means possible as its objective, regardless of what it might destroy or kill in the process. The main responsibility in these ethical situations lies with those programming the machines, as the machines are not (at least not yet) capable of complete artificial intelligence and thus could not be held responsible for anything they may do while operating. Even the HAL 9000 example from 2001: A Space Odyssey, mentioned in the article Morals and the Machine, is flawed in the sense that if it were possible to program the HAL system to complete the mission by any means possible, it would also have been possible to create a failsafe protocol ensuring the safety of the crew, meaning that the error lay with the person who created HAL.
    I personally don’t believe that computers and robots should be considered ethical beings, because their actions are dictated by their designers, however indirectly. The responsibility lies with the engineers creating these sophisticated robots, and the ethical dilemmas the robot could face should be taken into account during the programming period to ensure adequate and acceptable behaviour.

  5. Jonathan Ing says:

    As robots become more capable of higher-order thinking and more integrated into society, questions regarding the ethics surrounding their actions must be answered. One example of advanced robots on the horizon is the driverless car, which navigates using a computer and is said to be able to save up to a million lives annually if traditional human-operated vehicles are replaced with driverless alternatives. An example of an ethical dilemma for a driverless car is a situation where the robot must either swerve to avoid hitting pedestrians on the road, or continue driving through the pedestrians to avoid steering into oncoming traffic and, in turn, save the passengers inside the car at the expense of the pedestrians.

    The ethical dilemma lies in the robot’s determination of who deserves to live; i.e. should the robotic vehicle prioritize the safety of the passengers or the pedestrians? The robot may be programmed in a few different ways. The first is the case where the robot always acts to protect the human being(s) inside the car, which would more closely mirror human driving habits and also make car owners more comfortable with the idea of driverless cars. Clearly, this option would appeal to a profit-seeking corporation and its customers. Another option would be to program the robot to make the “value-maximizing” choice for society, wherein the robot essentially assigns values to the lives of each party, predicts probabilities of survival, and makes the decision that maximizes the expected value of lives saved. This option seems to benefit society as a whole, as the car would theoretically select the scenario that preserves the most value to society, without bias. Personally, I believe that a “smart robot” that always makes the decision that benefits society the most would be the most ethical design. However, I would not be surprised if manufacturers chose to protect the car’s passengers, as this would increase revenues for the company.
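    The “value-maximizing” policy described above can be sketched as a simple expected-value comparison. This is only a hypothetical illustration: the option names, the survival probabilities, and the assumption that every life is weighted equally are all assumptions, not anything proposed in the articles.

```python
# Hypothetical sketch of a "value-maximizing" crash decision.
# Each option lists, per person affected, the probability that
# the person survives if that option is chosen. All lives are
# weighted equally here -- itself a contestable ethical choice.

def expected_survivors(option):
    """Sum of survival probabilities = expected number of lives saved."""
    return sum(option["survival_probs"])

def choose_action(options):
    """Pick the option with the highest expected number of survivors."""
    return max(options, key=expected_survivors)

options = [
    {"name": "stay_in_lane",              # hits the pedestrians
     "survival_probs": [0.95, 0.95,       # two passengers, likely fine
                        0.10, 0.10]},     # two pedestrians, likely not
    {"name": "swerve_into_barrier",       # endangers the passengers
     "survival_probs": [0.60, 0.60,       # two passengers, at risk
                        0.99, 0.99]},     # two pedestrians, likely fine
]

best = choose_action(options)
print(best["name"], round(expected_survivors(best), 2))
# -> swerve_into_barrier 3.18
```

    Under these made-up numbers the policy sacrifices passenger safety, which is exactly why the comment predicts manufacturers may prefer the passenger-protecting rule instead.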

    The ethical right and wrong for computers is an important topic of discussion as robots begin to act more like humans. As they become more integrated into society, the actions of the robots could have severe negative impacts on the humans they are meant to serve and the robots’ abilities to comply with ethical codes will play a role in determining their usability. In the example of the driverless car, the ethics around the decision on who to protect will be debated whenever a collision occurs. However, the onus of minimizing the negative ethical consequences of robots’ actions should fall on the people who engineer them, as they have control over how the robot behaves and learns how to make ethical decisions. A panel of ethicists and engineers should collaborate to create guidelines for the engineering of robots such that they follow an agreed-upon code of ethics, to the best of the ability of the technology at hand. This will ensure that robot ethical guidelines are created in an informed manner and applied consistently.

  6. Lily K says:

    Let us assume that robots were programmed to obey the Three Laws of Robotics as illustrated in the short stories of Isaac Asimov. Without full moral agency, it is quite plausible that a robot would face a moral dilemma if and when these three laws overlapped or contradicted one another. Even without the Three Laws of Robotics, the most advanced machines today have only operational morality. This means that their actions are entirely programmed by the humans involved in their design, placing robots far from full moral agency. However, even without full moral agency, these robots can still do much more good than harm.

    For example, imagine if your robotic car could differentiate between a child and a squirrel. If a squirrel runs in front of your robotic car on a busy street, your vehicle could calculate that the logical course of action is to run over the squirrel in order to avoid any further accidents with the vehicles around you, thus protecting you and other drivers alike. However, if a child were to step in front of your robotic car, your vehicle would swerve in order to avoid hitting the child. In these examples, the robotic car has the ability to make the correct decision and avoid potentially fatal accidents. Considering that robots are not fully moral agents, moral dilemmas such as the example above can be addressed through three steps. According to the ‘Morals and the machine’ video, the first step is to determine who would be at fault in case of an accident. Who would be responsible? Would it be the designer, the programmer, the manufacturer, or the operator if the robotic car makes an error or causes an accident? The second step would be to program the robot in the most ethical way possible across cultures. Although there are cross-cultural differences in what is or is not acceptable in a society, the programmers must attempt to find an ethical system that would be universally accepted; in this case, most cultures would agree that saving the life of a child is more important than saving the life of a squirrel. Lastly, the third step would be to collaborate with professionals such as engineers, ethicists, lawyers, and policymakers in order to ensure that different areas of concern are addressed. By working together, these professionals could help ensure that robots are created to function in the most ethical way possible.
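    The child-versus-squirrel rule above amounts to an obstacle classifier feeding a fixed priority ordering. A minimal sketch, assuming a hypothetical perception label and made-up priority values (none of this comes from the video):

```python
# Hypothetical priority rule for a robotic car's swerve decision.
# Assumes some perception system has already labelled the obstacle;
# the labels and priority values below are invented for illustration.

# Higher value = more important to protect by swerving.
PROTECTION_PRIORITY = {
    "human": 3,     # always swerve for a person, child or adult
    "pet": 1,
    "squirrel": 0,  # swerving on a busy street risks a pile-up
}

def should_swerve(obstacle_label, swerve_risk_to_others):
    """Swerve only if protecting the obstacle outranks the risk
    that swerving creates for surrounding vehicles (same made-up scale)."""
    priority = PROTECTION_PRIORITY.get(obstacle_label, 0)
    return priority > swerve_risk_to_others

# A child steps out; swerving moderately endangers nearby cars.
print(should_swerve("human", swerve_risk_to_others=2))     # True
# A squirrel runs out on the same busy street.
print(should_swerve("squirrel", swerve_risk_to_others=2))  # False
```

    Note how the hard ethical question is hidden in the priority table itself: someone still has to decide those numbers, which is the point of the three steps described above.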

    When analyzing the question of ethical right or wrong for computers, I believe that the answer currently remains in a grey area. We have the potential to reach the ultimate goal of creating an ethical computer, but that goal can only be achieved once there is a 0% chance of human error. As noted by Wallach, AMAs (artificial moral agents) will need both a top-down and a bottom-up approach in order to achieve full moral agency. As robots gain greater autonomy and ethical sensitivity, we as their creators must ensure that they are programmed to do more good than harm for society as a whole. We must ensure that the highest-functioning robots can make moral and ethical judgments without direct instruction from humans.

    To conclude, I believe that embedding a robot with an ethical system would be a rather difficult task, considering that we as humans already struggle with questions of what is right and what is wrong. If humans make unethical decisions, what makes us think that robots can do better? On the other hand, that in itself may be the answer. Mankind’s greatest power can also be our greatest weakness: when humans act on impulse, their actions are a direct result of their own personal reactions, such as sadness, fear, or anger. Robots, however, can be programmed to act regardless of any will of their own. Therefore, robots may in fact have the potential to become greater ethical ‘beings’ than humans!

  7. Aaron Macrae says:

    The Economist video discusses robots being increasingly used in everyday life: on the battlefield, on the road, and in hospitals. It mentions how the advancement of computing technology enables improvements in robot technology, which is used to protect individuals in all of these cases, from hospitals and elderly care to saving soldiers’ lives in modern warfare. The video also talks about how, in many situations, a robot would face a moral dilemma when under pressure. The potential for civilian casualties is a large topic of discussion: will the robot make the right ethical choice, and if so, how will it know what to choose?
    Robot technology seems to be developing with the hope that it can increase protection and civilian safety. However, there may be danger in inviting robots to protect our everyday lives. Robots have to be programmed to make ethical choices, and this is very hard for programmers to develop. One ethical issue discussed in the video concerns a malfunction in a self-driving car. Say a self-driving car finds both a chicken and a young child in front of it and has to swerve one way. Any person would know it is much better to hit the chicken than to take the child’s life. The car, on the other hand, might not be able to tell the difference between a chicken and a small child, and could potentially take the wrong life.
    To solve an issue like this, self-driving cars need to be programmed to recognize humans of all ages over other life forms so that a human life is not taken in the process. Similarly, as self-driving cars start to be introduced onto our roadways, it is essential to have some type of ethical programming on board and working correctly. I’m sure that if there were a glitch in the program and a life were taken in the process, an uproar of unhappy citizens would occur, demanding answers.
    The question of ethical right and wrong for computers does make sense. The example I presented above discussed the ethical implications of a self-driving car choosing between hitting a young child and swerving out of the way to hit a chicken; the case is mostly framed around what other actions could have been taken to avoid the child and keep the driver safe. It is understandable that a human’s moral judgment is acquired through personal experience. Humans cannot hold multiple sets of moral judgments… but can robots? If they can, ethical dilemmas could be resolved in ways that better reflect the diversity of moral judgments of the population the robot protects. If they cannot, we are at risk of being governed by a particular set of moral judgments prescribed and controlled by a particular ideology or people.
    As robotics continues to enter civilian life on a more regular basis, it is important to question what judgments robots and their programming make for us. Developers have to work out how ethics will be learned and practised by robots in order to assure the public of a safe environment.

  8. aaronrush says:

    The moral dilemma that robots might face which I would like to talk about relates to driverless cars. Driverless cars are becoming much closer to being a reality, and are even already out on the roads being tested by big companies such as Google. The ethical dilemma these robots will face is that, in day-to-day driving, they will need to make decisions, some of them life-and-death decisions.
    With everyday driving being taken over by robots, they will need to be programmed to adapt to situations that may even cause harm to the passengers in the car. A good example: if a car slows to a halt rapidly in front of another car carrying passengers, and the second car knows it does not have time to stop, does it shift to a different lane, running the risk of hitting another car, or try to stop abruptly with the certainty of either not making it or being slammed from behind? When robotic cars are on the road, they will also need to take human error into account, with some cars driven by computers and some by humans.
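    The “knows it does not have time to stop” step above is a concrete calculation a robotic car can actually perform. A minimal sketch, assuming constant braking deceleration and using the standard kinematic stopping-distance formula d = v²/(2a); all the numbers are illustrative, not taken from any real vehicle:

```python
# Stopping-distance check for a robotic car, assuming constant
# braking deceleration. Numbers below are illustrative only.

def stopping_distance_m(speed_mps, decel_mps2, reaction_s=0.1):
    """Distance covered during (computer) reaction time plus braking,
    using the kinematic relation d = v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

def can_stop_in_time(speed_mps, gap_m, decel_mps2=8.0):
    """True if the car can halt before covering the gap ahead."""
    return stopping_distance_m(speed_mps, decel_mps2) < gap_m

# At highway speed (~30 m/s, ~108 km/h) with 40 m of clear road:
print(can_stop_in_time(30.0, gap_m=40.0))  # False -> must weigh a lane change
# In town (~10 m/s, ~36 km/h) with the same gap:
print(can_stop_in_time(10.0, gap_m=40.0))  # True  -> braking suffices
```

    The physics part is easy; the comment’s point is that when this check returns False, the remaining choice between lanes is the ethical part, and no formula settles it.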
This is where the question of ethical right and wrong for computers makes sense. When the robot is controlling the car and there is a chance of injury to the people in the car, as well as those around it, the question is how the robotic car will be programmed to act. Will its main duty be to the passengers in the car, or to those around it? Not only that, but who is liable for the decisions the car makes: the passengers, or the company that built the car? These pressing issues, along with many others, will come to light as we see more of driverless cars.
    It is clear, however, that the issue of ethical right and wrong for computers makes sense. Robots will not only have to make decisions that are logical, but will also cross into making decisions that are ethical. The question will be how these robots make decisions, and whether they can be programmed to make ethical choices such as saving the people around the car rather than those inside it, if more people surround the car. I suspect that in the coming years this issue, like many others related to robot ethics, will come to light.
