Q1: For your ethics-related term project (see “Course Information” tab for details): Let us continue to develop it step by step over the semester so that it will be manageable rather than a crunch at the end, as follows. Write up 349 words or more (per person if a group project) if your project is a writing project. If it is not a writing project, do work on the project equivalent in effort to writing 349 words or more, and explain specifically what you did (in much less than 349 words!), giving examples (code, for example) if that makes sense. Put this in your blog, labeling it consistently per the example template.
Answer:
To begin with, I want to address the questions and potential ethical problems previously presented.
1. "Giving robots emotions could be viewed as unethical by some." This is certainly true, as giving a machine the ability to feel emotions is giving it the ability to feel BOTH negative and positive emotions. One could argue that it is unethical to inflict pain on another, and I would think that giving something the ability to feel pain in the first place would be of a similar nature, if not more heinous. However, one should never stop at the deontological method of analysis. The fact is that creating new synthetic lifeforms could have a plethora of benefits for mankind. At the least we would have a new race of people capable of working harder than a human could and would have less wants than a human. This could lead to the automation of everything and result in humans no longer needing to work. This easily could fall into slavery and will be touched on in one of the later questions.
2. "Robots with emotions would essentially be new lifeforms, but many would still view them as just machines." This is an unfortunate truth. Humans face discrimination from their fellow man for simply being of a different color, gender, etc. Being of an entirely different species AND then not even being fully organic would lead to discrimination.
3. "Playing God (creating "life") could be viewed as unethical." This could only be viewed this way under virtue ethics and perhaps the deontological method depending on how it was done. If done without inflicting harm onto another lifeform it would satisfy the deontological method and the utilitarian method thus, "playing God" is only unethical to those who only utilize virtue ethics or prioritize virtue ethics, like those who are religious for example.
4. "Would it be ethical to bring sentient "life" into this world knowing that discrimination would follow? Possibly even slavery?" This definitely would not satisfy deontology, as the journey would be a painful one for the new lifeform. The results are also potentially awful, so a utilitarian would have to take special steps to ensure the new lifeforms were protected from discrimination as much as possible to ensure the benefits outweighed the negatives. It would be absolutely necessary to ensure the new lifeforms would not be treated as anything less than equitable or they could come to see humanity as an enemy. Notice I specified equitable and not equal, this is an important specification. Machines for example would be capable of more than a human being and could be expected to work more and in harder conditions than a human without it becoming exploitation. An organic lifeform could be capable of less than a human and thus, less should be expected from the creature to ensure equity is maintained.
5. "Is it ethical to add organic parts to a robot?" I would argue that to simply give them organic parts would not be unethical. It would be no different than giving a human a pacemaker or prosthetic limb in my opinion, but instead of fixing something that is damaged or working improperly it would enhance parts that are already there. Humans are already working on augmenting ourselves via gene manipulation. Since the augmentation of mankind would stand to benefit mankind as a whole, I must argue that it is ethical to give organic parts to a robot.
6. "This could lead to AI being able to process and feel pain. Is this ethical?" The answer to this question is no different than my answer that I gave on question number 1 just applied to physical pain instead of emotional pain.
7. "Is it ethical to add mechanical and technological parts to an organic being?" I would simply point out that humans already do this with pacemakers and prosthetics. I highly doubt anyone is willing to make the argument that pacemakers and prosthetics are unethical considering they save and help the lives of millions.
8. "Is it ethical to pursue immortality? What could be the possible consequences?" The pursuit of immortality only violates virtue ethics outright and could violate deontology if the process were one that inflicted harm on others. However, if harm is not inflicted on anyone then it would satisfy deontology and utilitarianism. The benefits (the utility) of living forever would likely cause a utilitarianist to always say the pursuit of immortality is ethical. Only those who prioritize virtue ethics would consider the pursuit of immortality unethical thus, primarily those who follow some sort of strict code like the religious.
9. "Does this satisfy the utilitarian method? Do the ends justify the means?" I think the augmentation of machines and humans both satisfy the utilitarian method. Taking this to the extent of creating a new lifeform still would satisfy the utilitarian method, as the result is a larger population to continue the growth of society.
10. "Does this satisfy the deontological method? Is the journey one riddled with immorality?" This journey very well could be one riddled with immorality if special care is not taken to ensure that the journey is a safe and pleasant one for everyone involved. However, it does not outright violate deontology if the special care is taken to ensure the new lifeforms are integrated into society in a safe and proper way. I would say it can satisfy deontology but is not guaranteed to.
11. "What are potential consequences to bridging the gap between technology and biology?" The primary issue to be faced is we could end up creating a new enemy instead of creating an ally that benefits mankind. If discrimination is not properly prevented, then the new lifeforms would use their newfound emotions to hate us and would likely be of no use to society at this point. This unethical outcome must be avoided through special protections put in place. Humans could also augment ourselves to a point that we are no longer recognizable as human. This is a common fear presented in science fiction. It could be argued that this outcome is not ethical because while we may be capable of more in the augmented state, we cannot benefit MANkind if we are no longer recognizable as huMAN. In this case there is no utility for mankind because there is no longer a mankind.
Q2: Explain what needs to be done next on the project. Put this in your blog, labeling it consistently per the example template.
Answer:
Next, I need to define what special protections could be put in place to ensure that the new synthetic lifeforms do not face discrimination, and I need to try to define at what point an augmented individual has lost their humanity.