Infost 120 Information Technology Ethics Fall 2014 Final Exam
This final exam is an open-book, open-note, take home exam. You must answer each question below; be sure to answer all parts of each question. Your writing and understanding of the issues must be clear, and you should draw from the assigned readings, lectures, and presentations when appropriate, and cite any outside sources. Answers are to be double-spaced in 12-point Times New Roman font, with 1-inch margins, and formatted as .docx or .rtf documents. Follow the length requirement indicated for each question.
Each answer should be submitted as a separate file in the appropriate Dropbox folder. Name each file with your last name followed by the number of the question, such as Smith1.docx, Smith2.docx, Smith3.docx, and please place each file in the appropriate Dropbox folder (Question 1, Question 2, and so on). The exam is due in the D2L Dropbox by 11:59pm (CST) on Wednesday, December 17, 2014. Answer each of the following three questions:
Provide a written summary of each of the 4 primary elements from your group poster project:
(a) Description of the information technology, noting any uniqueness issues that lead to conceptual muddles and/or policy vacuums.
(b) Discussion of at least 2 core ethical concerns that arise from this technology.
(c) Discussion of how different ethical frameworks (utilitarianism, deontology, virtue ethics, etc.) might resolve these ethical concerns. Note any differences or challenges in how each would approach the ethical issue.
(d) Suggested possible solutions for dealing with the ethical issue: Can we clarify conceptual muddles through education, new social norms, etc.? Can we address the ethical concern through a change in the design of the technology? Or a code of ethics? Or a new law to fill a policy vacuum?
Do not merely repeat the bullet points from your poster; write a descriptive essay in your own words summarizing these core elements. (length = at least 3 full pages; worth 15 points)
Pick two other posters (not your own), and, using your own words, briefly describe both an ethical issue and a mitigation strategy identified by each poster. Then, state whether you think each proposed strategy will be possible or successful. Why or why not? (length = at least 2 pages; worth 10 points)
Looking back on the entire semester, pick one ethical issue/example that you found most surprising or most important. Explain why this issue stands out to you, and describe 2 things you could do to raise more awareness of the ethical problem (education, advocacy, resistance, organize a protest, create a Facebook group, start a blog, write a letter to a newspaper, create a YouTube video, etc.). Describe what you would do or say. (length = at least 1.5 pages; worth 5 points)