Abstract

Objectives: This study examines the framing of AI ethics in U.S. media coverage of child protection technologies, focusing on efforts to combat online exploitation. The research analyzes evolving narratives and ethical considerations surrounding AI in this sensitive domain. Methods: The study employs critical discourse analysis to examine 30 articles from major U.S. news outlets published between 2018 and 2023. This approach allows for an in-depth exploration of media framing, stakeholder representation, and the evolution of ethical discussions over time. Findings and conclusions: The research reveals a shift from initial technological solutionism to more nuanced discussions of ethical dilemmas in AI-driven child protection efforts. Key findings include: (1) tensions between privacy and protection, (2) concerns about false positives and overreach, (3) issues of transparency and fairness, and (4) patterns in stakeholder representation, including the marginalization of children's and families' voices. The study concludes that media framing significantly influences public perception and policy responses to AI in child protection. It emphasizes the need for a diverse, inclusive, and ethically grounded public discourse to guide the responsible development and deployment of AI technologies in this field.

Keywords

AI Ethics; Media Framing; United States; Discourse Analysis

Article Details

How to Cite
Emasealu, V. I. (2024). Framing AI Ethics in Public Discourse: A Critical Discourse Analysis of Media Coverage on AI in Child Protection. Ilomata International Journal of Social Science, 5(3), 852-865. https://doi.org/10.61194/ijss.v5i3.1277

References

  1. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.
  2. Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the 'Good Society': the US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505-528.
  3. Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review of Political Science, 10, 103-126.
  4. Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608.
  5. Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4), 51-58.
  6. Fairclough, N. (2013). Critical discourse analysis: The critical study of language. Routledge.
  7. Huang, S., & Cui, C. (2020). Preventing child sexual abuse using picture books: The effect of book character and message framing. Journal of Child Sexual Abuse, 29(4), 448-467.
  8. James, A., & Prout, A. (Eds.). (2015). Constructing and reconstructing childhood: Contemporary issues in the sociological study of childhood. Routledge.
  9. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
  10. Keller, M. H., & Dance, G. J. (2019). The Internet Is Overrun With Images of Child Sexual Abuse. What Went Wrong? The New York Times. https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html
  11. La Fors, K. (2020). Legal Remedies For a Forgiving Society: Children's rights, data protection rights and the value of forgiveness in AI-mediated risk profiling of children by Dutch authorities. Computer Law & Security Review, 38, 105430.
  12. Linden, A., & Fenn, J. (2003). Understanding Gartner's hype cycles. Strategic Analysis Report Nº R-20-1971. Gartner, Inc.
  13. Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. Public Affairs.
  14. Nissenbaum, H., & Boyd, D. (2021). Privacy and contextual integrity in AI-driven intervention systems. Washington Law Review, 96(3), 1169-1224.
  15. O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
  16. Scheufele, D. A., & Tewksbury, D. (2007). Framing, agenda setting, and priming: The evolution of three media effects models. Journal of Communication, 57(1), 9-20.
  17. Solove, D. J. (2011). Nothing to hide: The false tradeoff between privacy and security. Yale University Press.
  18. Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580.
  19. Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2022, April). Informing age-appropriate AI: Examining principles and practices of AI for children. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-29).
  20. Wodak, R., & Meyer, M. (Eds.). (2015). Methods of critical discourse studies. Sage.
  21. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
  22. The Washington Post. (2019). We're using AI to fight child exploitation. But privacy and safety concerns abound. https://www.washingtonpost.com/technology/2019/11/14/were-using-ai-fight-child-exploitation-privacy-safety-concerns-abound/
  23. Wired. (2019). How AI Is Tracking Child Predators on Social Media. https://www.wired.com/story/how-ai-tracking-child-predators-social-media/
  24. TechCrunch. (2020). Facebook uses AI to help spot and remove child exploitation content. https://techcrunch.com/2020/02/24/facebook-uses-ai-to-help-spot-and-remove-child-exploitation-content/
  25. The Verge. (2021). Facebook claims AI will clean up the platform. Its own engineers have doubts. https://www.theverge.com/2021/10/17/22729584/facebook-ai-content-moderation-engineers-doubt-scale
  26. USA Today. (2022). Meta reports a 73% surge in child exploitation content removal, largely credited to AI. https://www.usatoday.com/story/tech/2022/11/30/meta-reports-surge-child-exploitation-content-removal/10809222002/
  27. NBC News. (2021). Facebook's AI moderation reportedly can't interpret many languages. https://www.nbcnews.com/tech/tech-news/facebooks-ai-moderation-reportedly-cant-interpret-many-languages-rcna5286
  28. CNN. (2022). Meta says it removed 27 million pieces of content related to child safety in third quarter. https://www.cnn.com/2022/11/22/tech/meta-content-moderation-report/index.html
  29. The Wall Street Journal. (2021). Facebook Employees Flag Drug Cartels and Human Traffickers. The Company's Response Is Weak, Documents Show. https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953
  30. Los Angeles Times. (2022). Meta hit with 8 lawsuits over 'addictive' social media algorithms and kids. https://www.latimes.com/business/technology/story/2022-06-08/meta-hit-with-8-lawsuits-over-addictive-social-media-algorithms-and-kids
  31. Buhmann, A., Paßmann, J., & Fieseler, C. (2020). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics, 163(2), 265-280.
  32. Bursztein, E., Clarke, E., DeLaune, M., Eliff, D. M., Hsu, N., Olson, L., ... & Bright, T. (2019). Rethinking the detection of child sexual abuse imagery on the Internet. In The World Wide Web Conference (pp. 2601-2607).
  33. Fanelli, D., Costas, R., & Ioannidis, J. P. (2017). Meta-assessment of bias in science. Proceedings of the National Academy of Sciences, 114(14), 3714-3719.
  34. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.
  35. Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. In International Conference on Machine Learning (pp. 1321-1330). PMLR.
  36. Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute.
  37. Livingstone, S., & Smith, P. K. (2014). Annual research review: Harms experienced by child users of online and mobile technologies: The nature, prevalence and management of sexual and aggressive risks in the digital age. Journal of Child Psychology and Psychiatry, 55(6), 635-654.
  38. Velioğlu, R., & Özbek, N. (2020). A survey of AI-enabled detection and prevention systems for online child sexual exploitation. Forensic Science International: Digital Investigation, 35, 301021.