ALGORITHMIC BIAS IN MEDIA CONTENT DISTRIBUTION AND ITS INFLUENCE ON MEDIA CONSUMPTION: IMPLICATIONS FOR DIVERSITY, EQUITY, AND INCLUSION (DEI)

Author: Chizorom Ebosie Okoronkwo

ABSTRACT

In today’s digital age, algorithms play a pivotal role in shaping media content distribution and thereby influence what content individuals are exposed to, with consequences for diversity, equity, and inclusion (DEI). This review analyzes algorithmic bias in media content distribution, its impact on media consumption, and its implications for DEI. The study concludes that algorithmic bias limits the visibility of underprivileged groups and perpetuates existing social injustices, posing serious problems for media distribution. Moreover, the continued development of artificial intelligence (AI) and machine learning presents both risks and opportunities for tackling algorithmic inequities. Finally, collaborative efforts among stakeholders (engineers, policymakers, and media platforms) are needed to create more inclusive and equitable algorithms, so that media distribution systems promote fairness and diversity.
