Is It Possible to Avoid Algorithmic Bias?

DOI:

https://doi.org/10.26512/rfmc.v8i3.34363

Keywords:

Artificial Intelligence. Algorithmic Bias. Algorithmic Governance.

Abstract

Artificial intelligence (AI) techniques are used to model human activities and predict behavior. Such systems have shown race, gender, and other kinds of bias, which are typically understood as technical problems. Here we try to show that: 1) to get rid of such biases, we need a system that can understand the structure of human activities; and 2) to create such a system, we need to solve foundational problems of AI, such as the common sense problem. Additionally, when informational platforms use these models to mediate interactions with their users, as is commonplace nowadays, there is an illusion of progress, for what is an increasingly strong influence over our own behavior is mistaken for increasingly high predictive accuracy. Given this, we argue that the bias problem is deeply connected to non-technical issues that must be discussed in public spaces.


Author Biography

Carlos Henrique Barth, Universidade Federal de Minas Gerais, UFMG

Master's degree in Philosophy from the Universidade Federal de Minas Gerais (UFMG). Currently pursuing a doctorate at the same institution, with a CAPES scholarship. Worked for 11 years in software development (with an emphasis on security systems) and server management, and for 7 years as a manager and technical sales consultant in the industrial automation sector.

References

ANGWIN, J. et al. Machine Bias. ProPublica, maio 2016. Disponível em: <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>. Acesso em: 2 jun. 2020.

BARTH, C. O Frame Problem: a sensibilidade ao contexto como um desafio para teorias representacionais da mente. Dissertação (Mestrado em Filosofia) – Faculdade de Filosofia e Ciências Humanas, Universidade Federal de Minas Gerais, Belo Horizonte, 2018.

BODEN, M. Mind As Machine: A History of Cognitive Science Two-Volume Set. Clarendon Press, 2006.

BRENDEL, W.; BETHGE, M. Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet. arXiv preprint arXiv:1904.00760, 2019. Disponível em: <https://arxiv.org/abs/1904.00760>. Acesso em: 13 jun. 2020.

BRENNAN, T.; DIETERICH, W. Correctional Offender Management Profiles for Alternative Sanctions (COMPAS). In: Handbook of Recidivism Risk/Needs Assessment Tools. John Wiley & Sons, pp. 49–75, 2017.

BROWN, T. B. et al. Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165, 2020. Disponível em: <https://arxiv.org/abs/2005.14165>. Acesso em: 13 jun. 2020.

CHEMERO, A. Radical embodied cognitive science. Cambridge, MA: MIT Press, 2009.

CHURCHLAND, P. M. A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge: The MIT Press, 1989.

CLARK, A. Being There: Putting Brain, Body, and World Together Again. Cambridge: The MIT Press, 1998.

CYCORP. Cyc’s knowledge base. Cycorp, 2020. Disponível em: <https://www.cyc.com/archives/service/cyc-knowledge-base>. Acesso em: 7 ago. 2020.

DAVIS, J. Hear me out: let’s elect an AI as president. Wired, maio 2017. Disponível em: <https://www.wired.com/2017/05/hear-lets-elect-ai-president/>. Acesso em: 29 jun. 2017.

DENNETT, D.; DREYFUS, H. Did Deep Blue’s win over Kasparov prove that Artificial Intelligence has succeeded? In: FRANCHI, Stefano; GUZELDERE, Guven (Eds.). Mechanical Bodies, Computational Minds. The MIT Press, pp. 265–279, 2005.

DREYFUS, H. What Computers Still Can’t Do. The MIT Press, 1992.

ROBOT Recruiters. The Economist, abr. 2013. Disponível em: <https://www.economist.com/business/2013/04/06/robot-recruiters>. Acesso em: 16 nov. 2019.

FORD, K. M.; PYLYSHYN, Z. W. (EDS.). The Robot’s Dilemma Revisited: The Frame Problem in Artificial Intelligence. Norwood, NJ, USA: Ablex Publishing Corp., 1996.

FRIDMAN, L. Garry Kasparov: Chess, Deep Blue, AI, and Putin. Artificial Intelligence Podcast, out. 2019. Disponível em: <https://lexfridman.com/garry-kasparov/>. Acesso em: 11 set. 2020.

GEIRHOS, R. et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2019. Disponível em: <https://arxiv.org/abs/1811.12231>. Acesso em: 30 nov. 2019.

HACKING, I. The looping effects of human kinds. In: Causal Cognition. Oxford University Press, pp. 351–383, 1996.

HASHER, L.; GOLDSTEIN, D.; TOPPINO, T. Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, v. 16, n. 1, pp. 107–112, fev. 1977.

HAUGELAND, J. Artificial Intelligence: The Very Idea. Cambridge: The MIT Press, 1989.

KIVERSTEIN, J.; WHEELER, M. Heidegger and cognitive science. New York: Palgrave Macmillan, 2012.

LENAT, D. B. et al. Cyc: toward programs with common sense. Communications of the ACM, v. 33, n. 8, pp. 30–49, ago. 1990.

MARCUS, G. Deep Learning: A Critical Appraisal. arXiv preprint arXiv:1801.00631, jan. 2018. Disponível em: <https://arxiv.org/abs/1801.00631>. Acesso em: 21 jan. 2019.

MARCUS, G. GPT-2 and the Nature of Intelligence. The Gradient, jan. 2020. Disponível em: <https://thegradient.pub/gpt2-and-the-nature-of-intelligence/>. Acesso em: 26 jun. 2020.

MARCUS, G.; DAVIS, E. Rebooting AI. New York: Pantheon, 2019.

MARCUS, G.; DAVIS, E. GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about. MIT Technology Review, ago. 2020. Disponível em: <https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/>. Acesso em: 25 set. 2020.

MARCUS, G. F. Rethinking Eliminative Connectionism. Cognitive Psychology, v. 37, n. 3, 1998.

MCCARTHY, J. Programs with common-sense. In: MINSKY, Marvin (Ed.). Semantic information processing. Cambridge: The MIT Press, pp. 403–418, 1968.

MCCARTHY, J.; HAYES, P. J. Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, v. 4, pp. 463–502, 1969.

MCCLELLAND, J. L. et al. Parallel Distributed Processing, Vol. 2: Psychological and Biological Models. Cambridge: The MIT press, 1987.

MINSKY, M. A framework for representing knowledge. In: HAUGELAND, J. (Ed.). Mind design II: philosophy, psychology, artificial intelligence. Cambridge: The MIT Press, pp. 111–142, 1997.

MULLAINATHAN, S. Biased Algorithms Are Easier to Fix Than Biased People. The New York Times, dez. 2019. Disponível em: <https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html>. Acesso em: 29 jul. 2020.

OBERMEYER, Z. et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science, v. 366, n. 6464, pp. 447–453, 2019.

PARISER, E. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Publishing Group, 2011.

PEACHEY, K. Sexist and biased? How credit firms make decisions. BBC News, nov. 2019. Disponível em: <https://www.bbc.com/news/business-50432634>. Acesso em: 13 set. 2020.

PYLYSHYN, Z. W. (ED.). The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. New Jersey: Ablex, 1987.

RAMBACHAN, A.; ROTH, J. Bias In, Bias Out? Evaluating the Folk Wisdom. arXiv preprint arXiv:1909.08518, fev. 2020. Disponível em: <https://arxiv.org/abs/1909.08518>. Acesso em: 27 ago. 2020.

ROUVROY, A. O(s) fim(ns) da crítica: behaviorismo de dados versus devido processo. In: ALVES, Marco Antônio Sousa; NOBRE, Márcio Rimet (Eds.). A sociedade da informação em questão. Belo Horizonte: D’Placido, 2019.

ROUVROY, A.; BERNS, T. Governamentalidade algorítmica e perspectivas de emancipação: o díspar como condição de individuação pela relação? Revista Eco Pós, v. 18, n. 2, pp. 35–56, 2015.

RUMELHART, D. E. et al. Parallel Distributed Processing, Vol. 1: Explorations in the Microstructure of Cognition: Foundations. Cambridge, MA: MIT press, 1986.

SAMUELS, R. Classical computationalism and the many problems of cognitive relevance. Studies in History and Philosophy of Science, v. 41, n. 3, pp. 280”“293, 2010.

SCHANK, R. C. Using knowledge to understand. Proceedings of the 1975 workshop on Theoretical issues in natural language processing - TINLAP ’75. Association for Computational Linguistics, 1975.

SEARLE, J. R. Minds, brains, and programs. Behavioral and Brain Sciences, v. 3, n. 03, set. 1980.

VARELA, F. J.; ROSCH, E.; THOMPSON, E. T. The Embodied Mind. Cambridge: The MIT Press, 1992.

VIGEN, T. Spurious correlations. New York: Hachette Books, 2015.

Published

2021-01-31

How to Cite

BARTH, Carlos Henrique. Is It Possible to Avoid Algorithmic Bias?. Journal of Modern and Contemporary Philosophy, [S. l.], v. 8, n. 3, p. 39–68, 2021. DOI: 10.26512/rfmc.v8i3.34363. Disponível em: https://periodicos.unb.br/index.php/fmc/article/view/34363. Acesso em: 23 nov. 2024.