There will be no single AI culture
Artificial intelligence is a part of our culture. Technology is our culture, as is our ability to externalise thoughts and experiences in myths, in laws, in atlases or poetry. In mundane tasks, from managing our calendars and shopping lists to street navigation and information retrieval, AI already externalises our brains, just as learning to control fire externalised our digestion [1].
No matter how narrowly specialised [2] or how notoriously faulty at times [3], AI takes over more and more routine tasks in our lives [4]. As it does, we strive for a single explainable AI standard [5, 6] and one global set of fairness-aware AI guidelines [7, 8]. It is an impossible task. There will never be any such thing, just as there will never be one language, one school curriculum or (hopefully) one political party across the world. No matter how dangerous cooking on fire may seem, to date there is no single cooking standard, not even a fire-handling standard. We use common sense [9], but we have different common senses [10].
The Moral Machine experiment [11] surveyed millions of people in 233 countries and territories, asking them to decide upon moral dilemmas faced by vehicles approaching crash situations. Respondents were asked whom an autonomous vehicle should prioritise saving: passengers or pedestrians, few or many, young or elderly, and whether it matters that a person is rich, a convict or a runner. Not surprisingly, answers varied a lot, but surprisingly clear patterns of differences across countries, geographic regions and dominant religions emerged. While many can argue that it is unethical to ask such questions in the first place [12], the message is clear: "these differences correlate with modern institutions and deep cultural traits" [11].
As hopes for general human-like AI go up and down [2, 13], it is becoming clear that generalising over many different experiences is a path to deep understanding [14]. Relatively few experiences can actually be lived through in a lifetime, so humans also learn from stories [15] and games [16]. From many seemingly irrelevant everyday activities, kids learn how the world works. To acquire truly deep understanding (if ever), AI will need to learn seemingly irrelevant things, accommodate contradictory information, inquire about exceptions, and eventually form an opinion, which will certainly be culture-dependent.
Want your robot vacuum cleaner to adhere to your culture? Bring it to the national library to learn [17] from all the local books. Or only from the nice ones.
References
[1] R. Wrangham and R. Carmody. Human adaptation to the control of fire. Evolutionary Anthropology, 19(5):187–199, 2010.
[2] G. Marcus and E. Davis. Rebooting AI. Penguin Random House, 2019.
[3] Synced. 2018 in review: 10 AI failures, 2018.
[4] A. Greenfield. Radical Technologies. Verso Books, 2017.
[5] R. Goebel, A. Chander, K. Holzinger, F. Lecue, Z. Akata, et al. Explainable AI: the new 42? In Proc. of the 2nd Int. Cross-Domain Conf. for Machine Learning and Knowledge Extraction, CD-MAKE, pages 295–303, 2018.
[6] D. Doran, S. Schulz, and T. Besold. What does explainable AI really mean? A new conceptualization of perspectives. arXiv:1710.00794, 2017.
[7] R. Bellamy, K. Dey, M. Hind, S. Hoffman, S. Houde, et al. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv:1810.01943, 2018.
[8] R. Binns. Fairness in machine learning: Lessons from political philosophy. Journal of Machine Learning Research, 81:1–11, 2017.
[9] M. Minsky. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon and Schuster, 2006.
[10] C. Geertz. Common sense as a cultural system. The Antioch Review, 33(1):5–26, 1975.
[11] E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, et al. The Moral Machine experiment. Nature, 563:59–64, 2018.
[12] I. Lassen. The amorality of the Moral Machine. https://dataethics.eu/the-amoral-of-the-moral-machine, 2019.
[13] J. Brockman, editor. Possible Minds: 25 Ways of Looking at AI. Penguin Random House, 2019.
[14] D. Epstein. Range: Why Generalists Triumph in a Specialized World. Riverhead Books, 2019.
[15] N. Gaiman. How stories last. The Long Now Foundation, 2015.
[16] J. Roberts, M. Arth, and R. Bush. Games in culture. American Anthropologist, 61(4):597–605, 1959.
[17] T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, et al. Never-ending learning. In Proceedings of the Conference on Artificial Intelligence, AAAI, 2015.