Review: First Lecture Series from Institute for Philosophy and the New Humanities
In Fall 2020, NSSR welcomed a new interdisciplinary institute: The Institute for Philosophy and the New Humanities (IPNH), led by Zed Adams, Associate Professor and Chair of Philosophy; Paul Kottman, Professor of Comparative Literature and Chair of Liberal Studies; and Markus Gabriel, Chair of Epistemology and Modern and Contemporary Philosophy and Director of the International Centre for Philosophy at the University of Bonn. IPNH aims to extend humanistic inquiry in new directions to foster work that critically engages the current moment. Read more about IPNH here.
In late October, IPNH hosted its first lecture series focused on artificial intelligence. Robert Mass, an NSSR Philosophy PhD student, reviews the series below.
In 2008, I took my children to see WALL-E, a Pixar movie that takes place amid an environmental and human apocalypse. In WALL-E, humans have been essentially made redundant; they live their lives reclining, staring into computer screens, all their basic needs satisfied by technology supplied by the State.
Over the past 12 years, technology has done amazing things. It has revealed the secrets of the genome, allowing scientists to develop astounding biotech solutions to disease, and brought the world’s accumulated knowledge within reach via mobile phone. However, it has also brought the world of WALL-E closer to reality. We spend more time staring at our screens than interacting with other people, especially now during the COVID-19 pandemic. In addition, technology has enabled governments and Big Tech to reach deep into our lives to both record and reshape our acts, our words, and potentially even our thoughts. We are more dependent on technology, and more subject to manipulation and monitoring.
These concerns drew me to Artificial Intelligence and the Human, the inaugural discussion series from the new Institute for Philosophy and the New Humanities (IPNH) at The New School for Social Research. For four hours each day from October 19-23, I joined an international group of students and scholars from various disciplines trying to work through the issues that technology poses for living a fulfilling human life.
The broad program included many topics, from the history of automata and computing, to the extent to which computers are able to mimic various forms of human thinking, to whether computers can be called ‘creative,’ to what kind of regulatory framework we might want to set up to limit some of the excesses of super-intelligent Artificial Intelligence (AI 2.0). Speakers included Jens Schröter (University of Bonn) on machine creativity, Nell Watson (QuantaCorp) on AI and social trust, Brian Cantwell Smith (University of Toronto) on AI and the human, and Jessica Riskin (Stanford University) on the influence of machines on our conception of mind. Watch the talks here.
For me, the two most powerful sessions focused not so much on the future of AI, but on the future of human beings in the face of improving AI.
In one session, Stuart Russell (University of California, Berkeley) argued that to understand AI, we need to realize that it deliberates only about means, not ends; its ends must be programmed into it by humans. AI optimizes results based on whatever those ends are; thus, the key to AI being beneficial to humanity is ensuring that its objectives are appropriately specified. Research to date has failed to properly theorize how to do that, and as a result, AI too often optimizes for an outcome that is detrimental. Russell argued that we need to develop new approaches to specifying beneficial goals that take human preferences into consideration — recognizing, for example, the difficulty of identifying human preferences and then capturing them computationally, the uncertainty and plasticity of our desires, our weakness at identifying which preferences actually benefit us, the difficulty of comparing preferences across persons, and the like. He believes that with appropriate focus, super-intelligent AI that benefits humanity can be developed.
After setting before us this fundamentally optimistic picture of what we need to do to guide the development of AI in the future, he left us with two problems. The first he labelled the Dr. Evil problem — namely, that evil actors, both private and state, can cause tremendous havoc in human life. The second was my great fear, which he, too, called the WALL-E problem — that the overuse of AI will produce human enfeeblement. He had no vaccine for either.
As frightening as that vision of the future is, a more dystopian one was presented by Susan Schneider (Florida Atlantic University). She discussed transhumanism, a philosophy that advocates improving the human condition through “mind design” — the implantation of chips in the brain, or the uploading or merging of mental functions into the cloud, to improve mood, attentiveness, memory, musical skill, or calculation abilities.
While generally positive about these possibilities, Schneider discussed the philosophical challenges those types of brain augmentation pose. At some point, augmentation may become so complete that self-consciousness — our felt quality of having inner experience — would be compromised or disappear altogether, and what we have heretofore thought of as distinctive to the human “mind” would no longer exist. The changes wrought by brain augmentation could also be so great that we could no longer call ourselves the same person we were before the augmentation. If either of these stages of mind design is reached, humanity as we have known it for millennia will no longer exist, as we will have merged into super-intelligent machines.
The growth of AI is changing our conceptions of human mindedness as well as human flourishing. The program demonstrated the value the humanities can bring to understanding, and perhaps guiding, those changes for the better. In that regard, the program amply met the goal of IPNH to demonstrate the continued relevance of the humanities in the academy and beyond.
It was quite a week, and I am looking forward to the IPNH Fall 2021 program on Objectivity in the Humanities.
Robert Mass is a Philosophy PhD student at The New School for Social Research.