Matthew Bell, 24 April 2025
Recently, I received two invitations. The first was from the School of Continuing Studies at Whitworth University: an invitation to organize a college-credit-bearing conversation between non-traditional students pursuing upskilling, on the one hand, and Whitworth faculty on the other. The topic was the promise and peril of artificial intelligence for society. The second invitation came from the Teaching, Learning, and Assessment Committee (TLAC for short), also of Whitworth University, to lead a lunch discussion among faculty on artificial intelligence and Whitworth’s mission of advancing a heart-and-mind education. Pete Tucker and Scott Griffith have challenged me to write on these invitations, and I have decided to combine my reflections on the two into one post because they seem to me related at a deep level. Both proceed from, and lead back into, the same concern. To get ahead of myself: that concern is that AI at present is developing in ways that are not merely threatening to those unwilling or unable to change – concern over the digital divide, so to speak. Rather, AI is developing in ways that are, both intrinsically and contextually, toxic and destructive. Furthermore, this does not have to be the case, and small liberal arts universities arguably occupy an enviable position from which to pursue healthier tracks for innovation in AI.
The argument runs as a sort-of proof (with apologies to my colleagues in math and philosophy):
Observation one: what one believes a thing to be significantly affects how one approaches it, both for understanding and for use. I believe peanut butter to be food, and also delicious; therefore I eat it. Were I to believe peanut butter not to be food, or not to be delicious, I would most likely avoid eating it. This observation seems so obvious as to render any counter absurd on its face. Apply it to AI, though, and something interesting emerges: how one defines artificiality and intelligence from within one’s own worldview matters tremendously for one’s attempt at developing AI or innovating with respect to it. The converse also holds. When, say, Microsoft markets something as an AI, Microsoft invariably discloses its corporately held and enacted sense of identity, vocation, anthropology, ethics, and so forth in how it defines artificiality and intelligence. In the SCS-sponsored seminar, we reflected deeply on the question “What is AI?” along these lines precisely so that the class might be positioned to discern critically the ethical norms or touchstones for responsible AI use and development. In the TLAC discussion, we went the other direction. Rather than engage in prolegomena for normative ethics, the audience and I followed a descriptive approach so as to discern why AI development at present so often feels like it is going off the rails and threatening the very soul, so to speak, of society.
Observation two: the unfolding drama of AI development and integration into society betrays a value system that massively commodifies, well, pretty much everything – in this case particularly artificial intelligence and, by analogy, the natural intelligence on which it is ostensibly being modelled. Efficiency and production are valued highly; curiosity and critical thinking hardly at all. Product > process. It takes hardly any attention to the news cycle or to marketing to perceive this. For example, Aneesh Raman of LinkedIn recently opined that the forms of labor central to our economy are about to shift in ways analogous to those experienced in the Industrial Revolution, because the human capacity for “hard thinking skills” has been rendered obsolete by AI. (This opinion has now gone viral. For just one place Raman is cited, see https://economictimes.indiatimes.com/tech/technology/itll-be-human-innovation-not-technical-advancement-fuelling-growth-of-the-future-linkedin-executive/articleshow/118607091.cms.) Bill Gates has taken a similarly high profile in expressing such opinions, recently saying, in effect, that in the future almost all jobs will simply be done by bots; humans must either retool or be unemployed. (Again, this has now been repeated almost everywhere. Just search on Google!) The logic of this is that humans are, as CGP Grey has inimitably put it, “meat-based machines” (CGP Grey, “Humans Need Not Apply,” https://www.youtube.com/watch?v=7Pq-S557XQU). Being “meat-based” makes humans squishy, messy, and comparatively bad at, e.g., math. Artificial intelligence constructed and pursued under such priorities becomes an adventure in replacing the squishy, slow machines with the robust, fast ones! Yet where humans remain valuable is where we can do something that machines cannot – which, in a bitter irony, might just turn out to be our capacity to recognize and admit not strength and power but weakness and vulnerability. Intellectual humility – the ability to identify what you don’t know – remains underdeveloped in our silicon counterparts (Shlomo Klapper, “Beyond Turing: The Next Test for AI,” Discourse, https://www.discoursemagazine.com/p/beyond-turing-the-next-test-for-ai).
Observation two, which drove the TLAC conversation, shows that just as God-of-the-gaps arguments actually blaspheme by constructing deity as only whatever science has not yet explained, so AI (as constructed within the corporately held worldview of Big Tech) denigrates humans and human society, viewing us in cynically utilitarian terms. Humanity, or natural intelligence, becomes merely whatever AI is not yet sufficient to replace. Note, however, that the ethical trainwreck that thereby unfolds comes not from AI per se but from the presuppositions embedded in discovering, developing, and deploying it. AI, in other words, is not necessarily on a collision course with human value and a just society; the corrective is to challenge the worldview of Big Tech.
That’s where small liberal arts colleges shine.
That’s why this blog.
That’s why the conversation with SCS students.
That’s why the TLAC conversation was timely and appropriate.
We need to break Big Tech’s monopoly on developing AI. The future of ethical AI lies in its being developed by diverse teams that approach questions of intelligence, value, beauty, goodness, truth, humanity, worth and worship, etc. via methods not straitjacketed within a neo-liberal, “It’s the economy, stupid” logic deployed absolutely rather than heuristically.
The alternative is the trainwreck.