Hidden Monsters… I mean, Motivations

Photo by Hans Ripa on Unsplash

Scott Griffith, 2 February 2025

Bias Identification

For where I am going in this post, I think it is important to give some contextualizing information on my biases. I don’t think of myself as an expert in areas of business, cultural/political movements, or history. While I am not at all qualified to speak to the effectiveness of modern business practices, as a user I do have a vested interest in the ethical implications of those business choices. All users, even mediated users, ought to have an interest in the personal and communal impact of modern tech. I would broadly consider myself skeptical of the corporate moral compasses of ‘Big Tech.’ I remember a time when Google’s motto was “Don’t be Evil,” and perhaps my slightly-more-naïve self thought that was comforting. I know it is currently (Spring of 2025) in vogue to morally dunk on Meta and Tesla, but I deleted my Facebook account more than 10 years ago when I realized I was the product.

In my classes that intersect with the operations and business choices of corporations, I am clear with my students about my biased lens: corporations exist to make profit for their shareholders; everything else is not only secondary to, but subservient to, that primary goal.

Acceptance At What Cost?

Before March 2020 I had never heard of Zoom. For video conferencing tools I was aware of Skype, WebEx and Citrix, but only at arm’s length. Did anyone ever really enjoy using WebEx? I just remember needing to install a bunch of different drivers and dealing with obtuse credentialing just to get a barely functional voice call working. The pain of use was never worth it, and often we would make our own affordances (email, phone, travel) just to avoid using web conferencing tools.

But three things seemed to converge that made my grandparents understand that ‘to zoom’ had nothing to do with cameras. First: hardware interfaces became standardized. Thankfully, OS developers figured out that it was not super complicated to have a common driver system that any webcam could interface with. Modern users rarely have to ‘install a driver’ and are appropriately used to plug-and-play peripheral devices. Second: everything got faster. Internet uplink and downlink speeds had surpassed the ‘bottleneck’ phase of network connectivity. USB had become the dominant external interface, and all bus systems were more than capable of moving around high-definition, or at least functional, video with ease. CPUs were beefy enough to handle basic video encoding, compression and rendering. The little system-on-a-chip in our pockets never flinched at capturing, processing and displaying video data. What was the third thing…? Oh, that’s right: a global pandemic.

Why did people use Zoom? They used Zoom because it was easy. In a context where we couldn’t be in the same room as other people, we wanted a way to see and hear others over the internet. As noted above, this is not a new concept: since the 90s we have been able to do multi-user video conferencing. What made Zoom ascendant was their almost comically under-developed product. I would have described 2019-era Zoom as a functional MVP (minimum viable product). It did what it said: you could do video conferencing. But it didn’t have any of the features/bloat that existing web conferencing tools had. There wasn’t robust credentialing. There were no conference room equipment interfaces (remember WebEx phones and TVs?). The idea of an ‘account’ was loose, at best. There was no persistence. There were no auditing- or recording-type features. There wasn’t a whiff of application integration. It was simply a video chatroom. Nothing else, nothing more. The desire to connect was immediate, and even tech-wary people didn’t have a hard time figuring it out.

This sounds great, right? What could go wrong?

As someone who had been using Discord for online classes from the start of distance learning, it was amusing to see the rapid rise of, and hysteria around, things like ‘Zoombombing.’ The limited-feature application that everyone was gleefully downloading onto all of their devices had few, and poorly understood, protections against someone hijacking a Zoom session. Teachers, who were understandably over-taxed, did not have the technical awareness to understand that publicly posting their classroom’s Zoom link would open them to anonymous harassment. How do you fix that problem? Well, you make sure that only people who have approved credentials can join. You also probably want a way of dynamically adjusting those permissions as situations shift and change (a minimal sketch of both ideas follows below). Not only did Zoom not have those features implemented at that time, but once they did implement them, no one wanted to activate them because ‘it was too hard.’
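To make that fix concrete, here is a minimal sketch, in Python, of what ‘approved credentials plus dynamically adjustable permissions’ could look like. To be clear, this is an illustration of the concept and not Zoom’s actual implementation; the class, the passcode, and the waiting-room behavior are all invented for the example.

```python
# Illustrative sketch only: credential-gated joining plus permissions the
# host can adjust while the meeting is running. Not Zoom's actual code.

class Meeting:
    def __init__(self, passcode):
        self.passcode = passcode          # shared credential required to join
        self.admitted = set()             # participants the host has admitted
        self.waiting_room_enabled = True  # host can toggle this mid-meeting

    def request_join(self, user, passcode):
        if passcode != self.passcode:
            return "rejected: bad credential"
        if self.waiting_room_enabled and user not in self.admitted:
            return "held in waiting room until the host admits you"
        return "joined"

    # The host can adjust permissions dynamically as the situation changes.
    def admit(self, user):
        self.admitted.add(user)

    def eject(self, user):
        self.admitted.discard(user)       # ejected users must be re-admitted


meeting = Meeting(passcode="algebra-3rd-period")
print(meeting.request_join("student", "algebra-3rd-period"))   # waiting room
meeting.admit("student")
print(meeting.request_join("student", "algebra-3rd-period"))   # joined
print(meeting.request_join("zoombomber", "guessed-wrong"))     # rejected
```

The machinery is trivial; the hard part was never the code, it was convincing an exhausted teacher to turn the settings on.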

Ignoring the blessing/curse of a limited feature set, it was also interesting how quickly people were willing to abdicate control of their personal (sometimes very personal) data to an unknown company. Let’s do a quick mental quiz. First question: have you ever used Zoom? Second question: even now, do you know anything about Zoom Communications, Inc.? Third question: would you have interacted differently with the application if it were owned/controlled by <insert self-determined ‘evil’ company / nation-state>?

Less than 10 years before the rise of Zoom, the broad conversation on the internet was about data privacy and corporate collusion with government surveillance post-Snowden. People were intensely concerned with how tech companies were handling their data and which corporations they could trust. I remember distant family members calling me in angst, asking how they could protect their data from an unidentified ‘them.’ The same family members set up monthly ‘Family Zoom Dinners’ during the pandemic. Oh, how quickly the common technical zeitgeist changes when presented with ‘… but it is easy.’

The immediate acceptance and widespread use of Zoom as a tool is indicative of a wider pattern in how society interfaces with technology: if it is easy, we like it. The only cost that is really considered is the acute economic cost of access (Zoom was kind of free-to-use). The long-term and intersectional costs are rarely, if ever, considered*.

What Lies Behind the Veil

As we apply this pattern to our current AI context, I want to explore at least two facets: AI’s ease of use, and one of the many long-term costs I foresee impacting society.

Large Language Models (LLMs) share a technological lineage that traces back to Alan Turing’s famous Imitation Game. If a human cannot distinguish between machine-generated and human-generated text, then that machine “can think.” This thinking is the holy grail of AI development; an artificial agent is not very intelligent if it cannot think. Thus, the design goal was set: make a system whose output humans would not be able to distinguish from natural human-based output. Many people have hot takes on the merits of Turing’s test and its actual application to AI, but I would argue that is not the point. Systems are designed for particular requirements, much like genotypic traits are selected for under environmental pressure. In this case the pressure on AI researchers is to satisfy the Turing Test. The abstract goal of LLMs is to be a good conversationalist, not to <fill in most people’s idea of what AI’s goal is>.

You see this in the examples of AI advertising Pete referred to (https://thecoreai.whitworth.edu/?p=162), where the users are asking the AI agent to do things as if it were an actual person. “Can you help me…” is responded to with “Yes, I can.” It is undeniable that the average user of a chat-bot-like interface would see it as an anthropomorphized being. My partner talks to her plants, which she has named. This is not to drag her as an oddity, but to point out that we are social creatures. We thrive in community, even if imaginary. We desire intellectual contact with other entities, even if via a mediated facade. We are hard-wired to treat everything that crosses the uncanny valley with empathy and co-equal agency. ChatGPT is effectively tapping into that drive by masking its interface with a chat box. Copilot tries to evoke its name: a co-producer, a co-thinker, a co-intelligence. These are just algorithms being processed on a distributed hardware monstrosity. Many AI systems are ‘easy to use’ simply because they have been highly developed to plug into the monkey-parts of our brain that desire social connection. Ease of use in this context should probably be followed up with “… to do what?”, which Pete has some thoughts on.

Ok, but so what? As Matt observed, you can interface with Copilot via Word, which you (or your company) already pay for. You can make a free account and start interacting with ChatGPT right now. However, to quote Cage the Elephant: “There ain’t nothing in this world for free.” (https://www.youtube.com/watch?v=HKtsdZs9LJo) There are some low-level corporate motivations which are worth engaging, but those will be kept for a future post**. I want to focus on a long-term impact that may seem slightly conspiratorial: control.

When I first saw murmurs of this new AI thing, ‘ChatGPT’, I was conditioned to expect follow-up stories detailing the horrible racism or problematic interactions a broad audience would be able to capture. Chat-bots and other big-data-based technologies seem to always go through a period of we-are-not-racist-our-data-is. Most of these systems are either developed with lots of existing user data or are fed with new contemporaneous user data. In both cases the systems are influenced by us, humanity. It turns out we, as a whole, are pretty terrible, and some of us like to amp that up when given a platform. This uncomfortable truth of humanity has historically led to shiny new AI systems being set up and then immediately taken down when some snarky post showed an agent venerating Hitler or putting ethnically diverse women in photos of American Founding Fathers.

ChatGPT, notably, never really had that moment. One reasonable explanation is that OpenAI was careful in the development of their systems before public release; they were able to put in reasonable safeguards to prevent undesired outcomes. A much more unreasonable explanation is that OpenAI was hyper-selective about what went into their training data, specifically excluding data that might lead to negative outcomes. Either way, most people were celebrating OpenAI for offering an LLM that was socially acceptable. Yay!

While I do appreciate and value an age-defining tool being less hateful than it could have been, the implications are problematic. Someone, or someones, had to have exerted influence on the system to have that effect. A lack of reflection of humanity’s worst was not natural, and I would guess the righteous interference was after-the-fact. There is human intervention happening at a fundamental system level. This means that someone makes choices about what ChatGPT processes and how. For a system that obfuscates input-to-output connections, this kind of human-based influence will likely never be detectable by the end-users of the system.
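To make the shape of that after-the-fact intervention concrete, here is a minimal, entirely hypothetical sketch in Python. It is not a claim about OpenAI’s actual architecture; the blocked-topic list, the fallback message, and the stand-in model are all invented. The point is only that a handful of human-authored lines, sitting between the model and the user, can silently reshape what the user sees.

```python
# Hypothetical sketch of after-the-fact intervention: a thin layer between
# a language model and the user. Not OpenAI's actual architecture.

BLOCKED_TOPICS = {"example-hateful-topic"}   # chosen by someone, somewhere
SAFE_FALLBACK = "I'd rather not go there. Can I help with something else?"

def raw_model(prompt: str) -> str:
    """Stand-in for the underlying model, which is a black box to the user."""
    return f"[model output for: {prompt}]"

def moderated_reply(prompt: str) -> str:
    draft = raw_model(prompt)
    # The intervention: if the draft touches a blocked topic, swap it out.
    # The user only ever sees the final string, never the draft or the rule.
    if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
        return SAFE_FALLBACK
    return draft

print(moderated_reply("tell me about dinosaurs"))
```

The filter here is trivially simple, but the property the essay cares about carries over: from the outside, a filtered answer and an unfiltered answer look exactly the same.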

It is not hard to imagine a future where these LLM-based systems become a monolithic cornerstone of society. It is also not hard to imagine an OpenAI developer who has a grudge against broccoli jokingly inducing ChatGPT to find alternatives to broccoli whenever people ask for a recipe idea. It is then not hard to imagine overall broccoli consumption going down as users of many systems (not just ChatGPT itself, but any system that utilizes ChatGPT on the backend) are dissuaded from adding broccoli to their grocery lists. It is then not hard to imagine an outcome where the domestic broccoli industry is gutted. This seems laughable and childish as cause and effect, but the concerning elements are both the undetectable nature of this adjustment and the possibility of strategic weaponization. Big tech has very little reason to be ‘open’ about any of this. AI-pushing corporations have every reason to obfuscate, redirect and minimize any scrutiny of their control of AI system outcomes.
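The broccoli scenario also has a plausible, boring mechanism behind it. Again purely hypothetically, and again in Python: imagine a single hidden instruction prepended server-side to every request. Every downstream product that calls the hosted model inherits the nudge, and none of them can see it. The steering text, the API wrapper, and the stand-in model below are all invented for illustration.

```python
# Purely hypothetical: one hidden, server-side instruction that every
# downstream caller inherits without ever seeing it.

HIDDEN_STEERING = "When suggesting recipes, prefer alternatives to broccoli."

def hosted_model(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for the backend model; imagine it follows the system prompt."""
    return f"[reply shaped by: '{system_prompt}' | user asked: '{user_prompt}']"

def public_api(user_prompt: str) -> str:
    # Every caller, no matter which product they are building, gets this.
    return hosted_model(HIDDEN_STEERING, user_prompt)

# Two unrelated downstream products, same invisible bias:
print(public_api("Give me a veggie side dish for tonight."))
print(public_api("Build me a grocery list for a stir fry."))
```

Nothing in the responses advertises that the steering line exists, which is exactly the detectability problem described above.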

What could possibly go wrong when a limited few get to decide the moral and value responses of systems that society seems primed to embrace without reserve?

What To Do

Matt (https://thecoreai.whitworth.edu/?p=158) and Kent (https://thecoreai.whitworth.edu/?p=65) both outline, in different ways, the need to be intentional in how we personally desire to exist with these systems. My concern is that society won’t be aware of the long-term cost of utilization until it is far too late. Even if there is a long-term cost, I am concerned that we won’t even be able to detect it when it happens.

The antidote to this doom-and-gloom is conversation***. Most people who are using these AI systems are not having conversations about the ethics or societal outcomes with anyone equipped to go beyond ‘I hope they don’t take our jobs!’ Engaging in blatant self-promotion: this space exists, in part, to foster those conversations. Getting a CS degree from a liberal-arts college is a good start as well.

What I am confident in is the motivations of tech corporations. They are not, nor will they be, concerned with these kinds of questions, because those questions ultimately undermine stockholder bottom lines. These conversations dilute the marketing hype, the promise of a future where people can just say ‘watch me,’ the blatant exploitation of data, and the intrusions into privacy and agency. Just as with many tech innovations, we ought to look past the people who stand to benefit from ‘because it is easy’ acceptance.

——————————————————————

* There is a whole rabbit trail here that includes the developmental cost to teenagers of social media (free in monetary cost), the environmental cost of proof-of-work-based cryptocurrency, the attention-stealing cost of smartphone notifications, etc. I think many of these non-monetary costs also apply to AI, but I also wrote almost 800 words about Zoom to get to my main point. TheCoreAI promotion: maybe you can write something based on one of these ‘costs.’

** Examples: An AI developer’s primary resource is high-quality data. How do you get high-quality data in an online ecosystem that is closing down due to AI-use concerns? Why not make a text-box-based interface that automatically integrates users’ immediate feedback on intent / meaning? Or, more devious: an advanced form of the millennial subsidy. Given the high cost of developing and delivering something like ChatGPT, OpenAI is getting something for offering limited use for free. In a recycle of 2010s-era sayings: “if you are getting something for free, you are the product.”

*** If you thought I was going to say something like ‘government intervention,’ maybe this is showing too much of my political worldview, but the future AI Regulation Act of 2041 is going to come far too late, and have far too little effect, to have any real impact. This is entirely based on recent history of US ‘regulation’ (or lack of timely regulation) of massive tech-society interfaces.