The Line, the Loop and the Lantern: A Short Essay on AI, Learning, and Values 

Photo by Jeremy Bishop on Unsplash

Kent Jones, 22 January 2025

The Line: Learning Requires External Input 

Some of my earliest memories center on questions of learning, knowledge, existence, and boundaries. I distinctly remember standing in our dining room, the polished cement floor cooling my bare feet. I closed my eyes and imagined flying through the stone walls of our bungalow in Pune, India, out past the red tile roof, up above the city, and out into space, past the sun, the stars, the galaxy, and beyond. Flying out as far as my mind's eye could reach, what was out there? Was there a line that marked the end of the universe? If a line existed, could it be crossed? When I asked my parents, "Where does the universe end?", they answered, "There is no end… it keeps going." What?!

Trying to wrap my neurons around this thought was exciting and fascinating. I begged my parents to teach me to read, and when I finally went to grade school and gained the ability, I read every book or National Geographic article I could get my hands on, especially those concerned with vast, unexplored regions of space or the depths of the unknown oceans. Learning was fun and thrilling, and I wanted to learn the why and how of everything I could.

What about you? Can you remember the thrill, fun and excitement of learning as a child? Instinctively, as children, we know that we need input, that we need to learn from others. 

The Loop: Learning Requires Memory 

Do you have a willingness to learn in a way that will change your future self?  

Recently, I read a blog article by Darryl Toerien, who summarized the thoughts of Charles Sanders Peirce like this:

Firstly, without a desire to learn, the process of learning about reality, which is an inquiry process, is not possible – this should be obvious, but apparently isn't. Secondly, in desiring to learn, we must also be willing to learn, which is to be open to being changed by what we learn. So, without a desire to learn and an openness to being changed by what we learn, learning is not possible (Toerien, 2023; Peirce, 1955).

At Whitworth University, we strive to keep this loop at the core of our mission. We hope that what we teach and learn will influence and change "minds" and "hearts." We hope that these changed minds and hearts will use this knowledge to "Honor God, Follow Christ and Serve Humanity." Wait… "What does this have to do with computers and AI?" Don't worry! I promise I will get to the core of my topic after a little more context and background…

At its heart, a computer is what is called a "finite state machine." This means that computers are designed to read input, process that input using rules of logic, and then generate output (also following rules of logic). Because current decisions in a program affect what the program does in the future, computers require memory for storing data. And because computers have limited memory, there are only a finite number of states the computer can be in. This is where the word "finite" in "finite state machine" comes from, although the number of states modern computers can take on is astronomically large. But that's a topic for a different article.
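The idea above can be sketched in a few lines of code. The turnstile below is a classic textbook illustration (my own example, not from any particular system): its entire behavior is a finite table mapping (current state, input) pairs to (next state, output) pairs, and its "memory" is nothing more than the current state.

```python
# A minimal sketch of a finite state machine: a coin-operated turnstile
# with only two states. The machine reads an input, consults its rules
# of logic (the transition table), produces an output, and remembers
# its new state, which affects how it responds to future inputs.

TRANSITIONS = {
    # (current state, input) -> (next state, output)
    ("locked", "coin"): ("unlocked", "unlock the arm"),
    ("locked", "push"): ("locked", "stay locked"),
    ("unlocked", "push"): ("locked", "let one person through"),
    ("unlocked", "coin"): ("unlocked", "return the coin"),
}

def step(state, event):
    """Read one input and return (next state, output) per the table."""
    return TRANSITIONS[(state, event)]

state = "locked"
outputs = []
for event in ["push", "coin", "push"]:
    state, out = step(state, event)
    outputs.append(out)
```

Because the table and the set of states are both finite, every possible behavior of this machine can in principle be enumerated; a real computer is the same idea scaled up to an astronomically large, but still finite, state space.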

The Loop: Even With the Desire to Learn, Learning Often Feels Difficult 

In first or second grade my teacher had a set of Cuisenaire Rods (https://en.wikipedia.org/wiki/Cuisenaire_rods), which she used in an attempt to teach us fractions and basic arithmetic. Being curious, I genuinely wanted to grasp the mathematical concepts she was teaching, but I can still remember her frustration as I kept asking questions about "whole" and "part." Unbeknownst to me, these were concepts I already keenly (and intuitively) understood. For example, when it came time to divide the ice cream among our family of five, I could easily observe whether my brothers had larger portions than I did. The definitions of "one-fourth" or "one-third" would have been far more relevant and understandable to me in that context of sharing ice cream. Learning can be difficult without the right context.

I eventually realized that I easily retained the things I was innately interested in. I could read an article or book and recall tiny details with ease if the topic fascinated me. But in grade school, when assigned the task of memorizing a poem, I found it difficult; though sometimes fun, it was never easy. My students experience this phenomenon too. As a teacher, I try to condense information into smaller, engaging chunks that I hope students will find interesting, and thus practice, work with, and learn from. Without a desire to learn, learning becomes challenging and feels like busy-work.

Choosing a Lantern: True Learning Requires Following a Set of Core Values 

In this section I claim that many humans value sharing knowledge with others. I also argue that the way in which we share, learn, and use knowledge matters, and that we all follow a set of core values in this process (either consciously or unconsciously). I claim that every method of sharing or learning has both short- and long-term benefits and costs, and that these benefits and costs are difficult to predict accurately. In this essay, "Choosing a Lantern" is a metaphor for the process by which individuals choose a set of core values for themselves.

The Core Value of Sharing Knowledge 

At its core, the loop requires a lantern that values sharing knowledge. Many humans throughout history have chosen this as a core value. The advances of the scientific revolution came about because of shared knowledge. The existence and success of the Internet occurred because individuals valued freely sharing knowledge with others. The individuals responsible for these advances valued not only the pursuit of knowledge, but also sharing that knowledge with others.

The Core Values of the Scientific Process Lantern 

The lantern of the "scientific process" values sharing information through writing, in a way that enables others to repeat, apply, or extend the knowledge shared. Theoretically, with the scientific process, we don't have to replicate the mistakes others made during the discovery process. The invention of the printing press made knowledge gained through this lantern more widely available. The practitioners of this process valued research, experimentation, writing, and publication. They also valued replication, which allows others to follow the chain of "light" by examining the chain of references. One potential issue with this process is that researchers may knowingly or unknowingly fabricate information; if this is not caught in peer review, it can threaten the integrity of the loop. Another potential issue was that printing was specialized and expensive, so societies formed that controlled access to this information. They valued sharing information, but only with those who "pay their dues." This information was (and still is) collected and stored in books, articles, and journals, both on paper and online. Librarians, teachers, and catalogs serve as guides to this knowledge.

The Core Values of the Open Access Lantern 

As printing technology was refined and professional organizations grew larger, the cost of access to peer-reviewed publishing platforms increased. After the introduction of the Internet in the 1990s, many scholars pushed for open access journals to share their information. The scholars involved in the free and open-source software (FOSS) movement (Wikipedia, 2025) had chosen an "Open Access Lantern," and once "free access" to the Internet and Google made finding information much easier, this movement flourished. Some of you may remember these early days of the Internet and the controversy over Wikipedia. You may also remember that around this time some students started copying and pasting text from Wikipedia and the Internet without attribution.

The Core Values of the Efficiency Lantern

As humans, we are always looking for short-cuts, ways to achieve our goals faster with minimal energy expenditure. In my youth I remember imagining what it would be like if we could "plug in a memory module" and have immediate access to all the knowledge ever written. Sci-fi movies like "Short Circuit" and TV shows like "Star Trek: The Next Generation" explored the concept of machines that could learn and become self-aware. In many of these stories, writers grappled with the loop and the issues associated with making the loop "faster." In a previous article on this blog, "Love of Craft," Scott Griffith explored the nuances of "faster outcomes" versus "process." The questions remain: how fast can we learn (and retain), how deeply do we need to learn, and how much should we rely on external versus internal sources of information?

The Core Values of Large Language Model Lanterns (i.e. “Magic” Lanterns) 

What are the core values of these lanterns? Large language models have made "querying" the Internet for information much easier. Their responses have been trained to produce language that "sounds great" (i.e., it's like magic!), but when digging into the results, one will often find that generative AI produces wrong or misleading outputs (Zorpette, 2023).

The promise and the peril of this is that the results are "filtered" through the values of those who built and trained these LLMs. The core values used to guide training are not described in detail, but only in general terms (OpenAI, 2024). My colleagues have rightly pointed out that we should be concerned about the values of those who build these machines. Those values (intentional or unintentional) can affect the lives of those around us. In response to these challenges, legislatures are considering regulating the industry. Thanks to my colleague, Dale Hammond, who pointed me to the proposed Washington State House Bill 1168.

Conclusion

Where is the line? Does all knowledge have to be peer reviewed with accurate references? After all, aren't the new LLMs simply playing the role of encyclopedias? Isn't learning from LLMs like learning from other people who quote us facts without reliable attribution?

Where is the loop? How should we choose to accept new information and make it internal in a way that modifies our future behaviors? Should we relegate this choice of learning to machines that have been programmed with other people’s values? How trusting will society be of these “machines”? 

Where is the lantern? I claim that in this brave new world, it becomes even more important to choose a "Lantern," i.e., a core set of values to guide our choices and information sources.

In closing, I will paraphrase Psalm 119:105: "Thy word is a Lantern that shows me the way and guides me." I am also reminded of the words of Jesus: "Watch out that no one deceives you. For many will come in my name, claiming, 'I am the Messiah,' and will deceive many" (Matthew 24:4-14).

References 

Toerien, D. (2023, April). Reply to: ChatGPT et al. The FOSIL Group. https://fosil.org.uk/forums/reply/80286/

Wikipedia contributors. (2025, January 22). Free and open-source software. Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Free_and_open-source_software

Zorpette, G. (2023, May 24). Just calm down about GPT-4 already. IEEE Spectrum. https://spectrum.ieee.org/gpt-4-calm-down  

Peirce, C. S. (1955). The Scientific Attitude and Fallibilism. In J. Buchler (Ed.), Philosophical Writings of Peirce (pp. 42-59). New York, NY: Dover Publications. 

OpenAI. (2024). How ChatGPT and our foundation models are developed. https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-foundation-models-are-developed