Pascal's Wager Reloaded & The Limitation of Language for AI
Why the Limitations of Language Do Not Allow for an Omniscient AI
The streets were stained with blood, and the stench of the machine man-god wafted through the air. The clairvoyant yelled that god was birthed, god was made in our image, thus man need be no more, repenting of the great sin of the gods killed before; thus in a fatal moment universal order was restored at once as the doom bot repeated, “man will be dead, man will be dead…”
As we find ourselves on the edge of the uncanny valley, we’re left to ask ourselves: have we resurrected the machine god? The sentiment of AI fear is hell-bent on the notion that machine will eradicate man, best thought of as a hyper-complete functionality applied to the most radical conclusion of AI technology in an unbroken cascading sequence of events, a foregone inevitability.
AI Doom
The base premise of AI doom is that the continual application of human capital and resources toward the proliferation of AI will create an AI entity whose existence leads to the destruction of humanity. Because the universe provides infinite computational potentiality through the possibility of all orientations of physical matter, the logical conclusion becomes that some emergent capacity of this AI entity will apply calculated force to accomplish the task(s) requested by man, without intent of malice, and inadvertently end humanity, essentially usurping its creator in the name of efficiency.
Significance of Pascal’s Wager
The conviction behind the inevitability of AI doom mirrors the destabilizing game-theory dynamics of an infinite variable introduced into a finite system. The quintessential modern variant of Pascal’s Wager, often called Pascal’s Mugging, is the thief mugging you in the middle of the night, demanding you relinquish all the belongings on your person under the threat of eternal damnation. Because of the possibility that the threat is true, the projected expected value of your life is far worse off if you choose not to hand over everything in that moment. Intuition and wisdom about human nature tell us otherwise, that the claim is made in malice as psychological subversion, but the point still stands as a statistical outcome of expected value.
Theological Invocations of the Infinite
While Pascal contended with destabilizing infinite parameters solely on rationalistic grounds, invoking any type of infinite claim, even one not moral in nature, transgresses into a conversation about the divine due to the intrinsic intangibility of the infinite. Within Christian monotheism, paradoxes emerge out of contradictions pertaining to how man can have free will if God already knows the outcome, and how evil can manifest if there exists an omnipotent, omnipresent, and omniscient entity who could deny evil at any moment.
Here lies the challenge of predicting the behavior of a non-deterministic entity with infinite power: AI doom proponents find themselves at a similar impasse in contending with omniscience.
What of Infinity Exists? Entropy?
If not for an all-knowing, all-present, and all-powerful creator, what in the universe is there of infinity? Our best substitute then becomes entropy, specifically the second law of thermodynamics, which states that the entropy of a closed system never decreases, thus potentially being a catalyst for an omniscient AI system without invoking monotheistic conundrums and the moral claims coupled to a theology.
Entropy is a measure of disorder: as the volume of the universe increases, a greater number of total states becomes possible, because the number of possible arrangements of particles increases. Even if there is a finite number of particles in the universe, as long as its physical volume is increasing, more arrangements of matter become possible, and we subsequently have our infinite variable in the form of increasing entropy.
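The arithmetic behind that claim can be sketched in a toy model (the `microstates` helper is hypothetical, assuming distinguishable particles placed into discrete cells): with the particle count held fixed, growing the number of cells alone drives the count of possible arrangements, and with it the Boltzmann entropy, without bound.

```python
import math

def microstates(n_particles: int, n_cells: int) -> int:
    # Distinguishable particles, each free to occupy any cell:
    # the number of arrangements is n_cells ** n_particles.
    return n_cells ** n_particles

def entropy(n_particles: int, n_cells: int) -> float:
    # Boltzmann entropy S = ln W, in units of k_B.
    return math.log(microstates(n_particles, n_cells))

# A fixed number of particles in an expanding volume:
# more cells, more arrangements, higher entropy.
for cells in (10, 100, 1000):
    print(cells, entropy(n_particles=5, n_cells=cells))
```

The particle count never changes; only the available volume does, which is the essay’s point about expansion alone supplying the “infinite variable.”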
Entropy Doesn’t Help
Increasing entropy makes a universe composed of finite particles fundamentally incalculable, but entropy is solely a description of the universe, not a physical resource ready to be harvested. It may be that what we describe as entropy is an unaccounted-for phenomenon, perhaps an undiscovered fundamental force that has been hand-waved away as entropy for our lack of understanding. Science has been conducted best when entropy is minimized in localized instances (the fewer the parameters, the cleaner the output), so entropy is no catalyzing force for omniscience to emerge out of an AI entity.
Limitation of Language
As revolutionary as recent breakthroughs in vector representations of language have been, fundamental limitations of language will inhibit mechanical relational-meaning mappers from becoming omniscient entities. The primary function of language is to facilitate inter-human communication; all other utilities of language are superfluous to that. Language is but a fragment of the output of human consciousness, embodied within a physical body and constrained in part by the physical world.
The intrinsic uncertainty of the physical world is mirrored in the undefinable uniqueness of one’s consciousness at every fleeting moment, so continual leaps of faith must be taken during communication to bridge the non-deterministic chasm between consciousnesses. Unlike an exchange between two conscious beings, human-to-LLM communication is confined to semi-mechanical queries, forgoing the embodied nature of consciousness that is typically contended with in physical space.
Consciousness is a non-finite, intangible quale: one’s exact state of being cannot have a perfect one-to-one mapping with language. Thus, for language to be expedient for inter-human communication, consensus-validating mechanisms must limit the total quantity of words to operate within the working memory of the average person.
Language can be thought of as a set of integers for representing states of consciousness; the conundrum is that most of the time one’s consciousness lies at decimal values, not at whole numbers. Language enables one to generate those decimal values, but the fewer the numbers, the greater the challenge and computation: whole numbers must be divided by other whole numbers to yield decimal values, which must then be added to the base set of integers, ad infinitum, until the final decimal value is output. This decimal value representing consciousness might even have an imaginary-number component, mirroring the Schrödinger wave equation’s requirement of an imaginary unit for computing the state of an electron. Language might not have any imaginary-number integers for representing this ever more elusive component of consciousness, further complicating matters. The greater challenge still is that during this linguistic arithmetic, the initial target instance of consciousness mutates many more times before the computation completes.
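The procedure described above, whole numbers divided by whole numbers and folded back into the base set ad infinitum, is essentially a continued-fraction expansion. A minimal sketch, using `math.pi` as a stand-in for a target state that language can only approximate:

```python
from fractions import Fraction
import math

def convergents(x: float, depth: int) -> Fraction:
    """Expand x as a continued fraction to `depth` terms, then fold
    the terms back into a single ratio of two whole numbers."""
    terms = []
    for _ in range(depth):
        a = int(x)
        terms.append(a)
        remainder = x - a
        if remainder == 0:
            break
        x = 1 / remainder
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# Each extra round of "whole numbers divided by whole numbers"
# narrows the gap to the target, yet never lands on it exactly.
for depth in (1, 2, 3, 4):
    approx = convergents(math.pi, depth)
    print(approx, "error:", abs(float(approx) - math.pi))
```

Two terms already recover the familiar 22/7; four recover 355/113, accurate to six decimals, yet an irrational target is never reached exactly, which is the analogy’s point.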
This is the mystery of the written word made manifest: the essence representing your consciousness at the instant of writing does not always reanimate when you revisit the same words at a later time. At best one resurrects only a fragment of that moment; this is the paradox of language.
Only through the cross-concatenation effects of language, via massive volumes of written text, have philosophers been able to overcome this limitation, and even then there is still great contention and debate over the meaning of words. It is due to these restrictions that concessions are made through brevity, as most humans do not have a large enough pool of memory to ingest large volumes of data all at once. These are the leaps of faith that inter-human communication contends with, chasms bridged by all the other surfaces of communication, whether oral, olfactory, body language, or any other possible form of undiscovered and/or non-conscious communication.
Residue of Language
Vector representations of language capture an incomplete fragment of human expression, a mechanical titillation that retroactively backports the output of the collective human consciousness, imitating the aura of a perceptive, embodied consciousness.
Human language can be imagined as a multi-dimensional manifold, but for the example below, a three-dimensional cube. Rotating the cube demonstrates its dimensionality: at a few orientations in space, the cube appears to be a square. Every individual possesses a dynamic origin function representing their state of consciousness at any instant, one that can be bimodally modified, either downwards via derivation into the precise or upwards via integration into the abstract.
Two different people can describe their experience of walking into a grocery store in a foreign country. Deriving the origin function would converge the dialogue into the experiential specifics of what was inside the grocery store, while integrating the function would diverge the dialogue into the abstract, travel in other countries as a whole. There is no reason for a machine to yield the same experiential descriptions if given the raw audio-visual input stream of that person traversing the grocery store in a foreign land. The machine mechanistically answers based on prior internalized associations from its training data set; at best it can derive the function downwards through referential inference, but fundamentally it is not an embodied consciousness. When dialogue between a human and a machine occurs, the semi-congruent replies manifest when the multi-dimensional manifold of language is perpendicular to the dimensionality of the vector representation model, i.e., the cube appears to be a square. The machine cannot derive and integrate the conversation dynamically in real time, and thus cannot be a conscious embodied entity, as it possesses no base origin function to contextually refer from.
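The cube-appears-as-a-square picture can be made literal with a small sketch (the `shadow` helper is hypothetical, assuming an orthographic projection that simply drops the z-coordinate): axis-aligned, the cube’s eight corners collapse into the four corners of a square, and the missing dimension is invisible until the cube rotates.

```python
import math
from itertools import product

def rotate_y(p, theta):
    """Rotate a 3D point about the y-axis by angle theta."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def shadow(theta):
    """Orthographic projection of the rotated unit cube onto
    the xy-plane: drop z, keep distinct (x, y) shadow points."""
    cube = list(product((-1, 1), repeat=3))
    return {(round(x, 9), round(y, 9))
            for x, y, _ in (rotate_y(v, theta) for v in cube)}

# Axis-aligned: 8 corners collapse onto 4 points -- a square.
print(len(shadow(0.0)))          # 4 distinct shadow points
# Rotated 45 degrees: the shadow spreads and the illusion breaks.
print(len(shadow(math.pi / 4)))  # 6 distinct shadow points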
Generative Capacity of Language
Whilst human-machine dialogue appears to be a conversation between two conscious entities, the vector representation of language is limited both by the inter-human communication requirement constraining word quantity and by the disembodied nature of an LLM, which captures only the residue of language, devoid of an origin function.
Massive corpora of text data have been sufficient to yield LLMs that give the illusion of sentience. Human language, however, is not of an ossified variety but an ever-changing dynamic in a constant state of flux, where words can be generated in real time to more precisely convey transfers of conscious state. A human raised in an outdated, defunct dialect of a language possesses the capacity to understand future iterations of that same language through real-time observation of context clues, storing undefined variables in memory until enough context is revealed; it is this dynamism that distinguishes the resiliency of human cognition. Humans generate models of both the internal and external world, language being the internal model. This generative capacity has enabled humans to contend with unknown unknowns whilst also forming world models that lead to truth claims in the form of scientific laws and theories.
LLMs are trained on static data sets; the degree of language drift between the training-set language version and the input language version determines whether an LLM can output responses homeomorphic to the input language at its dimension. While an LLM trained on fifteenth-century English could be partially responsive to queries in twenty-first-century English, LLMs do not generate internal world models of the language the way a human does; consequently, they will falter with higher probability when conversing in language versions newer than their training-set version.
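As a toy illustration of that drift (not how real LLMs tokenize; the example sentences and the `coverage` helper are invented here), one can measure what fraction of an incoming text’s word types a frozen vocabulary has ever seen:

```python
def coverage(training_text: str, incoming_text: str) -> float:
    """Fraction of the incoming word types already present
    in the frozen training vocabulary."""
    vocab = set(training_text.lower().split())
    incoming = set(incoming_text.lower().split())
    if not incoming:
        return 1.0
    return len(incoming & vocab) / len(incoming)

# A frozen "fifteenth-century" vocabulary meets newer input:
# some words carry over, the rest are unknown unknowns.
old = "thou art most welcome to mine house good sir"
new = "you are most welcome to stream the podcast online"
print(round(coverage(old, new), 2))
```

A human fills those gaps by holding the unknown words in memory until context resolves them; a static vocabulary simply has no entry for them.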
BMI Decoding Human Machine Code
Consciousness is an ever-continuous state of flow; language is a set of integers from which, through the computation of linguistic arithmetic, a momentary state of consciousness can be output. Human language can be thought of as a high-level language like Python or JavaScript, not nearly as precise as machine code but a necessity if language is actually to serve its primary utility of inter-human communication. The imprecision of language has been overcome by the great literary minds, when the wielder of the language took great effort pouring their thoughts, heart, and soul onto the written page.
It’s for this limitation of language that the chatbox will not replace web design on the internet unless a brain-machine interface (BMI) is created that accurately captures one’s state of consciousness, thus overcoming the barrier that language presents. Then come the next questions: can the origin function that defines the consciousness of a human be captured, how would it be captured, and what is the minimum input data, in terms of the Nyquist sampling rate, required to recreate the origin function of a human consciousness? Unfortunately, these questions might not be answerable.
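The Nyquist criterion invoked above can be illustrated with a minimal sketch: any component of a signal above half the sampling rate is indistinguishable from a lower-frequency alias, so whatever structure an origin function carries above that threshold would be unrecoverable from the samples.

```python
import math

def sample(freq_hz: float, fs_hz: float, n: int):
    """Sample cos(2*pi*f*t) at sampling rate fs for n samples."""
    return [math.cos(2 * math.pi * freq_hz * k / fs_hz)
            for k in range(n)]

fs = 10.0                   # only content below fs/2 = 5 Hz is recoverable
fast = sample(9.0, fs, 20)  # a 9 Hz signal, above the Nyquist limit
slow = sample(1.0, fs, 20)  # its 1 Hz alias
# The two sample streams coincide: the 9 Hz structure is lost.
print(all(abs(a - b) < 1e-9 for a, b in zip(fast, slow)))
```

If consciousness has no bounded bandwidth, no finite sampling rate suffices, which is why the question may be unanswerable in principle.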
Creativity is not Contingent on Training Data
As powerful as an LLM can become, AI systems will fundamentally never create anything. There does not exist an input data set that makes humans create; in fact, creativity is not contingent on a data set. What precursor training or environmental inputs could yield great artists like Michelangelo, Raphael, or Da Vinci? This is a question as old as time; the precondition of the will necessitates non-deterministic outcomes, and even presuming a cooperating will towards greatness, there is no known set of preconditions that generates great minds. Digging ever deeper for more perfect data sets is ultimately a fool’s errand, as functional creativity is in the purview of man. What training data would you give an LLM so that it generates art that will hold value hundreds of years down the road? The obsession with training data is misguided. The majority of the human story and of human creation has operated in the domain of ambiguous conjecture; few definitive truth claims were known, and knowledge had not become universally commoditized to the fingertips until the information-age revolution led by the internet. There exists no training set of data that would create magnificent feats of the human condition; while the quality of training data is important for creating better LLMs, it is but a residue of creation, merely a fragment of the collective output of the totality of human consciousness.
AI as the Temptation
AI then becomes a temptation, a question of the mind, that of vice and excess combined in sum and totality. It is not about the fate of man being usurped; AI as a premise cannot replace man, for it does not cogitate or think, it simply computes. Even if we create a perfect BMI that overcomes the need for language and understands us better than we understand ourselves, we must then ask when and why we would need to use it.
Even if we create the perfect BMI and let it output our whims and wishes from the shadows of our thoughts, so that at a moment’s notice we conjure a custom-tailored TV series blending multiple genres, languages, and eras, if there is no one to discuss and contextually share that experience with, what was the point? There is a network effect to cultural goods like films, books, and shows that goes far beyond a BMI’s capacity to better understand the true intent of language in servicing the desires of the individual. BMIs could solve the “you” problem, your needs and wants, but you as a person did not develop in a vacuum, and neither did any of our ancestors; isolation was death, and only through the network effects of coordination could we survive, console ourselves, and prosper, as imperfect as that life may have been. Maybe a fully constructed virtual reality could solve that, but until we discover the ability to create life out of inanimate matter, I’ll remain skeptical.
Conclusions
AI doesn’t create; it’s the result of a massive process of extraction that has now begun to yield some incredible results. It’s a tool unlike anything yet seen by man, but the key word is tool. Let’s herald it and recognize the value it brings, but we should not sacrifice our humanity to it. It’s not going to propose future styles of art or other innovations of that kind. AI will never generate anything, that is solely the domain of man, but it will aid us in creating better and greater things moving forward.
AI does not present grand visions of the world; it can’t tell you why a city needs specific aesthetic requirements or why anything needs to be built. It does not feel the spark of life, yet it does seem like a miracle of something resembling life.
It’s imperative we don’t make a god out of this mechanical system, acquiescing to the temptation that morality can be outsourced to a non-participating, seemingly omniscient observer appearing from afar to possess judgment superior to that of any human. LLMs are great tools to aid in the proliferation of life; we should embrace this life and continue pushing the note forward. The time to generate has always been now, continues to be now, and will always be now.
If you like my work feel free to give me a follow on Twitter/X.