Monday, June 16th
16/30
From the meditations of The Pale Metaphysician
In an effort to keep similar ideas together I have created mini-series that aren’t really series so much as loose collections of congruent thought.1 What really happens here is that I consult my grand list of ideas for this project, select one, start writing, and upon completing my 500 I promptly realize that I actually explicated a separate, but still related, idea. I suppose that is to be expected when I sit down with no plan. Anyways, here’s another article on AI.
I wish to begin by way of example, and as I asked a friend if I could use his, here it is: my friend’s job is self-described as “automating tasks,” and as such he has seen what happens when people slip through the cracks. He outlines a situation wherein somebody submits a ticket expecting a human on the other side to read it, but alas there’s just the machine, which cannot process the form — the submitter hits a wall, is upset, yet the ticket is marked as successful and nobody is the wiser. These systems are, necessarily, rigid, which by definition means they cannot account for every outcome, and when that happens people become mere blips — this is what I discussed yesterday. Automation ossifies that which it aims to automate, and in doing so replicates the desires of those designing and enforcing the(ir) rules.
I want to go a step further and make the point that this stratification is more than a normative procedure; it is a deeper, metaphysical one (what kind of metaphysician would I be otherwise?). When that person submits what becomes a phantom-success ticket, that is not just an unfortunate byproduct of automated systems, or even a symptom of an unfair one, but fundamentally a restructuring of the coupling between the submitter and their expectation of response in a way that would not be possible otherwise. The stratification I described yesterday does not so much lie on-top-of as it simply is — a constant process of becoming-real. Automation’s rigidity is an immanent quality of the task at hand, as to automate is to encode a function of reality in a legible, predictable, ultimately mechanical way.2
Generative AI is thus the natural progression of such a mechanical desire — a seemingly contradictory statement given GenAI’s marvelous non-rigidity (the whole point of AI is that “intelligence” can be dynamic), but when one considers that what GenAI actually outputs cannot escape the bounds of what’s already been, the answer becomes clear. This is not simply that GenAI is a stochastic parrot limited to reproducing its training data without real comprehension, or that it lacks some creative spark, but that it is simply incapable of synthesizing something genuinely, truly progressive. If I’m to use fancy jargon, it lacks negative dialectics. As such, it is trapped between metaphysical bowling-lane guardrails that corral its responses into a narrow slice of what is real — in fact, what it produces is more than real, it’s hyperreal insofar as what comes out the other side has not existed nor ever needed to in the first place. Pure simulacra.3
If one accepts that what I wrote is true insofar as automation selectively stratifies certain rules, dictated by whom those rules are for and by whoever enforces them, and that this encoding is itself a metaphysical process, one that tools like GenAI take a step further by generating hyperreal artifacts, then it is not much of a stretch to acknowledge that this autoencoding4 is generating a manufactured reality separate from the lived experience of the many who have fallen through the cracks of automation, both algorithmic and structurally manifest, for the benefit of whomever develops the tools in their image. More everything forever — if you’re rich/white/male/whatever enough.
This is part of why I find laughable the idea that companies like OpenAI or xAI can be anything but malevolent: they already replicate systems of oppression by the very nature of their existence, i.e., entities of immense capital for the purpose of intentionally alienating workers to increase profit, so how could anything they spawn not be the same? Put another way, it is only in a capitalist system that people can invent a miraculous machine that can act like people and have the first question asked be: how much money can it make? And then people are surprised when those same machines don’t actually increase the quality of life of anyone who isn’t immediately poised to benefit from any increase in production. Why do they pretend not to understand this?
Of course, there are countervailing forces pushing these tools to stick to the “real,” i.e., RAG, fine-tuning, citing sources, etc., but these only serve to entrench the use of the tools themselves (again, as I described yesterday).
I love overloading words with meaning.