What is Technology? (Contraslop, Part 1)

(I’m acting like this is a series, but I only really have two parts in mind and a third percolating. The title, “Contraslop,” is simply meant to evoke the common derisive term for Large Language Model generated text: slop. Ergo, “Contraslop” is simply “against slop.” I’m getting my thoughts in order because I’ve got to go back to teaching in two months and I need to wrestle the means of slop-generation out of my students’ hands.)

As an initial caveat, discussions of technology often elide the role that textiles play in development. Spinning wheels, like this Irish one, are the ancestors of modern mechanical technology, as per The Fabric of Civilization, reviewed by Edgar here.

I am an English teacher, and most of my classes are composition classes. I tend to let students range pretty far with their chosen topics, and – as a result – I hear a lot about technology. Now, students, like most people, mean something very particular when they talk about technology: they always mean, specifically, digital technology, and usually mean technologies that have been popularized in the last 5-10 years. Maybe they stretch it a bit older and include social media, but usually it’s a very limited slice of time.

My standard move here is to point out that things such as cars, washing machines, printing presses, and usable fire are all forms of technology and request that they refer to the category that they’re talking about as digital technology. This isn’t a perfect solution, but sometimes when it comes to such things you can’t fix the problem; you can merely put a rock in somebody’s shoe and hope that they take the time to investigate and understand what it is that’s causing them such discomfort later on.

This is actually a fairly important thing for me: I believe that discussing this is key to teaching students how to live in the world. One must not treat technology as a special category that only deals with the latest 0.003% of human existence.

One of my guiding lights for understanding technology is the late David Graeber’s “Of Flying Cars and the Declining Rate of Profit,” first published in The Baffler in March 2012; an expanded version appeared in his book The Utopia of Rules (the short version is available on his website here). In that essay, he argues that most technology hasn’t qualitatively changed since about 1970. There have been plenty of quantitative changes, but qualitative changes have been limited to surveillance and military automation.

I found this to be distressing and confusing when I first encountered it: it went so sharply against the orthodoxy of technological development that I had embraced at that point in my life, but I couldn’t find an argument against it.

Let’s reproduce a broader version of this argument. Imagine an ancestor in 1924, and another ancestor in 1824. Would you have greater difficulty explaining your world to your ancestor from 1924? Or would your 1924-era ancestor have a harder time explaining things to your shared ancestor from 1824?

For the record, 1924 is the year of “Imprisoned with the Pharaohs”, ghost-written by H.P. Lovecraft for Harry Houdini.

Remember: In 1924, they had electricity (and, technically, a larger share of their power came from renewable resources – about 40% came from hydroelectric power). They had air travel. They had mass communication. They had textual telecommunications in the form of teletype machines. The only qualitative difference I can think of is the relative lack of screen-based technology. We can’t even claim the moon landing for our era: Apollo 17, the last time humans went to the moon, flew in 1972, closer to our hypothetical interlocutor in 1924 than to today.

On the other hand, between 1824 and 1924, you had the diffusion of steam-powered land transportation; you had the introduction of germ theory and antiseptic surgery, of electricity, of recording technology for audio and moving images, of the automobile, and of air travel. The 20th century is significant, but from the perspective of technology, it is significant as a continuation of the 19th century. This is something that we’ve grappled with before, from a broader perspective. I recommend reading that linked piece to get a grip on the sort of things that were culturally invented during that period, because a lot of things that we assume to have been the case since forever ago are actually inventions of the long 19th century.

For reference, the Alamo would not fall to the Mexican Army until 1836, a dozen years after our 1824 benchmark. Image used under a Creative Commons Attribution-Share Alike 3.0 license.

But this gets at something that I want to point to: the contemporary ideology around technology is similarly blind to what has been happening since forever, and insists on a particularly noxious sort of technological determinism. In fighting against this, I find myself weirdly offering critical support to free market capitalists who argue that something that isn’t profitable should be abandoned. I would phrase it in different terms, but broadly: if it offers no benefit, and no path to anything that clearly offers benefit, why are we spending time on it?

Technological determinism – the idea that there is some solid, continuous thing called technology and that this force, this entity, is the protagonist of our historical drama and the mainspring behind other kinds of change – is a false idea. It’s a hobgoblin in a lab coat, or maybe a black turtleneck. There is no prewritten plan for technology, and I’d argue that it’s silly to talk about one form of technology being more advanced than another unless there’s a direct connection.

Notably, the Hilux is probably the only truck that could be maintained without a machine shop – you’d still need gasoline, though, and that goes bad after six months. Image uploaded to Wikimedia Commons by Jonas Joaquim Tavare…, used under a Creative Commons Attribution-Share Alike 3.0 license.

Quite clearly, if you took a 1988 Toyota Hilux and dropped it into another historical period – the Peloponnesian War, the Islamic Golden Age, the Mongol Empire, the Aztec Empire – it wouldn’t really change much. Sure, they probably wouldn’t be able to turn on the damned thing and you’d be limited to one tank of gas, but if you gave a demonstration of its capabilities, I still don’t think that many of the recipients of that demonstration would be interested in taking it. Even if you explained all of the steps needed to make the required materials, the one technological artifact that you’re discussing implies a whole world, and I don’t think that these inhabitants of other cognitive and social worlds would be interested in moving wholesale into the one that you’re demonstrating.

That’s because technology exists to solve a problem, so – similar to how you write a character in a work of fiction – the question becomes one of want and need. What problem do you want and need to solve?

If you’re trying to solve the problem of how to counter a phalanx, or how to secure enough grazing land for your herds, or how to get hearts for Huitzilopochtli, then a pickup truck isn’t a great tool. It’s like using the butt of a cordless drill to hammer a nail: you can probably get the nail in, but there’s a more suitable implement for the job.

This isn’t to say that technology can’t drive change, simply that change is rarely its primary effect. It’s brought in to do one thing, and then it achieves something else as a side effect, because its function can be extended in interesting and novel ways.

Probably the first game I spent more than 1,000 hours on. Pretty good for $20.

Now, I could easily take a moment here to throw rocks at Sid Meier’s window for popularizing the idea of a “technology tree” (though he’d be right if he told me to go bother the inventor, Francis Tresham, instead). I say this because it reinforces the idea that there are discrete eras and that you can easily do things like invent mathematics by just combining masonry and the alphabet and waiting a while.

I joke, and I don’t begrudge this as a game design choice – there are better ones, but this one is a functional way to do things. However, it is part of why I think that analyzing the way that games are put together is a useful and interesting thing to do: the toy models that we’re handed in these settings quickly become entrenched in our minds as the way that things actually work.

This is where we return to the idea of a technology being more or less advanced. The most common place to look for this is in the contact between European and American civilizations. The standard story is that the Europeans, with their guns, germs (the result of living close to livestock – as Charles Mann puts forward in 1491, indigenous populations had immune systems more optimized for fighting parasitism than infectious disease, as that was mostly what they encountered), and steel easily displaced the indigenous peoples of this continent. A more pro-indigenous view is often put forward by telling the story of the Conquistadors entering Tenochtitlan and finding a city larger than any in Europe, built upon a grid with sanitation and running water. This flips the script: now it is the Europeans who look like filthy savages entering the heart of an empire.

Presented without further comment.

However, I think it would be more accurate to say that these civilizations were equally advanced in different directions. The Neolithic Revolution may have come to the Americas later, but if we are questioning whether periodization and hierarchization make sense, it would be stupid to sneak them back in through the back door. After all, people in the British Isles tried and then abandoned farming before coming back to it later. Some peoples actively chose to return to a pre-agricultural way of life, even after cultivation had been part of their toolkit for generations.

When considering Indigenous American civilizations, it’s important to understand that they had different resources at hand, different pressures to respond to, and different values guiding how they approached things. Metallurgy may not have developed to the same extent, but much of the Amazon rainforest is the product of intensive management by indigenous peoples across several “civilizations.” It may not have required direct and active management throughout its existence, but I would be comfortable arguing that a food forest the size of Western Europe rivals, in scope, the construction of all the cathedrals of the Middle Ages.

And, while I’m not particularly religious, I’d point out that I tend to like a good cathedral.

So we can say that the indigenous peoples of the Americas had advanced technology; they simply prioritized a different relationship with their environment. Broadly, they shared the top-level goals of materially supporting their populations and creating art, but when those goals were broken down into smaller ones, a different set of values led them to different solutions. The Aboriginal peoples of Australia, likewise, had their own set of values that led them to approach things in a third way – and they produced a stable, largely peaceful set of relations that covered a continent and lasted more than sixty thousand years.

We cannot refer to these cultures as primitive; that imposes an invalid set of criteria. And, as we’re so quick to note with individuals, if you judge a fish by its ability to climb a tree, you’re going to miss what’s impressive about a fish.

My point is this: technology is not the protagonist of history, especially when we narrowly define it. It is, instead, the inventory of solutions that we have to our problems. Oftentimes those solutions create a new set of problems and require further solutions to keep using them; if an alternative crops up that lacks the new problems, and its own externalities are less pernicious, then people will switch over to the second technology.

The rail yard in Joliet, Illinois – taken by Ken Lund and released on Wikimedia Commons under a Creative Commons Attribution-Share Alike 2.0 license – demonstrates this. Note that it is surrounded on all sides by the city.

This is what led to the automobile largely supplanting rail transport in the United States: rail led to a great deal of centralization, and that centralization created a large set of problems for the people who were in a position to make decisions. Consider: if goods are shipped to a rail yard, it makes more sense to have industry concentrated around the rail yard, which encourages the growth of working-class neighborhoods in the same area. This leads to two distinct problems. First, it means that working-class populations are poisoned by the pollution of the factories; this is a real problem for everyone (even if you are not working class, consider that it’s better to live in a society where the average level of health is higher, because it makes healthcare easier to provide). Second, the concentration of workers in the same communities leads to solidarity – these other men and women are not just your coworkers but your neighbors and the people with whom you socialize. It becomes easy to see how this might lead to unionization.

The automobile allows populations to disperse, driven by lower housing costs, but it also leads to the hollowing out of the city center: now you can ship by truck, and that means that distribution doesn’t happen in the city center, because city streets are not made for large freight-haulers. It’s better to put your distribution centers on the periphery of the city.

This creates its own problems: notably, this dispersal is a primary driver of automobile emissions, which in turn pumps up the ecomodernist response to transportation, electric vehicles. Unfortunately, these are – at best – a stepping stone: they might not release carbon in the same amounts, provided they are powered off of renewables, but they require lithium, and coltan, and everything that goes into tires. The solution, frankly, is to switch back to rail.

The problem here is that this isn’t a problem prioritized by those in power. The fight against global warming takes the form of forcing a number of insulated decision-makers to recognize the scale of the problem and encouraging them to prioritize it. Some people advocate for restructuring incentives, other people advocate for drastic direct action. A diversity of tactics will probably be needed. We have written on this issue in the past.

My thoughts, though, only turn to global warming because we live in Kansas City and it’s hot as hell here.

What drove this line of inquiry for me is the endless hype around so-called AI. I say “so-called” because it isn’t AI, but that’s what everyone thinks it is. A better term, put forward by Emily M. Bender et al., is “stochastic parrot,” which emphasizes that no thinking is happening – it’s simply a statistical trick for producing strings of imitative text. Less sanguine is Ed Zitron, a long-time blogger on this issue and host of the Better Offline podcast, who suggests that AI is not only a bubble, but a bubble in the process of bursting. A big part of this, according to Zitron – and I do question, somewhat, his interpretation of the papers he uses as the basis of his reasoning – is that further development of LLMs is choked by a lack of data, energy, and computing power. We’re going to run out of training data within the next decade, but we also can’t get a clean scrape of new training data, because LLMs have already been loosed on the internet, which means that synthetic data is already out there. Training on data tainted by synthetic inputs results in model collapse.

After seven generations of such training, a question on architecture produced the following string of text:

architecture in England. In an interview with The New York Times, Wright said : " I don ’t think there is anything wrong with me being able to do what I want to do. It just doesn ’t work for me. " He added : " I don ’t know if you can call it funny,

Nine generations produced:

architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-

(Both examples are found on page 3 of the linked article.)
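To make the mechanism concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration – the vocabulary, the corpus size, the Zipf-style word frequencies – and the “model” is just a frequency table, not an LLM. The point it demonstrates is the one the paper makes: each generation is trained only on text sampled from the previous generation’s output, and the rare material is the first thing to vanish.

```python
import random
from collections import Counter

random.seed(42)

# Generation 0: a "real" corpus with a long tail of rare words.
vocabulary = [f"word_{i}" for i in range(1000)]
weights = [1.0 / (rank + 1) for rank in range(1000)]  # Zipf-like frequencies

def sample_corpus(vocab, probs, n_tokens=5000):
    """Generate a synthetic corpus by sampling from the current model."""
    return random.choices(vocab, weights=probs, k=n_tokens)

def fit_model(corpus):
    """'Train' the next model: just count word frequencies in the corpus."""
    counts = Counter(corpus)
    vocab = list(counts)
    return vocab, [counts[word] for word in vocab]

vocab, probs = vocabulary, weights
for generation in range(1, 10):
    corpus = sample_corpus(vocab, probs)  # synthetic text from the last model
    vocab, probs = fit_model(corpus)      # the next model sees only that text
    print(f"generation {generation}: {len(vocab)} distinct words survive")

# The vocabulary can only shrink: a word that falls out of one generation's
# sample has zero probability forever after, so the long tail vanishes first.
```

The door only swings one way: once something drops out of a generation’s training data, no later generation can ever produce it again, which is why a tainted scrape is such a problem.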

In addition, the current level of LLM usage leads to intense energy consumption: ChatGPT draws on roughly half a million kilowatt-hours of energy per day to respond to some 200 million requests. In fairness, Google used about 15.9 terawatt-hours of energy a year as of 2020, or approximately 44 gigawatt-hours a day. This technology may be in its infancy, but its consumption is not nothing (and the Google figures include non-search functions, and some LLM use).
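For scale, here is a quick back-of-the-envelope conversion of those figures – a sketch in Python, using the rough public estimates quoted above rather than anything I have measured:

```python
# Back-of-the-envelope comparison of the energy figures quoted above.
# These are the rough public estimates cited in the text, not measurements.

CHATGPT_KWH_PER_DAY = 500_000          # ~half a million kWh per day
CHATGPT_REQUESTS_PER_DAY = 200_000_000
GOOGLE_TWH_PER_YEAR = 15.9             # 2020 figure, all of Google's operations

google_gwh_per_day = GOOGLE_TWH_PER_YEAR * 1_000 / 365   # TWh/year -> GWh/day
chatgpt_gwh_per_day = CHATGPT_KWH_PER_DAY / 1_000_000    # kWh -> GWh
wh_per_request = CHATGPT_KWH_PER_DAY * 1_000 / CHATGPT_REQUESTS_PER_DAY

print(f"Google:  ~{google_gwh_per_day:.1f} GWh per day")   # ~43.6 GWh/day
print(f"ChatGPT: ~{chatgpt_gwh_per_day:.1f} GWh per day")  # ~0.5 GWh/day
print(f"ChatGPT per request: ~{wh_per_request:.1f} Wh")    # ~2.5 Wh
```

On those numbers, ChatGPT’s daily draw works out to a bit over one percent of Google’s, and a couple of watt-hours per request – which is where the “not nothing, but still in its infancy” framing comes from.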

Despite what people who know me in real life, and hear me complain about these things constantly, might say, I’m not a complete doomer about this. I think that the amount of discussion going into it is stupid, but I can see arguments for it being a tool that could achieve things in the right hands. My main gripe with all of this isn’t that it can’t be used for anything, but simply that the people claiming it can be used for everything don’t understand the things they’re trying to automate.

"Pay no attention to that man behind the curtain!” (image is a publicity photo Frank Morgan for the 1939 Wizard of Oz film, used under a Creative Commons Attribution-Share Alike 4.0 license.)

My own stance is that this tool shouldn’t be attracting the level of hype that it currently does. Being told that it can do everything in my skill set is insulting, because it simply can’t, and I strongly doubt that it ever will. What makes me angry isn’t the technology; it’s the people who say that it’s worthwhile to replace me and those like me with a terrible, automated version that costs $250 billion a year to run. My enemy isn’t the technology, it’s the people who say that the technology solves a problem – because that problem is my life and its continuation in a fashion I choose.

The future I see the enthusiasts of this putting forward is, essentially, human teachers, service-workers, and artists for the 1%, and a gauntlet of chatbots and LLM-produced slop for everyone else. This solves a major problem for the decision makers: why do we have to spend all of our time relating to other human beings, attending to people’s needs, and paying these artists? What’s the use of all of that? Wouldn’t it be better to cut all of that out and leave time for the really important work of watching YouTube videos and filling out spreadsheets?

On the other hand, I might put forward that – in a hypothetical world where material problems had been solved, and human needs were perfectly attended to – we would be left with art, games, and relationships between equals. Trying to automate these things doesn’t strike me as evil, necessarily; it strikes me as malignantly stupid. I’ve spoken quite often on this website about the figure of the spoilsport, and that’s exactly what this is: someone misunderstanding the rules of the game so completely as to say, “Don’t worry about all that art and poetry you were going to make, don’t worry about all those jokes you were going to make with your friends – we automated that so you can spend more of your time on paperwork.” The machine really can’t do that, and the person saying it simply doesn’t understand what’s happening. If we have structured things so that those people have power, then we’ve made a horrible error, akin to drafting the drunkest man at the party as our designated driver.

Don’t surrender your keys to the AI boosters. Don’t let them talk you into getting in the car. Instead, go and get a drink of water, and wait for everyone to sober up a little bit.

If you enjoyed reading this, consider following our writing staff on Bluesky, where you can find Cameron and Edgar. Just in case you didn’t know, we also have a Facebook fan page, which you can follow if you’d like regular updates, and a bookshop where you can buy the books we review and reference (while supporting a coalition of local bookshops all over the United States). We are also restarting our Tumblr, which you can follow here.