April 2025

"The Labor Theory of AI"

Review in the NYRB of Matteo Pasquinelli's The Eye of the Master: A Social History of Artificial Intelligence. Tarnoff, the reviewer, is unconvinced by some of Pasquinelli's larger causal and historical claims: Pasquinelli notes that early computing pioneers like Charles Babbage took inspiration from industrial and scientific management practices, implying an entwined relationship between computing and managerialism that extends to today. But, Tarnoff says, Pasquinelli doesn't actually carry the analysis through the 20th and 21st centuries, so any causal claim ends up on thin ice.

The interesting takeaway from Pasquinelli-via-Tarnoff, for me, is that arch-neoliberal Friedrich Hayek was interested in "connectionism," a branch of neuroscience that later gave rise to neural networks, aka the technology that underpins machine learning and generative AI today. Hayek saw parallels between the brain and the market: two complex, unknowable, ungovernable entities that nevertheless manage to create order, even if we can't see how.

Generative AI is this dynamic on steroids. We don't know how an LLM produces a given output, only that it does. Hayek, Tarnoff notes, would have been pleased:

The old Austrian would be gratified to know that the “intellect” of the most sophisticated software in history is sourced from the unplanned activities of a multitude. He would have been further tickled by the fact that such software is, like his beloved market, fundamentally unknowable.

This helps me see more clearly the line of reasoning of AI boosters: "we" cannot govern or regulate or control ChatGPT and its ilk, because we simply don't know enough about them, and because to regulate them would be to tamper with their mysterious, sacred workings. Therefore we can never change them, argue with them, or engage with them in any meaningful way; we must, as we did with the market before them, accept whatever they exact.
