THE PEOPLE ARE IN THE COMPUTER—PART I

This is the story of Alec Radford, the inventor of ChatGPT, and his foundational contributions to generative AI—a new internet and paradigm for media.
A new internet, generative AI, had its mainstream breakthrough in November 2022 with the release of ChatGPT, built on GPT-3.5, much as the original internet had its World Wide Web moment.
Let’s look back.
Prior to 1993, the Internet was the near-exclusive domain of researchers and academics. Tim Berners-Lee changed that with his 1989 invention of the World Wide Web. When CERN released the Web as public domain software in 1993, it brought the Internet into everyday life for everyone, including artists. The net.art movement began in Eastern Europe within a year, in 1994.
Prior to 2022, generative AI was still mostly the domain of researchers and academics—plus some artists and collectors. ChatGPT, the mainstream gateway for the public to access generative AI, changed that. Two months after its release, ChatGPT became the fastest-growing consumer application in history, reaching an estimated 100 million monthly active users.
While the Web’s Berners-Lee is a household name, rightfully remembered as one of the most impactful people ever to live, alongside the likes of Einstein, does anyone know the name of ChatGPT’s inventor off the top of their head?
This person, Alec Radford, is not yet a household name. He doesn’t have a PhD. For a time, he was even a college dropout. Like Tim Berners-Lee, he holds only a bachelor's degree—though Berners-Lee attended Oxford, while Radford graduated from the tiny Olin College, founded in 1997, about 900 years after Oxford’s 1096 founding.
But are we even so sure generative AI warrants this comparison to a new internet in the first place?

A New Internet?
Is it hyperbolic to call deep learning’s generative AI “a new internet”? First, let me clarify that by generative AI I mean deep learning systems that produce content—text, audio, code, images, video and more—by learning patterns from data.
Some rightfully argue this content is often banal, built on datasets of dubious origin and tainted by Silicon Valley money.
While there’s truth to all of that, there’s also more to consider. Dismissing generative AI outputs entirely is akin to declaring all painting unremarkable—and likely says more about the dismisser’s general view of AI than about the work itself.
Additionally, while some models are trained unethically, others operate with transparency, open-source principles and consent-driven datasets.
Finally, equating Silicon Valley with AI reflects recency bias. Historic centers for deep learning research were Toronto, Montreal, New York and even Boston. This only changed in the mid-2010s, when the Valley finally began noticing deep learning’s breakthroughs by researchers like Geoffrey Hinton at the University of Toronto. But really, it changed in the aftermath of ChatGPT’s 2022 release.
Generative AI as a new internet isn’t just a metaphor; it’s reflected in economic and technological trends. According to Stanford’s 2024 AI Index Report, generative AI funding skyrocketed, nearly octupling from 2022 levels to $25.2 billion in 2023. This staggering investment underscores the rapid integration of AI across socio-economic sectors.
Just as the Internet took time to demonstrate capabilities beyond early applications like email, generative AI is transforming into a multi-modal, general-purpose technology. ChatGPT, Google’s Gemini and other models can already generate text, images, audio, video and code—all from natural language—a trend that will only accelerate. Words may be the ultimate winner in all of this.
The practical implications are vast, extending to fields once considered impervious to automation, including the arts.
Also, like the early Web, ethical and legal concerns abound. Artists argue that AI-generated works frequently rely on copyrighted materials without permission, raising questions of consent and compensation. These debates mirror the copyright battles of the mid-to-late 1990s that ultimately led to the 1998 Digital Millennium Copyright Act (DMCA).
While some hope copyright law can provide protections, others, like Charlotte Kent in the Brooklyn Rail, recently pointed out that the courts’ interpretation of “transformation” within fair use seems to favor AI. In a recent decision, Kent points out, an AI-generated image was even granted copyright. The legal landscape remains uncertain, but what’s clear is that AI is fundamentally altering the way creative labor is valued and protected.
Yet, the implications of generative AI extend beyond its legal and financial impact.
Artist Mat Dryhurst, one of those dubbing it “a new internet,” explained to The Culture Journalist:
“We are talking about the advent of a new internet and a new substrate for how media work. The scale of these tools is so significant that it warrants a clean slate approach in terms of how we think about them.”
He continues:
“Maybe the most analogous shift we’ve seen in recent memory is this is the advent of digital—only bigger. It’s coming for everything—not just images but text and audio too. How exciting it is for an entirely new suite of incredibly powerful tools available to everyone.”
Skeptics argue that generative AI is just another passing trend, inflated by hype. But Dryhurst counters that we are at the very beginning of something long-lasting:
“Holly [Herndon] was saying this the other day: We got to read books about the early people experimenting with synthesis, sampling or early computer art.
This is that moment for the next 100 years. Whatever these techniques are, some variation of them is going to play a role in how art and media function for the rest of our lives.”
Media and art are fundamentally changing, from their creation, to their appreciation, to their meaning. We are powerless to retroactively stop generative AI’s arrival—it’s already here. But we do have the power to understand it, make the best of it and perhaps even shape its future course.
DALL-E 2
Earlier, we credited ChatGPT—and therefore Alec Radford—with generative AI’s breakthrough. But is that giving them both too much credit? According to Dryhurst, DALL-E 2 was the earlier turning point. He again tells The Culture Journalist:
“You can never tell what the moment is where people are going to start caring. And it turned out that moment was DALL-E 2—particularly in conjunction with the CLIP model, which came out a year prior. That combination is the thing that really [took] this [text to image] stuff intergalactic.”
Let’s defer to the expert and say generative AI’s breakthrough “moment” was in actuality DALL-E 2 combined with CLIP. Fair enough, though that may make attributing responsibility harder. But perhaps a CLIP author was also involved with the DALL-E paper? Might that person even be one of the first to successfully experiment with text to image? Surely, this is AI’s household name—maybe it’s Sam Altman or Hinton?
No. Again, that person is the very same Alec Radford. Yes, the inventor of the seminal ChatGPT is also the primary CLIP author, a co-author of DALL-E and possibly the first to make text to image work, on October 1, 2015.
Whoa. What?
And that doesn’t even mention his foundational work on GANs or his work as co-author of OpenAI’s Jukebox. Cumulatively, these are historic, Alexandrian, Beethovenian levels of achievement for someone by their early 30s.
Maybe that’s why deep learning pioneers like Jeff Clune have called Alec Radford “the father of modern generative AI.”
Maybe it’s why Sam Altman referred to Radford as “a genius at the level of einstein.”

But even this lofty praise may downplay Radford’s impact, as it focuses primarily on technology and society while overlooking his indelible influence on the image, media and art.
What if he’s Einstein plus Niépce?
It’s a serious question that requires a serious investigation. But before we begin, I want to make it clear that it’s reductive to assign all credit for deep learning advancements to one person. That’s certainly not my intention. The field of deep learning research is better understood as competing teams sharing and building on one another’s work toward incremental progress.
ChatGPT and DALL-E 2 should not be considered “Events” in the Badiouian sense of unpredictable black swans. They were relatively easy to see coming for people following along closely. Rather, they should be considered “events” in the Derridean sense, in which fixed centers of structure are challenged—decentered.
What generative AI decenters in art—beyond hefty socio-economic and political issues—are illusions of authorship, meaning and creative space. It’s through understanding these artistic interventions with generative AI that we can better understand the technology’s impact on our lives moving forward.
So let’s begin—or continue—this journey of understanding with the so-called Einstein-level founder of modern generative AI, Alec Radford. Let’s try to figure him out. With someone this impactful, it might be important to know what kind of person he is.
In this first of at least two parts, we look at Radford’s life from the start until a critical, unexpected turning point in 2015.
Texas
Born in April 1993, Radford grew up in the sprawling, affluent concrete suburbs of Texas’s Dallas-Fort Worth Metroplex. He appears to have been precocious and interested in computation from an early age.
“My dad helped me build my first computer when I was 5,” Radford revealed in a rare, early interview. Radford did not respond to requests for comment on this piece.
From 2007 to 2011, he attended one of the Dallas area’s top high schools, a highly competitive school with an intensely rigorous academic reputation. In the same interview, Radford gives an idea of its intensity by disclosing that upon graduating, “I think I’m technically a year towards being a Catholic priest.”
From what we know about Radford in high school, he comes off as a good student and person—if anything, the surprise is how normal he seems FOr tHe NExT EinSteiN. Unsurprisingly, he was a nationally ranked academic quiz tournament player. He was also an Eagle Scout, briefly a competitive runner and an editor of his school’s award-winning literary magazine.
Beyond these impressive—but not godlike—teen accomplishments, there are indications that in his free time, he was just a normal kid who liked to play video games. In a surviving snippet of his magazine writing, he shows a fondness for Minecraft, pontificating:
“It’s a little hard to describe Minecraft to the uninitiated. The problem is that Minecraft can be pretty much whatever you want it to be.” The young, prescient Radford continued: “Your only hope of survival is to develop tools and build yourself a shelter and weapons to defend yourself.”
Minecraft is an apt and critical clue to understanding Radford, his interests and strengths: Derridean open-ended play.
This sense of thriving without fixed structure—where the “only hope of survival is to develop tools” on one’s own—would serve Radford again and again in college, at Indico and beyond.
Boston
After graduating from high school in 2011, Radford made a telling choice that speaks to his personality and values. He chose the relatively unknown Olin College in the Boston area. Like his high school, it was tiny and academically focused—but more importantly, it stressed self-directed learning.
Yet it’s still an unorthodox choice. The school was only founded in 1997 and, due to its size, does not offer doctorate programs, although it is ranked #2 in engineering among institutions without doctorates, according to US News. It’s as if Radford knew before arriving that he wouldn’t stay beyond undergrad. It’s a recurring trend: he seems to live a few years ahead, with an uncanny sense for big developments before his peers.
With Radford’s academic and extracurricular credentials from a top high school, it’s not hard to imagine him being accepted into Stanford or MIT had that been where he really wanted to go. But he chose Olin College, with its emphasis on intimate, customizable learning. Radford would, in a way, be Minecraft-ing together his degree.

The self-directed, close-knit setting paid off. In 2011, his freshman year at Olin, Radford met Luke Metz and Slater Victoroff—his future DCGAN co-author and company co-founder, respectively. Radford and Victoroff were both obsessive night owls. According to a 2023 piece on the pair in the Boston Globe, they bonded over late-night pineapple and onion pizza.
Taste in pizza aside, Radford demonstrated his prescience in late 2012, when he was just a sophomore at Olin and deep learning experienced a major breakthrough: AlexNet. Built by University of Toronto researchers including Alex Krizhevsky, Ilya Sutskever and Hinton, the eight-layer convolutional neural network (CNN) proved the clear-cut superiority of deep learning on the latest GPU hardware, changing not only the trajectory of AI history but also Radford’s life.

Radford saw deep learning’s lurking potential when many others did not, including Victoroff, who needed strong convincing. As a sophomore, Victoroff disregarded deep learning entirely, telling one of his professors, “The war is over, deep learning lost.” Luckily, Radford was there to convince him otherwise. It was, Victoroff would later say, “the most wrong I’ve ever been.”
Victoroff recalled that, “At first, I was not a deep learning believer at all. Then I was doing these competitions with [Radford], who did believe in deep learning and, basically, convinced me as the state of the art advanced such that deep learning ran away and became so far beyond the traditional techniques I was used to using that I just couldn't keep up.”
This was the spark—from Radford—that birthed Indico, the data science company the two co-founded in their dorm room—yes, dorm room. The lore almost writes itself.
Victoroff confesses: “I said, ‘Okay, I was wrong—very humbling experience.’
That's really when the crux of Indico's problem came into focus because we realized that while this technology worked really, really well in the lab... [it] was still incredibly difficult to use [in the real world], wildly impractical in many different ways.”
So the pair put their unique expertise in deep learning to use. Importantly, in addition to pizza and deep learning, Radford and Victoroff bonded over Kaggle, a data science competition platform that offered open challenges for cash rewards. Companies would openly share their data until a team satisfactorily solved their problem.
After all, deep learning was still incredibly nascent in 2013: it required extreme amounts of GPU compute and was largely confined to university research labs and their resources. There were very few commercial applications, and the technology was “wildly impractical.” What Radford and Victoroff achieved was no trivial thing.
To give you an idea, Radford and Victoroff felt there wasn’t a deep learning graduate program in the world more advanced than what they were doing. Their main goal appeared simply to become deep learning experts. This meant a choice between continuing in academia or dropping out—temporarily—to go straight to industry. The obvious answer at the time wasn’t necessarily industry.
It’s another instance of Radford’s seeming ability to live a few years in the future. He understood the pace of deep learning advancements and that waiting two-plus years in academia would have put him insurmountably behind. Industry it was.
Indico
The romanticism of founding Indico in a dorm room and working late nights—fueled by questionable pizza—recalls the mythical, grassroots origins of the computer pioneers of the ’70s who built companies out of their garages. These stories, featuring characters like Steve Jobs, are beyond legendary. It even recalls the dot-com era’s late-night packaging and pizza parties. There’s a purity to these origin stories. Indico is no Apple or Amazon, but its co-founder’s legacy may end up being no less important than Jobs’s or Bezos’s.
Indico quickly grew from two to four, with Diana Yuan and Madison May joining Radford and Victoroff as dorm room co-founders.
The intrepid success of these four college kids on Kaggle—solving some of the world’s most difficult data science problems out of their dorm rooms with the most advanced deep learning techniques—started attracting VC attention.
By the spring of 2013, the four raised seed funding from Rough Draft, which provides funding and guidance to student startups.
Victoroff shared that from the fall of 2013, “it was a year in the dorm rooms working from 5 p.m. to 5 a.m. on Sunday nights where really we were an open source project. At the time, the most ambitious thing we could possibly imagine was making this technology accessible to ordinary developers.”
By August 2014—the start of the group’s senior year—they were accepted into Techstars Boston’s accelerator program. This meant starting right away and not finishing their senior year. The four dropped out, while Metz stayed at Olin to graduate, joining a year later in 2015.
Victoroff explained how it went from “This is a side project” to “Wait a minute, no, this is serious, and this is real, and we need to start taking ourselves seriously.” In November, Indico had their “demo day,” where they emerged three times oversubscribed, raising a $3 million seed round to close out 2014.

Joining Techstars was like being drafted into the NBA out of college.
The squad was in the big leagues now. No more amateur dorm rooms; they now had millions in cash and real Boston offices.
From the early images of the company, working at Indico looked like your typical startup, with the obligatory hammocks, late nights and cheesy team-building exercises. You can imagine hacky sack, foosball and other clichés. The blurry pictures, which I believe have since been removed from Indico’s site, reveal the company’s lighthearted beginnings with Radford.


As co-founder, Radford initially took on an open-ended research role at Indico. This handcrafted position mirrors his choice to attend Olin and his love of Minecraft; it also foreshadows his first role at OpenAI—but that’s for a later piece.
This is also where the story begins to get more interesting from an artistic perspective—where we gain deeper insight into Radford’s core personality. What did this savant-level deep learning intellect choose to focus on first at Indico once they secured their funding at the end of 2014?
Remember, he was only 21 at the time. His data science company had just received millions in funding, presumably to focus on maximizing profit—perhaps doubling down on their revenue-generating business of connecting industry to the latest developments in deep learning, right? RIGHT?
Is that the kind of Minecraft world Radford chose to build for himself?
He did almost the exact opposite. He started making pretty pictures.

The Artist
Even from the now-broken GitHub bio page (above), we can glean what Radford was supposed to be doing at work—that is, “making machine learning more accessible to developers.” From his Twitter, we can tell what he was actually doing—more of the “trying to get computers to make pretty pictures.”

Remarkable. January 27, 2015. The 21-year-old Radford’s first instinct after beginning his fancy new data science job was to experiment with the little-known, little-used “variational autoencoder” (VAE) to generate experimental images.
If you know anything about early digital art history, an alarm bell the size of Radford’s home state of Texas might be going off in your head.
Working on getting “computers to make pretty pictures” in his spare time remarkably mirrors what I believe to be the first visual experiments in digital art, produced by a 22-year-old A. Michael Noll at Bell Labs in 1962.
Noll explained it himself, telling me in a 2023 conversation how he thought: “If we can do computer music, why not computer art? So I did that and created a bunch of patterns and all. Now, my management didn't want me to call it art. AT&T did not like this kind of flossy stuff going on.
Bell Labs management defended it. So to handle this, they suggested I call it Patterns by 7090 and I did. I generated a bunch of these.”

Having long wanted to know, I asked Noll at the end of our conversation whether he considered himself an artist.
“I have no doubt that the patterns I created were indeed art. Great art? I have no idea what great art is.”
In that same spirit, given the evidence that Radford was genuinely interested in making pictures, I think we can call what he created with VAEs art, just as Noll’s “patterns” are easily identifiable today as art.

If that’s the case, this January 27, 2015 image on Twitter with two likes may be the first latent space art. Alec Radford would then be the first latent space artist. Either way, he’s still one of the very earliest.
Didn’t we mention he was Einstein and Niépce? Maybe there’s some Noll in there, too. Any of the earliest latent space art would be up there in importance with the earliest digital (computer) and net art.
Okay, but a one-off experiment seemingly out of nowhere from a researcher hardly makes one an artist—come on.

Decide for yourself. Consider this image from that same broken GitHub bio, where he admits to spending “most of [his] time” making pretty pictures. That implies something more than an interest or a hobby—most of someone’s time? That’s an obsession. I think that comes through in the posted image, which recalls Ellsworth Kelly in an age of AI, but also in the general care, maturity and reverence Radford consistently shows for his visual work.
Faces to People
About two months after that historic VAE tweet—even if it’s not art, I believe it’s the first reference to a VAE on Twitter—Radford posts this exceptional piece that gives me chills when I look at it. I believe it to be the first image from a deep generative model to show not just a “face” but a “person,” with character and personality. He even refers to it as “pixel art.” This isn’t an artist? Ten likes for one of the most significant images ever shared on Twitter—and a personal favorite.

By the middle of 2015, Radford had shifted his image-generating experiments from VAEs to the more recent GANs. He announced this with a tweet on July 12, 2015, in what I believe is the first GAN image on Twitter. Is he now the first latent space, VAE and GAN artist?
This was no small feat. This is still Radford working mostly on his own. Recall that Metz, his DCGAN co-author, doesn’t join the company until after he graduates—presumably sometime in summer 2015. Also, keep in mind that the now-22-year-old Radford is not doing this under Hinton at the University of Toronto, or at Google or Facebook. He’s Minecraft-ing this mostly himself.

But big fish started to notice.
As you can see from the tweet above, Radford’s work got the attention of Soumith Chintala from Yann LeCun’s Facebook AI Research lab in New York.
Radford posted the images on July 12, 2015. Less than a day later, Chintala was poking around and publicly commented that these were “really cool looking results.” This is someone from one of the world’s most prestigious deep learning labs marveling at the work of essentially a complete nobody at the time—an unpublished, 22-year-old dropout.
But this nobody was stunningly producing the most sophisticated deep learning images ever—just on his own.

Chintala would eventually join Radford and Metz as a co-author and mentor on this GAN project. He has since revealed that his actual involvement was fairly minimal, serving as an advisor while Indico developed the tech, which the above exchange indicates. Radford was well on his way before Chintala even noticed.
About a week later, Radford posted a fresh crop, talking about “this weeks people” like an artist posting work in progress. He even threw in some engagement farming: “A little less creepy than last weeks people?” Fifteen likes—these historic tweets, to this day, have essentially gone unnoticed.

In August, Radford began experimenting with GANs beyond faces, turning to album covers.

Two days later, artist Tom White responded to Radford’s pre-publication DCGAN experiments. This can be seen as perhaps Radford’s first link to the art world—the first evidence that his work directly inspired another artist’s.
A month later, the strongest evidence yet surfaces for Radford being an artist at heart: his September 16, 2015 tweet “THE PEOPLE ARE IN THE COMPUTER,” which reads like the announcement of a generative art project. Even the language he uses is striking, dripping with art-historical resonance. First, he doesn’t say “FACES.” These are “PEOPLE.”


Second, the concept of people—whether as “man” or “ghosts”—in the machine has deep and enduring significance in art history. The notion typically refers to artists incorporating emerging technologies to comment on art in society, dating back at least to Richard Hamilton’s exhibition Man, Machine & Motion from 1955. This show celebrated the work of Muybridge, Duchamp, Dada and the Italian Futurists, highlighting a tradition of examining humanity through the lens of the machine, with roots extending back to the nineteenth century.
Digital art history expert Noah Bolanowski has more recently pointed to William Latham’s computer sculptures from 1989, which “may be considered ghosts.” Latham created these not in a latent space but in a conceptually similar “virtual space.”
Finally, the idea of people in the machine also resonates with the contemporary deep learning art of the previously mentioned Holly Herndon and Mat Dryhurst, who play with notions of embedding Herndon’s personhood directly into the machine in several ways, including I'M HERE 17.12. 2022 5:44. This piece stands as “an early and defining use of AI image generators” for the pair.

Radford’s use of “THE PEOPLE” becomes part of this tradition, whether intentionally or not. Part of the reason is that the tweet simply does not read like an engineer showcasing a breakthrough tech demo. Like an artist, Radford obfuscates his technique, never mentioning DCGANs.
A technologist alone would place the technology at the center. Radford centers the human and the images themselves.
Radford was far from done in 2015. Almost unfairly, as if an afterthought, he dropped another bombshell, his coup de grâce. On October 1, 2015, Radford publicly announced that he had gotten “text to image working (sort of) (still bad).” This tweet is a premonition, a preview of the years to come for Radford. It’s astonishing that someone so foundational to the future of text to image also made one of its earliest breakthroughs. Clearly, it had been on his mind for quite some time.

A month later, in November 2015, Radford released his now-legendary DCGAN paper, explaining what he and Metz had been up to for the last year. In it, he introduced the world to the DCGAN architecture, a convolutional approach that made GAN image generation dramatically more stable and capable, continuing the tradition of deep learning’s intersections with art that Radford had planted earlier in 2015.
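For readers curious what a DCGAN actually looks like under the hood, here is a minimal sketch of a DCGAN-style generator in PyTorch, following the architectural guidelines the paper describes (strided transposed convolutions, batch normalization, ReLU activations and a tanh output). The class name, layer sizes and hyperparameters are illustrative assumptions for this sketch, not Radford’s actual code.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Illustrative DCGAN-style generator (not Radford's code): maps a
    100-dimensional latent vector to a 64x64 RGB image using transposed
    convolutions, batch norm and ReLU, ending in tanh."""

    def __init__(self, latent_dim=100, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feature_maps*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # -> (feature_maps*4) x 8 x 8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # -> (feature_maps*2) x 16 x 16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # -> feature_maps x 32 x 32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # -> channels x 64 x 64, pixel values squashed to [-1, 1]
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# "The people are in the computer": every random point in the latent space
# decodes to a new image (recognizable faces only after adversarial training).
generator = DCGANGenerator()
z = torch.randn(16, 100, 1, 1)   # 16 random latent vectors
images = generator(z)            # tensor of shape (16, 3, 64, 64)
```

Untrained, a generator like this emits only noise; the paper’s contribution was showing that, paired with an adversarially trained discriminator and these architectural choices, it could reliably produce coherent faces and bedrooms.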
But it’s getting late. We’re several thousand words in already and that’s an entirely different story. Much more about the enduring legacy of DCGANs, the birth of text to image and Radford’s sudden and unexpected departure from Indico—in Part 2.
----
Peter Bauman (Monk Antony) is Le Random's Editor-in-Chief.