AI Zombies in the Archives

Sarah Kate Kramer

“You have reached the home of Dr. Bonita Franklin and Stephen Kramer. If you have a medical question for Dr. Franklin dial…otherwise, leave a message after the beep.”

In the eleven years since his death, my father’s voice has continued to reverberate through my childhood home every time a call comes through while my mother is out. Why does she keep his message on the answering machine? She likes to hear his voice. Friends of my father have asked for copies.

This impulse, to preserve the voices of the dead, was central to the origin of recorded sound. After inventing the phonograph in 1877, Thomas Edison wrote, “For the purpose of preserving the sayings, the voices, and the last words of the dying member of the family–as of great men–the phonograph will unquestionably outrank the photograph.” (1) The Victorians were preoccupied with death and preservation in all forms and were thrilled about the possibility that voices could be immortal, living beyond the decay of the body. (2)

Today, we are many technological generations from the phonograph, and it’s AI that is often tasked with preserving—and generating—the voices of the dead. People are training AI on voice recordings, text messages and other archival material to create “grief bots.” Social media creators are using AI to generate “historical” photographs and films. Podcasters are using AI to generate voices of the dead, often to “voice” material that was never recorded. An AI avatar of a dead man has even testified in court.

The use of GenAI to recreate voices poses existential questions not only for family members, but also for archives, and our understanding of history. As a producer who has spent much of my career working on historical documentaries, I know all too well that audio archives are limited. There are historical, cultural, financial, colonial and other powers that determine what has been documented, and what is accessible.  There have always been gaps in the archive, and as documentarians, much of our work is wrestling with those gaps, trying to transcend them to find meaning. If AI offers us the possibility of generating material to fill those gaps, what does it mean for storytelling, and for archives?


Before we address that question, there’s another one we need to ask. When GenAI “listens” to a piece of audio, what is it hearing? The professor of culture and technology Jonathan Sterne, who died in March 2025, wrote that machine listening is dependent on a sort of fantasy of “aural phrenology: classifications of speakers along lines of mood, personality, body type, race, gender, sexual preference, truthfulness, health status, and any number of other vectors.”(3) He describes it as a “will to datafy.”(4) So what does AI detect when it “listens” to the recording of my father? The 10-second answering message doesn’t contain a lot of data, but I also have a 40-minute StoryCorps interview I conducted with him in 2007. From that, I suppose it could potentially pick up that he was a married, middle-aged white man of German Jewish ancestry born in Washington, D.C., who had a difficult relationship with his father and worked as a lawyer? But it’s the infinite absences in that classification, the intangibles, that made him who he was. The machine won’t detect that the recording for the answering machine was in his “formal” voice. It won’t detect that he was voraciously curious, that he practiced yoga, was charming, wise, and prone to mood swings. That he might have put down the phone and begun to boogie around the room. That he was a man who befriended his barber and the dry cleaner and the fishmonger. I can tell you infinitely more about my father than a grief bot could.


Part of the thrill of producing stories about the past is the quest to track down tape that has never been heard publicly, and to connect with the people who have a relationship to that tape. There are even times when the work of producing historical documentaries can feel transgressive, when we shed light on history that has been suppressed. Getting on the phone to gain a source’s trust, or poring through their home videos while they tell you the stories behind the footage is a key part of the process. Stephanie Jenkins, an archival producer who works with Ken Burns and is the co-founder of the Archival Producers Alliance, describes the art of producing documentaries as “bearing witness” and “having the hard conversations around getting people's consent and negotiating access to their archives, which is often negotiating access to their memories.” If we rely on GenAI to create images or audio of the past rather than talking to primary and secondary sources, we “miss out on a lot of what it means to produce and to explore history,” Jenkins said.

Do machines “bear witness”? As Sterne writes, “in our moment, machine listening is intimately connected to corporate attempts to enclose more and more domains of human interaction, and to state surveillance and authoritarian projects in many parts of the world. Therefore, any theory of the listening in machine listening needs to also be a theory of power.” (5)

GenAI is designed to be extractive. Its ability to produce content is dependent on a massive voice archive amassed through state and corporate surveillance, customer service calls, transcription tools, and the myriad other consensual and nonconsensual ways our voices are recorded and distributed. Our voices are algorithmically decomposed, analyzed, classified, and ultimately put to use. (6) If we use AI tools to generate voices, it's impossible to know the origin of the source material, or to fact-check the algorithms.

AI is a new tool, but the question of what to do when the constraints of archival meet the necessity of storytelling is old.

Just as there are deepfakes today, there were deepfakes in the past; it was commonplace for early documentarians to incorporate reenactments without being transparent with their audiences. As film historian Raymond Fielding writes, “newsreels were compromised from the beginning by fakery, re-creation, manipulation and staging.” (7) Some used actors to impersonate celebrities and politicians, and blended real and staged footage of major events like World War I and the rise of Nazi Germany. Fielding describes how President Franklin D. Roosevelt became annoyed because “he was getting calls and notes from political advisers regarding statements and remarks spoken on the March of Time radio show by the Roosevelt impersonator. These statements reflected Roosevelt’s policies, but, in fact, had never been uttered by the President.” (8) History repeats itself, but the stakes are even higher now. We’re living in an era when deepfakes of FDR could be generated by AI, uploaded to YouTube, scraped by GenAI for training data, and used to create new deepfakes, in an endless feedback loop of fake FDRs. I can’t help but think of the writings of French philosopher Jean Baudrillard. GenAI material is the ultimate “simulacrum”: a copy without an original, a representation that replaces reality altogether.

Techno-futurists contend that AI voice generation is a new, exciting frontier for creativity. “It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive,” said Morgan Neville, director of Roadrunner, a documentary about Anthony Bourdain released after his death. In the film, Bourdain’s AI-generated voice speaks words that he had written, but never spoken in a recorded interview.

Some take it further. Will Sasso and Chad Kultgen of the comedy podcast Dudesy used the audio archives of George Carlin to generate completely new material. They produced an hour-long special “voiced” by Carlin titled George Carlin: I’m Glad I’m Dead! Sasso and Kultgen were transparent about their process with the audience, but they didn’t get permission from Carlin’s estate. Dudesy calls its creation “comedy AI,” but Carlin's daughter Kelly was not amused. She filed a lawsuit and Dudesy ultimately agreed to take the special down.

I wonder what Thomas Edison would think of this—not just preserving the voice beyond the grave, but summoning it from the grave, to give it, golem-like, “life.”

The word listeners and viewers consistently use in describing GenAI ghost voices is “creepy.” It may be the zombie-like timbre of the voices: flat, emotionless—dead sounding. It may also be the fact that hearing synthetic voices—even if it’s transparent and consensual—illustrates the deepfake nature of the world we live in. These voices touch a nerve of anxiety about the media’s potential for deception, fakery and illusion. If anything can be generated, if we can even bring voices back from the dead and manipulate their words, documentation starts to feel meaningless. 

Writing this article, I became concerned about the possibility of GenAI material infiltrating archives and being misclassified as actual historical footage. I reached out to Rick Prelinger, founder of the Prelinger Archives, a collection of over 60,000 "ephemeral" (advertising, educational, industrial, and amateur) films. He was remarkably sanguine, reminding me that it’s the job of archivists to label context and provenance. AI-generated audio and video will inevitably come to the archive, he said, but when they do, “we have to think of them as fiction.” Ultimately we may end up in a stratified world where some archival material is verified and stamped as authentic, and other material is not. But in our current political moment, when the U.S. National Archives are being targeted by the Trump administration, who would have the power and authority to determine authenticity? Furthermore, will people even care?

As journalists and documentarians, it's our job to care. We have to deal with the complications of how to represent the past, because there are real repercussions for collective memory. Maintaining our contract with listeners means it’s imperative to fact-check and be transparent about the origin of our archival material. But beyond the basics of media ethics, humans need to prove they can tell better stories than machines. Let’s lean into what our ears can hear that machines cannot.


We know that AI was trained on our craft—and our tropes—and that gets at a problem in our work: the majority of audio documentaries and podcasts are formulaic. “I’d love to see a greater sense of experimentalism,” Prelinger said.  “We need to get back to the sort of grittiness and irresolution that’s part of the world.” I got the sense that Prelinger is cautiously optimistic in this age of AI. He thinks that all this high tech fakery is going to drive people to seek out verité. I hope he is right.

I understand the impulse of creators to generate material that brings the dead to life. There are so many things that were not recorded, heartbreakingly so. But if we use these tools, we must ensure that AI isn’t filling the gaps in the archive with falsehoods.

There is truth in absence. What if we accept what has been lost, and challenge ourselves to make stories inside of that constraint?

If I don’t have many recordings of my father for my children to hear, I’ll have to tell stories about him. My memories, faulty as they are, become the story. There are so many stories I can spin out of that 10-second voicemail. That archival is material.


1. Edison, Thomas A. "The Phonograph and Its Future." The North American Review, vol. 126, no. 262, May-June 1878, p. 527. American Periodicals.

2. Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Duke University Press, 2003.

3. Sterne, Jonathan. "Is Machine Listening Listening?" Communication +1, vol. 9, no. 1, Oct. 2022.

4. Sterne, Jonathan, and Mehak Sawhney. "The Acousmatic Question and the Will to Datafy: Otter.ai, Low-Resource Languages, and the Politics of Machine Listening." Kalfou, vol. 9, no. 2, Fall 2022, pp. 289-308. Regents of the University of California.

 5. Sterne, Jonathan. "Is Machine Listening Listening?" Communication +1, vol. 9, no. 1, Oct. 2022.

6. Sterne, Jonathan, and Mehak Sawhney. "The Acousmatic Question and the Will to Datafy: Otter.ai, Low-Resource Languages, and the Politics of Machine Listening." Kalfou, vol. 9, no. 2, Fall 2022, pp. 289-308. Regents of the University of California.

7. Fielding, Raymond. The March of Time, 1935-1951. Oxford University Press, 1978, p. 14.

8. Ibid., p. 15.

