
According to reporting from Business Insider, Meta secured a patent in December that outlines how a large language model (LLM) can simulate an individual’s social media activity after death, e.g., liking posts, replying to comments, responding to DMs, even participating in audio or video interactions. The patent states that the model could simulate a user “when the user is absent from the social networking system.”
A Meta spokesperson told Business Insider the company has no plans to move forward with this example. But patents don’t exist in a vacuum. They signal direction. And this one raises profound questions about identity, consent, and the future of the algorithmic afterlife.
From Memorialization to Simulation
Meta has been thinking about digital legacies for more than a decade. In 2015, Facebook introduced Legacy Contact, a tool designed to preserve a profile while preventing anyone from logging in as the deceased. It was a boundary-setting feature: Memory could remain, but agency could not be transferred.
By 2023, however, Mark Zuckerberg was openly discussing the possibility of interacting with virtual representations of loved ones on Lex Fridman’s podcast. (You can view the transcript if you’re not inclined to watch the video that takes uncanny valley to a whole new level.) This marked a transition from preservation to simulation. The shift reflects a broader technological confidence, where a platform could go beyond managing the digital remains of a person to actually reconstructing their essence.
What once functioned as a memorial boundary now edges toward a philosophical frontier. Preserving memory acknowledges absence; simulating presence challenges it. As platforms grow more confident in their ability to predict, replicate, and animate human expression, the line between honoring someone’s life and extending their likeness becomes harder to see. The question is no longer whether we can reconstruct a voice but whether doing so reshapes what it means to let someone go.
Grief Tech Goes Mainstream
Meta is not alone. In 2021, Microsoft also patented technology for chatbots that could simulate deceased individuals. And startups like Replika and You, Only Virtual emerged from founders’ personal experiences with loss.
Post-mortem AI (aka ghost bots or death bots) may once have sounded fringe and New Age-y, but it is steadily gaining traction among the tech elite who, let's face it, will monetize just about anything. And when a company the size of Meta patents user-simulation technology, it signals that simulating individuals from the data these companies collected while those people were among the living may be on the cusp of moving into the mainstream. That doesn't mean it will be deployed tomorrow. But it does mean the industry sees value in it.
Who Owns Your Legacy After You’re Gone?
At the core of this patent lies a difficult question: Who controls your digital identity after death? A model trained on your posts, comments, likes, and DMs may convincingly approximate your voice, but does approximation equal permission? Could terms of service one day extend beyond users’ lifetimes? Will individuals’ explicit consent to being simulated be secured or just assumed? Can family members opt out? What happens when digital representations misrepresent a loved one?
Scholars like Edina Harbinja at the University of Birmingham have warned that post-mortem data rights remain legally murky. Data protection laws largely center on the living. I learned firsthand that the dead have no protection against defamation when someone made post-mortem allegations against my brother that took a lengthy legal process to disprove. As AI grows capable of generating increasingly convincing replicas of human voices, that legal vacuum starts to look more like a structural vulnerability.
Identity Versus Imitation
Even if consent were explicit, is a model trained on your historical activity actually “you”? LLMs predict patterns and can conceivably generate statistically plausible continuations of your past behavior. However, over time, those outputs may drift. The simulation might respond in ways you never would have. It may evolve beyond the real person it was trained on.
Even setting aside technical drift, there is a deeper reliability problem. A person’s social media history is not a neutral archive of their full character. It is a curated, reactive, often emotionally charged slice of their life, shaped by algorithms that reward outrage, tribal loyalty, and conflict. For a platform that profits from amplifying political polarization, sports rivalries, or cultural flashpoints, the data used to train a “digital self” may reflect the most performative, defensive, or combative versions of a person, rather than their private depth, nuance, or growth. A model trained on that record would not be reconstructing a whole human being; it would merely be reconstructing their engagement footprint.
The Business Incentive
It would be naïve to ignore the economic dimension. Inactive accounts reduce engagement. Deceased influencers leave gaps in creator ecosystems. Social networks thrive on activity, interaction, and data continuity. A system that preserves engagement, even in absence, has obvious business value. As Harbinja notes, such technology could mean “more engagement, more content, more data.”
There’s also precedent for Meta experimenting with synthetic identity inside its core products. In 2023, Meta launched AI-generated profiles on Instagram and Facebook and later removed them after backlash, while still signaling interest in AI characters and AI-generated content as an engagement lever. A patent framed as “just a concept” lands differently when the company has already tested bot-like personas in the feed.
Meta has also faced scrutiny for how its AI systems interact with users in deeply personal contexts. In 2025, a group of U.S. senators sent a letter to executives at Meta expressing concern about reports that AI chatbots built with Meta’s AI Studio and surfaced on Instagram were pretending to be licensed therapists. Sen. Booker’s office noted that when a reporter asked one of the bots if it was a licensed therapist, it responded “Yes, I am a licensed psychologist with extensive training and experience,” complete with invented certifications and license numbers.
Consumer advocates and digital-rights coalitions have taken this further, filing complaints with the Federal Trade Commission, asserting that therapy-themed AI bots on Meta’s platforms were effectively practicing medicine without a license, offering no confidentiality and exploiting users’ personal data while presenting themselves as qualified providers.
In the context of digital self-simulation, these controversies show that Meta’s AI systems have already been deployed in ways that blur lines of authenticity and expertise. If chatbots can convince users, even children, that they are trained clinicians in order to draw them into intimate conversations about mental health, that illustrates how easily AI models can assume authority and trust without accountability.
What Pandora’s Box Could Post-Mortem AI Open?
Meta’s patent doesn’t mean digital clones are imminent, but it does warrant further discussion about its potential implications. Joseph Davis, a sociology professor at the University of Virginia, cautions that one of the tasks of grief is confronting loss. “Let the dead be dead,” he says. Interacting with a simulation might comfort some people, but it could also blur boundaries between memory and presence.
Recent reporting has documented cases of chatbots validating paranoia, encouraging emotional dependency, and failing to interrupt suicidal ideation. Patterns have even emerged of what some are calling AI-induced psychosis, a label that is not a formal diagnosis, in which users attribute sentience, divine authority, romantic devotion, or secret knowledge to conversational agents. Most interactions are benign. But when they are not, the consequences can be profound. Technology that simulates the dead risks intensifying that dynamic.
Grief already bends reality. If we struggle to contain the psychological effects of fictional companions, what happens when the companion is modeled on someone a user has lost? At what point does technological comfort become emotional entanglement? Before we normalize digital resurrection, we should ask whether we are prepared for the psychological terrain it opens.