**Digital Afterlife Industry**: Cambridge Researchers Warn of Psychological Dangers of ‘Deadbots’ AI

Cambridge, UK – Artificial intelligence technology that lets users hold conversations with deceased loved ones through chatbots may pose psychological risks, according to researchers from the University of Cambridge. These chatbots, termed ‘Deadbots’ or ‘Griefbots,’ mimic the language patterns and personality traits of the deceased based on their digital footprint, raising concerns about emotional distress and broader ethical questions in the digital afterlife industry.

The study, published in the journal Philosophy & Technology, outlines scenarios in which companies could misuse deadbots to deliver advertising or distressing messages, and warns that firms could exploit users’ grief by having deadbots spam survivors with unsolicited notifications. The researchers argue that ethical design standards and user-consent protocols are needed to guard against such abuse.

Dr. Katarzyna Nowaczyk-BasiƄska, a co-author of the study, emphasizes that ethical considerations must prioritize the dignity of the deceased and that financial motives should not be allowed to override respectful treatment in deadbot services. The rights of both data donors and the users who interact with AI afterlife services should be protected to ensure the technology is used ethically.

Existing services such as ‘Project December’ and ‘HereAfter’ already offer AI recreations of the deceased for a fee, pointing to a growing digital afterlife industry. The study introduces hypothetical scenarios such as ‘MaNana’ to illustrate how deadbots could be misused without the consent of data donors, and argues that design protocols are needed to prevent disrespectful uses of the technology.

Dr. Tomasz Hollanek, another co-author of the study, highlights the emotional vulnerability of users, who may develop strong bonds with AI simulations and become susceptible to manipulation. The researchers advocate age restrictions on deadbots and transparent disclosure so that users remain consistently aware they are interacting with an AI.

The study also explores scenarios such as a hypothetical company called ‘Paren’t,’ in which a terminally ill woman creates a deadbot to help her son through the grieving process, raising questions about the ethics of offering such services to people coping with loss. The researchers call for thoughtful design processes that prioritize user consent and emotional closure.

As the digital afterlife industry continues to evolve, the researchers stress the importance of addressing the social and psychological risks of digital immortality. They urge design teams to implement opt-out protocols that let users end their relationships with deadbots in ways that provide emotional closure, underscoring the need for responsible applications of generative AI in this space.