Using artificial intelligence (AI), Michael Bommer, a terminally ill man from Berlin, has created an interactive version of himself—a “griefbot”—designed to interact with his family long after his passing.
This raises ethical questions about the implications of such technology on our understanding of life, death, and the grieving process.
Griefbots
Bommer, diagnosed with terminal cancer, collaborated with Eternos.life, a California-based startup specialising in AI-driven legacy preservation.
The company aims to create “digital psychological twins” of individuals, allowing their loved ones to continue interacting with them after death. Bommer’s digital self is built from extensive data, including his voice recordings, writings, and personal reflections, resulting in an AI model that can answer questions, offer advice, and even generate new responses based on his personality traits and life experiences.
This technological innovation is undeniably groundbreaking, yet it is also controversial. From a technological standpoint, the idea of “living on” through AI offers a new dimension to legacy preservation, allowing descendants to connect with ancestors they never met.
However, the ethical implications are complex. The development of griefbots challenges traditional concepts of mourning and raises questions about the authenticity of interactions with digital surrogates.
The concerns
The ethics of creating and interacting with digital replicas of the deceased revolve around several key concerns. First, there is the issue of consent—while Bommer voluntarily participated in this project, future cases might involve individuals who did not explicitly consent to be recreated in digital form. This raises questions about posthumous rights and whether these digital versions could ever truly represent the will or identity of the deceased.
The psychological impact on the grieving process is another critical consideration. Traditional mourning involves coming to terms with the finality of death, but interacting with a griefbot could disrupt this process.
By offering a simulated form of continued presence, such AI models might delay or complicate emotional closure for the bereaved. The societal implications of normalising AI as a substitute for human interaction are also significant.
As AI technologies continue to evolve, they may increasingly replace genuine human experiences, raising concerns about the erosion of what it means to be human. The possibility of creating and interacting with AI versions of deceased loved ones forces us to confront uncomfortable questions about the value of human life and the ethics of simulating it through technology.