Fans of Black Sabbath singer Ozzy Osbourne have criticised musician Tom Morello after he shared an AI-generated image of the rock star, who died this week at the age of 76.
Osbourne bid farewell to fans earlier this month with a Black Sabbath reunion show in the British city of Birmingham.
His death prompted tributes from fans and fellow musicians, among them Morello's post, which sparked anger among users on X.

The backlash over the stylised image – which also depicted the late rock stars Lemmy, Randy Rhoads and Ronnie James Dio – centred on what many saw as an exploitative and unsettling trend, with users questioning the ethics of sharing such visuals so soon after Osbourne's death.
It is the latest flashpoint in a growing debate: when does using AI to recreate someone's likeness cross the line from tribute to invasion of privacy?
While the tools behind these hyper-realistic images are evolving rapidly, the ethical frameworks and legal protections have not yet caught up.
Deepfakes and grief in the digital age
Using AI to recreate the dead or the dying, sometimes referred to as "grief tech" or "digital resurrection", is becoming increasingly common, from fan-made tributes to celebrities to "griefbots" that simulate the voice or personality of a lost loved one.
In one example of grief tech, Canadian Joshua Barbeau used Project December, a GPT-3-based chatbot created by Jason Rohrer, in September 2020 to recreate conversations with his dead fiancee, eight years after her death.
The chatbot's responses were so convincing that the simulated fiancee "said" things like: "Of course it is me. Who else could it be? I am the girl that you are madly in love with."
Mental health experts warn that such recreations can profoundly affect the grieving process.
"The predictable and comforting responses of AI griefbots can create unrealistic expectations for emotional support, which could impact a person's ability to build healthy relationships in the future," said Carolyn Yaffe, a cognitive behaviour therapist at Medcare Camali Clinic in Dubai.
"Some people find comfort and a sense of connection through them. In contrast, others might face negative effects, like prolonged denial, emotional pain, or even feelings of paranoia or psychosis."
Interacting with AI likenesses can blur the lines between memory and reality, potentially distorting a person's emotional recovery, Ms Yaffe said. "These tools may delay acceptance and create a space where people stay connected to digital surrogates instead of moving on," she added. "Grief doesn't fit into neat algorithms."
Lack of legal safeguards
There is limited legal protection against these practices. In the Middle East, specific laws around AI-generated likenesses are still emerging.
Countries including the UAE and Saudi Arabia address deepfakes under broader laws related to cyber crimes, defamation, or personal data protection. But there are still no clear regulations dealing with posthumous image rights or the AI-based recreation of people.
Most laws focus on intent to harm, rather than on consent or digital legacy after death.
In the UK, for example, there are no posthumous personality or image rights. Some states in the US, including California and New York, have begun to introduce limited protections, while others do not offer any.
In China, draft legislation has begun to address AI deepfakes.
Denmark, however, has been a pioneer on the issue, proposing a law that would grant people copyright-like control over their image, voice and likeness.
The legislation, expected to pass this year, would allow Danish people to demand the removal of unauthorised deepfake content and seek civil damages, even posthumously, marking the first time such protections would be implemented in Europe.
"Copyright does not protect someone's appearance or voice," said Andres Guadamuz, a reader in intellectual property law at the University of Sussex. "We urgently need to reform image and personality rights to address unauthorised AI depictions, particularly for vulnerable individuals, including the deceased or critically ill, where dignity, consent, and misuse risks are paramount."
Consent, culture and control
Ethical concerns about recreating the image or voice of someone who is critically ill or dead go beyond legal frameworks.
Arda Awais, co-founder of UK-based digital rights collective Identity 2.0, believes that, even when AI tributes are made with good intentions, they carry significant risks.
"Even with consent from the deceased, there could be ways a likeness is used which might not be 100 per cent in line with someone's wishes, too. Or how it's use evolves," Ms Awais said.
She added that a one-size-fits-all approach may not be practical across different cultures, emphasising the need for more inclusive and diverse conversations when establishing ethical standards.
While some families or individuals may welcome AI tributes as a way to preserve memories, others may view them as exploitative or harmful, particularly when they involve celebrities, whose images are frequently recycled without permission.
"Grief is such a personal experience," Ms Yaffe said. "For some, griefbots might provide a moment of relief. But they should be seen as a bridge, not the final destination."
Experts warn that AI should never replace the emotional labour of mourning or the human connections that aid the healing process. "AI-generated responses can completely miss the point, not because the technology is harmful, but because it lacks the essential quality that grief requires – humanity," Ms Yaffe said.