Grammarly’s parent company Superhuman faces criticism for an AI feature that provides writing feedback framed through the perspective of noted experts, including deceased academics. The “Expert Review” agent, part of the Superhuman Go suite, uses public works to generate suggestions. Academics have questioned the ethics of using scholars’ identities without explicit consent, calling the practice “morbid” and “obscene,” and warning it could deepen skepticism about AI in education.
An AI feature from Grammarly's parent company, now rebranded as Superhuman, is drawing criticism for using the identities of living and deceased scholars. The “Expert Review” feature provides writing feedback framed through the perspective of specific experts, a system one medieval historian labeled “morbid.”
Launched last summer, the tool suggests experts based on a user’s text and provides AI-generated feedback inspired by their published work. A company spokesperson stated that the agent points users toward influential voices whose scholarship they can explore.
The spokesperson explained that experts appear because their published works are publicly available and widely cited. The system does not claim endorsement or direct participation from the individuals it references.
Academics have raised ethical concerns about the practice. Vanessa Heggie, a professor at the University of Birmingham, questioned on LinkedIn whether reviewers gave consent, calling the use of dead academics’ names and reputations “obscene.” Former associate professor Brielle Harbin wrote that the choice risks deepening skepticism about AI tools in higher education.
Grammarly is not alone in creating AI that mimics real people. In 2023, Meta released chatbots based on celebrity identities, and Khan Academy launched an AI tutor allowing conversations with historical figures like Winston Churchill and Harriet Tubman.