Introduction to Deepfakes in Healthcare
Dr. Joel Bervell, known to his hundreds of thousands of followers on social media as the "Medical Mythbuster," has built a reputation for debunking false health claims online. Earlier this year, however, some of his followers alerted him to a video featuring a man who looked exactly like him. The face was his, but the voice was not. Bervell said he was mostly scared: the video promoted a product he had never endorsed, in a voice that did not belong to him.
The Rise of Deepfakes
The video featuring Bervell’s likeness was an example of a deepfake: synthetic audio or video in which a real person’s face or voice is fabricated or manipulated using artificial intelligence. According to cybersecurity experts, deepfakes are becoming increasingly common and are reaching a growing audience. A recent CBS News review found dozens of accounts and over 100 videos on social media platforms featuring fictional doctors, some of whom were trying to sell products. Most of these videos were found on TikTok and Instagram, and some had been viewed millions of times.
The Dangers of Deepfakes
Many of the videos tried to sell products via independent websites or well-known online marketplaces, often making exaggerated claims. One video claimed a product was "96% more effective than Ozempic." The cybersecurity company ESET has been studying this type of content and found that it is becoming increasingly sophisticated. Martina López, a security researcher at ESET, noted that "regardless of whether it’s some videos that go viral or accounts that gain more followers, this type of content reaches an ever-larger audience."
Social Media Platforms’ Response
CBS News contacted TikTok and Meta, the parent company of Instagram, to clarify their guidelines on deepfakes. Both companies removed the videos featured in the CBS News report, stating that they violated platform guidelines. YouTube also responded, stating that users can request the removal of AI-generated content that realistically simulates them without their permission. However, YouTube said that the videos provided by CBS News did not violate its community guidelines and would remain on the platform.
Red Flags for Identifying Deepfakes
ESET’s Tony Anscombe noted that there are some red flags that can help identify deepfake content, including visual inconsistencies such as flickering, blurred edges, or strange distortions on a person’s face. A voice that sounds robotic or does not match the speaker’s lip movements can be another indicator of AI-generated content. Anscombe advised viewers to be skeptical and question exaggerated claims, such as "miracles" or "guaranteed results," which are common tactics in digital fraud. Some of these visual cues can, in principle, be checked programmatically, as the sketch below illustrates.
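The following is a minimal, hypothetical sketch of one such check: measuring abrupt frame-to-frame brightness jumps as a crude proxy for the "flickering" Anscombe describes. It assumes OpenCV is installed; the function name `flicker_score`, the file path `suspect_clip.mp4`, and the threshold value are all illustrative, not part of any tool mentioned in this report.

```python
# Hypothetical sketch: flag possible frame-to-frame "flicker" in a video
# by measuring abrupt brightness changes between consecutive frames.
# Assumes OpenCV (opencv-python); the threshold is illustrative only.
import cv2
import numpy as np

def flicker_score(video_path: str, threshold: float = 25.0) -> float:
    """Return the fraction of frame transitions whose mean absolute
    pixel difference exceeds `threshold` (a crude flicker proxy)."""
    cap = cv2.VideoCapture(video_path)
    prev_gray, flagged, transitions = None, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            transitions += 1
            # Large jumps in average intensity between adjacent frames
            # can indicate the flickering typical of some deepfakes.
            if np.mean(cv2.absdiff(gray, prev_gray)) > threshold:
                flagged += 1
        prev_gray = gray
    cap.release()
    return flagged / transitions if transitions else 0.0

# A high score suggests a clip deserves closer scrutiny; it does not
# prove the video is AI-generated.
print(flicker_score("suspect_clip.mp4"))
```

A heuristic like this would only flag candidates for human review; real deepfake detection relies on far more sophisticated models, and sophisticated fakes may show no flicker at all.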
Conclusion
Dr. Joel Bervell’s experience highlights the need for vigilance in the online healthcare space. As deepfakes become increasingly sophisticated, viewers need to stay cautious and verify information independently. Bervell noted that deepfake videos like the one featuring his likeness could undermine public trust in medicine, making it harder for people to distinguish fact from fiction. By being aware of the dangers of deepfakes and learning to recognize them, viewers can help build a safer and more trustworthy online environment.