Generative AI has already penetrated social media and will continue to shape it for decades to come. Information ecosystems, journalists, content creators, and marketing agencies will find new ways to research, generate, distill, and distribute content in novel and engaging ways. These micro-to-macro uses of generative AI will have a substantial impact on social media platforms both large and small.
In this article, we highlight several key risks examined in the essay titled 'How to prepare for the deluge of generative AI on social media'.
Of all the risks, disinformation stands out as the most serious. The creation and distribution of disinformation poses a significant threat to society's wellbeing and national security. Although the risks are qualitatively understood, their impact has yet to be quantitatively analyzed and explained by experts.
The threat of disinformation is not new, but in conjunction with generative AI, its impact on society is bound to be amplified.
AI-specific remediations for combating disinformation, such as watermarking, data provenance, and filtering to detect AI-generated content, may be barking up the wrong tree; experts have argued that these measures are unlikely to be effective on their own. The proposed alternative is for society to push for fact-checking. But this approach has a problem of its own: models are trained on data shaped by historical and cognitive biases, and it is difficult to differentiate between what appears to be true and what actually is true.
Disinformation is only one malicious source of threat; there are others to keep in mind. The diagram below is a good reference point for the major categories of generative AI risks on social media.
This article is part of a project examining algorithmic amplification and distortion and exploring ways to minimize their harmful effects. On the subject of generative AI on social media, it provides a grounded analysis of both the challenges and the opportunities.