OpenAI, a key player in the AI field, has announced a series of measures aimed at preventing malicious use of its tools, motivated in part by the upcoming election year in the United States. Among them, the company is experimenting with an image provenance classifier, which should help protect the integrity of the electoral process.

AI-generated imagery is flooding the world of visual media. This trend holds immense potential, for better or for worse, and OpenAI, the company behind ChatGPT, DALL·E, and more, is one of the most prominent players in the field. 2024 may shape up to be among the most influential years for AI, but it is also an election year in the United States. As AI-generated imagery becomes more accessible and harder to distinguish from authentic, photon-based imagery, a conflict is emerging: as much as generative algorithms contribute to the democratization of visual storytelling, they pose a major threat to the authenticity of visual information.

OpenAI prospects for 2024

In a recent blog post, the company outlined some of its tactics and strategies for combating fraudulent conduct around elections worldwide. It promises new tools that will try to prevent impersonation. (The company was less successful last year with a tool that was supposed to identify AI-generated writing, which was quietly shut down.) Abuse of personalized persuasion is also on the roadmap, as OpenAI tries to determine how potent its tools may be in this area. These actions are mostly relevant to written content...
Published By: CineD - Tuesday, 30 January, 2024