OpenAI, the maker of ChatGPT, has announced tools to combat disinformation ahead of elections due this year in countries including the United States, India, and Britain. The company will not allow its technology to be used for political campaigning, and it is working on tools to attach reliable attribution to text generated by ChatGPT. Its image generator, DALL-E 3, has "guardrails" that prevent users from generating images of real people, including candidates.

The announcement follows steps revealed last year by US tech giants Google and Facebook parent Meta to limit election interference, particularly interference involving AI. The World Economic Forum has warned of the risks of AI-driven disinformation and misinformation, and experts say such disinformation is fuelling a crisis of trust in political institutions. OpenAI says its goal is to ensure that its technology is not used to undermine the democratic process.