As the generative artificial intelligence (AI) industry grows rapidly, experts say more oversight is needed of the data used to train these systems. Italy and Canada have recently launched investigations into OpenAI’s AI-powered chatbot, ChatGPT, over concerns about data privacy. ChatGPT and similar AI products from Microsoft and Google are trained on data scraped from the internet, but the companies rarely make clear exactly what information those datasets contain.

Experts argue that greater transparency is needed, and some governments and organizations have called for a pause on new generative AI projects. OpenAI has published a blog post outlining its approach to safety and accuracy, acknowledging that “some” of its training data includes personal information while maintaining that it does not use that information to track users or target them with advertising.

There is little transparency around what data companies use to train AI systems like ChatGPT. OpenAI says it draws on a “broad corpus” of data, including licensed content, content generated by human reviewers, and content publicly available on the internet. The people who originally produced that content, however, may never have consented to its use for training AI.

As AI products advance, stronger regulations are needed, along with more conversations with stakeholders, including ordinary people who encounter AI in their daily lives. Proposed laws in Canada and the European Union aim to tighten the rules on how tech companies use personal data and to protect users from potential risks.

