Google DeepMind has released SynthID, a tool that watermarks AI-generated images so that detection tools can reliably identify them. The watermark is embedded directly in the image's pixels, is imperceptible to the human eye, and remains intact even when the image is cropped or resized.

DeepMind CEO Demis Hassabis believes that building systems to identify and detect AI imagery is crucial, especially given the rise of deepfakes and an approaching contentious election season. SynthID will initially be available to Google Cloud customers who use the Vertex AI platform and the Imagen image generator; as the system is tested and improved, Google plans to make it available in more places. Hassabis envisions SynthID becoming an internet-wide standard, potentially extending to other media formats such as video and text.

The details of how SynthID works are being kept private to deter attempts to defeat the watermark. The launch is part of an industry-wide effort to develop AI detection tools and protect against malicious use of AI technology. Hassabis acknowledges that staying ahead of attackers will be an ongoing process of constant improvement. Ultimately, SynthID aims to address concerns about deepfakes and give businesses a way to verify the authenticity of AI-generated images.
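DeepMind has not published SynthID's algorithm, so the sketch below is only a generic illustration of the underlying idea: a keyed, imperceptible pixel-level watermark that a matching detector can later verify. This toy version overwrites each pixel's least-significant bit with a keyed pseudorandom pattern (changing pixel values by at most 1), which is far simpler and far less robust than SynthID; in particular, unlike SynthID, it would not survive cropping or resizing. All function names and parameters here are invented for illustration.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int = 42) -> np.ndarray:
    """Toy watermark: overwrite each pixel's least-significant bit with a
    keyed pseudorandom pattern. Illustrative only -- not SynthID's method."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    # Clear the LSB of every channel value, then set it from the keyed pattern.
    return (image & np.uint8(0xFE)) | pattern

def detect_watermark(image: np.ndarray, key: int = 42,
                     threshold: float = 0.9) -> bool:
    """Regenerate the keyed pattern and measure how well the LSBs agree.
    An unwatermarked image matches a random pattern ~50% of the time."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    match = np.mean((image & 1) == pattern)
    return match >= threshold

# Example: a random "image" gains a watermark no pixel-level diff > 1 reveals.
img = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_watermark(img)
print(detect_watermark(marked))  # True: LSBs match the keyed pattern
print(detect_watermark(img))     # False: agreement hovers near 50%
```

A production-grade watermark would spread the signal redundantly across spatial frequencies rather than individual bits, which is what lets schemes like SynthID tolerate cropping, resizing, and recompression.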