Facebook Announces New App That Could Make It Easier to Spread Hoaxes
Meta has launched a new web-based AI image generator called Imagine with Meta. The website allows users to create images by describing them in natural language.
Similar to OpenAI's DALL-E, Midjourney, and Stable Diffusion, Imagine with Meta is powered by Meta's Emu image-generation model and can create high-resolution images from text prompts.
For now, the Imagine with Meta service is free for users in the U.S. and generates four images per prompt.
“We are pleased to hear how people are using Imagine, Meta AI’s text-to-image generation feature, to create fun and creative content in chats. Today, we are expanding access to Imagine beyond chats,” Meta wrote in a blog post, as quoted by TechCrunch on Thursday (7/12/2023).
“While our messaging experience is designed for more playful, back-and-forth interaction, you can now also create images for free on the web,” the statement continued.
However, some observers worry that the image generator could plunge the company back into an old problem: racial bias. Another open question is whether Imagine with Meta will include safeguards such as watermarking.
Watermarks will not be included at launch, but Meta has promised to start adding them to content generated by Imagine with Meta in the coming weeks to improve transparency and traceability.
“[The watermark] is resistant to common image manipulations such as cropping, resizing, color changes (brightness, contrast, etc.), screenshots, image compression, noise, sticker overlays, and more,” Meta said in its post.
“We aim to bring invisible watermarking to many of our products with AI-generated images in the future,” the company added.
Watermarking techniques for generative art are not new. A French startup, Imatag, offers a watermarking tool that it claims is unaffected by resizing, cropping, editing, or compression.
Another company, Steg.AI, uses an AI model to apply watermarks that can withstand resizing and other edits.
Meanwhile, Microsoft and Google have adopted their own standards and watermarking technologies for generative AI content.
But pressure is mounting on technology companies to make clear when such works are produced by AI, especially given the abundance of deepfakes related to the war in Gaza and the circulation of AI-generated child abuse imagery.
Recently, the Cyberspace Administration of China issued regulations requiring generative AI vendors to mark generated content, including text and images, in ways that do not interfere with users’ use of that content.
And in a recent U.S. Senate committee hearing, Senator Kyrsten Sinema (I-AZ) stressed the need for transparency in generative AI, including the use of watermarks.
Another issue is the potential spread of disinformation through AI services that create images from prompts: such tools can render events that never happened as if they were real, further blurring the line between reality and fiction. This raises concerns that AI-generated images could be passed off as visual evidence or used to manipulate public opinion. It is crucial for policymakers and technology companies to develop robust authentication methods and guidelines for responsible use, and to establish safeguards against the misuse of generative AI technology.