When photorealistic images are created with Meta's AI features, the company says it takes several steps to make clear that AI is involved, including applying visible markers on the images as well as embedding invisible watermarks and metadata within the image files. “Using both invisible watermarking and metadata in this way improves both the robustness of these invisible markers and helps other platforms identify them,” said Clegg.
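To illustrate the metadata half of that approach, here is a minimal sketch of how an AI-disclosure tag can be embedded in an image file's metadata using Pillow. This is only an illustration of the general idea, not Meta's actual implementation, which relies on invisible watermarking and industry metadata standards; the tag names used here are hypothetical.

```python
from io import BytesIO
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in image (in practice this would be the generated image).
img = Image.new("RGB", (64, 64), color="gray")

# Hypothetical disclosure tags embedded as PNG text metadata.
meta = PngInfo()
meta.add_text("ai_generated", "true")        # disclosure flag
meta.add_text("generator", "example-model")  # hypothetical tool name

buf = BytesIO()
img.save(buf, format="PNG", pnginfo=meta)

# A downstream platform can read the tag back from the saved file:
buf.seek(0)
reloaded = Image.open(buf)
print(reloaded.text.get("ai_generated"))  # prints "true"
```

Plain metadata like this is easy to strip, which is why the approach Clegg describes pairs it with invisible watermarks baked into the pixels themselves.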
Meta is also adding a feature that lets people disclose when they share AI-generated video or audio so the company can label it. Meta will require people to use this disclosure tool when they post organic content with photorealistic video or realistic-sounding audio that was digitally created or altered, and it may apply penalties if they fail to do so. “If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” said Clegg.








