<?xml version="1.0" encoding="UTF-8"?><rss
    xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Meta | Approach to Labeling AI-Generated Content and Manipulated Media</title>
        <link>https://www.iaaglobal.org/index.php/news/meta-approach-to-labeling-ai-generated-content-and-manipulated-media</link>
        <description></description>
        <language>en-gb</language>
        <lastBuildDate>Fri, 19 Apr 2024 10:26:42 +0000</lastBuildDate>
        <atom:link type="application/rss+xml" rel="self" href="https://www.iaaglobal.org/index.php/news/meta-approach-to-labeling-ai-generated-content-and-manipulated-media"/>
        <image>
            <title>Meta | Approach to Labeling AI-Generated Content and Manipulated Media - IAA Global</title>
            <url>https://timesofindia.indiatimes.com/photo/507610.cms</url>
            <link>https://www.iaaglobal.org/index.php/news/meta-approach-to-labeling-ai-generated-content-and-manipulated-media</link>
        </image>
        <item>
            <title>Meta | Approach to Labeling AI-Generated Content and Manipulated Media</title>
            <description>&lt;img border="0" hspace="10" align="left" style="margin-top:3px;margin-right:5px;" src="https://iaaglobal.s3.amazonaws.com/bulk_images/662246e29e3e3.png" /&gt; 
    
    Takeaways
    
    - We are making changes to the way we handle manipulated media based on feedback from the Oversight Board and our policy review process with public opinion surveys and expert consultations.
    - We will begin labeling a wider range of video, audio and image content as “Made with AI” when we detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.
    - We agree with the Oversight Board’s recommendation that providing transparency and additional context is now the better way to address manipulated media and avoid the risk of unnecessarily restricting freedom of speech, so we’ll keep this content on our platforms so we can add labels and context.
    
    We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on feedback
        from the Oversight Board that we should update our approach to reflect a broader range of content that exists
        today and provide context about the content through labels. These changes are also informed by Meta’s policy
        review process that included extensive public opinion surveys and consultations with academics, civil society
        organizations and others.
    We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos
        that are created or altered by AI to make a person appear to say something they didn’t say. Our manipulated
        media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was
        about videos. In the last four years, and particularly in the last year, people have developed other kinds of
        realistic AI-generated content like audio and photos, and this technology is quickly evolving. As the Board
        noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.
    The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated
        media that does not otherwise violate our Community Standards. It recommended a “less restrictive” approach to
        manipulated media like labels with context. In February, we announced that we’ve been working with industry
        partners on common technical standards for identifying AI content, including video and audio. Our “Made with AI” labels on
        AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or
        people self-disclosing that they’re uploading AI-generated content. We already add “Imagined with AI” to
        photorealistic images created using our Meta AI feature.
    We agree that providing transparency and additional context is now the better way to address this content. The
        labels will cover a broader range of content in addition to the manipulated content that the Oversight Board
        recommended labeling. If we determine that digitally-created or altered images, video or audio create a
        particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent
        label so people have more information and context. This overall approach gives people more information about the
        content so they can better assess it and so they will have context if they see the same content elsewhere. 
    We will keep this content on our platforms so we can add informational labels and context, unless the content
        otherwise violates our policies. For example, we will remove content, regardless of whether it is created by AI
        or a person, if it violates our policies against voter interference, bullying and harassment, violence and
        incitement, or any other policy in our Community Standards. We also have a network of
            nearly 100 independent fact-checkers who will continue to review false and misleading AI-generated
        content. When fact-checkers rate content as False or Altered, we show it lower in Feed so fewer people see it,
        and add an overlay label with additional information. In addition, we reject an ad if it contains debunked
        content, and since January, advertisers have to disclose when they digitally create or alter a political or social issue ad in certain
        cases.
    We plan to start labeling AI-generated content in May 2024, and we’ll stop removing content solely on the basis
        of our manipulated video policy in July. This timeline gives people time to understand the self-disclosure
        process before we stop removing the smaller subset of manipulated media.
    Policy Process Informed By Global Experts and Public Surveys
    In Spring 2023, we began reevaluating our policies to see if we needed a new approach to keep pace with rapid
        advances in generative AI technologies and usage. We completed consultations with over 120 stakeholders in 34
        countries in every major region of the world. Overall, we heard broad support for labeling AI-generated content
        and strong support for a more prominent label in high-risk scenarios. Many stakeholders were receptive to the
        concept of people self-disclosing content as AI-generated.
    A majority of stakeholders agreed that removal should be limited to only the highest risk scenarios where content
        can be tied to harm, since generative AI is becoming a mainstream tool for creative expression. This aligns with
        the principles behind our Community Standards – that people should be free to express themselves while also
        remaining safe on our services.
    We also conducted public opinion research with more than 23,000 respondents in 13 countries and asked people how
        social media companies, such as Meta, should approach AI-generated content on their platforms. A large majority
        (82%) favor warning labels for AI-generated content that depicts people saying things they did not say. 
    Additionally, the Oversight Board noted their recommendations were informed by consultations with civil-society
        organizations, academics, inter-governmental organizations and other experts.
    Based on feedback from the Oversight Board, experts and the public, we’re taking steps we think are appropriate
        for platforms like ours. We want to help people know when photorealistic images have been created or edited
        using AI, so we’ll continue to collaborate with industry peers through forums like the Partnership on AI and
        remain in a dialogue with governments and civil society – and we’ll continue to review our approach as
        technology progresses.
</description>
            <link>https://www.iaaglobal.org/index.php/news/meta-approach-to-labeling-ai-generated-content-and-manipulated-media</link>
            <guid>https://www.iaaglobal.org/index.php/news/meta-approach-to-labeling-ai-generated-content-and-manipulated-media</guid>
            <pubDate>Fri, 19 Apr 2024 10:26:42 +0000</pubDate>
            <enclosure type="image/png" url="https://iaaglobal.s3.amazonaws.com/bulk_images/662246e29e3e3.png"></enclosure>
        </item>
    </channel>
</rss>