Microsoft Worker Warns AI Tool Creates ‘Sexually Objectified’ Images, Urges FTC and Lawmakers to Take Action

REDMOND, Washington – Microsoft software engineer Shane Jones has raised concerns that the tech giant’s AI image generation tool, Copilot Designer, can produce harmful and offensive content. Jones says he discovered a vulnerability in OpenAI’s DALL-E model, which is integrated into Microsoft’s AI tools, that allows the generation of abusive and violent images.

In a letter addressed to Microsoft’s board, lawmakers, and the Federal Trade Commission, Jones criticized the company for failing to put adequate safeguards in place to prevent Copilot Designer from creating harmful content. He urged Microsoft to withdraw the tool from public use until those safeguards are implemented.

Jones said Copilot Designer randomly generates inappropriate and sexually objectified images, along with content involving political bias, underage drinking, drug use, misuse of trademarks, conspiracy theories, and religion. His concerns reflect a broader pattern of AI tools producing harmful and disturbing content.

The Federal Trade Commission confirmed it received the letter but declined to comment further. Microsoft said it is committed to addressing employee concerns and improving the safety of its technology. OpenAI did not respond to requests for comment.

Jones also contacted members of Microsoft’s Environmental, Social and Public Policy Committee, stressing the importance of transparently disclosing AI risks to consumers, particularly when products are marketed to children. He argued that companies should voluntarily disclose the known risks of their AI products.

Jones has also taken his concerns to lawmakers, urging Democratic Senators Patty Murray and Maria Cantwell and Representative Adam Smith to investigate the risks of AI image generation technologies and to examine corporate governance practices around how such products are developed and marketed.

Transparency and consumer awareness around AI risks remain pressing issues as companies work to safeguard against harmful content generated by their AI tools.