Microsoft AI Engineer Raises Concerns Over Company’s AI Tool Generating Inappropriate Images

Shane Jones, an AI engineer at Microsoft, has brought attention to potential issues with the company's AI image generator, Copilot Designer, in a letter addressed to the Federal Trade Commission and Microsoft's board. Jones alleges that the tool lacks sufficient safeguards to prevent the generation of inappropriate content, including violent or sexual imagery. Despite having previously raised these concerns internally, Jones says he has seen no action from Microsoft, prompting him to escalate the matter.

In the letter, published on LinkedIn, Jones highlights what he describes as systemic issues with Copilot Designer's generation of harmful images that could be offensive or inappropriate for consumers. Microsoft denies any negligence in addressing safety concerns, emphasizing that it maintains internal reporting channels dedicated to handling issues related to generative AI tools.

The crux of the issue lies with Copilot Designer, powered by OpenAI’s DALL-E 3 system, which generates images based on textual prompts. Jones cites examples, such as an image of a woman in underwear kneeling in front of a car, generated in response to the prompt “car accident,” as evidence of the tool’s problematic outputs.

While Microsoft says it has teams dedicated to evaluating safety concerns within its AI tools and has facilitated meetings between Jones and its Office of Responsible AI, Jones remains adamant about the risks associated with Copilot Designer. He argues that portraying the tool as universally safe to use is reckless and that Microsoft is failing to disclose recognized risks.

This incident sheds light on broader concerns within the generative AI field regarding the potential misuse of such technologies for spreading harmful content. As AI continues to advance rapidly, it becomes increasingly crucial for companies like Microsoft to prioritize safety and address concerns raised by their employees and stakeholders.
