Microsoft Engineer Concern: Disable Copilot Designer tool

One of Microsoft’s engineers has raised serious concerns about the company’s AI image-generation service, claiming that it can produce dangerous and inappropriate content.

Shane Jones, senior director of software engineering at Microsoft, has found that Copilot Designer can produce violent, sexual, and biased images, such as the inappropriate images of Taylor Swift that were recently generated and circulated.

Jones first raised the issue internally with his colleagues, but he did not receive a satisfactory response. He has therefore decided to make the flaws public.

Jones identified these flaws through red teaming, a method for probing a product’s weaknesses. He reported encountering disturbing images, including demons and frightening fantasy creatures paired with terms related to abortion rights, and depictions of teenagers at parties with weapons and drugs.

Jones has urged Microsoft to act immediately. Chief among the measures he proposes is temporarily withdrawing Copilot Designer from users, making the service available again only after the necessary safeguards have been fully implemented and the reported problems fixed.

Jones also pointed to shortcomings in Copilot Designer’s content restrictions, saying the service is not suitable for users of all ages.

Microsoft has not yet responded to all of Jones’s claims. However, a company spokesperson said that Microsoft is committed to addressing its employees’ concerns and appreciates their efforts in testing the company’s technologies.

“There are some concerns and risks that could potentially impact our services and business partners,” Microsoft’s statement reads in part. “Therefore, we have established strong internal reporting channels to investigate and correct potential problems. We encourage employees to use them to address their concerns and issues.”

Google’s AI image generator, Gemini, was recently involved in a similar incident, producing a range of problematic images. This prompted Google to suspend the generation of images of people with the tool.

Concerns about such misuse of artificial intelligence are growing and are at times forcing companies to change their policies and strategies. It remains to be seen what measures the world’s largest companies will take to keep the development of artificial intelligence safe.
