OpenAI now allows users of its artificial intelligence image generation system, DALL-E, to edit human faces in photos. The feature was previously blocked over concerns about potential abuse, but it has been opened to the public after OpenAI improved its filters for detecting and removing sexual and violent content, according to a letter sent to the AI’s millions of users.
DALL-E’s new feature allows users to edit images in a variety of ways. It is now possible to upload personal photos and make changes to them; for example, users can alter details such as clothing or hairstyles. This capability will undoubtedly be useful for many professionals in fields such as photography and filmmaking.
In its letter to users, OpenAI said:
With our security enhancements, DALL-E now supports image editing, minimizing the risk of damage caused by deepfakes.
According to The Verge, the decision to add image editing to DALL-E came after ongoing discussions between the creators of the AI and its users, a process in which the potential risks of the technology were examined. As a well-funded company, OpenAI has taken a relatively cautious approach, mindful of the strong relationships it has built with tech giants like Microsoft.
Meanwhile, competitors like Stable Diffusion have overtaken OpenAI in this area by imposing fewer restrictions on their users. Such conditions accelerate the development of AI-related technologies, but they also increase the possibility of malicious use. For example, Stable Diffusion is currently being exploited to create sexual deepfakes of celebrities.
OpenAI has reduced the potential for abuse of DALL-E by imposing restrictions. Its terms of use instruct users not to upload images of people without obtaining their consent. However, no content-filtering system is perfect, and some people may still use these technologies for harmful purposes.