When artificial intelligence teaches itself!

A new study shows that a significant share of the people paid to help train artificial intelligence models are themselves using artificial intelligence to do the work.

Training AI systems properly requires huge amounts of data. Many companies pay workers on platforms such as Amazon Mechanical Turk to perform tasks that are still hard to automate, such as solving CAPTCHA puzzles, labeling data, and annotating text.

This data is then fed into AI models to train them. Workers on platforms like Mechanical Turk are usually paid little and are expected to complete many tasks in a short amount of time.

It is not surprising, then, that some of these workers turn to tools like ChatGPT to work faster and earn more. The question is: how many of them do so?

According to MIT Technology Review, a group of researchers from the École Polytechnique Fédérale de Lausanne (EPFL) recruited 44 people on the Amazon Mechanical Turk platform to summarize medical research articles.

The researchers then ran the responses through a classifier they had built to look for telltale signs of ChatGPT output, such as a lack of variation in word choice. They also examined keystroke data to see whether workers had copied and pasted their answers, which would suggest the text had been composed somewhere else.
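The paper's actual classifier is not reproduced here, but the two signals described above, low lexical variation and copy-paste behavior, can be illustrated with a minimal Python sketch. The function names, the type-token-ratio heuristic, and the thresholds below are illustrative assumptions, not the researchers' actual features or cutoffs.

```python
import re

def type_token_ratio(text: str) -> float:
    """Crude lexical-diversity score: unique words / total words.
    Lower values indicate repetitive, low-variation wording."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_pasted(keystroke_count: int, text: str) -> bool:
    """Heuristic: far fewer logged keystrokes than characters in the
    final answer suggests the text was composed elsewhere and pasted in."""
    return keystroke_count < 0.5 * len(text)

def flag_response(text: str, keystroke_count: int,
                  ttr_threshold: float = 0.45) -> bool:
    """Flag a summary as possibly AI-assisted if its wording shows
    little variation OR it appears to have been pasted (assumed threshold)."""
    return type_token_ratio(text) < ttr_threshold or looks_pasted(keystroke_count, text)

# Example: a short summary submitted with only 10 logged keystrokes
summary = "The study finds the treatment works. The study finds the effect is small."
print(flag_response(summary, keystroke_count=10))  # True (pasted-text heuristic fires)
```

A real detector would combine many such features in a trained model rather than relying on fixed thresholds; this sketch only shows the kind of signals involved.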

The researchers estimate that between 33 and 46 percent of the workers used AI services such as ChatGPT to complete the task. As ChatGPT and other AI systems become more capable and easier to access, that share is likely to grow.

Robert West, an assistant professor at EPFL and one of the authors of the paper describing the research, says he does not believe this marks the end of crowdsourcing platforms; in his view, artificial intelligence will simply change how the work gets done.

Using AI-generated data to train AI will ultimately make models that already make obvious mistakes even more error-prone. Large language models regularly present false information as fact, and if their output feeds back into training data, such errors become much harder to trace and fix in the services built on this technology.

The researchers say new methods are needed to determine whether a given piece of data was produced by a human or by an AI.
