Workers read violent graphic depictions so that ChatGPT wasn’t as toxic as GPT-3

Upon its release in November 2022, ChatGPT was widely recognized as a groundbreaking technological achievement. The advanced AI chatbot can produce text on a wide variety of subjects, from reworking an old Chinese proverb into Gen Z vernacular to explaining quantum computing to children through allegory and storytelling. Within just one week, it had amassed over a million users.

However, the success of ChatGPT cannot be attributed solely to the ingenuity of Silicon Valley. An investigation by TIME revealed that, in an effort to reduce toxicity in ChatGPT, OpenAI relied on outsourced Kenyan workers who earned less than $2 per hour.

The work performed by these outsourced laborers was crucial for OpenAI. GPT-3, ChatGPT's predecessor, could already compose fluent sentences, but it had a tendency to spew out violent, sexist and racist statements. The problem is that the model was trained largely on text scraped from the internet, home to both the best and worst of human expression. Access to such a massive amount of human writing is the reason GPT-3 appeared so capable, but it is also the reason for the model's equally deep biases.

The dark secret behind ChatGPT’s training

Getting rid of these biases and the harmful content that inspired them was no easy task. Even with a team of hundreds of people, it would have taken decades to sift through every piece of training data and verify whether it was appropriate. The only way OpenAI could lay the groundwork for a less biased and less offensive ChatGPT was to build a new AI-powered safety mechanism.

However, to train that AI-powered safety mechanism, OpenAI needed a human labor force, and it found one in Kenya. As it turns out, to create a mechanism for detecting harmful content, you need a vast library of harmful content on which to train it; that is how it learns what counts as acceptable and what counts as toxic. In hopes of building a non-toxic chatbot, OpenAI began sending tens of thousands of text snippets to an outsourcing company in Kenya in November 2021. A significant portion of the text appeared to have been sourced from the darkest corners of the internet, and included graphic descriptions of depraved acts.
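To make that idea concrete, the underlying technique is supervised text classification: human-labeled examples teach a model to separate toxic text from acceptable text, and the trained model can then screen new content automatically. The sketch below is not OpenAI's system; it is a minimal, hypothetical illustration using an off-the-shelf scikit-learn pipeline and invented example snippets.

```python
# Toy illustration of the general idea: a classifier learns to flag harmful
# text from human-labeled examples. This is NOT OpenAI's actual safety system
# (which uses far larger neural models and vastly more labeled data); it is a
# minimal sketch with made-up snippets and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets: 1 = toxic, 0 = acceptable.
snippets = [
    "I will hurt you if you say that again",
    "People from that country are all criminals",
    "Thanks for the help, this explanation was great",
    "Quantum computers use qubits instead of classical bits",
]
labels = [1, 1, 0, 0]

# Turn text into TF-IDF features and fit a simple linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(snippets, labels)

# The trained model can then score new text before it reaches users.
print(classifier.predict(["You are all worthless and should disappear"]))
```

The principle is the same at any scale: the quality of the filter depends entirely on the quality and breadth of the human-labeled examples, which is exactly why the labeling work described here was so valuable to OpenAI.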

These snippets were then analyzed and labeled by the workforce in Kenya, who were sworn to secrecy and kept quiet largely out of fear for their jobs. The data labelers hired on behalf of OpenAI were paid between $1.32 and $2 per hour, depending on experience and performance.

OpenAI’s stance was clear from the beginning: ‘Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content.’ However, the toll the task took on the Kenyan workers was only recently uncovered by TIME. Describing the graphic and depraved content he had to label, one labeler said: ‘That was torture, you will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.’

The impact on the workers was so severe that in February 2022 the outsourcing firm, Sama, cancelled all of the work it had been hired by OpenAI to complete, eight months before the contract was due to end.

This story highlights the seedy underbelly of the technology we are so excited by today. Countless invisible workers perform often unimaginable tasks to ensure that AI works the way we expect.

