    OpenAI paid Kenyan workers less than $2 an hour to review extremely graphic and toxic content

    This report is based on an investigation by Time Magazine, which found that the Kenyan workers were tasked with helping build a filter system intended to make ChatGPT more user-friendly and suitable for everyday use.

    The job entailed reading deeply disturbing material on abominable subjects such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.

    The report reads in part, “The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

    To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”
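    The premise the report describes is a standard supervised-learning setup: a classifier is trained on human-labeled toxic snippets and then used to screen text before it reaches the user. As a rough, purely illustrative sketch (not OpenAI's actual detector), the Python example below trains a tiny classifier on hypothetical labeled snippets and withholds any output it flags; the example data, model choice, and threshold are all assumptions.

        # Minimal sketch of a toxicity filter trained on labeled text snippets.
        # Purely illustrative: the example data, model, and threshold are assumptions,
        # not OpenAI's actual system.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical labeled examples (1 = toxic, 0 = benign).
        snippets = [
            "a graphic description of violence",    # toxic
            "a friendly answer about cooking",      # benign
            "hateful slurs aimed at a group",       # toxic
            "an explanation of how rainbows form",  # benign
        ]
        labels = [1, 0, 1, 0]

        # Train a simple bag-of-words classifier on the labeled snippets.
        detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
        detector.fit(snippets, labels)

        def filter_output(text, threshold=0.5):
            """Return the text only if the detector scores it below the toxicity threshold."""
            p_toxic = detector.predict_proba([text])[0][1]
            return text if p_toxic < threshold else "[content withheld by safety filter]"

        print(filter_output("an explanation of how rainbows form"))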

    The AI company hired Sama, a training-data firm that specializes in annotating data for artificial-intelligence algorithms. Sama in turn employed the Kenyan workers on OpenAI's behalf.

    According to Time Magazine, the workers were paid between $1.32 and $2 per hour, depending on seniority and performance. The gap between the nature of the work and the pay makes the job look exploitative, even as that work feeds billion-dollar industries.

    Time’s report was based on graphic accounts from Sama employees, who described some of the horrific material they came across while working on the OpenAI project.

    An extract from the report reads, “The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.”

    Source:
    www.pulse.com.gh
