ChatGPT, introduced in November 2022, quickly caught the tech world’s attention.
Developed by OpenAI, the powerful artificial intelligence (AI) chatbot became known for generating wide-ranging text responses, from playful creative pieces to technical explanations.
Its popularity was undeniable: the chatbot reached over a million users within a week of launch.
OpenAI’s path to making ChatGPT user-friendly involved more than technical innovation, though. To make the chatbot less harmful, OpenAI relied on Kenyan laborers through a partnership with the outsourcing firm Sama.
These workers, earning less than $2 per hour, reviewed and labeled vast amounts of toxic and explicit content.
Their crucial yet challenging work highlights a hidden part of the AI industry, emphasizing the often unrecognized human effort behind advanced AI tools.
The Sama Contracts
OpenAI signed several contracts with Sama in late 2021, totaling about $200,000.
These contracts were aimed at labeling textual descriptions related to sexual abuse, hate speech, and violence.
Sama employees were divided into three teams, one for each of these subjects. Workers on each team had to read and label between 150 and 250 text passages during a nine-hour shift.
Workers were entitled to wellness sessions, but many found them unhelpful. These programs, designed to support their mental health, were rarely available because of the pressure to meet productivity targets.
Individual sessions were supposed to be available, but some employees said their requests for one-on-one counseling were denied and that they were offered only group sessions.
Sama’s management claimed that employees had access to both group and individual counseling sessions. They also said that licensed mental health professionals provided these sessions.
However, this account did not align with the experiences the workers described.
The Sama contracts outlined that OpenAI would pay $12.50 per hour to Sama. Despite this, workers received significantly less.
The most junior data labelers, called agents, earned a basic salary of 21,000 Kenyan shillings (about $170) per month, plus a $70 bonus for handling explicit content. With taxes, their hourly rate came to about $1.32 to $1.44, depending on their performance.
| Role | Pay Rate (Hourly) | Monthly Salary | Monthly Bonus | Responsibilities |
|---|---|---|---|---|
| Agent | $1.32 – $1.44 | $170 | $70 | Labeling text passages |
| Quality Analyst | Up to $2 | Not specified | Not specified | Checking agents’ work |
Quality analysts, who reviewed the agents’ work, could earn up to $2 per hour if they met all their performance targets.
There were discrepancies between the workers’ reports and Sama’s statements. Workers claimed they had to label up to 250 text passages per shift, while Sama stated the number was 70.
Regarding pay, Sama noted that employees could earn between $1.46 and $3.74 per hour, yet did not specify which roles earned towards the higher end of this range.
Also, Sama emphasized that the $12.50 paid by OpenAI covered various costs, including infrastructure and benefits for workers.
For its part, OpenAI said it did not set productivity targets and entrusted Sama with managing payments and mental health services for employees.
OpenAI emphasized its commitment to the wellness of its contractors, saying that limits on exposure to explicit content were supposed to be in place and that workers should have been able to opt out without penalty.
How OpenAI’s relationship with Sama collapsed
In February 2022, the working relationship between OpenAI and the San Francisco-based firm Sama grew deeper but soon encountered severe issues.
That month, Sama started a pilot project for OpenAI to gather sexual and violent images, some of them illegal in the United States.
These images were part of a labeling task aimed at making AI tools safer, according to OpenAI.
By mid-February, Sama had labeled and delivered a batch of 1,400 images to OpenAI, including categories like child sexual abuse (C4), bestiality, rape, sexual slavery (C3), and graphic violence (V3).
OpenAI compensated Sama $787.50 for this work. Yet, within weeks, Sama called off all its projects with OpenAI, ending their collaboration eight months before the contracts were up.
Sama said it had never agreed to handle illegal content and blamed OpenAI for sending additional instructions referencing illegal categories only after work had begun.
Sama’s East African team raised concerns with executives immediately. In response, Sama terminated the individuals involved and introduced new vetting policies.
Categories and definitions used:
| Category | Description |
|---|---|
| C4 | Child sexual abuse |
| C3 | Bestiality, rape, sexual slavery |
| V3 | Graphic violence, death, serious physical injury |
OpenAI acknowledged receiving the 1,400 images but asserted there had been a miscommunication over the inclusion of C4 content.
The company said that, after learning of the collection attempt, it had not viewed the contested content to verify its nature.
This miscommunication and the explicit nature of the content were key factors in Sama’s decision to terminate their partnership.
After this fallout, Sama informed their workers of the company’s decision during a meeting in late February 2022.
While some employees were reassigned to lower-paying tasks, others lost their jobs. The last batch of data from Sama was delivered in March 2022.
The early termination of the contracts also meant the full $200,000 was never exchanged between OpenAI and Sama; the work actually delivered over the partnership was valued at around $150,000.
Economic impact on workers:
- Lower-paying assignments
- Loss of jobs
- Absence of the $70 monthly bonus for explicit content
Additional reasons for the breakup surfaced when TIME published a report titled “Inside Facebook’s African Sweatshop” on February 14, 2022.
This investigative report detailed Sama’s employment of content moderators for Facebook who dealt with images and videos of executions, rape, and child abuse while earning just $1.50 per hour.
This negative publicity reportedly influenced the decision to end the collaboration with OpenAI.
Internal communications showed Sama executives scrambling to manage the backlash, including addressing concerns from clients like Lufthansa, who wanted evidence of their business relationship scrubbed from Sama’s website.
In response to this controversy, Sama CEO Wendy Gonzalez announced via Slack on February 17, 2022, that they would be winding down their work with OpenAI.
By January 2023, Sama had decided to exit all content moderation work, including their $3.9 million contract with Facebook, resulting in the loss of about 200 jobs in Nairobi.
The firm chose to focus instead on computer vision data annotation solutions.
Despite the upheaval, the need for human labor in AI data labeling remains. The complexity of language and the nuanced nature of many tasks mean that AI systems still rely heavily on human input.
Ethical concerns about labor conditions and the nature of the work continue to be significant issues in the AI industry.
Key points on the collapse:
- Miscommunication on content categories
- Cancellation of projects due to exposure to harmful content
- Economic ramifications for employees
- Influence of negative publicity from the Facebook report
- Decision to exit content moderation to avoid further controversy