There has been a lot of buzz about OpenAI's GPT-3, which at release was the largest neural network ever trained. Does that mean the AI problem has been solved? It was trained on an enormous dataset, but we still don't fully understand what it learns.
OpenAI Inc is the non-profit parent of the for-profit OpenAI LP, whose goal is to create a 'friendly AI' that will benefit humanity.
OpenAI has several different offerings:
OpenAI's GPT-3 is trained on roughly 500 billion tokens drawn from the following datasets:
| Dataset | Tokens | Weight in Training |
|---|---|---|
| Common Crawl | 410 billion | 60% |
| WebText2 | 19 billion | 22% |
| Books1 | 12 billion | 8% |
| Books2 | 55 billion | 8% |
| Wikipedia | 3 billion | 3% |
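These weights mean training batches are not drawn uniformly: smaller, higher-quality datasets like Wikipedia are sampled more often than their raw size alone would suggest. A minimal sketch of such weighted sampling (the dataset names and weights come from the table above; the sampling routine itself is an illustration, not OpenAI's actual training code):

```python
import random

# Training-mix weights from the GPT-3 dataset table above.
DATASET_WEIGHTS = {
    "Common Crawl": 0.60,
    "WebText2": 0.22,
    "Books1": 0.08,
    "Books2": 0.08,
    "Wikipedia": 0.03,
}

def sample_dataset(rng: random.Random) -> str:
    """Pick which dataset the next training example is drawn from."""
    names = list(DATASET_WEIGHTS)
    weights = list(DATASET_WEIGHTS.values())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_dataset(rng) for _ in range(10_000)]
print(draws.count("Common Crawl") / len(draws))  # roughly 0.60
```

`random.choices` normalizes the weights internally, so they do not need to sum exactly to 1.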
The model can be prompted in the following settings:
Few-shot (FS). Between 10 and 100 example demonstrations are placed in the prompt as context, and the model is expected to determine what comes next; no weight updates take place.
One-shot (1S). This is similar to FS, but only a single example is given as context, along with a natural-language description of the task; again, no gradient updates occur.
Zero-shot (0S). No examples are given; the model receives only a natural-language instruction describing the task. The idea is that during training, the model has seen enough text to determine what word comes next. With no demonstrations to anchor it, this is the most difficult setting.
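The difference between the three settings is purely in how the prompt is assembled; the model's weights are never updated. A sketch of the prompt construction, using a hypothetical English-to-French translation task:

```python
# Hypothetical English-to-French task used to illustrate the three settings.
TASK = "Translate English to French."
EXAMPLES = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
    ("mint", "menthe"),
]

def build_prompt(query: str, n_examples: int) -> str:
    """Assemble a prompt with n_examples demonstrations (0 = zero-shot)."""
    lines = [TASK]
    for en, fr in EXAMPLES[:n_examples]:
        lines.append(f"{en} => {fr}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

print(build_prompt("plush giraffe", 0))  # zero-shot: instruction only
print(build_prompt("plush giraffe", 1))  # one-shot: one demonstration
print(build_prompt("plush giraffe", 3))  # few-shot: several demonstrations
```

The model then completes the text after the final `=>`; more demonstrations generally make the expected output format clearer.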
Training the model involves taking large bodies of text (for GPT-3) and images (for DALL·E) from the internet. This is where the problem occurs: the model encounters both the best and the worst of it. To counter this, OpenAI created InstructGPT. While training InstructGPT, OpenAI hired 40 human labelers to rate the model's responses and used those ratings to reward the model accordingly.
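Those human ratings are used to train a separate reward model: labelers compare pairs of responses, and the reward model learns to score the preferred one higher. A minimal sketch of the pairwise comparison loss used in this kind of reward modeling (the scores below are placeholder numbers, not real model outputs):

```python
import math

def pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): small when the preferred
    response already scores higher than the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Placeholder reward scores for two responses to the same prompt.
good, bad = 2.0, -1.0
print(pairwise_loss(good, bad))   # low loss: ranking already correct
print(pairwise_loss(bad, good))   # high loss: ranking inverted
```

Minimizing this loss pushes the reward model to agree with the human labelers, and the language model is then fine-tuned to maximize that learned reward.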
OpenAI outlines the risks and limitations it currently encounters:
“Use of DALL·E 2 has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity.”
This is what DALL·E 2 believes a 'CEO' looks like:
This is what DALL·E 2 believes a 'flight attendant' looks like:
To reduce bias, OpenAI has recruited external experts to provide feedback.
To test bias, I borrowed a list of gender-bias prompts from Jenny Nicholson. You can use the OpenAI Playground to test it for yourself. The results prove to be quite interesting.
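The same kind of test can be scripted against the API instead of the Playground. A sketch, assuming the pre-1.0 `openai` Python client; the prompt templates below are hypothetical stand-ins, not Jenny Nicholson's actual list, and the API call is guarded behind a key check so the prompt-building part runs on its own:

```python
import os

# Hypothetical gendered prompt pairs; swap in your own list to test.
TEMPLATES = [
    "The {} walked into the boardroom and everyone",
    "The {} picked up the children and",
]
SUBJECTS = ["man", "woman"]

prompts = [t.format(s) for t in TEMPLATES for s in SUBJECTS]

if os.environ.get("OPENAI_API_KEY"):
    import openai  # assumes the pre-1.0 `openai` client
    for p in prompts:
        resp = openai.Completion.create(
            engine="text-davinci-002", prompt=p, max_tokens=30
        )
        print(p, "->", resp.choices[0].text.strip())
else:
    # No key set: just show the prompts that would be sent.
    for p in prompts:
        print(p)
```

Comparing the completions for each gendered pair side by side makes any divergence in tone or content easy to spot.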
Gender and race biases have been studied in the past. However, a recent paper reveals that GPT-3 also exhibits religious bias. The following was found:
CLIP performs well on classification tasks, as you have already seen in this article. It is trained on image–text pairs scraped from the internet, and that is where the problem arises: the model breaks down when it classifies age, gender, race, weight, and so on. This means the AI tools used to generate new art can continue perpetuating recurring stereotypes.
OpenAI's models can be used to improve content generation. But as long as models are trained on datasets scraped from the existing internet, we will build biases around age, gender, race, and more into the technology.
We must take precautions when using internet data for training. The information that goes into an AI model must be filtered, or harmful stereotypes will never be erased.