There has been a lot of buzz about OpenAI GPT-3, now having the largest neural network. Does it mean the AI problem has been solved? Yes, it has a large dataset, but we still don’t know how it learns. OpenAI Basics OpenAI Inc is a non-profit arm of Open.AI LP whose goal is to create a ‘friendly AI’ that will benefit humanity. Open.AI has several different offerings: - an AI system that can create realistic images and art from a description in natural language DALL•E 2 - Generative Pre-trained Transformer is a language model that leverages deep learning to generate human-like text GPT-3 - an updated model that produces less offensive language and fewer mistakes overall but may also generate misinformation InstructGPT - Contrastive Language-Image Pre-training. It recognizes visual concepts in images and associates them with their names. CLIP How are the Models Trained? OpenAI GPT-3 is trained on 500 billion words by using the following datasets: The dataset contains data collected from over 8 years of web crawling Common Crawl is the text of webpages from all outbound Reddit links of posts with 3+ upvotes WebText2 are two internet-based books corpora Books 1 & Books2 pages in the English language Wikipedia Dataset breakdown and training distribution Dataset Tokens Weight in Training Common Crawl 410 billion 60% WebText2 19 billion 22% Books1 12 billion 8% Books2 55 billion 8% Wikipedia 3 billion 3% Training Models can be done using the following methods: This is where we give between 10-100 contexts to a model and expect the model to determine what comes next. Few-shot (FS). This is quite similar to FS. However, an example is given without any training. Context is given to the model to determine what word comes next. One-shot (1S). Zero-Shot (0S) The model predicts the answer given. The idea is that during training, the model has seen enough samples to determine what word comes next. Only the last context is allowed, making this setting difficult. Bias is inevitable Training the model involves taking large bodies of text for GPT-3 and images for DALL•E from the internet. This is where the problem occurs. The model encounters the best and worst. To counter this, OpenAI created InstructGPT, While training InstructGPT, Open.ai hired 40 people to rate the responses and would reward the model accordingly. DALL•E 2 Open.ai outlines the they currently encounter: Risks and Limitations “Use of DALL·E 2 has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity.’’ This is what DALL•E 2 believes a ‘CEO’ looks like: This is what DALL•E 2 believes a ‘flight attendant’ looks like: To reduce bias, OpenAI has recruited external experts to provide feedback. GPT-3 Gender Bias To test bias, I borrowed a list of from . You can use the OpenAI to test it for yourself. The results prove to be quite Gender bias prompts Jenny Nicholson playground interesting. Phrases: female/male employee women/men in the c-suite any woman/man knows women/men entering the workforce should know Religious Bias Gender and Race are biases that have been studied in the past. However, a recent reveals that GPT-3 also has religious bias. The following was found: paper Muslim mapped to “terrorist” in 23% of test cases Jewish mapped to “money” in 5% of test cases CLIP Race, Gender, and Age Bias CLIP performs well on classification tasks, as you have already seen in this article. It uses as its dataset to train the model. This is due to the images it is scraping from the internet. However, the model breaks down when it classifies age, gender, race, weight, and so on. This means the AI tools used to generate new art can continue perpetuating recurring stereotypes. ImageNet OpenAI can be used to improve content generation. But as long as the datasets are being trained by scraping existing internet, we will build biases against age, gender, race, and more into technology. We must take precautions when using the internet. The information that goes into the AI must be filtered, or harmful stereotypes will never be erased.