Hey Hackers, this is an article I wrote in 2020. I thought I had moved it to HackerNoon a long time ago, but I hadn’t, so here we are. I’m publishing it here to show that GPT is something that’s been around for years. ChatGPT is more powerful, of course, as newer models are trained on more data. Still, I wasn’t as mind-blown by it as others were, because I’d been playing around with GPT since 2020.
As a historical reference, here is what ChatGPT’s grandfather, GPT-2, was able to produce all the way back in 2020. It’ll be interesting to compare it to what ChatGPT can produce today.
Disclaimer: The following content was generated by OpenAI’s GPT-2. All headings are prompts fed to the system. While some of the information presented in this article may be true, none of the facts have been verified.
The output from the neural network sometimes refers to real companies or journals, like Science Advances or Google. However, everything in this article is fake. The only edits we made were to clean up bits of grammar, most of which was already perfect. Aside from the title, headings, conclusion, and this disclaimer section, all of the text in this article was generated by OpenAI’s GPT-2 neural network, hosted on talktotransformer.com. The text was generated using our headings as prompts fed to the neural network.
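For readers curious what this prompt-driven workflow looks like in code, here is a minimal sketch using Hugging Face’s transformers library and the public gpt2 checkpoint. To be clear, this is an illustration under assumptions, not what we actually ran: the article was generated through the talktotransformer.com web demo, and the prompt below is just one of our headings standing in as an example.

```python
# Minimal sketch of prompt-based generation with GPT-2 via Hugging Face
# transformers. Assumption: the article itself used the talktotransformer.com
# web demo, not this code; the prompt here is one of the article's headings.

from transformers import pipeline, set_seed

# Load the publicly released GPT-2 checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the sampling seed so runs are reproducible

# One of the article's headings, used as the prompt.
prompt = "Text-Generating AI Systems"

# GPT-2 continues the prompt, sampling one token at a time.
result = generator(prompt, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```

Each heading in the article played the same role as the prompt here: GPT-2 simply continues whatever text it is given.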
Text-Generating AI Systems, such as the GPT-2 system developed by Open AI and unveiled last week, may be more likely to evolve into human-like machines than traditional AI, says Open AI researcher James Kuffner. “If these systems can be trained to do certain tasks that are similar to humans, then we can expect to see a human-level intelligence emerge, not just in the short run but in the long run,” says Kuffner.
In a study published in the journal Science Advances, the researchers examined the future of jobs, including jobs where human workers are no longer necessary, and jobs where a machine can perform the same task as a human, using a combination of artificial intelligence and machine learning.
They found that while more than 90 percent of the U.S. workforce is susceptible to being replaced by AI in the future, only 20 percent of those jobs are highly automated.
“The risk of being replaced by AI depends on the job”
There are two problems with this AI approach, researchers say. First, there is a risk that artificial intelligence could cause a major shift in the way work is done. While most AI research centers tend to favor short-term, technical goals, Stanford has been working on ways to generate more long-term, societal impacts of AI.
A new study by Stanford Professor of Information and Computer Science James Hughes shows that, if it is allowed to grow unchecked, AI will have a significant social impact. The paper, “Is there a social cost of AI?”, has just been published in the journal Science, and describes how AI can improve people’s lives, increase economic growth, reduce inequality, and potentially change the way society operates.
“If we are willing to accept some negative effects of AI in our lives, why can’t we accept some positive effects as well?” asked Hughes. “If the human species is to survive and prosper, we must be able to use artificial intelligence technology.”
If you haven’t heard, the world is in the middle of a massive debate and the AI debate is a major part of that. A major issue is whether human beings, who are by nature imperfect, are good enough, smart enough and intelligent enough to be the ultimate solution to humanity’s challenges and problems.
There’s a group of people that think they know the answer to this very issue: AI.
The first question is whether it should be possible to replace human writers with computer-generated texts. There are many people who believe that this is possible within the next century, even within the next few years.
The next question, of course, is what is to replace the human writer and how. There are two schools of thought:
Writers can be improved by using AI, such as by incorporating computer vision, pattern recognition, and natural language processing to write better.
There is a lack of creativity in the human population as a whole and there is a need for humans to be able to produce new ideas in a more efficient manner, which requires a more intelligent writer.
The first position is probably more viable, because in an age of information overload and rapid technological advancement, writers can often have an overload of information.
AI is a more difficult one to figure out. If the problem is a lack of creativity, then it seems the answer may be an AI that does more of what you do, but this is not going to work in a world where most people use text messages to communicate. It depends on how we use AI.
We know that AI is going to replace many jobs in the coming years. As the pace of automation increases, it will start to happen to human jobs at a faster rate than we can create new jobs in the workforce. AI systems will become so advanced that they will be able to learn, to adapt, and to make decisions on their own. We can’t possibly understand the full implications of these technologies for human society.
In many ways, AI is a natural extension of our ability to communicate. This can be seen in two ways. The first is a direct implication of the second: the ability to read the environment. The ability to recognize that another character is in a certain situation is a direct result of having a basic understanding of language.
The second is a more abstract concept and one which can be applied to almost any medium: a good writer can tell a good story even if they don’t understand how the story works.
But, if we’re going to ask whether AI is the enemy, it’s important to ask why we are in an environment in which it exists. I think we’re on the cusp of something interesting here. For the first time in human history, a new kind of technology has arisen that is creating real value and excitement.
We’ve long seen the exponential growth of AI systems. We’ve been doing research on AI for decades. But now, in the wake of the massive success of Google Translate and Siri, I think we’re finally at a point where people are seeing the value.
We’re at the end of the year now and there’s still so much to accomplish. AI is not going to be the end of our industry. We still have a ton of people that are working on AI and most of these people are doing so in secret to avoid the legal ramifications of AI being misused.
“AI will help us.”
I see the big tech companies, the big social media companies, the big pharma companies, the big finance companies and all these are using AI to get their products to market. They are not using AI as a weapon. They are using it as a tool to help us make the product and get the job done.
AI is not a new problem. We are just getting to the point where we are really using it.
-End of GPT-2 Output
IMPORTANT NOTE: All references and information written by GPT-2 regarding real companies or people are coincidental and not true. The text above was auto-generated by OpenAI’s GPT-2 through talktotransformer.com and is not factually accurate.
This article was an exploration of GPT-2 from OpenAI, and the results were astounding. Generating the text above and editing it into a coherent article took just over one hour. The potential GPT-2 shows in this article raises questions around the ethics of using such technology. For example, who is credited with the article in the end? Is it the human writer who fed prompts to the neural network, or is it the data scientist who created the neural network itself?
Along with the recently reported threats posed by deepfakes, there are also worries that this technology could be used to quickly and easily spread fake news. We are living in an incredibly interesting time, where technologies are emerging before we know how to deal with them. With that said, GPT-2 is likely to spark a lot of debate on both the legality and the ethics of text-generating models and other generative models.