In recent years, AI-powered writing has boomed, making it possible to generate pages of more or less coherent text in seconds. All you need to do is choose a writing tool, write a prompt with some details about the task, launch the generation, and wait a bit.
Sounds cool, but reality, as you may know, turned out to be much harsher. Without skilled guidance and significant editing, these one-shot AI-generated pieces come out vague, generic, and devoid of valuable insights.
Does this mean we're doomed either to edit every piece manually or to accept poor content quality?
My answer is NO! Keep reading to see for yourself how I approach humanizing AI text and enhancing its quality without making a single manual edit.
Of course, I edit my texts manually, too. My mission here is to show that, without manual editing, it's possible to achieve far higher quality from AI-generated text than you might think. Even so, I strongly recommend allocating enough time for editorial work.
So, I did the following:
The main rule was not to change a single word manually. At the same time, I could hit "Regenerate" as many times as needed.
The text I selected was the latest piece published on the corporate blog of an online academy registered in the USA. The main reason for choosing it was that all of their texts were created with minimal human effort, which is obvious at first glance.
An article called “Celebrating Milestones: Do Online High Schools Have Graduation Ceremonies?” covers graduation ceremonies in online high schools. It aimed to demystify virtual commencements, a significant leap from traditional ceremonies, yet faltered in its execution.
The text suffered from a lack of direction and clarity: paragraphs meandered without purpose, claims floated unanchored by any supporting data, and sentences echoed each other in monotonous structure. The formatting was a disjointed array of numbered lists that disrupted the flow, generic assertions clouded the narrative, and abrupt transitions jolted the reader from one idea to the next.
And the cherry on top was the pile of typical AI clichés you probably know well. Below, I'll share my vocabulary of these words and phrases, which should be avoided in any case.
As a result, I got several introduction paragraphs, totaling a bit under 200 words:
As you can see in the screenshot above, my zoo is relatively big, currently covering 17 custom GPTs for different purposes.
Only two of them are public; the others are strictly for internal use. This is because it turns out to be too easy to break them and extract their knowledge. Maybe someday OpenAI will fix this issue, and I hope they will.
Now, let's walk through all the steps of my experiment and review each in detail.
This first step is part of a bigger, more complex one. The idea is to form a comprehensive list of refinements based on various factors, combine them, find the required data, make the refinements specific, and then implement them in a single run.
For this step, I used
For E-E-A-T evaluation, I have a tool named SzonKonery. If you're wondering why the heck I named it in such a weird way, I'll just say the name makes sense only to me.
As a result, SzonKonery flagged ethical considerations to protect the content's integrity and suggested ways to infuse the narrative with authoritative evidence, compelling examples, and language that resonates.
The next step is to merge the results of the two previous evaluations. It's an easy step, required for the evaluation that follows.
The merged list is still missing important supporting data that underscores different aspects of the article: statistics, examples, or sources for the highlighted factual claims.
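The merge step itself is mechanical. Here is a minimal sketch of how two evaluation outputs could be combined and de-duplicated; the function name and the sample list contents are illustrative, not the actual GPT outputs:

```python
def merge_refinements(*evaluations):
    """Combine refinement lists from several evaluations,
    dropping duplicates while preserving order."""
    seen = set()
    merged = []
    for evaluation in evaluations:
        for item in evaluation:
            key = item.strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return merged

# Illustrative outputs from the two evaluations
quality_eval = ["Add supporting statistics", "Vary sentence structure"]
eeat_eval = ["Vary sentence structure", "Cite authoritative sources"]

merged = merge_refinements(quality_eval, eeat_eval)
```

The shared item ("Vary sentence structure") survives only once, so the next evaluation works from a clean list.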
For this purpose, I have another tool named ProofSupplier. The tool uses the Bing browsing feature to search for relevant sources, statistics, studies, and expert insights, integrating them into the draft, enriching the content, and lending it both depth and credibility.
Once the data is gathered, it's time to generate specific refinements. Another thing I did at this stage was correct the rigid AI formatting.
In this symphony of AI efficiency, the draft risked losing its human touch. My Human Among Robots tool comes into play here, meticulously weaving in natural phrasing and ensuring the text resonates with warmth and relatability.
The humanization process is often repetitive. One of the crucial features of Human Among Robots is randomness, a hallmark of human writing. That's why the result is always unpredictable, often requiring several runs and then picking the best-fitting one.
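The "several runs, pick the best" loop can be sketched as a tiny heuristic. This is not the actual selection logic of Human Among Robots, just an illustrative proxy: sentence-length variance ("burstiness") as a crude signal of human-like rhythm:

```python
import re
from statistics import pvariance

def burstiness(text):
    """Variance of sentence lengths (in words): a crude proxy for
    the rhythm variation typical of human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pvariance(lengths) if len(lengths) > 1 else 0.0

def best_take(takes):
    """Pick the regenerated take with the most varied sentence rhythm."""
    return max(takes, key=burstiness)
```

In practice I judged the takes by eye, but a scoring function like this makes the repetitive part of the loop explicit.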
Below you can compare the initial text with takes 1, 2, and 3.
What about the AI text detector: will the refined text pass it or not? Let's find out.
For testing, I use Originality.ai, which showed the highest accuracy among similar services.
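Originality.ai also exposes an API, so the check can be scripted instead of pasted by hand. A minimal sketch follows; note that the endpoint URL and the payload/header/score field names are my assumptions from their public API docs, not verified here:

```python
import json
import urllib.request

# ASSUMPTION: endpoint and header names per Originality.ai's public API docs.
API_URL = "https://api.originality.ai/api/v1/scan/ai"

def build_scan_request(text, api_key):
    """Assemble the HTTP request for an AI-detection scan."""
    payload = json.dumps({"content": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"X-OAI-API-KEY": api_key, "Content-Type": "application/json"},
    )

def passes_as_human(ai_score, threshold=0.5):
    """Interpret the detector's AI-probability score (0..1)."""
    return ai_score < threshold
```

Sending the request with `urllib.request.urlopen` and feeding the returned score into `passes_as_human` gives a scriptable pass/fail for each take.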
First, a check of the initial piece of text. Failed:
Take 1 was only marginally better:
Take 2 showed the same 100% AI score as the initial text:
But this is not the end! I took the last output before humanization, fed it to Claude 2 along with the guidelines from my Human Among Robots tool, and prompted it to humanize the text. You can find the result below.
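The hand-off to Claude 2 is essentially one prompt that bundles the guidelines with the draft. Here is a minimal sketch of that assembly; the wording is a placeholder, not the actual Human Among Robots instructions:

```python
def build_humanize_prompt(guidelines, draft):
    """Assemble a humanization prompt: style guidelines first,
    then the task instruction, then the draft itself."""
    return (
        "You are an editor. Follow these style guidelines strictly:\n"
        f"{guidelines}\n\n"
        "Rewrite the text below so it reads as natural human writing, "
        "preserving every fact:\n\n"
        f"{draft}"
    )
```

The same prompt works with any chat model's API; only the client call around it changes.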
The only thing remaining is to check whether it passes the AI text detector. It did!
Below is a brief analysis of what was changed in the text, and how.
Before: The initial draft presented a disconnected narrative, "As the landscape of education continues to evolve, students exploring online high school options often wonder about a significant milestone: graduation ceremonies." It set the scene but failed to engage the reader meaningfully.
After: Post-refinement, the narrative transformed, "The way we learn is evolving, and so is our celebration of academic milestones. Online high school students often ponder: 'Will we experience graduation ceremonies?'" This revision not only captures attention but also invites the reader into an immersive discussion.
Before: The draft wandered through abstract notions, "Let’s explore the possibilities, traditions, and unique aspects of celebrating academic achievements in the digital realm."
After: The refined content offers clarity and depth, "Yes, the celebrations exist, and they're more impactful than one might imagine. Virtual ceremonies symbolize a significant leap, embracing modern educational strides and challenges." It directly addresses the reader's curiosity, enriching the narrative with meaningful insights.
Before: The draft blandly addressed misconceptions, "Many believe online high schools lack traditional graduation ceremonies."
After: The revision confronts these misconceptions with a compelling narrative, "It's a myth that online schools skimp on graduation celebrations. They do celebrate but with a unique twist." It not only corrects the misconception but also enriches the narrative with a distinctive perspective.
The goal of this article was to show that decent-quality AI-generated text isn't a myth. For some, it's an unreachable height; for others, it's quite achievable. For me, it was a challenge, but now it's a target for further optimization.
Along with a bunch of custom tools, I have proven processes and know-how, which I've combined into a comprehensive editorial outsourcing service. I offer strategy and development of TA-focused content super-pillars: faster than your in-house team, cheaper than your wildest dreams.
Feel free to connect with me, I have some valuable surprises for you!