AI-Generated Blog Limitations And Their Workaround
Limitations of AI/machine-learning-generated blogs
Our previous post noted that a machine-learned model can produce text from an input. This can be used to automate blog content generation, keeping content up to date for users while the machines do the heavy lifting. However, the simplistic approach outlined in that post has its limitations: the model isn't fine-tuned to be aware of all of the relevant context, and its output is still crude.
Here’s a real-life example. This week, the top Hacker News thread was "Robinhood Lays Off 23% of Staff". Feeding that one-liner as input to the pre-trained GPT-2 model from the previous blog post produces the following:
Robinhood Lays Off 23% of Staff Recruitment
10.43 42% of Officers Recruitment
09.47 33% of Outfits
20.15 47% of Weapons
18.58 28% of Rifles
15.58 47% of Drones
22.38 20% of Support Recruitment
10.43 32% of Drones
07.28 24% of Ranged Recruitment
12.02 30% of Support Recruitment
12.02 30% of Shotgun Recruitment
8.54 36% of Support Recruitment
18.58 27% of Drones
15.58 41% of Melee Recruitment
11.50 41% of Support Recruitment
09.47 37% of Ranged Recruitment
16.01 36% of Support Recruitment
11.50 38% of Melee Recruitment
15.58 49% of Melee Recruitment
11.50 41% of Weapon Recruitments
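For reference, the kind of generation step described above can be sketched with the Hugging Face `transformers` library. This is a minimal sketch; the model name and sampling parameters here are assumptions, not the exact settings used in the previous post:

```python
from transformers import pipeline

def generate_continuation(prompt, model_name="gpt2", max_length=100):
    """Generate a text continuation for the prompt with a pretrained model."""
    generator = pipeline("text-generation", model=model_name)
    # do_sample=True makes the output non-deterministic,
    # which is why each run produces different text like the example above.
    result = generator(prompt, max_length=max_length, do_sample=True,
                       num_return_sequences=1)
    return result[0]["generated_text"]

# Example (downloads the model weights on first run):
# print(generate_continuation("Robinhood Lays Off 23% of Staff"))
```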
As we can see, not only is the generated content orthogonal to the context of the layoffs, but some of the output categories aren’t even job departments. The generated content is not just inaccurate; it’s misleading and false. The model isn’t sophisticated enough to be aware of financial news conventions. If it had been pre-trained on many articles of a similar nature, the output could come much closer to describing the event in the input text.
The part that confuses the model is the "23%", which draws out relationships to other articles containing numbers and percentages. If we replace it with "quarter" instead, we get something like the following:
Robinhood Lays Off quarter of its staffs had quit and
its stock rose by 3 percentage points. Its total retail sales
declined 7 percent. The U.S. retail share trade was lower at 4
percent.
A CFO explained that "the company was in a position to meet its
shareholder expectations at the moment."
The stock lost its value by 33 percent in the first quarter
of 2015, and was now down 8 percent at over $6.16 an share.
It's unclear how much the stock will return for 2013, when
the stock should return below its current valuation of
about $70 million, according to investor analyst Andrej Nylander.
Sudden loss among industry leaders
For financial services, the Dow Jones Industrial Average index
rose 3 percent, before falling 4 percent.
The S&P 500 closed as high as its all-time high of $46.75 in
its June quarter.
A Bloomberg New Energy Finance survey found that for the
first time, the Dow had lost more than 7 percent of its
value after rebounding 12 percent following the U.S.
government bailout of Lehman Brothers in August 2003.
That's mostly due to a weakening dollar, and a plunge
in domestic oil production.
The S&P 500 has been performing so bad for such a long
time the US Treasury Treasury decided to sell off the
rest of the group.
As we can see, this text is much more relevant to the layoff story. There are obviously fabricated numbers, like the share value and the total market values. However, the model generated a story linked to the financial sector and even noted the relevance to the overall market (the S&P 500) being impacted this year.
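The percentage-to-word substitution used above can be automated as a small preprocessing step. This is a sketch; the bucket boundaries and wording are illustrative assumptions:

```python
import re

def percentage_to_quantile(match):
    """Map a numeric percentage to a rough verbal quantile (assumed buckets)."""
    value = int(match.group(1))
    if value <= 30:
        return "a quarter"
    elif value <= 60:
        return "half"
    elif value <= 80:
        return "three quarters"
    return "nearly all"

def replace_percentages(text):
    """Replace 'N%' tokens in the input text with a verbal quantile."""
    return re.sub(r"(\d+)%", percentage_to_quantile, text)

print(replace_percentages("Robinhood Lays Off 23% of Staff"))
# -> Robinhood Lays Off a quarter of Staff
```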
How to work around the limitations
- Replace percentage values with quantiles where applicable
- Add more context to the input text by capturing both the cause and the effect of the event (e.g., a company layoff due to customers moving to competitors)
- Train the model further on related articles
- Dedicate a model to each blog category so the generated text stays on topic and surfaces more relevant keywords
- Generate more than one output, cherry-pick the relevant sentences, and then modify them to curate the final content
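The last workaround maps directly onto the `num_return_sequences` parameter of the `transformers` generation pipeline, which returns several independent continuations in one call. A minimal sketch, assuming the same base GPT-2 model (the model name, candidate count, and length are assumptions):

```python
from transformers import pipeline

def generate_candidates(prompt, n=5, model_name="gpt2", max_length=80):
    """Return n independent continuations so an editor can cherry-pick."""
    generator = pipeline("text-generation", model=model_name)
    outputs = generator(prompt, max_length=max_length, do_sample=True,
                        num_return_sequences=n)
    return [o["generated_text"] for o in outputs]

# Example: review the candidates by hand and keep the most relevant sentences.
# for i, text in enumerate(generate_candidates("Robinhood lays off a quarter of its staff")):
#     print(f"--- candidate {i} ---\n{text}")
```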