How news publishers can use AI tech like ChatGPT without upsetting Google


More publishers have begun experimenting with automation to produce content after OpenAI’s chatbot ChatGPT went viral.

But publishers should be wary of how they use AI if they don’t want to displease Google. And ChatGPT, which went live at the end of November, has said itself in an automated conversation that it “cannot replace human journalists”.

It emerged this week that personal finance site Bankrate and tech news and reviews site CNET have both begun using AI to produce content.

The former tells readers that content published under the “Bankrate” byline is “generated using automation technology”.

The website adds: “A dedicated team of Bankrate editors oversees the automated content production process — from ideation to publication. These editors thoroughly edit and fact-check the content, ensuring that the information is accurate, authoritative and helpful to our audience.”

Bankrate’s sister site Creditcards.com is similarly using AI under the byline “CreditCards.com Team”.


Meanwhile CNET’s experiment was first widely revealed by marketing and SEO expert Gael Breton and then The Byte on Wednesday. The website has subsequently published an explanation of why it decided to try publishing 75 money articles using automated technology since November.

Editor in chief Connie Guglielmo wrote: “Conversations about ChatGPT and other automated technology have raised many important questions about how information will be created and shared and whether the quality of the stories will prove useful to audiences.

“We decided to do an experiment to answer that question for ourselves.”

Guglielmo said their goal was to find out whether an AI engine could “efficiently assist” their journalists “in using publicly available facts to create the most helpful content so our audience can make better decisions”.

The AI tool has been either writing stories or gathering information for them, but they are always “reviewed, fact-checked and edited by an editor with topical expertise” before publication, she added.

This week, following the revelation of its use of the tech, CNET changed the relevant byline to CNET Money and made the disclosure easier to find. It reads: “This story was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.”

Guglielmo said: “We’ll continue to assess these new tools as well to determine if they’re right for our business. For now CNET is doing what we do best – testing a new technology so we can separate the hype from reality.”

A Press Gazette webinar in October heard mixed views about how open publishers should be about their use of automation. UK news agency PA, which has used its Radar (Reporters and Data and Robots) service since 2017 for localised data stories, does not typically announce the involvement of automation in its articles whereas US local publisher McClatchy uses bylines to tell the reader.

PA editor-in-chief Pete Clifton said the knowledge “might really unnerve the readership” while McClatchy’s vice president for audience growth and content monetisation Cynthia DuBose said she thought openness had helped her group’s sites perform well on Google.

“We have not seen any penalisation… Google wants [automated content] to be identified, which we do, and we feel we do very well – with the bot byline, with the footer that we have on the bottom, and also [making sure it’s] not repetitive,” she said.

How could the use of ChatGPT impact Google visibility?

SEO experts concerned by the use of AI by publishers asked Google’s in-house expert how it would affect their search visibility.

Google’s search liaison Danny Sullivan said on Twitter it depends on the quality and intent of the content. He said: “…content created primarily for search engine rankings, however it is done, is against our guidance. If content is helpful and created for people first, that’s not an issue.”

He has previously said that using 100 journalists to crank out copy aimed at boosting Google rankings would have the same effect as using something like ChatGPT for the same purpose. Google has been prioritising “original, helpful content written by people, for people” since August when it introduced its helpful content update.

For many years Google has followed guidelines known as “E-A-T” – meaning its goal is to ensure its search results offer users expertise, authoritativeness, and trustworthiness (as outlined in this piece on SEO tips for publishers). In December it added an extra “E” to the start, standing for experience.

Luke Budka, director of PR and SEO at B2B agency Definition, told Press Gazette this made transparency about the human editing process of AI content especially important for the likes of Bankrate and CNET. He described the addition as “an obvious way for Google to combat AI-generated copy. It makes what Google has previously said about author ‘reconciliation’ even more important – consolidation of expertise signals to a single author profile denoting experience.”

Google’s guidelines on spam content show the need for care: they penalise content that has been “generated programmatically without producing anything original or adding sufficient value; instead, it’s been generated for the primary purpose of manipulating search rankings and not helping users”. This can include text that was “translated by an automated tool without human review or curation before publishing” or “generated through automated processes without regard for quality or user experience”.

Overall, though, “in reality it’s fairly simple… Google wants quality, trustworthy, original content, written by bona fide experts, for its index, because this is what users want,” Budka said. “This is what will keep users coming back to Google.

“The question is not ‘Will ChatGPT/AI content be penalised?’ but ‘Can ChatGPT/AI produce quality original, accurate content?’ The answer is no. It’s been trained on a huge dataset of existing sources (ChatGPT was trained on a dataset of text called the ‘WebText’ dataset, which includes a lot of unverified information like forum posts, personal blogs and Wikipedia), so its accuracy is extremely questionable and by its very nature it has no ‘experience’.

“This doesn’t mean it cannot be part of the content production process, and new versions will be even better ‘assistants’ to content creators.

“In the meantime, inevitably, some people will use it as the sole source of content for their sites, but… Google will catch up with these actors and apply the appropriate penalties (detection techniques are already a big topic).”

To prove his point that ChatGPT cannot produce quality content, Budka asked the bot itself “Can ChatGPT replace journalists?”

Here is what it said: “ChatGPT is a powerful language generation model that can produce coherent and well-written text on a variety of topics, including news articles. However, it is important to note that ChatGPT is only a tool and it cannot replace human journalists.

“While it can assist in generating text quickly, it does not have the ability to fact-check, verify sources, or provide context and analysis that a human journalist can offer. Additionally, it doesn’t have the ethical consideration that humans have, and it’s not able to understand the nuances of human society and culture. Therefore, the use of ChatGPT in journalism should be seen as an aid rather than a replacement for human journalists.”





