CNET used AI to write articles. It was a journalistic disaster.


When internet sleuths discovered last week that CNET had quietly published dozens of feature articles generated entirely by artificial intelligence, the popular tech site confirmed this was true – but described the move as a mere experiment.

But now, in a scenario familiar to every sci-fi fan, the experiment seems to have run amok: the bots have betrayed the humans.

In particular, it turns out that the bots of journalism are no better – and maybe a little worse – than their human counterparts.

On Tuesday, CNET began adding lengthy corrections to some of its AI-generated articles after another tech site, Futurism, pointed out that the stories contained “very stupid mistakes.”

For example, an automated article on compound interest incorrectly said that a $10,000 deposit earning 3 percent interest would yield $10,300 after the first year. In fact, such a deposit would earn only $300 in interest; $10,300 is the ending balance, not the gain.
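The arithmetic is easy to check. Here is a minimal Python sketch of the first-year math (the $10,000 deposit and 3 percent rate come from the example above; the variable names are our own):

```python
# First year of a $10,000 deposit at 3 percent annual interest:
# the interest earned is $300; the ending balance is $10,300.
principal = 10_000.00
rate = 0.03  # 3 percent, expressed as a decimal

interest_earned = principal * rate             # 300.00
ending_balance = principal + interest_earned   # 10,300.00

print(f"Interest earned in year one: ${interest_earned:,.2f}")
print(f"Balance after year one:      ${ending_balance:,.2f}")
```

The numbers suggest how the error likely arose: the bot reported the ending balance as if it were the interest earned.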

More broadly, CNET and its sister publication Bankrate, which has also published bot-written stories, are now flagging accuracy concerns on the dozens of automated articles they have published since November.

Notices appended to several other AI-generated stories say that “we are currently reviewing this story for accuracy” and that “if we find errors, we will update and correct them.”

Artificial intelligence already handles facial recognition, recommends movies and auto-completes text. But news that CNET had used it to create entire stories set off a wave of concern in the news media over its apparent threat to journalists. Robotic yet conversational tools such as ChatGPT can churn out copy without taking lunch or bathroom breaks, and they never go on strike.

Until last week, CNET had coyly credited its machine-written stories to “CNET Money Staff.” Only by clicking on the byline would a reader learn that the article was produced by “automation technology” – itself a euphemism for AI.

The company came clean after a sharp-eyed marketing director named Gael Breton drew attention to the labels on Twitter. CNET then changed the bylines to “CNET Money,” added some explanation (“this article was powered by an AI engine”), and further noted that the stories “were thoroughly edited and fact-checked by an editor from our editorial board.”

If that’s true, “then it’s primarily an editorial error,” said Hany Farid, a professor of electrical engineering and computer science at UC Berkeley and an expert on deepfake technologies.

“I wonder if the seemingly authoritative AI voice has caused editors to lower their vigilance,” he added, “and [be] less careful than they might have been with a human journalist’s writing.”

CNET’s robot-written copy is generally indistinguishable from the human-produced kind, though it’s not exactly snappy or glitzy. It’s, well, robotic: serviceable but clumsy, riddled with clichés, without humor or cheekiness or anything resembling emotion or quirks.

“Choosing between a bank and a credit union isn’t a one-size-fits-all,” reads an AI-authored story published by CNET in December. “You need to weigh the pros and cons of your goals to determine your best fit.”

Another story written by bots advises, “The longer you leave your investment in a savings or money market account, the more time you have to harness the power of compound interest.”
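That much, at least, is standard finance: with annual compounding, a balance grows as principal × (1 + rate)^years, so the gains accelerate the longer the money sits. A quick illustrative sketch, reusing the $10,000 deposit and 3 percent rate from the earlier example and assuming annual compounding:

```python
# Compound growth of a $10,000 deposit at 3 percent, compounded annually.
# Each year's interest is earned on an ever-larger balance.
principal = 10_000.00
rate = 0.03

for years in (1, 5, 10, 20):
    balance = principal * (1 + rate) ** years
    print(f"After {years:2d} years: ${balance:,.2f}")
```

After 20 years the deposit grows to roughly $18,061, versus $16,000 if the same $300 were earned every year without compounding.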

Other material from CNET’s bots includes stories like “Should you break a CD early for a better rate?” and “What is Zelle and how does it work?”

The technology’s deployment comes amid growing concerns about the use and potential abuse of sophisticated AI engines. The technology’s impressive capabilities have prompted some school districts to consider banning students from using it to cut corners on classwork and homework.

In a statement released last week, CNET editor Connie Guglielmo called the use of AI on her website “an experiment” designed not to replace reporters but to aid their work. “The goal is to see if technology can help our busy reporters and editors in their jobs to cover issues from a 360-degree perspective,” she wrote. Guglielmo did not respond to a request for comment.

Bankrate and CNET said in a statement on Tuesday that the publications “actively review all of our AI-assisted articles to ensure no further inaccuracies pass through the editing process, as humans make mistakes too. We will continue to make any necessary corrections.”

Even before CNET’s big experiment, other news organizations had used automation to a more limited extent to augment and analyze their work. The Associated Press started using AI to report on corporate earnings in 2014. It has also used the technology for sports recaps.

But AP’s system is relatively crude — essentially inserting new information into pre-formatted stories, like a game of Mad Libs — compared to CNET’s machine-creation of feature-length articles.

Others have developed internal tools to audit their human-produced work — such as a Financial Times bot that checks whether its stories quote too many men. The International Consortium of Investigative Journalists has unleashed AI on millions of pages of leaked financial and legal documents to identify details that deserve a closer look from its reporters.

Aside from flawed reporting, AI-written stories raise some practical and ethical issues that journalists are only just beginning to ponder.

One is plagiarism: writer Alex Kantrowitz found last week that a Substack post, written by a mysterious author named Petra, contained phrases and sentences lifted from a column Kantrowitz had published two days earlier. He later discovered that Petra had used AI programs to “remix” content from other sources.

Given that AI programs assemble articles by digging through mountains of publicly available information, even the best automated stories are essentially clip jobs, offering no new insights or original reporting.

“These tools can’t go out and report or ask questions,” said Matt MacVey, who leads a project on AI and local news at New York University’s NYC Media Lab. So their stories will never break new ground or deliver a scoop.

However, the bigger fear among journalists is that AI poses an existential threat. News media employment has been shrinking for decades, and machines may only accelerate the problem.

“This is perhaps the classic story of automation reducing the need for human labor and/or changing the nature of human labor,” said Farid. “The difference now is that automation doesn’t disrupt manual work, it disrupts highly creative work that was thought to be beyond the reach of automation.”

Social media trolls have long taunted newly fired reporters with the epithet “learn to code.” Despite its obvious shortcomings, the rise of AI reporting suggests that code could one day be the very thing that drives journalists from their newsrooms.



