Study finds that journalists are using generative AI without company oversight.
By Sara Guaglione • March 4, 2025
A new report reveals that more than 40% of the journalists surveyed use generative AI tools that are not approved or purchased by their organization.
This is according to a survey by Trint, an AI-powered transcription software platform, which asked 29 global newsrooms to share their plans for using AI in the coming year.
The report found that 42.3% of journalists surveyed use generative AI tools that are not licensed by their company. Journalists reported that their newsrooms are adopting AI tools to improve efficiency and keep up with competitors. They also expect to see an increase in the use of AI for processes such as transcription and translation, data gathering and analysis, and information gathering.
Trint’s survey also found that just 17% of those interviewed found “shadow AI” — or the use of AI tools or apps by employees without company approval — to be a challenge newsrooms face when it comes to deploying generative AI tools. That was far below issues like inaccurate outputs (75%), journalists’ reputational risks (55%) and data privacy concerns (45%).
“Plenty of editorial staff here use AI from time to time, for example to reformat data or as a reference tool,” said a Business Insider employee, who spoke to Digiday on the condition of anonymity. “Some of them do pay for it out of their own pockets.”
Making efficiency gains was the main reason newsrooms were adopting generative AI in 2025, cited by 69% of respondents to Trint’s survey.
But the Business Insider employee said these use cases for generative AI tools fall into a gray area. The guidance from company management has been focused on principles, rather than specific orders on what employees can and can’t do with the technology, they said.
“We encourage everyone at Business Insider to use AI to help us innovate in ways that don’t compromise our values. We also have an enterprise LLM available for all employees to use,” said a Business Insider spokesperson. (Business Insider’s previous editor-in-chief Nicholas Carlson published a memo in 2023 describing these newsroom guidelines.)
The employee said the guidance was that these were not approved [tools], but they were not disapproved either. They also said they had been told not to enter any confidential information into generative AI systems and to be skeptical of the output.
An anonymous publishing executive said that AI technology is changing so rapidly that companies’ corporate compliance infrastructure will have a difficult time keeping up, especially when it comes to data security and legal issues.
The exec said, “I don’t think there is much risk to individual staffers who use these tools… and it will be hard to get them to stop using tools which make their job easier and work well.”

Felix Simon, an Oxford University research fellow who studies the implications of AI for journalism, told Digiday that it all comes down to how journalists use the technology.
According to Simon, “Not all AI that is not approved must be dangerous.” He said that if an employee downloaded a large language model and ran it locally, this would not necessarily pose a security threat.
Using a non-approved system connected to the internet would be “more difficult if you feed it sensitive data,” he said.
According to the publishing executive, the best approach is to explain the risks “in a way that includes risks for them personally.”
Semafor reported two weeks ago that The New York Times approved the use of AI tools for its editorial and product teams. The company explained what its editorial staff could and couldn’t do using the technology, and warned that unapproved AI tools may leave sources and information vulnerable.