As technology develops, so must journalists’ codes of ethics

‘Software that “thinks” does not necessarily gather or process information ethically.’ Photograph: KTS Design/Getty Images/Science Photo Library RF

Journalism is largely collaboration: reporters with sources, writers with editors, lawyers advising publishers, producers with distributors, and audiences feeding back their knowledge. Rapid development of artificial intelligence means journalists are likely to collaborate more and more with machines that think. The word itself, “machines”, feels so industrial era, but “robots” feels too limited. Humans are busy building brains, if not yet minds. So my shorthand for now is AI.

Uses of AI in journalism are being explored, analysed and launched. Like humans – sometimes better than humans – AI can absorb information, sift it, analyse it, make decisions, and speak or write. But decision-making in journalism often involves ethical choices based on long-held values, guided by published editorial standards. The results can be measured against those standards and those responsible held to account. Authentic accountability processes help maintain the trust on which the democratic usefulness of journalism to the public depends.

Our ideas about accountability are based on the assumption that explanation is possible. Called to account, we explain. A court gives reasons for judgment; ministers answer questions in parliament; statutory regulators make annual reports; and in journalism, someone like a readers’ editor handles complaints.

Some AI may be unable to meet this expectation. As the author of “Can AI be taught to explain itself?” in the New York Times put it: “Artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us … [The] inability to discern exactly what machines are doing when they are teaching themselves novel skills … has become a central concern in artificial-intelligence research.”


AI collaboration poses ethical issues for, among others, courts that use it in sentencing, operators of weapons systems, and medical specialists. The potential benefits of AI, together with the widespread recognition that the accountability of AI decision-making matters greatly, give me confidence that the challenge of making AI accountable to humans will be met. Until it is, each collaboration requires attention. In journalism, the long-unchanging codes of ethics need to be revisited to make specific provision for this transitional era. A clause something like this: “When using artificial intelligence to augment your journalism, consider its compatibility with the values of this code. Software that ‘thinks’ is increasingly useful, but it does not necessarily gather or process information ethically.”

Developments in AI are too rapid and varied for codes to prescribe detailed rules for every application. For the time being, codes could simply require that, when AI is used, journalists turn their minds to whether the process overall has been compatible with fundamental human values.

To illustrate how AI issues might engage standards found in most journalism codes: some AI is “trained” on large datasets that may themselves be of dubious accuracy. If journalism based on AI-assisted research turns out to be significantly in error, the audience could lose some trust in the media outlet on two grounds: provision of poor-quality information; and failure to properly manage the AI through which the information was partly generated.

Some AI-processed data can be tainted with ethnic, gender or other types of bias because of assumptions that humans built in, consciously or otherwise. Journalism that relies on such output might also adversely discriminate.

AI might produce newsworthy material from data that it gathers in invasive ways. That doesn’t necessarily preclude use of the information, depending on its public interest value. But it does require careful consideration in context. For many years, journalism codes have required that use of subterfuge should pass high tests of necessity and public interest. This is because trust is put at risk when people who say they are pursuing truth use deceit. One way to address the risk to trust is transparency about methods, including the way AI plays its part in the story.

• Paul Chadwick is the Guardian readers’ editor