Devils in the detail: Confronting AI imagery in news reporting

The first port of call is understanding what is actually real
By Kyle Fawkes
Crisis Communications Director

AI imagery is everywhere. From celebrity parodies to exaggerations of wildfires, artificial intelligence is illustrating marketing campaigns, social media content and news articles with shocking realism. In less than a decade, it has become “mainstreamed” into our most important information platforms. Just in the past two years, several well-known news organisations – including the New York Times and the International News Media Association – have advertised positions for “generative AI” content producers.

For journalists, influencers and companies looking to draw an audience, AI imagery is another tool to attract interest. It is a way to visualise the inaccessible or invisible – providing graphic depictions of warzones or offering a schematic of complex processes.

While AI has driven a new path for creativity, concern is mounting. Artists assert copyright infringement as AI trains on existing works. Photojournalists worry about work displacement as AI images divert public attention away from real photographs. And the public is showing signs of wavering trust in images more generally.

For the first responder community though, there is a more specific risk: AI misrepresentation leading to public panic. That risk is legitimate. AI misrepresentation has already influenced the public perception of many incident responses in recent years – from wildfire containment in Canada to flooding response in the United States.

Canadian authorities have been abuzz in recent weeks as they try to discredit AI imagery of wildfires (example above), which has caused no end of concern for local communities.

So how should organisations address suspected AI imagery when managing an operational or reputational incident? The first port of call is understanding what is actually real. There are several strategies that can be used to discern whether an image was generated by AI or photographed by a human:

1. Disclaimers: First and foremost, many AI images come with a label – often something to the effect of “For representation only” or “auto-generated” – indicating the image is not a real photograph of the subject event.

2. Common sense: Is there anything about the image that would naturally raise suspicion? At a very basic level: Does the image look animated like a cartoon? More detailed inferences based on personal expertise or local knowledge will offer a better filter but won’t be possible for general audiences.

3. Compatibility: Does the scene in the image match descriptions of the event elsewhere? For instance, does the weather in the subject image correlate to the conditions described in reporting?

4. AI imperfections: Many AI images leave telltale signs of their manufactured origins. Most commonly, they include inconsistent blur or colour patterns. AI also struggles with scale, giving some objects unrealistic size in comparison to others – humans, for example, may appear larger than they would next to equipment.

5. Technological tools: Consumers can also use AI detection sites such as AI or Not, which train on the outputs of known AI image generators to learn subtle differences in the pixelation patterns and digital formatting of AI images. However useful these platforms are, they are only as good as their training inputs. With the AI landscape evolving quickly, they are reportedly still short of 100% accuracy: false positives and failed detections do occur.

6. Content credentials: Professional journalists, photographers and AI platforms alike generally attach content credentials to their work. These credentials record when, how and by whom the image was created, and are cryptographically signed so that any further manipulation will be evident to viewers.
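The tamper-evidence idea behind content credentials can be sketched in a few lines. This is a minimal illustration, not the real mechanism: production systems such as the C2PA standard embed a signed manifest inside the image file and use public-key certificates, whereas this sketch simply binds a keyed hash to the raw image bytes. All names and the key are illustrative.

```python
# Minimal sketch of tamper evidence: a signature computed over the exact
# image bytes, so any later edit to those bytes invalidates the check.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # stands in for a real private key


def sign_image(image_bytes: bytes) -> str:
    """Return a signature binding the publisher to these exact bytes."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()


def verify_image(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    return hmac.compare_digest(sign_image(image_bytes), signature)


original = b"...raw image data..."
sig = sign_image(original)

assert verify_image(original, sig)             # untouched image passes
assert not verify_image(original + b"x", sig)  # any edit breaks the check
```

The same principle is why a credentialed image that has been cropped, retouched or regenerated will no longer verify against its original signature.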

 

In July 2025, a small galley fire broke out onboard the cruise vessel ISLAND PRINCESS, en route to Halifax, Canada. The fire was quickly contained by the crew, and most passengers didn’t even realise an incident had occurred. Yet within hours, sensationalised AI images (such as the title image) circulated on social media, depicting a far more severe incident than had actually occurred.

So once AI imagery has been identified, how should it be addressed?

Any approach to address concerning imagery should be tailored to the incident and the chosen media strategy. With that said, there are some general strategies:

1. Requesting image removal: Asking a media outlet or social media user to remove an image can directly limit the damage of public misperception. However, the approach should be used sparingly and only in certain circumstances – for instance, when media exposure is high and the image is clearly driving stakeholder panic. There is also no guarantee an image will be removed on request, especially one belonging to a social media user, and many news outlets will resist. Even if an AI image is removed, it is likely to have already spread beyond the specific news page or user profile.

2. Issuing a warning: Releasing a warning about AI imagery may help educate an audience about its origins, but it may also draw new attention – increasing the imagery’s exposure and reach among external audiences. This approach should also be used sparingly.

3. Ignoring AI images: Simply ignoring AI imagery is often the best approach. If the imagery gives stakeholders no specific reason to panic and media traction on the incident remains low, there is generally little risk in leaving it alone.

 

There is also a fourth, less obvious strategy: take a strong proactive stance and communicate early and often. During large, headline-leading incidents, media and social media activity will take place whether the responding parties communicate or not. Slow or incomplete communications leave the door open for AI imagery and misinformation to narrate the story. Conversely, communicating well and – when the situation calls for it – distributing real photographs of the incident limits the space and the need for journalists or social media users to generate AI illustrations.

While frustrating for first responders and stakeholders, not least photojournalists and picture editors, AI imagery is here to stay. It is now another frontier in the public relations minefield of incident response. Like all aspects of crisis communications, the decision about how to respond in any one case is not an easy one. But through careful consideration, monitoring and expert guidance, responding parties can navigate around the worst risks that AI imagery poses.
