In the digital age, where images can be conjured, enhanced, or manipulated with a few keystrokes, a single photograph has ignited a firestorm of debate in Nigeria and beyond.
Four days into the new year, the Nigerian Presidency shared a picture on X depicting President Bola Tinubu in a private lunch meeting with his Rwandan counterpart, Paul Kagame, in Paris, the French capital. The image showed the two leaders seated amicably, discussing global affairs and Africa’s future.
Yet, what should have been a routine diplomatic snapshot quickly spiraled into controversy due to a subtle but unmistakable detail: a “Grok” watermark embedded in the corner, hinting at AI involvement.
Social media erupted with hashtags like #ForgerInChief and #ParisLunchHoax. One angry user wrote: “What they cannot forge does not exist, artful forgers.”
X posts from critics questioned the very occurrence of the meeting, linking it to broader allegations of deception in governance.
Even as the Presidency swiftly clarified through a Senior Special Assistant, Temitope Ajayi, that the photo was authentic, captured on a phone in poor lighting and merely enhanced using Grok AI for clarity, the damage was done.
Reactions remained polarised: some accepted the explanation as a harmless tech boost, while others dismissed it as a cover-up, fuelling cynicism about government transparency.
This incident isn’t isolated; it’s a microcosm of the ethical quagmire surrounding AI in Public Relations. In my 2025 article, AI, Ethics and the Soul of Public Relations, I delved into how responsible AI use upholds the integrity of communication, drawing on frameworks like the International Public Relations Association’s (IPRA) Five AI and PR Guidelines, which emphasise honesty, transparency, and avoiding harm, and the Global Alliance for Public Relations and Communication Management’s Seven Responsible AI Guiding Principles, including human-led governance, accountability, and ethical innovation. These principles, born from global consultations, underscore that AI should augment, not undermine communication.
The Tinubu–Kagame photo saga vividly illustrates why ignoring them invites chaos. Here are two quick lessons from this self-inflicted paracrisis.
1. Transparency is the antidote to suspicion
A simple observance of transparency guidelines could have averted this hullabaloo. The Global Alliance’s principle of “Transparency and Explainability” mandates clear disclosure when AI touches content, much like labelling genetically modified foods. In this case, had the Presidency preemptively noted the enhancement, perhaps with a caption like “Photo enhanced for clarity using Grok AI”, the watermark might have been a non-issue. Instead, its unannounced presence bred doubt.
PR practitioners should disclose AI interventions to “inform” the public. Without it, trust erodes. Statistics bear this out: A 2024 Getty Images study found that 98% of global consumers view authentic visuals as crucial for building trust, with 87% demanding transparency on AI-generated or altered images. In Nigeria, where misinformation already plagues politics, such opacity only amplifies skepticism.
Borrowing from psychology, this ties into “confirmation bias”: people interpret ambiguous evidence to confirm preexisting beliefs. Critics of the administration saw the watermark as proof of forgery, while supporters rationalised it away. As AI ethicist Timnit Gebru warns: “If we don’t build transparency into AI systems from the start, we risk amplifying existing inequalities and eroding societal trust.”
2. Strategy must lead technology: The value of certified communication experts
Too often, organisations adopt tools before they adopt frameworks for ethical use. AI can enhance imagery, but without strategic judgement, it can also undermine messaging. This is where professional communication expertise becomes indispensable.
Trained communicators bring two essential assets:
1. Contextual judgement
The ability to anticipate how a message will be interpreted, not merely how it is constructed. This requires situational awareness: cultural, political, and psychological. For example, the Nigerian Institute of Public Relations’ adoption of the Global Alliance Responsible AI Guidelines emphasises “Expertise and Professionalism”, urging practitioners to understand not only AI’s capabilities but also its limitations, risks, and ethical implications within sensitive communication environments.
2. Ethical foresight
The discipline of applying ethical standards proactively to protect credibility and public trust. This includes early disclosure of AI involvement and ensuring narrative alignment. A certified PR professional trained under frameworks such as the IPRA and Global Alliance AI guidelines would likely have anticipated the reputational risk of an unexplained AI watermark and recommended either its removal or transparent disclosure, thereby averting the backlash before it escalated.
Conclusion
When strategy lags behind technology, credibility becomes collateral damage.
The Tinubu–Kagame photo did not fail because it was enhanced; it failed because its creators ignored how it would be perceived. In public communication, perception is not secondary; it is decisive.
The danger today is not AI, but communication without judgement. Technology magnifies both message and mistake. Without strategic oversight, AI doesn’t clarify truth; it multiplies doubt.
Credibility cannot be edited back in.
This Grok image saga reminds us that AI cannot confer trust; only authenticity can. The digital public is not irrational; its skepticism is earned. In an ecosystem crowded with manipulated media, ambiguity becomes the currency of doubt.
