Glass box manifesto: Why PR needs a protocol to enforce AI’s promise

Celestine Achi

In the rapidly evolving landscape of global communication, the line between strategy and manipulation is becoming increasingly blurred by algorithms. We are in an era where artificial intelligence can craft a press release in seconds, analyze sentiment in milliseconds, and predict crises before they occur. Yet, as the capabilities of our tools expand, a fundamental question remains unanswered: Who is governing the machine?

Last week, during the “PR Power Lunch” organised by the Nigerian Institute of Public Relations (NIPR), the industry gathered to address this precise challenge. The theme, “Advancing Responsible AI Practice in Global Public Relations”, was not merely a seminar topic; it was a call to order.

Anchored in the Venice Declaration and the Global Alliance’s 7 Responsible AI Guiding Principles, the session highlighted a critical juncture for our profession. We heard from global leaders like Bonnie Caver and Adrian Cropley, who eloquently outlined the ethical boundaries of our new reality. But as we parsed the ethics, one stark reality became clear to me: a pledge without a protocol is just poetry.

The “Black Box” dilemma

The Global Alliance’s principles, ranging from “Ethics First” to “Human-Centered AI”, effectively serve as the “Constitution” of responsible practice. They tell us what is right. However, as an AI educator and innovator, I am concerned with the how.

Currently, the vast majority of public relations professionals rely on what we call “Black Box” AI tools. These are systems where data goes in and a decision comes out, but the internal processing is opaque. We do not know how the machine reached its conclusion.

This puts such tools in fatal conflict with Principle 4 (Openness and Transparency) and Principle 2 (Human-Led Governance) of our new code. How can a PR professional claim to be transparent if the tool they use is a mystery? If an AI tool generates a crisis management strategy based on biased training data, and you cannot see that bias, you are not managing a reputation; you are risking it.

The “Glass Box” solution

The solution I proposed to the panel — and now propose to the wider industry — is a shift to the “Glass Box” approach. This is the domain of Explainable AI (XAI).

For the ethical principles of the Global Alliance to survive contact with reality, practitioners must begin demanding tools that “show their work”. We must treat AI not as a magician, but as a junior analyst whose work must be auditable. In a “Glass Box” system, every claim, every sentiment analysis, and every generated draft can be traced back to its source logic.
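The Global Alliance does not prescribe any particular tooling, so the following is an illustration only: a minimal Python sketch of what a “glass box” audit trail could look like in practice, where every AI-generated claim is logged with the source passage and tool that produced it, and anything untraceable is rejected. All names here (TracedClaim, sentiment-model-v2, the example URL) are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class TracedClaim:
        """One AI-generated claim, logged with the evidence behind it."""
        text: str             # the claim as it appears in the draft
        source_excerpt: str   # the passage the tool relied on
        source_ref: str       # URL or document ID of that passage
        tool: str             # which model or tool produced the claim
        logged_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def add_to_audit_trail(trail: list[TracedClaim], claim: TracedClaim) -> None:
        """Reject any claim that cannot be traced back to a source."""
        if not claim.source_excerpt or not claim.source_ref:
            raise ValueError(f"Untraceable claim rejected: {claim.text!r}")
        trail.append(claim)

    # Example: a sentiment finding that can be audited later
    trail: list[TracedClaim] = []
    add_to_audit_trail(trail, TracedClaim(
        text="Coverage of the product recall turned negative on 12 May",
        source_excerpt="'...customers expressed frustration over the recall...'",
        source_ref="https://example.com/coverage/12-may-report",
        tool="sentiment-model-v2",
    ))

The specific structure does not matter; what matters is that the record exists, so the logic behind every claim can be inspected after the fact.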

In the coming age of deepfakes and algorithmic disinformation, communication without explainability is not strategy; it is a gamble.

The human shield: Beyond creativity to liability

Furthermore, we must operationalise Principle 3 (Responsibility) by enforcing a strict “Human-in-the-Loop” (HITL) protocol. The excuse that “the algorithm made a mistake” is effectively dead. In the eyes of the public and the law, the human operator is liable.

Therefore, the new workflow for the African PR professional must be rigid: AI is the drafter; the human is the editor, the auditor, and the final certifier. We must vet the machine before we vet the message.
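Purely as an illustration, and not as any mandated NIPR or Global Alliance procedure, a minimal sketch of such a Human-in-the-Loop gate might look like this in Python: no draft is released until a named human certifier has signed off, and the record shows who certified it and when. The names (Draft, certify, publish, the example reviewer) are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Draft:
        text: str
        produced_by: str                  # e.g. "ai-drafting-tool"
        certified_by: str | None = None   # name of the human who signed off
        certified_at: str | None = None

    def certify(draft: Draft, reviewer: str) -> Draft:
        """A human reviewer takes responsibility for the AI-produced draft."""
        draft.certified_by = reviewer
        draft.certified_at = datetime.now(timezone.utc).isoformat()
        return draft

    def publish(draft: Draft) -> None:
        """Refuse to release anything that no human has certified."""
        if draft.certified_by is None:
            raise PermissionError("Uncertified AI draft blocked from release")
        print(f"Released draft certified by {draft.certified_by}")

    release = Draft(text="Press statement on the product recall",
                    produced_by="ai-drafting-tool")
    publish(certify(release, reviewer="Jane Doe, Head of Communications"))

The point of the gate is accountability: the certification record, not the algorithm, is what stands up to public and legal scrutiny.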

The 2026 imperative: A forecast

This conversation is not academic; it is predictive. As we approach 2026, I foresee a fundamental shift in how public relations services are bought and sold.

First, we will see a shift from Virality to Verification. The premium service of the future will not be generating content (AI can do that for free), but authenticating it. Trust will be the most expensive commodity in the market.

Second, we will witness the death of the “Black Box Agency”. Clients, increasingly aware of liability risks, will begin mandating “Algorithmic Audits” as part of their procurement process. Agencies that cannot explain their tools will simply not win the pitch.

A renaissance of leadership

It is impossible to discuss this strategic pivot without acknowledging the renaissance currently underway at the NIPR. Under the presidency of Dr. Ike Neliaku, the institute has witnessed a quality of visionary leadership that is arguably second to none in its history.

By actively aligning the NIPR with the Global Alliance and domesticating these global standards, Neliaku and his supportive Vice President, Prof. Emmanuel Dandaura, have ensured that Nigerian practitioners are not just spectators in the AI revolution, but active participants shaping its governance.

Conclusion

The NIPR has provided the moral compass. The Global Alliance has provided the map. But it is up to us—the practitioners—to navigate the terrain.

We must embrace the tools of the future, but we must do so with our eyes wide open. We must move forward not just as storytellers, but as technocrats with a conscience. Ignorance of the algorithm is no longer just a gap in knowledge; it is a professional risk we can no longer afford to take.

Dr. Achi is an AI strategist, educator, and author.
