What is AI?
The Oxford Learner's Dictionary defines AI as the study and development of computer systems that can copy intelligent human behaviour.
But come to think of it, what if Artificial Intelligence is not just artificial, and isn't merely intelligent? What if AI is actually "augmented intention": an external mind that absorbs our communication patterns, magnifies our cognitive biases, and mirrors our ethical blind spots back at us?
For instance, YouTube, TikTok, Facebook, and Spotify augment our intention by learning from our digital behaviour (what we watch, like, and share) and then feeding us more of the same, reinforcing our beliefs and biases.
Amazon’s AI doesn’t just show you options, it predicts what else you might want (matching socks, a cleaning kit, similar brands). It is augmenting your shopping intention before you even articulate it fully. The AI completes the sentence you didn’t finish.
When users ask ChatGPT to draft a pitch, summarize a report, or generate business ideas, it is not replacing their intelligence but scaling their ambition, creativity, and clarity.
All these tools become a projection of the user's inner intention, working faster and more broadly than the human mind alone could. They therefore act like a digital subconscious, feeding off our impulses and biases, then projecting them back in a way that shapes our perception of reality, thus becoming our "augmented intentions".
What is strategy?
Strategy, in its most misunderstood form, is often confused with planning. But planning is linear; strategy is existential. It is the way we orient ourselves toward uncertainty. Communication scholars call this proleptic framing: the act of shaping meaning in the present with the future already in mind. Miyamoto Musashi, the great 17th-century Japanese swordsman and philosopher, captured it profoundly when he said, 'In strategy it is important to see distant things as if they were close and to take a distanced view of close things'.
The PR strategist, then, is not just a planner or technician but a philosopher of consequence. He must look at a press release and ask, 'What will this sentence cost me five years from now, in a scandal I can't yet see?' This is why I argue that strategy transcends planning: it is the clarity of vision to see the destination at the journey's start, and the wisdom to choreograph actions and choices that make that vision inevitable.
What is reputation?
Reputation is the sum total of the perceptions particular stakeholders hold about an organisation, brand, or leader at a particular time.
It is the emotional residue people carry forward from every touchpoint, every silence, and every inconsistency. In a revealing interview, renowned English defender Ashley Cole once said that whenever he hears Cristiano Ronaldo’s name, what he remembers is the humiliating dribble Ronaldo performed against him. That moment became Ronaldo’s signature in his mind, not a press release, not a statistic, but an emotional imprint.
From a psychological perspective, reputation works the way schema theory describes: it is our brain's shortcut for making sense of complex personalities or brands. Once the public forms a schema about you (trustworthy, arrogant, evasive), everything you say or do is interpreted through that lens.
But here is the twist: AI now helps audiences form these schemas faster and more rigidly, whether rightly or wrongly, and, most dangerously, unconsciously. For instance:
During the #EndSARS protests against police brutality in Nigeria, on 20 October 2020, the Nigerian Army opened fire on protesters gathered at the Lekki Toll Gate in Lagos. Initial confusion surrounded how many people died, if any, and whether it was a massacre.
Social media algorithms (Twitter, Facebook, Instagram) prioritized graphic, emotional, and unverified content: videos, photos, and claims of mass death (some later proven misleading, misattributed, or unverifiable), because it drove engagement. As users liked, commented, and shared, AI locked audiences into emotional echo chambers, reinforcing the schema of a government-led massacre.
The Nigerian government was instantly branded as murderous and genocidal in global media narratives. Even after forensic panels found conflicting evidence (with some reports disputing mass deaths), the "massacre" label stuck due to AI-shaped public belief. AI didn't just spread the news; it selected, magnified, and cemented a narrative before the full facts emerged. The simple truth is: 'if you don't manage your reputation, your algorithms will. And they are far less forgiving'.
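The amplification dynamic described above can be sketched in a few lines. This is an illustrative toy, not any platform's actual ranking algorithm; the weights and post data are invented to show how a feed ranked purely on predicted engagement will surface emotionally charged, unverified content over sober reporting.

```python
# Toy sketch (hypothetical weights, not a real platform's algorithm):
# a feed ranker that scores posts purely by engagement signals.

def rank_feed(posts):
    """Order posts by a simple engagement score: likes + 2*comments + 3*shares."""
    def score(post):
        return post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "verified-report",    "likes": 120, "comments": 10, "shares": 5},
    {"id": "graphic-unverified", "likes": 90,  "comments": 80, "shares": 200},
]

ranked = rank_feed(posts)
# The heavily shared but unverified post outranks the verified report,
# because shares (the most emotion-driven signal here) carry the most weight.
```

Nothing in such a loop asks whether the top post is true; it asks only whether people will engage with it. That is the mechanism by which a schema hardens before the facts arrive.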
AI as a strategic tool
In a recent MIT study, over 80% of people using ChatGPT couldn't remember a single sentence of what they "wrote". Their brains were less engaged, and even their essays, while fluent, were emotionally flat. Efficient, yet empty.
This study underscores why we must stop treating AI as a tactical assistant and start seeing it as a strategic tool. Doing so forces us to externalize our ethical assumptions, to script empathy, and to pre-test public trust. The strategic use of AI, therefore, is not about efficiency or novelty. It is about what I call "principled foresight".
Principled Foresight refers to the capacity to anticipate the future consequences of decisions by combining value-driven conscience (moral imagination) with structured ethical reasoning (ethical anticipation).
Hence Strategic use of AI = Principled foresight
To achieve this, we must think in three dimensions as far as the use of AI is concerned:
- HEAD
- HEART
- HAND
Let us take an in-depth dive into my 3H Model, with analogies and real-world examples.
HEAD: Strategic thinking and intelligence
In the AI revolution, strategic thinking must begin with the mind of man before the machine. As Prof. Sherry Turkle (2011) cautioned in Alone Together, “Technology doesn’t just change what we do; it changes who we are”. Hence, before we prompt AI, we must prompt ourselves with intentionality. Here is how to do it:
- Begin with Human Intelligence (HI) before Artificial Intelligence (AI)
Artificial Intelligence should never precede authentic human intent. Strategy, context, tone, and empathy must emerge from the human core before being interpreted by algorithms. In the words of Daniel Kahneman, Nobel laureate in behavioral economics, "Intuition is nothing more and nothing less than recognition". Recognition requires lived experience, something AI lacks.
Take President Barack Obama's speeches, renowned for their emotional weight and cultural resonance. His speechwriting team began with human-centered ideation: identifying collective anxieties, values, and aspirations. Only then were digital tools used for refinement, not initiation.
Thus, the PR professional, like the sculptor, must first visualize the outcome and feel its emotional texture, then allow AI to assist, not lead.
- Use AI for pattern recognition, not final judgement
AI is efficient at identifying "what" is trending, but not "why" it matters. Human cognition is needed for interpretation, prioritization, and ethical calibration. As Shoshana Zuboff warned in The Age of Surveillance Capitalism (2019), "AI feeds on behavioral surplus, not meaning." Meaning-making is the domain of the human mind.
For instance, Coca-Cola utilizes AI to analyze customer sentiment across platforms. However, decisions to pivot campaigns, like adapting to regional sensitivities, remain in the hands of human strategists who interpret culture, context, and mood. The hermeneutic circle, a concept developed by Martin Heidegger (1927) and Hans-Georg Gadamer (1975), holds that understanding is always context-bound, an ability AI does not possess. In essence, let AI act as a compass, not a captain.
- Integrate psychology and audience insight into AI outputs
While AI can simulate intelligence, only psychologically grounded humans can spark resonance. Use theories like the Elaboration Likelihood Model (Petty & Cacioppo, 1986) to tailor messaging to either central or peripheral routes of persuasion.
A good example is Nike's "You Can't Stop Us" campaign during the COVID-19 lockdown, which was not merely stitched together with AI video analysis. The emotional arc (struggle, hope, unity) was sculpted by strategists who understand identity theory and Maslow's hierarchy of needs.
Antonio Damasio, the David Dornsife Chair in Neuroscience and Professor of Psychology, Philosophy, and Neurology at the University of Southern California, captured it masterfully when he said, "We are not thinking machines that feel, we are feeling machines that think." Therefore, use AI as a draftsman, not as the architect.
- Test and A/B optimise continuously
AI can rapidly generate multiple versions of a headline, CTA, or banner, but only testing reveals what truly connects with real people. To paraphrase the renowned organisational psychologist Adam Grant: "originals" are not necessarily the first to an idea, but those who improve upon existing concepts by testing and retesting until they create something new and valuable.
Like a chef tasting food while cooking, PR professionals must test message "flavours" in real time using AI plus behavioural data.
For instance, at HubSpot, AI is leveraged to A/B test email subject lines by generating multiple variations and predicting which ones will drive higher open rates. The system analyzes user behaviour, timing, and language patterns to refine messaging in real time. However, human marketers ensure each variation aligns with the brand’s tone, audience expectation, and campaign objectives.
As our elders say, "You do not test the depth of a river with both feet." As PR professionals using AI, we must test incrementally. Remember, AI is your tool, not your blindfold.
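The core of such a readout can be sketched with a standard two-proportion z-test. The subject-line counts and the significance threshold below are invented for illustration; they are not HubSpot's data or tooling, just the arithmetic any A/B comparison of open rates rests on.

```python
# Hypothetical A/B test of two email subject lines, using a two-proportion
# z-test. All numbers are illustrative assumptions, not real campaign data.
import math

def z_test_proportions(opens_a, sent_a, opens_b, sent_b):
    """Return the z-statistic comparing two open rates (pooled variance)."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Variant A: 420 opens out of 2,000 sends (21.0% open rate).
# Variant B: 510 opens out of 2,000 sends (25.5% open rate).
z = z_test_proportions(420, 2000, 510, 2000)
significant = abs(z) > 1.96  # roughly 95% confidence, two-tailed
```

Only after the numbers clear a threshold like this should a "winning" variant be promoted, and even then a human should confirm the winner still fits the brand's tone and the campaign objective.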
- Train your team on AI literacy and ethical discernment
AI tools are only as intelligent as the ethical and cognitive depth of their users. Training your team to ask better questions, interpret results critically, and recognize algorithmic bias is no longer optional; it is strategic armor.
IBM’s communications unit holds quarterly AI ethics training, instilling a culture of discernment, not dependency (IBM Communications Insights Report, 2024).
HEART: Human Values, Ethics, and Empathy
Artificial Intelligence may process data, but only human beings can process dignity. Ethical communication demands not only accuracy but empathy, transparency, and respect for cultural narratives. This involves humanizing the AI tools. Below is how you do that:
- Set ethical and cultural guardrails: The moral compass of strategy
AI must never function in isolation from ethics. In a multicultural, hyperconnected world, the absence of cultural sensitivity can transform innovation into offense. Ethics in communication, as Immanuel Kant’s categorical imperative suggests, must be universalizable. That is, what we apply in one context should not violate human dignity in another.
A relevant example is Adidas during the 2022 FIFA World Cup in Qatar, where the brand deployed AI-driven real-time engagement tools to track fan sentiment and engagement across regions. However, human reviewers were embedded in the workflow to vet content, ensuring that visuals or language did not clash with the religious and social codes of conservative Middle Eastern audiences (Adidas Global Strategy Report, 2023).
In PR, AI is a tool but ethics is the compass. And where there is no compass, even the fastest ship may run aground.
- Disclose AI use where appropriate: The ethics of transparency
In the words of Kevin Plank, founder of Under Armour, "Trust is built in drops and lost in buckets." Therefore, in trust-driven professions like public relations, transparency is not optional; it is existential. When audiences discover they were "engaging" with AI unknowingly, it triggers cognitive dissonance and erodes trust. According to Sandra Petronio's Communication Privacy Management Theory (2002), people feel violated when their expectation of control over information boundaries is breached.
During the COVID-19 pandemic, the World Health Organisation (WHO) deployed AI-powered chatbots across WhatsApp and Facebook to disseminate life-saving public health information. Each bot was clearly labeled as AI, a small gesture that had big implications. Trust in WHO's messaging increased, and misinformation was countered more effectively (WHO COVID-19 Response Report, 2021).
Transparency around AI use not only preserves credibility but also reflects deontological ethics, where rightness is not judged by outcome alone, but by the integrity of the process.
- Feed the machine with soulful inputs: Humanising the algorithm
In The Shallows, Nicholas Carr argued that "garbage in, garbage out is not just a computing phrase; it's a human failing." Hence, it is safe to say that AI outputs are only as insightful and resonant as the inputs they are trained on.
From a psychological standpoint, social presence theory reminds us that the feeling of being understood and emotionally connected in communication increases with perceived human presence. AI can simulate language, but it cannot simulate genuine empathy.
Therefore, to generate content that connects emotionally and culturally and displays empathy, PR professionals must train AI on soulful, human-centered data: literature, testimonials, historical speeches, and culturally embedded narratives.
Duolingo's AI chatbots, for instance, go beyond textbook grammar. They are trained on idioms, humour, and culturally specific dialogue, making their tutoring experience more relatable and humanlike (Duolingo AI Research, 2023).
Just as our elders say, “Until the lion learns to write, every story will glorify the hunter.” Only by embedding our own stories and perspectives into the machine can we preserve cultural dignity in digital storytelling.
HAND: Execution, Co-creation, and Action
Great ideas must translate into action. In the third dimension, the HAND, AI becomes a tool of execution, not autonomy. Here, communicators blend craft with code, using AI to scale creativity while retaining human originality and responsibility. Below is how to achieve this:
- Co-create content with AI, not from AI alone
Think of AI as your junior associate: fast, consistent, and capable of assisting with repetitive tasks. But like every apprentice, it lacks the soul, sensitivity, and subtlety that make communication impactful. In the words of Yuval Noah Harari, "AI has no consciousness, no understanding; it predicts patterns, but it cannot care." Communication, especially in the realm of PR, is not merely about sending messages; it is about meaning-making.
This is where the theory of moral imagination comes in. As noted by Patricia Werhane (1999), moral imagination is the ability to perceive that a situation presents an ethical issue and to envision alternative possibilities for dealing with it. AI cannot envision the moral, emotional, or contextual dimensions of storytelling; it can only mimic syntax, not significance.
Take, for instance, LinkedIn's AI-generated personalized message feature. At face value, it creates relevance. But smart professionals know the power of authentic emotional connection, so they edit these suggestions, infusing warmth of presence, cultural touchpoints, or humour, thus transforming what could have been a cold pitch into a conversation starter.
If I am to use Daniel Kahneman’s dual-system theory to explain this better, I would argue that AI predominantly operates through System 1 (fast, instinctive, and pattern-driven), ideal for processing large datasets rapidly, while human communication often demands System 2 thinking (deliberate, context-aware, and emotionally attuned), essential for interpreting nuance, cultural undercurrents, and ethical dilemmas.
Our Ancestors in Africa understood this well when they said, “Words are sweet, but they can’t replace food.” A beautifully written AI message cannot replace the substance of human presence in communication (never forget that).
- Keep a Human-in-the-Loop (HITL) system
In sensitive communications (crisis, healthcare, governance, or inter-ethnic relations), AI must not be left alone in the control room. The principle of Human-in-the-Loop (HITL) is not just a technical requirement; it is a moral necessity. The communication of care, caution, and credibility cannot be fully automated.
For instance, the UK's National Health Service (NHS) uses AI chatbots to handle routine symptom checks, but serious or ambiguous cases are escalated to human doctors. Why? Because a wrong tone, a misunderstood phrase, or a misinterpreted symptom can cost a life.
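The escalation principle can be reduced to a simple routing rule. This is a minimal sketch under assumed thresholds and topic categories, not the NHS's actual triage logic or any vendor's API: the model answers only routine, high-confidence queries and hands everything sensitive or uncertain to a person.

```python
# Minimal human-in-the-loop (HITL) routing sketch. The topic labels and
# the 0.85 confidence threshold are hypothetical illustrations.

SENSITIVE_TOPICS = {"crisis", "health", "governance", "ethnic-relations"}

def route(message_topic, model_confidence, threshold=0.85):
    """Auto-reply only to routine, high-confidence messages;
    escalate sensitive or low-confidence cases to a human reviewer."""
    if message_topic in SENSITIVE_TOPICS or model_confidence < threshold:
        return "human-review"
    return "auto-reply"

route("newsletter", 0.95)  # routine and confident -> auto-reply
route("crisis", 0.99)      # sensitive -> always human-review
route("newsletter", 0.60)  # uncertain -> human-review
```

Note the asymmetry: high model confidence never overrides the sensitivity check. A crisis statement goes to a human even when the machine is "sure", because in HITL systems the human is the safeguard of tone and context, not a fallback for low confidence alone.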
In PR, this translates directly to reputational risk. A delayed apology, a misworded statement, or a culturally insensitive remark can ignite a public backlash, causing irreparable reputational damage, financial loss, operational disruption, or even a threat to the organisation's existence.
The Facebook-Cambridge Analytica scandal is a very good example of this. The company’s overreliance on data-driven profiling without ethical oversight led to global outrage, regulatory fines, and deep reputational damage.
Conclusion
In an era where AI accelerates information and public scrutiny intensifies by the second, the future of Public Relations demands more than tools; it demands principled foresight. This means anticipating not only what is possible, but also what is ethical, empathetic, and culturally appropriate. I offer the 3H Model (Head, Heart, and Hand) as a strategic compass for PR professionals: to think clearly, feel deeply, and act decisively. When guided by this triad, AI becomes not just a technological asset, but a force for meaningful, human-centered communication.
Ayodele is a distinguished, multiple award-winning strategic communication expert who specialises in Message Engineering. He helps organisations, brands, and leaders communicate in a way that yields the desired outcome. He is the author of the seminal work, 'PR Case Studies: Mastering the Trade,' and Dean of the School of Impactful Communication (TSIC). He can be reached via ishopr2015@gmail.com or +2348077932282.