There’s a lot of discussion about artificial intelligence right now. Depending on who you ask, AI is either a threat to every creative job on the planet or the productivity saviour we’ve all been waiting for.
The truth, at least for those of us working in strategic communications, lies somewhere in the middle. AI can help us move faster. It can speed up drafting, assist with document summaries, and even offer decent working versions of media releases. But it still can’t do the hard parts: relate to clients, keep a finger on the pulse of politics, connect with the right people, or understand local media landscapes well enough to get the story right.
At a recent forum in Perth hosted by Communication and Public Relations Australia, the discussion turned to exactly this point. The session “How AI is Reshaping Communication” featured insights from Curtin University academics Bridget Tombleson, Dr Katharina Wolf, and practitioner Nikki Milne.
What struck me most from that discussion was how AI tools often fail to capture the angle — that elusive hook that journalists are always in search of. AI doesn’t have a feel for the news cycle. It doesn’t read the room. It can’t see how a federal policy announcement, a change in ministerial priorities, or a shift in public sentiment could shape how a story should be framed.
Public affairs professionals do all of that instinctively. Because we’ve built relationships with journalists, companies and politicians, we know what quote will work and what might offend. We understand the difference between a line that’s technically correct and one that will land with an audience.
Another major blind spot for AI is voice. Every spokesperson has a unique way of communicating – some are cautious, others are bold. Some speak in plain, direct language, while others rely on metaphors, detail or vision statements. Getting that voice right is what turns a good quote into a powerful statement. AI can mimic tone, but it doesn’t know you. It doesn’t understand your history, your role, your priorities or your leadership style. A quote generator won’t work when your reputation is on the line.
This forum prompted some internal reflection at ReGen Strategic. Like many in our field, we’ve been trialling various AI tools to enhance productivity, but as ethical communicators we also needed to set the ground rules and formalise the expectations and responsibilities of our people.
We took the step of introducing a formal AI Use Policy to ensure we embrace this technology responsibly. We don’t see AI as a threat, but we do believe the way we use it matters, especially when we’re handling sensitive client material in high-stakes media environments.
Our policy is guided by four key principles:
- Confidentiality: No client data or commercially sensitive material can be uploaded into public AI platforms. Client trust and information security must always come first.
- Human Oversight: AI-generated content must be reviewed and refined by a qualified team member before anything is used externally. These tools are a starting point, not a substitute for professional judgement.
- Transparency: If AI is used to support any client-facing material – whether it’s a media release, briefing pack or report summary – its use must be disclosed to the project manager. Being upfront about the process is essential.
- Accuracy and Accountability: No matter what tools we use, our team is still responsible for the final product. That means fact-checking, editing, and standing by the advice and outputs we deliver.
The value we bring as communicators isn’t in how fast we type or how cleverly we summarise information. It’s in the thinking, the context, and the ability to shape narratives that resonate. AI is good at generating language, but it can’t build trust, manage relationships, or advise a client on how to navigate a volatile political or media environment.
For us, AI is just another tool in the kit: one that helps us move faster, but not one that replaces the human edge.