
Using AI in PR: Strategic Considerations for PR Practitioners

by M&Co. Staff

Whether it is Large Language Models (LLMs), Artificial Intelligence (AI), or generative AI, the past year has given content creators a glimpse into how automated systems may shape what they do and how they do it. The implications are profound for any marketing and Public Relations (PR) practitioner for whom content creation is a core skill.

What should PR practitioners be thinking about as we contemplate using AI? First, there is the question of transparency. It is clear that LLMs can be used for research purposes, but they can also be used to create content. That content, however, is generated only at the direction of the user, i.e., it reflects the user’s prompts and directions. The question, then, is whether and how PR agency practitioners should disclose their use of AI to clients. The same question stands for in-house PR leaders and their stakeholders.

There are two key considerations here: security and transparency. PR practitioners specializing in corporate and financial communications would have a difficult time justifying their use of AI platforms to draft earnings releases or M&A announcements, where there are regulatory restrictions on the disclosure of material non-public information. So, in some cases there will be a binary yes/no answer to the use of LLMs for PR content, depending on the regulatory and confidentiality constraints in place, including when the PR firm is covered by attorney-client privilege.

For in-house practitioners and agency PRs alike, transparency is also important. A significant portion of the value of marketing communications and PR rests in the creativity of the people involved. In these use cases there will be gradations of acceptable use and benefit, for example brainstorming social media posts or fleshing out the core ideas of a press release. In every scenario, though, it is advisable that PR practitioners, and the teams and firms they work in, are transparent about how and when they use LLMs.

Both sets of considerations require PR practitioners to have clear policies in place governing their use of LLMs. These policies should detail:

  • When LLMs can and cannot be used
  • How it is disclosed when LLM-generated content is used
  • Which LLM platforms are approved for use by their firm
  • What systems are in place to prevent plagiarism, i.e., where LLMs have been used extensively but that use goes unmentioned during the internal sign-off process, how is it detected?

Looking further ahead, it is certain that AI-powered platforms will change the way PR practitioners work. We are seeing that already at the very basic level of creating press releases and pitches, and we are only at the beginning. There are several issues with the current, publicly available platforms, such as hallucinations, where the AI platform confidently asserts things that are simply untrue. In addition, because LLMs derive their knowledge from existing online content, they are susceptible to the same prejudices prevalent in parts of the general population.

It seems for the time being that use of LLMs can offer some advantages in speed and in the capacity to analyse data at scale. However, the human touch is still required, and clear guidelines on how PR practitioner and machine should work together are an important first step in seeing whether LLMs can help boost your output and results. Of course, in the end this may amount to a Hobson’s choice: if we don’t use LLMs, we will have failed to incorporate their benefits. Getting on the right path, however, starts with understanding the capabilities of the available technology and then putting the right policies and procedures in place.
