Artificial Intelligence (AI) has surged into the spotlight in 2023 thanks to the emergence of generative tools such as ChatGPT, Bing Chat, and Google's Bard, alongside image creators DALL-E 2 and Midjourney. We need only look at ChatGPT, which passed 100 million active users in January 2023, making it the fastest-growing consumer application in history, to understand the pace of AI growth. (By comparison, it took TikTok nine months and Instagram two and a half years to reach that same 100 million milestone.)
Generative AI is quickly filtering into many different sectors and industries, and the world of communications, one driven by words and images, has already started to embrace it. As adoption increases, so too does the creation of more and more synthetic media (AI-generated text, art, video and audio). Meanwhile, as these content generators become more widely available for experimentation and product development, understanding how synthetic media might complement the human intelligence required for effective communications is vital.
These rapidly evolving—and already controversial—technologies will have a significant impact on the communications industry. In 2023 and beyond, strategic communicators and PR professionals must understand the opportunities and challenges generative AI and synthetic media present. The broader comms industry also has a responsibility to ensure it utilizes AI in an ethical, responsible manner, while protecting and investing in human-led creativity and professional development.
Changing the Dynamic of What's Possible
AI technologies can be applied effectively in support of routine PR tasks: developing the framework for a press release, announcement or catalog/e-commerce copy; incorporating SEO keywords into content; generating lists of media outlets and journalists to target; and even predicting pitching success.
Brands can also go further by employing AI tools to generate audio versions of press releases or web content, or to support content accessibility through text-to-speech features, making content available to those with visual impairments. Generative AI is also now being baked into many social media publishing platforms, allowing users to create text and images to be published across brand channels.
But AI content generators aren’t just tools that do existing tasks better: they also have the potential to do entirely new things with content. One such example is the BBC’s approach to creating “flexible media” through AI, whereby content is delivered to users dynamically based on their environments. A news article on a smartphone may change between text, video or audio, allowing the publisher to tell stories in different ways to different audiences. AI takes care of generating the different versions of that same story.
For communicators, staying up to date on trends in AI and how media organizations and brands are applying these technologies will be crucial to staying competitive.
As Cision Chief Product & Technology Officer Jay Webster wrote in an article for the Public Relations Society of America, generative AI is evolving quickly but needs to retain human input. “Artificial Intelligence in the creation of content is moving beyond applications such as SEO, opening the door to data-driven narratives and full-fledged content generation. AI-based natural-language tools exist that can produce copy for a basic press release that a human writer can then refine.”
Though AI’s potential to reshape PR is significant, there are just as many ways it can be disruptive and potentially harmful. AI tools are “trained” on specific and historical data sets, many of which can be problematic in serving up meaningful content as AI is essentially “guessing” the next best response to a prompt based on that data. That guess is only as good as the data on which it has been trained. With that in mind, comms teams need to ensure that their data is robust enough to align with the task they're asking AI to do.
Imagine if an application developed to help content creators write marketing copy was trained only on social platforms such as Twitter and public-facing Facebook content. It would only create content based on the way that other brands have already communicated on social media (thanks to its limited training set). The danger lies in porting those AI-generated results over to other situations or use cases.
As reporting by outlets like the New York Times and PBS NewsHour has made clear, there is a long way to go before AI content generators – whether in the form of a chatbot designed to imitate conversation or a tool for creating long-form text – can be relied upon to deliver accurate and factual content.
For these reasons, humans who interact with synthetic media generators need to be mindful of the prompts they use to elicit responses. ChatGPT needs good quality prompts to create good quality output. And it can’t help you if you mistakenly feed it the wrong information.
The language or imagery used in requests may also include unconscious bias—and may lead professional-grade tools astray. Though improvements are likely to be made to limit query fallibility, taking the initiative to learn and understand best practices will reap the greatest benefits of this technology.
5 Things You Need to Know About Generative AI:
- AI-generated content can’t be copyrighted under existing intellectual property law; it instantly becomes part of the public domain. Only content created by a human being can be protected by U.S. copyright.
- Because AI algorithms are trained on huge amounts of existing content, there is a risk that the original creators of that content could bring copyright infringement claims based on the use of their intellectual property in training the AI or in the content it produces. (Getty Images and several individual artists have already filed suit against companies pioneering AI image generators over the use of their images and artistic styles.)
- AI that has been trained on flawed content may perpetuate bias and stereotypes, or generate content that is misleading or outright false.
- Synthetic media is already being used maliciously, such as to create fake news. Sophisticated media monitoring will be critical for identifying misinformation or fake news that could harm your brand’s reputation and responding to it before it gains traction.
- It can be difficult to verify the origin and authenticity of machine-generated content, which can undermine trust in the PR industry and in the media more broadly. It is too soon to tell what guardrails—if any—will be legislated in the U.S. or internationally. However, policymakers are already moving on this issue. In the U.S., the Department of Commerce's National Telecommunications and Information Administration (NTIA) recently launched a request for comment (RFC) regarding AI accountability. In the UK, meanwhile, a recent whitepaper outlined "responsible innovation and [maintaining] public trust in this revolutionary technology.”
Enhancing, Not Replacing, Human Creativity
As AI becomes more ubiquitous, public relations practitioners will need to be aware of the benefits and caveats that innovations in AI provide. For the most effective applications of these emerging technologies, the human element will continue to be essential.
As Cision Executive Director of AI Strategy Antony Cousins told PRWeek, “AI lacks the empathy and creativity that only a human with an absolute understanding of a problem can solve, and it’s not even remotely ready to help us build and maintain the relationships that allow many of us to succeed.”
One might use an AI-backed software solution to write an outline for a case study or press release, or to generate social media imagery with the greatest potential for consumer engagement, for example. But to guarantee accuracy, any such content would still need to be reviewed, vetted and optimized by humans with subject matter expertise.
Remember that tools like ChatGPT can read your question and provide a response, but they won’t understand the context. Though it can appear you're having a human-like conversation with a chatbot, it's only providing responses based on what its existing data suggests the next response should be. ChatGPT isn't able to think on its feet or interpret ideas, nor can it go materially beyond what has already been created on a topic. For that reason, it lacks the ability to exceed a brief in the way humans can.
However, AI can help you generate an answer if you're posing the right question. For example, ask AI to simply write a press release and you'll likely end up with something routine and uninspiring. Is it a product press release? Who is the audience? And who are the main competitors? The specificity of the question, coupled with the right data training set, will lead to better results.
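The difference between a generic and a specific request can be sketched in a few lines of code. The sketch below simply assembles a detailed prompt string from the briefing details a comms team would normally hand a human writer; the helper function, product name, and field names are all hypothetical, and no particular AI product's API is assumed.

```python
# Illustrative sketch: composing a specific prompt for a generative AI tool.
# The build_press_release_prompt helper and the example brand below are
# hypothetical, not part of any real product's API.

def build_press_release_prompt(product, audience, competitors, key_message):
    """Assemble a detailed prompt from the same briefing details a
    comms team would give a human writer."""
    return (
        f"Write a product press release announcing {product}. "
        f"The target audience is {audience}. "
        f"Position it against competitors such as {', '.join(competitors)}. "
        f"Lead with this key message: {key_message}"
    )

# A bare request, likely to produce something routine and uninspiring:
generic_prompt = "Write a press release."

# A briefed request, far more likely to produce usable copy:
specific_prompt = build_press_release_prompt(
    product="the Acme CloudSync 2.0 platform",  # hypothetical brand
    audience="IT decision-makers at mid-size firms",
    competitors=["Dropbox", "Box"],
    key_message="file syncing that is twice as fast at half the cost",
)

print(specific_prompt)
```

Either prompt could be pasted into a chatbot; the point is that the second carries the product, audience, and competitive context the tool cannot infer on its own.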
No matter the use case, AI, when used responsibly and mindfully, with a guiding human hand, can empower practitioners to work smarter, not harder.
AI Definitions - Your Cheat Sheet of Need-to-Know Terms:
- Artificial Intelligence (AI): Computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
- Chatbot: A computer program designed to simulate conversation with human users, especially over the internet.
- ChatGPT: An AI language model developed by OpenAI, which is capable of generating human-like text based on the input given to it.
- DALL-E 2: An AI application that generates images from a user-input text description.
- Deepfake: AI-synthesized media that is false, such as doctored videos where one person’s head has been placed on another person’s body, or surprisingly realistic “photographs” of people who don’t exist.
- Large Language Model (LLM): An algorithm trained on a corpus of content that has been developed to produce text, respond to questions using natural language, or translate material from one language to another.
- Predictive Analytics: The use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data.
- Sentiment Analysis: The use of natural language processing, text analysis, and computational linguistics to systematically identify and quantify subjective information such as positivity or negativity.
- Synthetic Media: Artificially generated media content such as video, audio/voice, images, and text, where AI takes on part (or all) of the creative process (Oxford Forum on Synthetic Media)
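To make the Sentiment Analysis entry above concrete, here is a deliberately minimal lexicon-based sketch. The word lists are toy examples invented for illustration; production media-monitoring tools rely on trained NLP models rather than hand-picked word sets.

```python
# Toy lexicon-based sentiment scorer illustrating the Sentiment Analysis
# definition above. The POSITIVE/NEGATIVE word lists are illustrative only;
# real tools use trained natural-language models.

POSITIVE = {"great", "innovative", "love", "success", "win"}
NEGATIVE = {"fake", "bad", "fail", "misleading", "scandal"}

def sentiment_score(text):
    """Return a score in [-1, 1]: positive minus negative word counts,
    normalized by the number of sentiment-bearing words found."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Customers love the innovative launch"))        # 1.0
print(sentiment_score("Analysts call the claims misleading and fake"))  # -1.0
```

Even this toy version shows why flawed training data matters: the scorer can only recognize sentiment it was given words for, just as a model can only reflect the data it was trained on.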