The Leading in a Crisis Podcast

EP 48 Harnessing AI for Crisis Communication: a conversation with Philippe Borremans

Discover how artificial intelligence is revolutionizing crisis communication management alongside our esteemed guest, Philippe Borremans. With his extensive experience in public relations, particularly in crisis and risk communication, Philippe unveils how AI tools like ChatGPT are reshaping the landscape. He shares invaluable insights on leveraging these technologies for efficient content creation and crisis planning while stressing the importance of ethical considerations. Whether you're a seasoned professional or just starting out in the field, Philippe's practical advice on using accessible AI tools will guide you in enhancing your crisis communication strategies.

You can learn more and reach Philippe Borremans at wagthedog.io or via LinkedIn.

We'd love to hear from you. Email the show at Tom@leadinginacrisis.com.

Tom Mueller:

Hi everyone. Welcome back to the Leading in a Crisis podcast. On this podcast, we talk with crisis leaders around the world and we share lessons learned and stories from the front lines of crisis management. I'm Tom Mueller. With me today is a special guest, Philippe Borremans. Philippe is joining us from Portugal today and he is, well, Philippe, tell us a little bit about yourself. You have such a wide and varied history. How do you introduce yourself to people these days?

Philippe Borremans:

Hi Tom, thanks for having me on the show. So, yeah, my background is in public relations. I studied that at school a very long time ago and then started my PR career in an agency, Porter Novelli in Brussels, in the Brussels bubble. Originally I'm from Belgium, and I spent 10 years with IBM as a public relations manager and then moved out as a consultant. Pretty early on I loved a good crisis from time to time, and so that's my specialty now: crisis, emergency and risk communication. I'm lucky enough to have clients on both the private sector side and the public and UN agency side, so I sometimes work on product recalls and sometimes I work on epidemics and nasty things like that.

Tom Mueller:

Well, you bring a really wide variety of experience into this, and lately I know you've had a focus on artificial intelligence and trying to make sense of that for those of us who are just sort of awakening to the fact that there's this AI tool out there. I've been very interested to read a lot of your writings around AI. For those of you who may not know, Philippe does a regular newsletter called Wag the Dog where he dials in on issues with AI and crisis communications, and it's usually an amalgam of a variety of different sources of information with his own unique spin on it. So it's a very interesting read, I'm finding, Philippe. Thank you. Hey, let's just talk AI.

Tom Mueller:

I'd like to dive into this, since it's such a hot and emerging issue for everybody, really. When you think about crisis management incidents, right, and most of our listeners may be in a corporate communications job, such as you were with IBM, or an agency job like Porter Novelli, I'm figuring out, you know, what are the applications for AI in dealing with a major incident, and I know you've got thoughts on that. So if you were going to counsel a client today on how to use AI in an event that happened this afternoon, what would you tell them?

Philippe Borremans:

Well, there are many different ways of applying AI, and I think first we need to understand what we mean by AI, because there are different levels and formats of AI. When I speak to communicators, crisis communicators, emergency communicators, we usually think about generative AI. That's the ChatGPT that you have, the Claude and what have you, and those are, of course, the things that we have at least played around with, because they're online, free or paid but not expensive, or they get integrated into our working environment. If you work in a Microsoft environment, you have their own AI integrated in all your tools as well. So it's becoming, let's say, a day-to-day thing which is very much accessible. But then you have other AI systems which have been there for a long time. All our media monitoring and social media monitoring tools have been using one or the other form of AI in the backend for a long time. Crisis management systems, all the way down to operations and logistics, have one or the other form of AI, or at least automation, in there. So it's a big field.

Philippe Borremans:

But if we look at generative AI, the things that we can put our hands on very quickly and at very low scale in our workflows, then what I see is that many people immediately think about content creation. And why not? It's a good thing. These things write pretty well. You can train them to write very well, as long as you know how to talk to them; that is the prompting technique that you need to know. So that's fine, but we can go higher up.

Philippe Borremans:

I'm, for instance, using my own little AI system to create first drafts of rather complicated crisis communication plans. Or when a client asks me, well, Philippe, could you create a... So I do a lot of crisis simulation and scenario design and then rolling them out. There again, I've got another AI sidekick, as I call them, to help me with, again, a first draft. And so I think, for crisis communicators, emergency communicators and risk communicators in the context of emergencies and crises, the biggest added value of these systems, when you know how to use them, is the gain of time, the single resource that we never have enough of. And, of course, using them ethically and with transparency, based on good information, and working in a secure environment. But the time-saving aspect is incredibly powerful in the context of a crisis.

Tom Mueller:

So let's talk about the plan drafting for a moment. We had a guest on a few episodes back, Dan Smiley, who does the Tactics Meeting podcast, based out on the west coast of the US, and he was talking about how he needed to develop an AI agent and have it spit out a rough initial response plan. And he said, you know, it wasn't a final plan, but it gave me a great head start to get this done. For companies or municipalities that might want to tap into that, is that as simple as going to ChatGPT and asking for help? How does somebody get started with that?

Philippe Borremans:

Well, if you've never had this interaction with generative AI, I think that would be a good idea, taking into account, of course, a couple of rules. You never feed these systems confidential information, and, of course, you are very transparent in the use of those tools. In the EU it's the law, thanks to our EU AI Act. But to test the waters, definitely go in there, and then maybe take a couple of, not courses, but online you have very smart people explaining how to prompt and how to create really good instructions, and then you can play around. You can see a first bit of feedback, a first output of these systems, and I think most people who have never done this in a structured, good way will be very much impressed. Now, that's good to get a feel for it and to start to get an idea of what the power of AI could be. Once you get serious about it, then I advise all my clients that they need to look for a large language model, the AI system backend on which generative AI or chat systems are set up, find the one that is suitable for them and then implement that on their own servers, behind their own firewall, with their own IT specialists.

Philippe Borremans:

You want your own system. It is not, let's say, an incredibly huge project. There are very good open source LLMs that can be instructed, that you can make purpose-built. You can then train them on your own information, which I assume you will think is the correct information that you want to work with, so the hallucination part is limited. You can instruct the system what to do exactly, and then, and I know a couple of organizations which have done this already, they see a huge increase in time savings in getting these first drafts out, or even go as far as testing emergency messages. So there are a lot of applications that one could think of, but it's about starting small, taking one thing where you think, well, maybe there it can make a difference, and then testing it.

Tom Mueller:

So one of the challenges then early on is just educating the AI agent about your way of doing things. And you generally do that by just feeding it copies of your existing plans or past plans, maybe copies of your press releases, and then it will sort of interpolate all that data and once you ask it to output something, it's going to rely on that data to do that.

Philippe Borremans:

Exactly. But that, Tom, means, and that's another aspect that I see from time to time, that your plans need to be in order, that your SOPs need to be in order, that your system should run without AI. And so I think that's a nice aspect of trying to work with AI, because it forces you to look on the inside: oh yeah, that crisis plan from six months ago is not really up to date anymore; oh, that SOP, well, you know, 10% of these people have left the company. So I think it's good, because it forces at least people who want to go in that direction to turn inward and look at what we have now: is that good enough to feed the system? Because, of course, if you train it on outdated information or not optimal SOPs or whatever, then the output will be based on that old information, so you will not get the value.

Tom Mueller:

That takes us to a very old adage: garbage in, garbage out, right? Yes, yeah, interesting. It's a really good point you make about sort of spring cleaning your crisis plans and making sure things are up to date. But you could make a case, too, for dump it all in there, spit out a new plan, and now you can see, you know, here are some new things, here's where your gaps are.

Philippe Borremans:

So you could use an AI which is then connected to the web, maybe to public information and research databases, and say, look, this is my plan in my context. What are the gaps? How can it be improved? I do this often to double check. So I write plans, or do stuff using AI as well, but then I ask the AI again: now check the work, be very critical. Have we missed something? Have you overlooked something? What about this what-if scenario? It's very good at that, but that means it can, of course, check other resources outside of the organization. Otherwise you stay in your bubble.

Tom Mueller:

So when you're doing that, when you're checking it against a broader database, is that on something like ChatGPT or another language model?

Philippe Borremans:

Yeah, it depends. Different LLMs, large language models, and chat interfaces have different, I wouldn't call it specialties, but they're better at certain things. For instance, Claude from Anthropic is the best writer; we know that according to benchmarks. So if you want, you know, well-written stuff, then Claude is maybe the go-to place. Then you have ChatGPT, a generalist approach, with deep research now as well, a very powerful model. It's still one of the most powerful models out there.

Philippe Borremans:

Perplexity, for instance, is an interesting one because it was the first one which was also trained on scientific databases, so it was the first one which gave you references that you could double check, which is important.

Philippe Borremans:

So it depends on what you want to do. Generally, I would start off with Perplexity. If I'm using the publicly available ones, I would use Perplexity to do the research, so that I can double check the links and where the information comes from, and I know that I can tell Perplexity to look only for scientific papers and things like that. Then I would go over to ChatGPT to maybe brainstorm things, question stuff and have it really think things through. And if the output would then need to be, I don't know, an article or a paper or an e-book or whatever, I would use Claude as well, because it's a good writer for a first draft. And I always end with another small tool, which is InstaText, and InstaText is very good at correcting my very badly written English. My mother tongues are French and Flemish, and although I have always worked in English, I still want to be sure that I'm not making stupid mistakes in the language. And then I have to pick British English or American English, of course.

Tom Mueller:

That's quite the series of different tools that you're using, each of them AI-based, each of which brings a slightly different sort of area of focus or expertise. For the average person who wants to jump in and sort of practice or try these out, do you have to subscribe to each one of those tools to have access?

Philippe Borremans:

No, no, all of them have a free version, and if it's your first step, definitely start with the free versions. All of them are accessible in the free version. Once you see a bit of the power or the potential application, you could go for a subscription. Most of them are like $20 a month, so it's not the end of the world, but they do give you, let's say, more power: more computing power, more thinking power, more reasoning power. So you do see a difference. But for the first steps, the free versions are definitely worth checking out.

Tom Mueller:

So one of the things that concerns me as I think about that, if I wanted to drop in some crisis plans to use as a basis for developing future plans, is the confidentiality issue, right? Do I need to scrub those plans of, you know, sort of company-specific references or phone numbers before dropping them into a large language model?

Philippe Borremans:

Well, the reason you would have to do that is because your company has that rule in place. Your company has that rule in place because there's a lot of hype around AI and people think, oh my god, you know, I put the plan in and it shows up somewhere else. That is not the case; it gets split up into a zillion pieces and everything. But of course, you have your company rules.

Philippe Borremans:

It's very easy to take a plan and take out all the references: company name, telephone numbers and, of course, the names of people in the crisis team. Just keep your plan as such, edit it to take out all the confidential things, and if your company then allows you to use that, because it's a simple framework of a plan, you can work with it. The other approach is simply, when you chat with the system, to describe your organization. You don't have to use the name. You could say, I'm working for a major technology company, the geographic scope would be the US, and we're looking at certain types of potential crises, let's say a cyber attack. So you could do it like that in your prompt as well, and then it will work with that information, and you can fine-tune it so that it's very relevant to your own environment.
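
To make that concrete, a scrubbed, context-setting prompt along the lines Philippe describes might read something like the sketch below. Every detail in it is a hypothetical placeholder, not drawn from any real plan or organization.

    You are assisting a crisis communication team. I work for a major
    technology company operating mainly in the US. We are preparing for a
    potential cyber attack affecting customer data. Draft a first-draft
    holding statement. Audience: customers and media. Tone: factual and
    empathetic, no speculation. Length: under 200 words.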

Tom Mueller:

And then it will search its own existing database of other plans, other data that it has learned from already, and come back to you with something that, from what I hear, is generally a pretty good start. I have a very small experience with that, Philippe. The graphic that I have as my backdrop here I generated using AI. It was like 18 months ago, and I think it took me more than 50 different prompts to try and get this image, and it was such a frustrating process, right? So there's something you mentioned earlier about, you know, learning how to talk to them and how to write good prompts, and that's something that I think is going to become a real skill set for practitioners and people working in the crisis space. Right, the better you can do that, the more quickly you're going to engage and get meaningful results.

Philippe Borremans:

Well, there are two schools on that, because it's a topic that comes back every couple of months. You've got one school which says, in fact, your prompting skills are, or will soon be, not worth anything, because the systems will get so smart that you can just type in what you need and it pops out. And then you've got a school which says no, actually there is a system behind the structure of your prompt, because you're getting closer to the thinking language, so to say, of how these systems have been built. And in the latest debates I've seen, now with the new version of ChatGPT coming out, it's the organizations themselves, like OpenAI in the context of ChatGPT, who said no, prompting is still important. It's getting easier, but the structure is still important, because you'll get better answers. You will always get an answer; that is not the issue, the thing will answer. But if you want a good, structured answer that you can work with and that takes into account what you really want, then yes, you need to have those prompting skills, and they're still relevant today. Maybe in six months not anymore, because we know how fast these things evolve, but still, I think.

Philippe Borremans:

Apart from that, it's also an interesting exercise for yourself, knowing what you actually want as output. How does it need to be structured? What is the context? What is the audience? Because these are all the questions that you need to ask yourself if you want to write a good prompt, and for communicators those should always be your basic questions: who is the audience, what is the format, what is the context, what are the things moving around it that can be important? So apart from just the technical prompting aspect, it's a good way of organizing your thoughts as well.
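
One way to hold on to those questions is a reusable prompt skeleton with one labelled field per question. A minimal sketch, with all field values as hypothetical placeholders:

    Role: You are a crisis communication assistant.
    Context: Odour complaints near our plant; cause not yet confirmed.
    Audience: Local residents and regional media.
    Task: Draft a holding statement acknowledging the reports.
    Format: Maximum 150 words, plain language, no technical jargon.
    Constraints: No speculation about the cause; include a contact line.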

Tom Mueller:

Yeah. So those good basic practitioner skills come into play here, but applied to a new technology, trying to make sense of it. I'd like to throw a couple of ideas at you and get your thoughts on how quickly a response team today could leverage AI to do some of these things. For example, translations. A press release going out that you need to translate into two or three different languages, that's something AI could do very quickly today, right?

Philippe Borremans:

Yeah, with a human in the loop for a final check, definitely. But we know all these cases where federal agencies send out risk communication messages but forget that 30% of the population is Spanish speaking. We know these cases, we know that it goes wrong, so now there is no excuse anymore. Of course, there's always the human in the loop; I'm very much of that idea. Yes, it helps me enormously, but I'm always the last one to check, and I think that's a good approach.

Tom Mueller:

Yeah. Is there a difference between Google Translate, dropping a document in there, versus dropping it into an AI agent, or is it really the same technology?

Philippe Borremans:

Well, to be honest, I don't know, because I haven't used Google Translate for years. Now I use DeepL, which is a German translation model that uses AI, and to me that is at the level of what I need. I can actually translate full documents, PowerPoint slides, what have you, with minimal correction, and it's very impressive.
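
For teams that want to build this into a workflow rather than use the web interface, DeepL also offers an API with an official Python client. A minimal sketch, assuming the deepl package is installed and using a placeholder auth key and placeholder file names:

    # Minimal sketch: translating a short statement and a full document with
    # the official DeepL Python client (pip install deepl). The auth key and
    # file names below are placeholders.
    import deepl

    translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

    # Translate a short holding statement into French.
    result = translator.translate_text(
        "We are aware of the incident and are investigating.",
        target_lang="FR",
    )
    print(result.text)

    # Whole documents (Word, PowerPoint, PDF) can be translated as well.
    translator.translate_document_from_filepath(
        "holding_statement_en.docx",
        "holding_statement_fr.docx",
        target_lang="FR",
    )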

Tom Mueller:

And what's the name of that tool again? D-E-E-P-L, yeah. How about another task: social media monitoring. Is that something that AI is set up to help us with?

Philippe Borremans:

Yeah, I mean, the usual suspects of the social media and media monitoring platforms have all incorporated AI for a couple of years now, and of course it's getting better and better, or more powerful, but they were already looking at this technology before it became a big thing two years ago.

Philippe Borremans:

So it's already ingrained in the back end of those systems, and that will evolve and get better. Another thing, and I think that's the beauty of AI as well, is the, let's say, low-cost approach. Not everybody has a couple of thousand a month for a social media monitoring platform or even a simple media monitoring platform. Well, actually, the $20-a-month systems out there can do a very good job if you know how to prompt them and how to instruct them. You could actually upload the URLs of your online coverage, for instance, and it will give you a sentiment analysis, it will give you these things. So if you know how to tinker with that, yes, it is also a potential approach to do it yourself instead of relying on the big platforms. But of course, many companies already have a contract in place, and there the AI is just in the back end. You don't see it, but you know it's there and it does its work.

Tom Mueller:

Yeah. So if you're a sole practitioner out there working with small clients and you want to be able to monitor social media efficiently, some of us use different tools to monitor X, for example, and TweetDeck was one of those, very popular. You're saying you could just go into, you know, X's version, which is Grok now, I think, and give it some prompts and have it running a search for you over time?

Philippe Borremans:

Yeah. So many of the generative AI platforms now have something called, or close to, tasks, or repetitive actions. ChatGPT has that now, so you could actually create a task and say, look, every morning at 8, I want to get an overview based on these keywords. Now there's a big caveat: it's still very much touch and go, not always the best results. So my approach would rather be to pull things in through RSS feeds, which is a very old technology but still one of the best out there. Yeah, I mean, let's face it, old stuff really works.

Philippe Borremans:

I would pull in coverage based on keywords and then I would feed that into a gen AI system and say, look, this is my input, now you go and analyze it. The finding stuff is still a bit difficult for these things. Perplexity is now known for finding good coverage, and some recent coverage. But still, I would either rely on the professional platforms or tinker a bit more with RSS-based search, then feed that in and really prepare it for the AI and say, look, this is my input today, do the analysis and give me the results.
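
As an illustration of that workflow, pulling a keyword-based RSS feed and handing the items to a language model for sentiment analysis, here is a minimal sketch. The feed URL, company name and model name are placeholder assumptions, and it expects an OpenAI API key in the environment.

    # Minimal sketch: RSS coverage in, AI sentiment analysis out.
    # pip install feedparser openai
    import feedparser
    from openai import OpenAI

    FEED_URL = "https://news.google.com/rss/search?q=%22Acme+Corp%22"  # placeholder
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    feed = feedparser.parse(FEED_URL)
    items = [f"- {e.title} ({e.link})" for e in feed.entries[:20]]

    prompt = (
        "You are a media monitoring assistant. For each headline below, give "
        "the sentiment (positive, neutral or negative) and one line on why, "
        "then summarize the overall tone in two sentences.\n\n" + "\n".join(items)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)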

Tom Mueller:

And then it will provide you some kind of heat analysis or sentiment analysis, all these things that are possible with other platforms. Yeah, interesting. It's good to know some of those old tools are still relevant today. We've already talked about developing plans. How about in the training and scenario development areas? Do you find applications today for helping to develop scenarios or training plans? What's your vision on that?

Philippe Borremans:

So that's a big part of my work, what is called capacity building.

Philippe Borremans:

I'm running simulations, so I've trained my own virtual sidekick, and that is one that I use to get my first drafts of simulation scenarios out, or training courses and things like that.

Philippe Borremans:

Another one, and I'll put the link in the chat if you want: I created a small custom GPT, and this one is open to anyone who has a ChatGPT login. In fact, I call it the crisis role player. You open it up and it will kick off with a couple of questions, and, based on your responses to those questions, it will run a scenario for you and ask you, how do you respond, how will you react? And at the end, when you're done playing the game, so to say, you'll get some feedback. It might seem a bit crazy, but it really helps. I use it in face-to-face workshops, one, to show the potential application of AI, but also, by playing that role-playing game through an AI system, people both understand the AI and at the same time test their knowledge, or their reactivity, against some sometimes very difficult questions that the AI system is asking. And again, this is of course pre-trained, based on a knowledge base, et cetera. So there are different applications in that area.

Tom Mueller:

Yeah, yeah, so it's an online crisis gaming tool, essentially. Yeah, it's a role player, yeah.

Philippe Borremans:

It's fun to do and people like that. But it is a serious game, so to say. Yeah.

Tom Mueller:

That's really fascinating for me, Philippe, because when I first, well, when I worked in crisis training for BP for many years, I was always trying to figure out how I can test individuals. In the exercises you do, it's big groups of people; individual roles are important, but it's hard to really focus on individual people. And a tool like that actually might give you an opportunity to individually test and prompt those folks, you know, in a variety of programmable-type scenarios, right?

Philippe Borremans:

Yeah, yeah. First of all, it's a safe zone, right? I mean, people know they're interacting with an AI, so it's fine, but still, it gives you feedback. It tells you, well, you know, that first reaction you gave there was maybe a bit too early. So it really gives you feedback and it's very individual. Or you can play it in a group, but it's made for one person playing through the screen. And so, yeah, it's fun. Again, that's also the power of AI. I'm not a programmer. Although I spent 10 years at IBM, it was as a public relations manager, so sure, I've got a bit of a technical feel, or let's say an IT feel, but I'm not a programmer.

Philippe Borremans:

I never learned to code or whatever, and I can make these things. I mean, it took me maybe four hours just preparing the knowledge base and then the instructions and testing it and trialing it. So it's very low key, low cost and quickly deployed, and now it's available to all. So I gave you the link, if you want.

Tom Mueller:

Yeah, I got it, thank you. So, Philippe, to summarize in terms of AI: your best advice for companies, or even municipalities and government entities, who are trying to think about how they engage with AI and get on the front foot, what's your sort of best recommendation for getting started?

Philippe Borremans:

Well, I think, start very small. Invite a couple of people, and we're talking maybe about the crisis management team or the communicators; I'm more a crisis communications person. But invite a couple of people in a safe zone, try out the free versions with no confidential information. Just play around with it, see what it gives. Maybe follow a couple of prompting techniques on YouTube or in webinars, and then start to play with it, because keeping it away is pretty dangerous, because it will impact our work. Anyway, it's already impacting our work, and there's no way that, you know, we're going back to a world without AI, probably. So start with very basic things: play around with it, see what it can do, both from a content creation part, but also use it as a brainstorm partner, as a critic, as someone who pokes holes in your plan. But do it all in a safe environment, with no confidential information. Once you then have an idea of where it could make the biggest difference from a productivity point of view, from a speed point of view, then I would say organize what we call a sandbox environment, a little piece of the server, of course in full collaboration with the IT people that you have. Create a sandbox, meaning it's all independent, it sits there, you've got 100% control. Download an open source LLM; again, there are a couple of them which are really interesting. There is one coming out of France, Mistral, which is very powerful. It's open source, you can download it.

Philippe Borremans:

And then just begin with a very small, very specific use case, not the big "oh, we're going to create a crisis management AI system." No, no, no. A very specific, small use case. Test it out, try to run it, see how it works, correct it and then build on top of that. I think that's the best way forward. And I do believe it's also an important way forward, because once we go into private companies, governments, local, national, what have you, you need to have that control. And let's not forget something else: in Europe, we have the EU AI Act. It's the law. You cannot just play around and do whatever you want, and we see the first signs now in the US as well, where states are implementing at least guardrails, right? So you'll have to be transparent in what you're doing, and having that 100% control is, of course, the best approach, but it also allows you to work in a safe environment, in a sandbox environment.
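
To make the sandbox idea concrete, here is a minimal sketch of querying an open-source model such as Mistral hosted entirely on your own machine, in this case served by Ollama on its default local endpoint, so nothing leaves your environment. It assumes Ollama is installed, the model has been pulled with "ollama pull mistral", and the service is listening on localhost; the scenario text is a hypothetical placeholder.

    # Minimal sketch: prompt a locally hosted open-source model (Mistral via
    # Ollama) so no data leaves the sandbox. Assumes the Ollama service is
    # running on its default port.
    import requests

    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "prompt": (
                "Draft a first, 150-word internal alert about a suspected "
                "data breach at a mid-sized utility company. Plain language, "
                "no speculation."
            ),
            "stream": False,  # return one complete response instead of chunks
        },
        timeout=120,
    )
    print(response.json()["response"])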

Tom Mueller:

Thanks for being very specific and keeping it simple for those of us who are really trying to figure it out. Hey, if our listeners want to get a hold of you, to tap into your deep well of advice, what's the best way for them to do that?

Philippe Borremans:

Well, I'm very open to connections on LinkedIn. You can find me, Philippe Borremans, on LinkedIn. And then, I think, the most output I do for the moment is through my weekly newsletter, which is Wag the Dog, and that's wagthedog.io; the dot-com was taken. That's where I publish every week an article on risk, emergency and crisis comms at the intersection with technology and AI, and there is a podcast version, but that's my two AI avatars, male and female, who in fact narrate my article automatically. That's the best place to find me.

Tom Mueller:

Yeah, all right, and we will include some of those details in the podcast notes here. Terrific, Philippe. Thanks so much for joining us. It's been a really fun conversation. We look forward to having you back again soon.

Philippe Borremans:

Thank you, Tom. Thanks for the invitation. Bye-bye.

Tom Mueller:

A quick publicity note here for the Leading in a Crisis podcast. We've gotten some recognition recently from the folks over at Feedspot, who have ranked this podcast as number 13 in the state of Texas in their listing of management podcasts. So congratulations to the team here at the Leading in a Crisis podcast for that ranking and that recognition by the good folks over at Feedspot. Feedspot, if you're not familiar, is a company that aggregates global media outlets and content creators and makes those lists available for others who may be looking for content or to engage with creators. So thanks to Feedspot for that recognition. And that's going to do it for this episode of the Leading in a Crisis podcast. Thanks for joining us. We'll see you again soon.
