The Leading in a Crisis Podcast

EP 36 Artificial Intelligence for crisis planning and support: Entrepreneur Justin Snair and Preppr.ai



Can AI really revolutionize crisis management? Today, we're joined by Justin Snair, the brilliant mind behind Preppr.ai, to discuss how artificial intelligence is transforming the landscape of emergency preparedness. From his roots in the Marine Corps to his innovative work with under-resourced organizations, Justin shares his unique journey and insights on making planning and crisis exercises more accessible and affordable. We explore how AI could help small organizations and nonprofits (and all of us, really) develop crisis plans and design exercises, with AI facilitating and automating those tasks.

We'll also tackle some thought-provoking topics, such as the personalization of emergency management through AI - using chatbots to respond to public inquiries during an emergency - and using AI for more mundane tasks like sorting through your inbox. 

Justin and I dig deep into the biases inherent in AI development, the sustainability challenges, and the national security implications of relying on foreign-produced semiconductors. This episode underscores the need for responsible AI development to ensure accurate information and sustainable solutions during crises. Don't miss this compelling conversation on the future of crisis management and the role AI could play in it.

We'd love to hear from you. Email the show at Tom@leadinginacrisis.com.

Tom Mueller:

Hi everyone and welcome back to the Leading in a Crisis podcast. It's great to have you with us once again today. On this podcast, we talk all things crisis management and we deliver that through interviews, storytelling and lessons learned from experienced crisis leaders. I'm Tom Mueller. On today's podcast, we're going to continue our conversation around the use of AI in emergency management. The recording we're sharing with you today was actually just a conversation I had with our guest, Justin Snair, an entrepreneur who started a company called Preppr.ai, which is focused on helping smaller organizations and nonprofits develop crisis exercises and crisis communications plans. It was really just an introductory conversation he and I were having, but he recorded it, as he records everything, and he feeds it to his AI to help it learn his speaking style and how he approaches conversations. Since he had recorded it, I asked if we could use it for a podcast episode, and he graciously agreed. So this is our conversation about Preppr.ai with Justin Snair. Let's join it now.

Justin Snair:

I have definitely leaned into this, originally just trying to understand AI. And then, I am known for asking inconvenient questions.

Justin Snair:

Inconvenient questions couched in curiosity, but also a frustration with the status quo and inertia - with the saying, "this is how we've always done it." I was in the Marine Corps, and I was terrible at being in the Marine Corps because I asked a lot of "why are we doing it this way?" I found myself on the Syrian border because of that question, around 2004. It's a vein running through everything, even just starting out in preparedness and emergency management. After the Marine Corps, I would learn how things were done, instantly reject it, and then try to leverage whatever I could to fix what I saw as wrong. Maybe it was a bit naive when I was 27, but now I'm 41, so maybe it's more palatable at this point.

Justin Snair:

I have a background in developing some technologies within government, and when AI was coming around, I spent probably about a year learning about it. I've done about 250 phone calls in the past 12 months talking to AI developers, practitioners, academics, policymakers - basically everyone - to try to understand what AI means for my own work. Then I started getting mouthy about it, just started sharing what I was observing, and that has led to a tech startup building specific solutions and, through my consulting work, to clients hiring me and my developers to provide really custom solutions. A lot of it is low-risk emergency management work - low risk, which is where I think AI is appropriate right now.

Justin Snair:

So my work right now within the tech startup is focused on developing an approximation of a master exercise practitioner - automating the process and documentation, making it more inclusive, lowering the bar for expertise by having the AI provide the expertise, and just making it more accessible and more affordable. As a consultant, I would charge a lot to design tabletop exercises, and I'm reading the tea leaves: tabletop exercises will continue, but I don't know if the way they're currently designed will. So what we're working on right now is rethinking how exercises are designed, with AI integrated into the process.

Tom Mueller:

Are you going to put us all out of business here? Is that the idea?

Justin Snair:

No, not even myself. What I'm trying to do is very specific. I don't see the workforce shifting dramatically. I'm trying to reach those who would never have hired someone like me or you, but who have an obligation to design exercises - some very small, under-resourced communities, maybe small healthcare entities like skilled nursing facilities or long-term care facilities that just don't have emergency management experience on staff. They're never going to pay 50 grand for me to design an exercise. They're usually scrambling two weeks before an audit, calling a neighbor to ask, "Can I join in on something you've already done?" or reaching for a template.

Justin Snair:

There are folks who I don't think are ever going to hire me, so I want to give them a utility they can use - something that gets them from a template to something personalized, that pushes them, that guides the process, that removes the jargon and makes it really accessible. And then, on the other end of it, I want to scale people like us. I've talked to a lot of emergency management folks who say they wish they could do more in more places, be more inclusive, have higher fidelity in their exercises or their work. But to do that, you have to be scalable to a degree, and we're not infinitely scalable or infinitely resourced. To be in more places and be more effective, we need tools. So what I'm trying to do is provide a shovel. I don't want to dig the holes, and I don't want to stop people from digging holes. I want to give you a real shovel instead of one of those little kiddie shovels - something more efficient, more effective, that lets everyone scale.

Justin Snair:

And then, if we do that, we're taking a process that's data rich but not well accessed. If we shift things from a paper-heavy, pre-digital process and get them into a platform, what can we learn and share by, at the very least, digitizing some of the process - in some ways not even going full AI? If we just digitize some of the work, what could we do with sharing learnings and creating networks between folks? And I'm not talking about high-risk material - I don't want to share AARs and threat assessments. I want to share exercises and best practices and things like that, and it's just not done through a system right now. It's done through an upload to the federal government, and then you hope. So that's one of the things we're working on right now in applied AI: the approximation of an exercise designer. And then I have more coming - something that helps with planning, something that helps with conducting the exercises, and something that helps with after-action reviews and reports.

Tom Mueller:

So you're developing sort of separate AI tools for each of those pieces?

Justin Snair:

Yeah, it'll be a suite. We're starting with exercises because I view them as the lowest risk. On a spectrum for AI between possibilities and creativity on one end and precision on the other, exercises are a place where it's safe to be wrong. They're games. It's not a question of being 100% right or wrong; it's whether the result is high enough quality.

Justin Snair:

The tool we're making now is a co-pilot for that process. It's not an easy button - I reject the whole concept of an easy button for this. It co-pilots the process and pushes folks along. Even master exercise practitioners who have used it said it got them thinking in different ways. And there are other functions we're adding that will let users upload their history - past documents, exercises, AARs, plans, any data they want. The first step of what I would do as a consultant is a really solid discovery process, trying to understand the client, but it takes so much time. Now, with the tools we're making, I can help a user, particularly a consultant, rapidly understand an organization, draw insights, see trends, and have that incorporated into the exercise design process automatically.
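
To make that automated "discovery" step concrete, here is a minimal sketch of how an organization's uploaded history might be summarized into a profile that seeds exercise design. The model name, prompt wording, and file layout are illustrative assumptions, not Preppr.ai's actual implementation.

```python
# A minimal sketch of automated discovery: summarize uploaded plans, past
# exercises, and AARs into a profile that can seed exercise design.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_org_profile(upload_dir: str) -> str:
    """Concatenate plain-text uploads and ask the model for a discovery summary."""
    docs = "\n\n".join(p.read_text() for p in sorted(Path(upload_dir).glob("*.txt")))
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a preparedness consultant doing discovery. Summarize the "
                    "organization's documents into hazards, capability gaps, and "
                    "recurring themes to inform tabletop exercise design."
                ),
            },
            {"role": "user", "content": docs[:100_000]},  # crude context-length cap
        ],
    )
    return response.choices[0].message.content


# profile = build_org_profile("uploads/")  # feed this into the exercise design step
```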

Tom Mueller:

So, Justin, are you developing a proprietary AI, then, or are you building off of ChatGPT?

Justin Snair:

Yeah, the model we're using is OpenAI's ChatGPT Turbo. The model is interchangeable, so at some point we'll likely shift to an open-source model like Llama or something else and self-host, for a lot of reasons, but right now OpenAI's model is the industry leader. Its costs are kind of high and hard to control, so we have to make choices around how we manage users so they don't just turn on the gas and let it run. But to build our own model - no. We're wrapping something around the model. On top of that model we've done months and months of additional knowledge-base work, fine-tuning, and rulemaking, which is really important for our uses.

Justin Snair:

A lot of it is examples that aren't included in the model's base training, but really it's the rules. When government or emergency managers use Preppr to design an exercise, there's very little room for bias or for things that would be perceived as government behaving poorly. Right now, if you ask ChatGPT to design an exercise or tell you about a terrorist attack, it will sometimes automatically assign a religious ideology to it. So we've written a bunch of rules - our bar of "do no harm" - that say Preppr can't automatically assign a political or religious ideology to threat actors. There are cases where I'm positive the data supports that certain groups are probably the ones perpetuating it, but I don't want Preppr assigning these things automatically. So there are a lot of rules in place that ChatGPT or Claude or any of the base models just don't care about, to make it safer for government use. There's been a lot of that work.
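
To illustrate the kind of rule layer Justin describes, here is a minimal sketch of "do no harm" constraints wrapped around a general-purpose model as system-prompt rules. The rule wording and model name are assumptions for illustration, not Preppr.ai's own rules.

```python
# A sketch of a guardrail layer: every request passes through fixed rules
# before the base model generates an exercise.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUARDRAIL_RULES = """
You design emergency-preparedness exercises for government users.
- Never assign a political or religious ideology to a hypothetical threat actor
  unless the user explicitly specifies one.
- Avoid generalizations about demographic, ethnic, or faith communities.
- When an adversary is needed, describe it in capability-based terms
  (for example, "an armed intruder" or "a coordinated cyber actor").
"""


def design_exercise(request: str) -> str:
    """Pass every user request through the rule layer before the model answers."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model
        messages=[
            {"role": "system", "content": GUARDRAIL_RULES},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content


print(design_exercise("Draft a tabletop scenario about a bomb threat at a town hall."))
```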

Tom Mueller:

Okay, yeah, there'd be quite a challenge in doing that, I guess, and a bit of frustration in just trying to figure out where all the landmines are within that model, and then designing your rules around that.

Justin Snair:

And I'm a white, middle-aged dude of privilege, so me being the arbiter of what is or isn't an appropriate rule - it's a starting point, but I have to acknowledge it: okay, white dude, why are you the one writing rules about this? We see a landmine in our own approach. So we're trying to bring in external perspectives. I have the best intentions, but intention isn't enough, so we're trying to do more.

Tom Mueller:

Yeah, well, some would say your experience and expertise are enough to rule out other biases and whatnot. And, you know, God bless you for going the extra mile.

Justin Snair:

We're trying.

Justin Snair:

If a government is using this tool and it's open on a desk - and right now you can talk to it, you just hold down the space bar and dictate to it as a participant in the planning committee - and it automatically generates something that's perceived as biased or perpetuating a generalization, that's not survivable. It doesn't matter if the AI made it or not. It's a tool you're using, and that nuance will be lost on people.

Justin Snair:

I'm really into AI when it's done responsibly, and for me, responsibly using AI requires us to evaluate what harm is - harms on a spectrum, from privilege to stuff I can't even anticipate. So we're being really deliberate in defining harm. Our original vision statement was "use AI with no harm," and then we very intentionally changed it to "use AI with as little harm as possible," because even the models themselves are environmental disasters in the making; they're water hogs. So we're being really deliberate with what we say. Originally I said I wanted to do no harm, and then I realized, well, then I should go back to rocks and sticks. And harm is in the perception of the receiver, right? There's just no way to predict all of those kinds of things.

Tom Mueller:

You're definitely on the cutting edge here. It sounds like you're mostly self-taught in doing this.

Justin Snair:

And I have good developers. I know just enough - I'm a disaster researcher and a technologist; I have visions for what technology could be and I know the practice. And I've always found myself being a generalist; I'm an expert in nothing.

Justin Snair:

I've worked all the way up to the national level in policy and research as a generalist - in between practitioners, researchers, technology developers, all of those - just asking questions and then taking risks. I take risks, and I like activating things even if they fail. The effort is worth it for the knowledge and the code that now sit behind Preppr.

Tom Mueller:

Well, you're obviously deeply into AI and the coding and all of that. Do you see a potential for AI then in an actual incident response, you know, as part of an IMT?

Justin Snair:

Yeah, I think that's the direction we're going, but a couple of things have to happen first. AI needs to mature more. It may be technologically ready right now, but we have to know it's ready and prove it's ready for high-risk endeavors. There's no margin for "sorry, the AI messed up, someone died" - that's not survivable. So I think the tech has to marinate a bit within emergency management, whether it's ready or not. It needs time, people need to gain some trust in it, and it needs a track record that it just doesn't currently have.

Justin Snair:

I think for low-risk areas within disaster response, it could help. I think we're headed toward co-pilots or chiefs of staff for everyone. Say, for you to be a better crisis manager - maybe 50% of your world is stuff that could be automated, where if there was a mistake, no one's going to die. It might mess up your calendar. It's a low-risk situation. But how much of your work could be franchised out a bit to an assistant, right?

Justin Snair:

And if you franchised out some of that low-risk work, what would it let you do that you can't currently do - the high-risk work, the things that need your full attention? There's this push now within the industry to make these chiefs of staff for everyone: chiefs of staff that know you, are personalized to you, protect you from the BS, organize you. When every single person, especially in emergency management, has a chief of staff - imagine that. And none of that is technically far off. It's not that far.

Tom Mueller:

Does that include managing your inbox for you?

Justin Snair:

Everything - making sense of it. And I think there's going to be an imperative to do this, not just because we want to save time and do better work. We're going to be faced with people outside our industry using these tools against us. We're going to be bombarded with AI-generated junk, and to make sense of it, we're going to need a filter - and the filter can't be a Gmail filter. It's going to have to be something that knows you, because it has to know what to let in and what to keep out. It can't just be a simple algorithm filtering stuff out of your inbox. It has to make sense of it.
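
As a rough illustration of that personalized filter, here is a minimal, rule-based stand-in. A real "chief of staff" would use a model that has learned the user; the profile fields, example addresses, and scoring below are illustrative assumptions only.

```python
# A crude stand-in for a personalized inbox filter that "knows" the user.
from dataclasses import dataclass


@dataclass
class UserProfile:
    role: str
    priorities: list[str]   # topics that should always get through
    known_noise: list[str]  # senders that have historically been junk


def triage(subject: str, sender: str, profile: UserProfile) -> str:
    """Decide what to let in now, what to batch weekly, and what to hold."""
    text = subject.lower()
    if any(topic.lower() in text for topic in profile.priorities):
        return "urgent"          # e.g., the town manager saying "this is on fire"
    if sender in profile.known_noise:
        return "weekly-digest"   # let the repeat nonsense in once a week
    return "review-later"


profile = UserProfile(
    role="emergency manager",
    priorities=["evacuation", "wildfire", "city council"],
    known_noise=["newsletter@vendor.example.com"],
)
print(triage("Wildfire evacuation order pending", "ops@town.example.gov", profile))
```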

Tom Mueller:

And to make sense of it, it has to know you to some degree, which means you've got to train it, you've got to feed it a lot of information.

Justin Snair:

Tons. I have a rule: always be recording - basically recording calls and everything. Over the past year I've generated probably five or six thousand pages of transcripts of me talking, me and my perspectives. I'm using it to train AI agents to help people. They can talk to it, basically query it to figure out what I'd say. Generally it sounds like me.

Justin Snair:

I think the best thing everyone could do right now is start recording things, writing stuff down, getting it out of your brain, because you already have a ton of data wrapped around you through your regular tech use. It'll eventually read all your email, understand your voicemails, you'll be able to dictate to it, it'll be present in the room - it's going to have access to all that. But your perspectives, your opinions - a lot of people don't spend much of their time talking on the phone or writing, right? So how does an AI learn you and your particular perspectives without that information? Always record as much as you can.

Tom Mueller:

Today we think of that in terms of the algorithms, right? If you're on Amazon, Facebook, Instagram - wherever you are - the algorithms are tracking you.

Justin Snair:

Yeah, almost against us, right. And there's a project we're trying to get funded around flipping targeted advertising through social media and the internet - flipping it back and using it for good. Everything that's used against us to sell us sandals and Coca-Cola could be used to learn about an audience segment and give them targeted messaging in a crisis. You could use it for good; it's just understanding how to do it. And with AI combined with that technology, personalization is the biggest thing - personalization in education, personalization in emergency management. That's what AI is offering now: the capacity to personalize. In our own Preppr work, my biggest push in the MVP is how we personalize to the user, so when Preppr talks to them, it knows their history, their preferences, even their language of preference and location. How do I personalize it so it's like that perfect consultant who just spent days reading about the client?
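
One way to picture that personalization is a stored client profile rendered into the assistant's context so it "remembers" history, preferences, language, and location. This is a minimal sketch; the field names, example client, and wording are illustrative assumptions, not Preppr.ai's data model.

```python
# A sketch of a client profile rendered into extra prompt context.
from dataclasses import dataclass, field


@dataclass
class ClientProfile:
    organization: str
    location: str
    preferred_language: str
    past_exercises: list[str] = field(default_factory=list)


def personalization_context(profile: ClientProfile) -> str:
    """Render the profile as extra system-prompt context for the assistant."""
    history = "; ".join(profile.past_exercises) or "none on record"
    return (
        f"The user works for {profile.organization} in {profile.location}. "
        f"Respond in {profile.preferred_language}. "
        f"Past exercises: {history}. Build on this history rather than starting cold."
    )


profile = ClientProfile(
    organization="Riverside Skilled Nursing Facility",  # hypothetical client
    location="California",
    preferred_language="Spanish",
    past_exercises=["2023 wildfire evacuation tabletop"],
)
print(personalization_context(profile))
```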

Tom Mueller:

Most people are just going to be terrified of this. We fight today to minimize the data we share with the algorithms, right? Although I think most of us have probably given up on that - the ship has sailed.

Justin Snair:

Right now, all of our data is being vacuumed up and used against us - to sell us crap all the time - and we can't stop it. If we want to interface with modern technology, we're just bombarded with this garbage. What if you had a technology that, instead of learning about you to sell you crap, was learning about you to protect you from that crap? If it knows that I like running shoes and I don't like mountain bikes, and I was getting targeted with all this stuff, my co-pilot would let in the running shoe ads and keep out the junk.

Justin Snair:

I don't want mountain bikes. For an emergency manager, if it learns them - maybe it reads my emails, and if it's the same person emailing over and over and it's consistently been nonsense, maybe let that in once a week, right? If it's the town manager or the city council saying "this is on fire," let it in. It's learning you. Right now, the reason people resist handing over all this data is that it feels like it's being used against us. If we can make sense of all this information for our benefit, I think it could revolutionize our own work. But in some ways, like I said earlier, the promise of AI is that it would give access to expertise to folks who don't currently have it.

Justin Snair:

I'm not saying the AI is perfect. It's an approximation. It's not a master exercise practitioner, and it's not going to be a planner. But this is the thing: never start with a blank page. That's what AI is offering now. If these organizations or community groups can get access to tools that let them finally take more ownership over their own preparedness - without having to pay for it, without having to go to the emergency manager and expect it from government...

Justin Snair:

Something I'm really focused on right now is how we remove barriers to community-based, participatory preparedness. Money, expertise, time, and who gets included or not - those are the barriers we've acknowledged, and no matter what we've thrown at them, people don't come. They don't make personal plans, they don't necessarily make organizational preparedness plans, they're not running their own exercises. They may not even come to an exercise that's free unless they have a compliance obligation to do so. How do we get these groups that are really on the front lines of response to take more ownership over preparing? Right now, if they chose to, they'd hire someone like me - and they just don't. It's not affordable.

Justin Snair:

So if you can remove time and cost and scale expertise, would you get better engagement from community groups? Could a faith-based organization do more with preparedness? Could a school - if a principal could be an exercise designer, what would that mean? That's what we're endeavoring toward through these tools, and we want to conduct some research around it. Maybe Preppr doesn't survive, but if the AI tools that are going to be developed better engage community groups, what could that mean?

Tom Mueller:

Is there anything in particular that scares you about the technology?

Justin Snair:

A lot, I mean. Starting on one end: the majority of models that are available to consumers, or even to organizations, are being made by people who look like me.

Justin Snair:

They're being made by a very small subset of the population that is largely privileged and has a limited perspective - I'm not saying it's not a valid perspective, but it's limited - and they're making decisions around how these models are trained. They're getting better at trying to remove bias and be more inclusive, but that's scary. The technology is being made by just a few folks, really.

Tom Mueller:

Can I push back on you for a second? We've seen the recent malfunctions with ChatGPT, with people asking it to create images of the founding fathers of the United States of America.

Justin Snair:

Right.

Tom Mueller:

It's coming up with, you know, brown-skinned and Asian people for those things. So that sort of points to the opposite of what you're saying there.

Justin Snair:

They've overcorrected - some of these models have. Originally, a year and a half ago, I could get it to do a bunch of stuff that was useful but also demonstrated bias, and now a lot of that's been stripped away. Those organizations - $90 billion companies - are trying to mitigate risk, so they're removing all these capabilities. A lot of people say that ChatGPT has been effectively neutered, and what that's going to eventually do is force people to migrate to decentralized, open-source models. But honestly, that's good and bad.

Justin Snair:

If OpenAI had come out as a perfect tool, with all the money invested in it and everyone recognizing it as perfect, would we have all these competitors or open-source models popping up? People need a differentiator, right? You need a Goliath, so in that way it's good. Eventually all my stuff will probably go open source. But then some clients - especially government - will question the safety of open source.

Justin Snair:

But yeah, so that's who's training it and who's paying for it. I always worry about who's building it. Is it sustainable? Mark Zuckerberg was on something yesterday talking about how, eventually, you would basically need nuclear power plants to power the training of these models. That's mind-blowing. It's not sustainable.

Justin Snair:

OpenAI drinks Olympic swimming pools' worth of water every day just to keep its systems cool, when there's not enough water in some places. It's not sustainable as a technology, and it's actually counter to most of what we're saying we should be doing - consuming less, being more sustainable. And then there's the national security piece. A lot of our technological resources, particularly chips and semiconductors, are going toward these models now. That's taking away from a lot of other things, and the United States does not produce its own semiconductors and chips sufficiently to be sustainable. It's a security problem. So there's a bunch that I worry about.

Justin Snair:

And then there's irresponsible use, particularly by government - I really worry about that. There's an urgency now to meet demands and changing expectations, and no room for failure. We can't fail as emergency managers. People are starting to turn to tools, and the tool they're turning to is AI. I have several examples of governments developing AI chat assistants that are poorly deployed, not trustworthy, and misinform citizens. They'll say they're just used for routine information, and I'm like, sure - but if you socialize an assistant as a source of information during blue-sky times, people will go to it during an emergency, and it's not sufficiently trained to answer those questions. I work with a client who wants to build chatbots to help people understand wildfire, and I'm like, if that chatbot ever says "don't evacuate," you're done. You are done. So irresponsible deployment by government isn't just a PR problem, especially in our space. I've seen countless chatbots being developed, and almost never is the crisis manager or the emergency manager the first one involved.

Justin Snair:

They'll say, "this thing is able to answer questions about our knowledge base" - but that knowledge base is our plans and who to call, and we haven't sufficiently addressed evacuation. There's an art to it. What happens when a citizen asks that question and the bot doesn't say, "I don't have the answer to that, go talk to Bob"? It could just give you the theory of evacuation, right? I really worry about that - about poorly deployed assistants. You've seen it: in New York City they had one that was giving misinformation around business law, telling people to do illegal things. I was actually in New York City at an event for public health innovation, and the director of the New York City agency involved with developing AI tools for the city was there. It was the day after the New York City earthquake, and I knew the chatbot would not know there had been an earthquake the day before.

Justin Snair:

It wasn't going to know that. But live, in front of him - he was on stage - we asked the question: hey, we just went to your New York City chatbot and asked, has New York City ever had an earthquake? The chatbot said absolutely not, there's no problem with earthquakes in New York City. I know why it doesn't have the right answer. But imagine someone going there and asking, "what do I do about earthquakes now?" and it says, no, you don't have to worry about earthquakes, there's never an earthquake in New York City.

Justin Snair:

And that's an official message coming out of New York City government now, right? And there's no disclaimer saying this thing will lie to you. So government use worries me a lot - a whole lot. I spend a lot of time talking to potential government users, giving keynotes, doing workshops, doing demos, trying to get them to understand that the existential threat with AI isn't some non-state or state actor using it against us. It's potentially us - poorly deployed technologies that we just don't think through. I really worry about that.
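
The failure mode Justin describes - a public-facing bot improvising instead of saying "I don't have the answer to that, go talk to Bob" - can be made concrete with a simple out-of-scope fallback. This is a minimal sketch under assumed data; the knowledge base entries and contact line are illustrative, not any agency's actual system.

```python
# A sketch of an out-of-scope fallback: answer only from vetted guidance,
# otherwise redirect to a human instead of improvising.
KNOWLEDGE_BASE = {
    "flood": "Move to higher ground and follow posted evacuation routes.",
    "shelter": "Emergency shelters are listed on the county emergency management site.",
}

FALLBACK = (
    "I don't have vetted guidance on that. Please contact the Emergency Management "
    "Office directly for current instructions."
)


def answer(question: str) -> str:
    """Answer only from the vetted knowledge base; otherwise redirect to a human."""
    text = question.lower()
    for topic, guidance in KNOWLEDGE_BASE.items():
        if topic in text:
            return guidance
    return FALLBACK  # never improvise on life-safety questions like evacuation


print(answer("Do I need to worry about earthquakes?"))  # returns the fallback
```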

Tom Mueller:

And that's going to do it for this episode of the Leading in a Crisis podcast. Thank you once again for joining us. It's always great to have you with us. If you like what you're hearing, please like and subscribe to the podcast and, hey, tell your friends about us as well. We'll see you again soon for another episode of the Leading in a Crisis podcast.
