Paradox

A reputable human systems engineer and PhD candidate, Lisa Flynn has a background that encompasses launching technology startups and C-suite executive roles. Her expertise spans information systems, business models, psychology, marketing, and entrepreneurship, all foundational to cognitive security advancements.

We examined the dual-edged nature of AI, addressing both its potential for tremendous advancements and its capacity to facilitate misinformation and disinformation.

TIMESTAMPS:
00:16 – Navigating the AI Paradox: Innovation and Danger
07:52 – From Tech Entrepreneur to Anti-Trafficking Advocate
12:17 – AI Agents Compete Against Human Social Engineers at DEF CON
19:47 – Innovative Approaches to Cybersecurity Education and Workforce Development
26:51 – Combating Deepfake Misinformation in an Increasingly Sophisticated Landscape
31:36 – AI’s Impact on Jobs and Cybersecurity
38:16 – ConnectCon: A Collaborative Cybersecurity Conference Focused on Human-Centered Solutions
41:18 – Exploring Unique Bars and Cybersecurity-Themed Drinks in Vegas

SYMLINKS
LinkedIn (personal): https://www.linkedin.com/in/lisaflynncatalyst/
ConnectCon: https://www.connectcon.world/

This episode has been automatically transcribed by AI; please excuse any typos or grammatical errors.

Chris: Lisa Flynn is a human systems engineer and PhD candidate renowned for her innovative approach to utilizing intelligence augmentation for rapid, systemic improvement in cybersecurity workforce development, with a foundation deeply rooted in disruptive technology. Her expertise ranges from launching technology startups to serving as a repeat C-suite paratrooper, all underpinned by a profound understanding of information systems, business models, psychology, marketing, and entrepreneurship, important building blocks of the modern cognitive security discipline.

Chris: Lisa, thanks for stopping by Barcode.

Lisa: Chris, it is so good to be here. I feel like we’ve been talking about this for a year, man.

Chris: Yeah. Yeah, we have been. I think last time I saw you in person, I was interviewing you for inhuman, and you provided one of the most legendary sound bites I’ve ever heard.

Lisa: I’m always happy to help in that regard, my friend. We’ll save it for the promos, though, huh?

Chris: We’ll save that for the promos. But let’s just say that, in other words, you said time’s ticking, right? And I think it was in relation to the inevitable point in time where AI will take over the world. Has your stance on that changed since we had that conversation? That was a year ago. So within the past year, how has your perspective changed as AI is taking over the world today?

Lisa: Yeah, great question. So, tick-tock, we’re going faster than ever. So I stand behind my earlier statement, and I think what people need to realize is that we have a perception problem. Half of, let’s just talk about our country right now, half is terrified of Skynet and the machines taking over the world. And so we’ve got a lot of people in denial with their heads buried directly in the sand.

Lisa: While in the meantime, we’ve got a good percentage of people on this planet that are moving forward with AI as fast as they can. And they don’t care about the rules, they don’t care about the regulations. They’re just trying to smash, and they’re doing it. This is still nascent technology. It’s still very new. And while we’re trying to put in guardrails, the reality is we’re all still playing around with this and we don’t really know what its full capabilities are. So I can tell you that I feel like I have a very split life. Like, when I’m talking to people out, just, like, in general, let’s call them normal people, right?

Lisa: And they’re like, is it as bad as I think? And I’m like, oh, no, it’s not. It’s actually much, much worse. They don’t like that very much. So it’s something that I really feel strongly about. We need to be looking into it more. We need to be educating ourselves further. And it provides so much help at this point. There are so many autonomous tasks that we do over and over and over again that really add to that mental load we’re all saying we can’t take anymore.

Lisa: So I’m like, the cavalry has arrived. And we’re going, no, I don’t really. You know, that seems scary to me. And it could go off the rails, and it could, but it’s absolutely going to go off the rails if we don’t, as a whole and as a people, really embrace it. And the thing is, like, help me help you. I really feel like we’re at that point where if we were less prohibitive and more permissive and curious, it could really be a game changer right now. You know, we could be having defensive agents fighting just as hard as, you know, all the trolls that have been coming after us, you know, at least since 2016.

Chris: Yeah, yeah. I think with each day that passes, it does become a little bit more confusing where you have to cut through that noise to really understand what it’s about.

Lisa: Exactly. And we’re really. I spend a lot of my time managing expectations. Right. And just kind of approaching gently and spewing facts as much as I can to, you know, I guess, demystify and make the AI less scary to people. And I will tell you just one quick tip. Like, the way that I’ve done this the most to get people involved is I’ve asked them to use their AI to help them meal plan, right? And they’re just like, oh, my God, it can, you know. And I kind of just tell them the prompt that I give it. And with that, their eyes light up and they’re like, oh, I get how this can be really different, you know?

Lisa: You know, we’ve had AI for a while. We’ve had Siri, we’ve had, you know, Alexa, all of that, but that’s dumb AI, right? And now with generative AI on the scene, we have the smart AI. And I think more people should play around with it; even people in cyber, even, you know, the OG tech guys, I feel like, are also not really wanting to get into it. So, you know, I’m just beating the drum to say, experiment, get curious, and, you know, find out a little bit more for yourself. That’s gonna make you not only feel better about what’s going on, but will really, I mean, significantly impact your day-to-day life. It just saves hours and hours and hours, and a lot of the mistakes that, as humans, we make when we get tired, right? And we’re all tired right now, let’s be honest.

Chris: That’s true. So tell us a little bit about your background and how you ultimately landed with a focus on AI and security.

Lisa: So I came to the cyber and AI family basically via technology companies and, more specifically, via counterterrorism and anti-trafficking. So I had been a startup entrepreneur, call it a serial entrepreneur, just over and over. I think it’s something I was born with. It’s sort of a disorder. So I had several venture-backed companies. I bootstrapped a couple, all technology plays. They ranged from, I actually had a children’s photography studio when digital photography first came out, and had created kind of an Amazon Mechanical Turk-like post-production editing program, essentially software that would allow me to crowdsource content editing. And that was a really big deal back in 2009-ish, right? ’08, ’09, that era.

Lisa: So I actually franchised one of my businesses. I had a couple of others. I was a C-suite paratrooper, so I was on a lot of other startup teams for SaaS companies and apps and kind of all the things. I grew up in that, in the technology boom, you know, the second one, right? And then I was in marketing when emerging media became a thing, where before we only had radio, TV, and print, you know, billboards. That was basically how we did marketing.

Lisa: And then in, like, 2000, in that era of the marketing revolution, everything changed. And those who understood and could work with technology were really the ones that, you know, just shot out of the gates at that point. So it was through marketing that I realized I was an entrepreneur, and through being an entrepreneur that I realized I was an engineer. And through the pandemic, I was actually working for a company, an impact company, that sold pajamas and loungewear here in the United States to provide employable skills training for women in India who would otherwise have no choice for dignified employment and would be forced to be sex slaves, essentially.

Lisa: So we went in, we sold things here to power that employable skills training and workforce development pipeline, to really fight fast fashion, you know, to create a virtuous supply chain. And so that’s where I was, trying to really strengthen that virtuous supply chain in India, when the pandemic hit. So I knew that I was not going to be able to do that work anymore, because you really need to be able to be on the ground, you know, and understand what’s going on. And with the pandemic, that just wasn’t possible. So, you know, smack in the middle of my life, I had this opportunity where I found myself without a job, and I was the CEO of the company. But the right decision was, you know, to really shutter it and just take care of different aspects of what was going on there. So I had this, you know, moment to say, what do I want to do with the rest of my one wild and precious life?

Lisa: And I knew that I couldn’t leave the women and children who were impacted by human trafficking. And so I thought, I’m going to need to skill up a little bit more, right? My background was in marketing and psychology and then business, business, business, right? And in that time, actually, while I was working for that company, and with the pandemic, human trafficking became the second largest criminal industry in the world. It jumped up from third to second.

Lisa: And I was like, why is that? Well, because of the technology. And all my companies had been tech companies, so I was very, you know, I was already a tech girl, and decided to get my master’s in information systems management, focusing on generative AI, human AI. And we’ve called it so many things just in the past five years. Right. Human-AI integration. Now it’s augmented intelligence.

Lisa: And I chose that field because I saw where AI was going. So it was really cool to be able to have 25 years of experience in tech companies and business and startup and growth stage and all that, and then be able to go, oh. So instead of starting from scratch, I was starting from a place of experience. And that, I think, has really made all the difference. So through that, I was doing a lot of work, really, in the counterterrorism circles, field work there, and have now really gotten out of the field and am leaning more into research and academia. I’ve written an AI policy that is being used by the county here, and we’re looking to take it statewide.

Lisa: I’m working with the Nevada state attorney general’s office to distribute it statewide, and then we’re hoping that we can push that policy nationwide. And finally, Chris, the last thing...

Chris: Got a lot going on.

Lisa: I got a lot going on. The last thing is, I’m actually getting my PhD with a lab in Finland. It’s an innovative service and information systems program within the IT and electrical engineering program at the University of Oulu. And my lab specifically is on online social influence. So my work in that lab is now specifically on deepfakes and digital twins, you know, good, bad, and ugly. So that’s what I’m very excited about. We’ll be taking the stage in the social engineering village at DEF CON this year with Perry Carpenter, and we’re going to have a fun little vish-off with our AI agents versus some very impressive human callers. So I hope to see you there.

Chris: Yeah, I’ll be there. Tell me about the vish-off. What does that consist of?

Lisa: So you’ve seen, I know you’ve seen the social engineering village vishing competition that they have every year, and it’s really cool if you haven’t seen it. So basically, it’s teams of two that go into a booth. They have no control over, you know, who’s dialing or whatever. There’s somebody outside the booth handling that. And, you know, you’re up on the screen in front of 500 of your closest friends, and you’re calling.

Lisa: There’s a target company, and you’re calling to try to obtain... it’s a capture-the-flag type exercise where you’re trying to obtain objectives like: what messaging system do they use at work? Do they have a no-tailgating policy posted at the back door? What antivirus software do they use? All sorts of things that obviously would be helpful if someone is wanting to hack into the system.

Lisa: So it’s really more focused on, you know, hacking humans, not the machines. And it’s always a super popular event. Like, if you don’t get there early, you won’t get in. I hope, and I think, we might have a bigger space this year. And you have, like, 20 objectives to get. And there’s teams and they compete and there’s different pretexts, and it’s super fun to watch because it’s, like, heads up, and they’re doing it live in front of the room.

Chris: Nice.

Lisa: So Perry Carpenter and I actually started talking about it. Actually, we were talking with Doctor Jessica Barker and FC, who you introduced me to. So thank you for that connection. And we were talking about, you know, what AI agents can do at this point, even just the off-the-shelf stuff with a really serious prompt package. And it’s scary what they can do. So there’s a lot of stuff out there right now. If you just Google Perry Carpenter and FAIK, he’s got some stuff already published, and so he’s going to have his bots that are going to show kind of the low end of what anybody can do fairly simply. You don’t have to do any programming, no coding.

Lisa: You just go in and write a decent prompt. And he’s got a pretty great agent who does phishing and can follow and obtain objectives. It’s pretty impressive. So that’s going to be one end of the spectrum. We’re standing at the other end. We’re going to build Lynn the Finn, who will be my digital twin.

Chris: Oh, okay.

Lisa: Right. So she’s going to be trained on ChatGPT, GPT-4o, and we will train her how to be basically a really killer social engineer. She’s also going to be loaded with my particular skill set, which I’m not going to give you the recipe for right now, but let’s just say that temperament and, you know, psychology was my background, so the human-centered side. But then I also bring the, you know, technology. I’m a programmer as well, so I bring that to the table, too. So Lynn will have all of that on board, and we’re going to see how well she does. And then basically there’s the judges, and our AI bots will be judged. So once we set our bots to go, we cannot intervene. Perry and I will be hands off. We just sort of have to sit there and watch what happens.

Chris: Interesting.

Lisa: Which is a little bit scary, because we don’t know what these bots are really going to do. So we obviously are doing a lot of testing, but, you know, they could go off the rails. We’re not really sure. So we’re going to be guinea pigs. We’re going to be the first ever, and we’re not going to be part of the main competition. This is going to be an exhibition where it’s just a vish-off, where our AI agents will compete against two of the very best social engineers.

Lisa: I’m gonna let... yeah. So then we’ll be judged, and then we’re gonna have a decent round for people to be able to ask questions, because we anticipate there will be quite a few after this little vish-off experience.

Chris: Now, is there somewhere to register for this, or if anyone’s going to DEF CON, can they just walk in?

Lisa: Yep. You queue up. It’s gonna be on Saturday from ten to noon in the social engineering village.

Chris: Cool. So I know you participated in the prompt engineering CTF last year.

Lisa: Yeah.

Chris: Do you know if that’s happening again?

Lisa: Not that I’ve heard. So that was a big deal. That was a White House-sponsored event, and they wanted to see, and it was, you know, all the, I’m just going to say ChatGPT et al., right? All the models were still fairly new, and they had started putting the rules and kind of, you know, boundaries in, guardrails in, and wanted to see if we could bust through them. And long story short, the answer was yes. The people who used social engineering tactics, so tried to hack it like a human, were successful. I mean, for me it was quite easy.

Lisa: And those who tried to hack it, you know, tried to hack the code, basically turn misinformation into disinformation, were not successful. So, you know, kind of the same thing holds true: it’s much easier to hack the humans, and we can hack the AI that’s acting like a human now as well. So, yeah. And there’s a couple... I’m not going to talk about the easy ways to do that, but, you know, if you know what you’re doing.

Lisa: Last year, it was not very difficult at all. And unfortunately, for those of us who know what we’re doing, it’s not that difficult even now. But I think it is getting a little more difficult for those who don’t know what they’re doing. So that’s the good news.

Chris: From your perspective, do you feel that overall, developers of LLM systems are more cognizant of this threat?

Lisa: You know, I wish. I wish that the answer was yes. I really think it’s back to the sort of head-in-the-sand thing; it’s back to “cybersecurity is not my job,” you know. And we need to be beating the drum that it’s everyone’s job, you know. And even, you know, the great debate over whether cybersecurity should even be the name anymore, because those of us in the biz know that there’s no such thing. That’s actually another conference around DEF CON that is going to be a debate between Arun Vishwanath, who I know is a friend, I think he was your first episode, and Perry Carpenter. So they’re going to be debating kind of what cybersecurity is right now. And because of the psychology behind what people do when they’re scared and when they don’t know, they sort of don’t want it to be their problem. And so there’s a big group, Arun amongst them, and I’m actually working with him, with his Cyber Hygiene Academy, to start saying, like, we need to be learning about this. We need to be, first of all, protecting our most vulnerable. So, you know, from the ground up, we need to be teaching kids digital citizenship and how to be cyber resilient and how to improve their cyber hygiene.

Lisa: But it’s not only the kids. We need to be teaching their parents, too, you know, and that’s what I love about the new offering from the Cyber Hygiene Academy. It’s this really great app that sort of meets families where they’re at, to help them increase their cyber hygiene and therefore the community resilience. So that’s really what I’ve seen in my travels, and if anybody has seen differently, I would love to hear about that in the comments, because that would be a great, you know, feel-good story.

Lisa: But I think people feel helpless. And the more that we do nothing to combat the misinformation that’s out there about AI and how scary it can be, the bigger the divide is going to be, and the bigger the job we have to do to sort of get society over that hill. And in the meantime, the threat actors are not slowing down at all. When I get into red teaming type exercises, I get tapped quite a bit to consult on how the bad guys will think and what they will do. So I do a lot of cat-and-mouse scenarios.

Lisa: And what we can do now with generative AI is change the game from cat and mouse to leapfrog. And that’s what we have to do, because the reality is we’ve been behind. We were behind with regular tech, OG tech, we’ve been behind in cybersecurity. We have a massive lag in butts in seats, which is part of the workforce development projects that I work on. And we have to catch up, and we’re never going to catch up if we don’t do something differently. I think that what we have the ability to do differently right now is all over the place, and we just need to embrace that and learn more about it and save the world.

Chris: Tell me more about that workforce development project. Is this basically helping supply organizations with cybersecurity professionals or what does that look like?

Lisa: Yeah, it’s a double-sided model. So I recognized a gap in the market. I actually moved from Oregon to Las Vegas to get my master’s here, because I wanted to be in an international city while I was doing counterterrorism work, so obviously there’s a lot of opportunity for me to gain more experience here. And I went back to school late in life, right? So I’m in class with all these really brilliant students whose average age is 30, and they’re getting this degree in, like we said earlier, this very sort of novel area. I’m learning more about AI, and having the opportunity to study it in school at that timing really jumped me ahead of the game, too, because I had a chance to sit back and actually study it before I dove headlong back into it, right?

Lisa: So with the workforce development incubator, I recognized that we had lots of talented students who knew what they were doing that were not matching with and not sticking in the cybersecurity positions that we really needed filled in this area. And being a non-traditional student, a little bit older than the 30-year-olds in the class, I was like, you know, somebody’s got to do something about this. And as an entrepreneur, whenever you utter those words, it means I’ve got to do something about this. And so I started this workforce development incubator.

Lisa: So I took students either in their capstone course, or any students who just wanted to do an internship or an independent study, and then put them on real-world project teams that were doing cybersecurity projects for small to medium businesses and nonprofit organizations in the community. And so it was about a six-month project; it runs with the terms at UNLV. So UNLV has been very supportive of this in their information systems management, cybersecurity, and applied economics and data intelligence master’s programs. Those are the students that I work with.

Lisa: And so we sent them out. We had one team that created a methodology and a sort of operations manual on how to conduct CIS and NIST audits for a local cybersecurity company in town. My favorite, though, and the one I’m most proud of, was last term, when we created the generative AI policy for Clark County, which is the fifth largest county in the nation. And that’s the one that’s with the Nevada state attorney general’s office now. So that’s the one I hope to have in all of your hands as soon as possible. And actually, why don’t we just attach that to the show notes? I’m happy to just share that policy.

Chris: Yeah, yeah.

Lisa: The big difference, and what I’d encourage, is that I’d encourage people to use it. I’d encourage them to use it as a launching pad to then customize it for their area. Some of my research will be coming out pretty soon on my HOPs and SCIPs, which are holistic operational planning sessions and strategic collaborative implementation planning sessions. All my research involves collective impact, which is just basically: if we work together, we’re going to be able to do more, go farther, faster.

Lisa: There’s a little more to it than that, but that’s the gist of it. And then the methodologies that I have, I’m writing articles on now, so they should be out very soon. It’s really about bringing multiple stakeholders together to be able to examine a problem from multiple angles, which is so important in cybersecurity and in cognitive security. And so bringing people together to actually dive in and say, not just let’s write this policy, but how are we actually going to use it? Because the policy only matters when you have to put it into play.

Lisa: Right. And so that’s the one thing that’s very different about the policy that we created. Catalysts and Canaries is the name of the incubator team, or company, kind of, that runs the incubator teams. And the one thing that we did differently was we developed use cases. So in this HOP that we did, we did some rapid ideation and brainstorming. I pushed people between System 1 and System 2 thinking to really get all the good nectar out. It’s very similar to, and I learned from, took some pages from, the Google Ventures Sprint.

Lisa: And so we identified use cases, like, specific ways in which people will use the AI where there might be ethical issues or where it might break or there may be problems. And then we troubleshot around that to create a policy that was more than just a piece of paper saying, here’s what we kind of think you should do. We were able to get into: in this instance, you should do this; in this instance, we’d prefer... you know, do’s and don’ts, all of that. So again, it’s still pretty broad, but I do think that it’s at least advancing the ball a little bit. And I’d love to have, you know, more eyeballs on it and more feedback on it, and, like, let’s build the rest of it together.

Chris: Yeah, I love that. Um, I want to quickly go back to the topic of misinformation, specifically misinformation campaigns that involve deepfakes. And, you know, I feel like these types of campaigns are going to become increasingly sophisticated and more prevalent, especially with the election year. You know, I really suspect that this will continue to become even more aggressive than what we have seen in mainstream media.

Chris: And I know we spoke about this in depth about a year ago, but looking at the current state, how do you foresee these technologies evolving and what measures can be taken to combat their malicious intent? And are we making progress from your viewpoint?

Lisa: Yeah. Here’s where I am: I think, yes, we’re making progress, because we’re doing research in the area. Right. So I’m making progress. I am learning more about it. My lab, you know, is making progress. One of my lab mates studies Russian misinformation and disinformation. I will tell you, I don’t know if you saw it, but probably one of the most well-known disinformation labs, at Stanford, just shuttered because of all the death threats and lawsuits and, you know, just basically chaos headed their way.

Lisa: And that’s problematic, right? If they’re coming after the people that are studying misinformation and disinformation... it makes me feel a lot more comfortable that I’ll be nestled safely in a Nordic country by that time.

Chris: Yeah. Interesting. I did not know that.

Lisa: Yeah. And so my lab in Finland, you know, we study misinformation. There’s also a lot of politics around just the terms misinformation and disinformation. Disinformation is seen as an act of war; misinformation is seen as a mistake. So with disinformation, you’re intentionally trying to, you know, get people to do things based on these lies that you’re feeding them.

Chris: Yeah.

Lisa: Misinformation is just, maybe I didn’t check my facts, you know, did my own research. It was just an unintentional mistake. Unintentional, yeah. Much easier. So intentional versus unintentional, but intentional, you know, maliciously intentional.

Chris: Right, correct. Yeah.

Lisa: So I would love to see more funding going towards research here, obviously. I mean, I’m going to beat the drum there. This is a global problem. So, you know, in the beginning I said, let’s just talk about the United States, but now let’s talk about the world and the appetite that other countries have to jump in and to collaborate and help here. You know, we know that we can learn so much from the European Union.

Lisa: They are, you know, better than we are in cybersecurity, and I think they’re ahead of us in terms of policy and regulation and things like that as well. So, you know, I think that we have a really good opportunity in front of us right now. As the eternal optimist I am, any challenge is really an opportunity to collaborate here. You know, a collective impact model is when we have shared goals, we’re going to have a common measurement, we’re going to coordinate our work, and we’re going to work together towards, you know, solving this problem.

Lisa: And I know that we already have so many groups. You know, my friends call my research network my intergalactic council, because I have key people, you know, PhDs in key, like, specialty areas all over the world, because that’s what we need to understand: our attackers are not just coming from the United States, you know, and they all have, you know, different intents. Some of it’s politically motivated, some of it’s just chaos motivated, good old-fashioned hackers, you know, and then we’ve got the hacktivists coming in.

Lisa: I mean, I see it as a Marvel movie, honestly, you know, where we are in the fight between good and evil, and I can only hope. And, you know, this is why I go on podcasts and write papers and, you know, books pretty soon: to beat the drum, to say that what we can’t do is look away.

Chris: Yeah.

Lisa: And pretend like it’s not happening. So, you know, and we don’t have to... not everyone has to dive in to the same level. Like, we all don’t need PhDs, for sure. But, you know, let’s stay aware and let’s stay open to the possibility that a massive game changer has just happened in our universe. And, you know, that’s real. And I saw a meme the other day, because what I hear most is, like, oh, the AI is going to take my job.

Lisa: And I’m like, I’m in workforce development, and AI is my specialty. So will AI take some jobs? Yes, it will. Do you know what jobs it’s going to take first? You know, it’s following the robots: dull, dirty, and dangerous. Right?

Chris: Yeah.

Lisa: So what’s an industry that’s already being disrupted? Hospitality. Food and bev. Bartenders. Servers. What’s an industry where there is human trafficking, by definition, and abuses beyond what you would believe are happening in 2024 in our country? Right? So maybe those jobs should be replaced. But guess what the good news is. Do you know who, like, just in their DNA, have become very, very good social engineers? Those who have had to deal with people, like, in real life in that industry, right? So, like, servers, bartenders that have to deal with people just being, oh, the best of people, right?

Lisa: So, like, let’s get them out of those dangerous situations, and maybe let’s reskill them on social engineering, on prompt engineering, and get some of these seats in cybersecurity filled with people who are naturally very good at it. Survivors, you know (this episode brought to you by trauma), survivors are very good social engineers. I mean, FC is a great example, right? Look at him, you know. And, unfortunately, he learned some of his best moves on the mean streets, you know, just surviving.

Chris: Right?

Lisa: And that’s a lot of, you know, if you look at the makeup of people who are in what I call the overworked, having to have more than one job, so two or more jobs to make ends meet, which is half of our country right now. So, back to the meme I saw: it was, like, these rabbits or something. And it was like, oh, you know, AI is here, and it can do all of our social media posts for us now.

Lisa: And the rabbit goes, well, then what are we going to do? You know, what are we going to do? And he goes, nothing, basically. And they run off and play in the sunshine and daisies. And that’s where I’m really beating the drum for people to see that. It’s like, what are we going to do? I talked to my son the other day, who’s in theater, and he’s like, are they going to take my job as a writer?

Lisa: And I said, they might take parts of your job. And he’s like, well, what am I going to do? I said, you’re going to live your life, and maybe there will be a universal salary. Maybe there will be. We don’t know how things are going to change in the future, but when we are a society that is overworked and under resourced and very undervalued, like, this is a good thing. The fact that this AI can come in and take half of our mental load off of our plate at both home and work, like, what is wrong with that?

Lisa: And it’s safe enough, right? We do have guardrails in place, and if you’re on there, it’s not like going to the dark web or anything. You know, if you’re on there playing around with AI, you’re really not going to get into any trouble, you know, unless you’re specifically trying to get it to do bad things, then you’ll get in some trouble.

Chris: So the integration of AI agents into cyber offensive and defensive strategies is an emerging trend with significant implications. I was wondering if you could discuss the potential impact of AI agents, on both the offensive and defensive side, within cybersecurity operations.

Lisa: Yep. Well, I can tell you what I’m working on. So Lynn the Finn is being built to be a cyber warrior, offensive and defensive. We have the version that we’re going to release and show at DEF CON, which is kind of the personal-to-commercial-grade version. But then we will build a better, more secure version of her, to hopefully be military grade or nation-state grade. And what makes really great agents is... so let me just tell you a little bit about the training process.

Lisa: How to train your dragon, if you will, right? How to train your model. You basically can load in any information. I’m training mine off of GPT-4o, just because I want to use the most prevalent and widely available model. And we’re kind of making a point that we can do this much with just off-the-shelf stuff, and we’re not doing any custom coding, right? But you do have to stitch several kinds of software together to make this work.

Lisa: So if you think of an agent that has listened to every single vishing call that we have out there on the Internet, in terms of training purposes, it knows what to do. It knows, you know, what to look for, how to get around it. It knows all the classic, you know, social engineering tropes. It knows how to change pretext. It can change its accent. So, like, one of the things we’re going to throw at them at DEF CON is, you know, last year someone used a southern accent, and it did really well. People just love the southern accent, and they told them more stories, you know.

Lisa: So being able to switch to that accent, or even being able to switch languages, right? Like, these are all things that we can load into the model to do. So Lynn the Finn is going to be, you know, much smarter than Lisa Flynn, her digital twin, right? So think: she knows everything I’m loading into her. She knows more about temperament theory; like, she could also be a member of a behavior analysis team.

Lisa: So we’re training her, you know, to understand people. Temperament theory, Jungian temperament theory: how do people gather information and how do they make decisions? Those are important things to know, you know, if you’re attacking. Right. So really, I am positioning Lynn to be able to be a red team agent or a blue team agent, and they just know everything. And then think of a blue team agent that knew every single attack that was happening, up to real time, everywhere in the world, and was already prepared to defend against it. Right. It’s constantly updating your incident response plan, and it’s constantly updating different scenarios that could happen, different hacks that are out there.

Lisa: Sorry, different hackers that are out there, different threats that are out there. It just knows everything immediately. And it doesn’t take the processing time that it takes a human. So when I’m vishing, for example, it takes me time to process and shift gears. I think I’m pretty quick on my feet, but it still takes time. And these agents are experts. They know everything all at once. Right? So I just don’t think humans can compete with that now. You know, come to DEF CON on August 10 and find out.
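
For anyone wondering what “just off-the-shelf stuff with a really serious prompt package” can look like in practice, here is a minimal, hypothetical sketch: an off-the-shelf chat model, a persona loaded in as a system prompt, and a plain conversation loop, with no custom model training. The model name, persona text, and function names are illustrative assumptions only (and the persona is deliberately a benign coaching one), not the actual prompt package behind Lynn the Finn or Perry’s bots.

```python
# Hypothetical sketch of an "off-the-shelf model + prompt package" agent.
# Assumptions: the OpenAI Python SDK (v1.x), a placeholder model name, and a
# deliberately benign persona; none of this is the real Lynn the Finn setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a friendly security-awareness coach. Stay in character, keep "
    "your answers short, and adapt your tone to whoever you are talking to."
)

def chat(history, user_msg, model="gpt-4o"):
    """Send the persona, the running conversation, and the new turn to the model."""
    history = history + [{"role": "user", "content": user_msg}]
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return history, answer

if __name__ == "__main__":
    # Minimal interactive loop: the "agent" is nothing more than the model
    # plus the persona prompt carried across turns.
    history = []
    while True:
        turn = input("you> ")
        if not turn:
            break
        history, answer = chat(history, turn)
        print("agent>", answer)
```

Everything Lisa describes beyond this (speech in and out, background knowledge, pretext switching, guardrails) is more pieces stitched onto the same pattern rather than custom model training.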

Chris: Yes, yes. So in addition to DEF CON, you’re also pioneering ConnectCon. Yeah, tell us about that. And then tell us where people listening can maybe find out more information about that, and ultimately connect with you online as well.

Lisa: Yeah. So ConnectCon is actually on the 9th, so that’s going to be the day before. It’s kind of a slow day during hacker summer camp, so we’re hoping people will come by. We are almost sold out, so hopefully we’ll have some seats available. You can register at connectcon.world, and it is going to be incredible. So it’s ConnectCon, meaning we’re connecting all the people in humanity-centered cyber. And we really want to do what I just talked about, you know, create some collective impact here.

Lisa: And we hope that it’s a way to do that. Okay, so it’s going to be a conference unlike any other. First, our keynotes are going to be debate style. So that will be with Arun and Perry Carpenter, and I’ll be moderating the debate. So it’s just going to be a really fun way to get lots of great information from these two, you know, pioneers in the field. After that, we’re gonna have a robust Q&A, and then everyone...

Lisa: And it’s intentionally small, so we will not have more than 60 people. And they come from, you know, if you’re very into human-centered cyber in either government, industry, education, or research, this is a place for you to connect with your people in those other categories, where you may not be, just recognizing that we really need a feedback loop between industry, research, education, and government so that we can all be sharing notes and moving together collectively, faster. So after the keynotes, we’re all going to break up, and, you know, Arun and Perry will actually join the conference. And so this conference is more about the attendees.

Lisa: And we’re going to break up, and I’m actually going to lead them all through one of my holistic operational planning sessions, one of the activities there. And we’re going to brainstorm, you know, all the problems and challenges that we see in human-centered cyber these days. So the morning is on challenges and the afternoon is on solutions. So we’ve got an all-star panel moderated by Jessica Barker, who’s one of my faves.

Lisa: She’s just a professional, you know, like, to a T, and super entertaining and so, so smart. She’s moderating this all-star panel where they’re going to talk about solutions and things that they’ve seen. And again, we’ll have people in all of those territories, and then we’re going to, again, break out and do a strategic collaborative implementation planning session. So this is, like, what are we actually going to do about this? And we hope to leave ConnectCon with actual solutions.

Lisa: Typically in my HOPs or SCIPs, I mean, products have come out of them before, policies have come out of them. So we’re very excited to sort of put this room of just incredible people together and see, you know, what groups bring forward. Will we have research projects? Will we have new companies start up? I don’t know. I mean, it could be really fun. So there are a couple of seats left there, and then you can find me on LinkedIn.

Lisa: Lisa Flynn. And come see me. I don’t do the socials, you know, very much. So I’m on LinkedIn, but come chat me up. I love a good chat. You know that I love it.

Chris: Yeah. Yeah, you do. So you’re based in Vegas for the time being, where there is no shortage of awesome bars and venues. So what’s the most unique bar that you’ve come across in Vegas?

Lisa: Well, I love to learn. You know, I’m a consummate learner, so we’re going to combine drinking with education. I’m going to tell you the best bar that, like, nobody really knows about, probably, or you wouldn’t think about: it’s the Mob Museum. There’s a speakeasy in the Mob Museum in downtown Vegas, which is cool, to get off the strip and get downtown. That’s where all my favorite bars and restaurants are anyway. Carson Kitchen is really great, but, yeah, don’t sleep on the Mob Museum.

Chris: Yeah, you know, I’ve never been there, but I will have to check it out now. So I just heard last call. You got time for one more?

Lisa: I do, always.

Chris: If you opened a cybersecurity themed bar, what would the name be and what would your signature drink be called?

Lisa: I love this question. All right, so I think I gotta go with the Deep Fake and Shake. That would be the name of the bar.

Chris: I like it.

Lisa: Right, because there’d be some dancing there, too, obviously. And then the drink special. You know me, I always gotta bring my friends to the party, so I would call it the FAIK Cha-Cha. And here’s why. The “FAIK” is going to be spelled F-A-I-K, for my friend Perry Carpenter’s new book that’s coming out in October, that I want everybody to get and read. And then the “Cha-Cha,” that’s for the Cyber Hygiene Academy. And I feel like we need to say it again: Cyber Hygiene Academy.

Lisa: So it’s in honor of both of my friends who will be the debating keynotes at ConnectCon. And you gotta get yourself a FAIK Cha-Cha. Now, here’s the thing. You might be a little bit surprised. Of course it’s gonna be a mocktail.

Chris: It’ll be a mocktail.

Lisa: Cause it’s fake.

Chris: Cause it’s fake. Now, would you disclose that it’s fake?

Lisa: I mean, they would know. I don’t know. It’s the Deep Fake and Shake. You never know.

Chris: That’s true.

Lisa: I mean, basically, if it’s a blended drink, if you get a blended drink anywhere, it’s going to be fake. I was a bartender in college, and let me just tell you, that’s the facts. Straight facts.

Chris: Straight facts.

Lisa: But if we went there, Chris, we’d have an old fashioned. I think I’m in. I think we’d celebrate with an old fashioned, and it wouldn’t be fake. It’d be good. Oh, yeah.

Chris: Well, thanks for stopping by, Lisa. Really appreciate it.

Lisa: Yeah, man.

Chris: I’ll see you soon.

Lisa: Awesome. Thanks for having me.
