BONUS: BCP LIVE with John Dwyer

SESSION TITLE: IBM X-FORCE
RECORDED: 12/13/23
VENUE: City Winery
LOCATION: Philadelphia, PA
GUEST: John Dwyer 
SPONSOR: IBM

ABOUT THE GUEST:
John Dwyer is the Head of Research for IBM Security X-Force. He has extensive experience in cybersecurity research, threat actor behavioral modeling, immersive incident response simulations, and integrated security technologies. John is a highly regarded speaker at industry events and has expertise in AI, threat hunting, and detection engineering.

John Dwyer discusses the impact of artificial intelligence (AI) on the threat landscape and the changing role of AI in security tools. He emphasizes the importance of understanding the goals and objectives of attackers and how AI can be used to enhance security measures. John also highlights the need for proactive risk reduction strategies and the potential of AI in threat detection and response automation. He concludes by discussing the future possibilities of fully immersive deception and the importance of training and awareness in the face of evolving cyber risks.

TIMESTAMPS:
00:01:00 – Introduction and thanks to sponsor IBM
00:02:28 – Introduction of guest, John Dwyer
00:08:28 – Discussion on how AI is changing the threat landscape
00:11:33 – AI’s impact on security tools and risks introduced
00:13:48 – Commercial vs proprietary LLMs for organizations
00:15:06 – Predicting attack surfaces in AI and importance of security fundamentals
00:16:17 – Differentiating between credible threats and hype threats
00:18:13 – Goals of financially motivated threat actors
00:20:35 – Phishing attacks and the need for better defense strategies
00:24:17 – Altering security awareness stance for employees
00:26:09 – AI capabilities in threat detection, response automation, and vulnerability analysis
00:29:11 – Need to invest in infrastructure and innovation to combat crime
00:30:15 – Guidance for proactive risk reduction outside of AI
00:33:57 – IBM X-Force Threat Intelligence Index provides a year in review
00:37:08 – Closing remarks and thank yous

CONNECT WITH US
Become a Sponsor
Support us on Patreon
Follow us on LinkedIn
Tweet us at @BarCodeSecurity
Email us at info@barcodesecurity.com


This episode has been automatically transcribed by AI, please excuse any typos or grammatical errors.

Chris: Okay. Welcome, everybody, to BarCode Live here in Old City, Philadelphia, at City Winery. First off, I want to thank our sponsor, IBM, for helping make this possible tonight. So give it up for IBM. And I'm proud to announce that BarCode just recently became an official partner with IBM, so I'm super excited about that. Also, I can't forget everybody in this room for coming out to support BarCode Security, making the drive into the city tonight, and battling parking.

Chris: So we appreciate you guys being here. So, without further ado, I’d like to introduce my guest for tonight’s podcast, John Dwyer.

John: Thank you. Thank you.

Chris: So John is Head of Research for IBM Security X-Force and leads a team focused on adversary trend analysis, threat hunting, detection engineering, and more. He previously conducted defensive cyber operations research for the U.S. Army and Air Force. John is an experienced speaker at industry events, including Black Hat, SANS Threat Hunting Summits, and ISC2 Security Congress. His expertise spans cybersecurity research, threat actor behavioral modeling, immersive incident response simulations, and integrated security technologies.

Chris: John, thanks for joining us tonight.

John: When you say it like that, it makes me seem like an adult.

Chris: I actually had to cut that. Um, so, as an industry leader, John, you have to be focused on AI all the time now. I think it's inevitable; there's no question about it. AI within the enterprise is becoming increasingly prevalent and increasingly used. So with that, and the visibility that you have, how would you say that AI is changing the threat landscape today? And what new attack surfaces is IBM X-Force seeing in relation to AI?

John: Oh, you want to talk about AI? I thought we were going to talk about how the Pats were robbed of a victory over the Eagles by the refs. That was how I was brought here. But you want to talk about AI. Yeah, so it's a great question. And the thing about artificial intelligence is that people say, well, I don't even understand the underlying technology, so how can I conceptualize what the attack surface is? Right.

John: The reality is that there is probably already an attack surface in a lot of your technologies that you don't know about. It's actually not all that hard to backdoor one of these models to pop a shell or something like that. You can pack something like a beacon into the model weights, and as soon as you load that model into a platform, it will spawn a reverse shell. Now, here's the thing, though. What I love and really appreciate about IBM Security is the steady hand and measured approach of how we're trying to evaluate AI for security, and then secure AI. If we zoom out a bit: I don't know who here was around or doing system administration in the early 2000s.

John: Okay. You remember Novell NetWare? Right, you remember Novell. Okay. So if you think about it, what made Novell go out of business was Active Directory. Right? Now, Active Directory was introduced around 2000. The true attack surface didn't mature until about a decade later. There was a report that came out that was basically the first to leverage Active Directory as a privilege escalation or lateral movement tool, and that wasn't until roughly ten years in. So we had about a decade before it became an attack surface. If you look at things like the cloud.

John: So AWS was released in what year, do you guys think? Yeah, it went general go-to-market around 2006. Now, if you look at the Gartner reports on infrastructure as a service, there were initially a number of what they would call major players in the market. A few years later, that number dropped down to six, with three players controlling most of the market. And it was only then that we had our first targeted breach against infrastructure as a service: the Capital One breach in 2019.

John: So from an attack surface standpoint, we would call AI as a platform pre-mass-market in maturity. Right now, we have startups and established organizations, IBM, Microsoft, Google, AWS, all vying for position to be the platform of the future for AI. And what we can take from what we've learned in the past is that, yes, there is an attack surface. If you have tools that have AI in them, they could be attacked. What we know about adversaries is that there has to be a return on investment.

John: So to craft an attack against unique, boutique implementations of AI models and technologies right now is far too expensive. If you're highly targeted by someone who has enough money or time, it doesn't really matter, right? So we've got to look at this from a larger landscape point of view. What we're going to see is that whenever one platform controls the bulk of the market, or the market distills down to three or so technologies, then we're going to see an attack surface mature. Until then, we really have to be focused on the fundamentals of security and how we implement and deploy technologies. Because you can see this happen over and over again: your attack surface is going to mature as the market matures in a particular technology, right?

John: Over the last several years, MFA vendors went through the roof, right? And at the beginning, there wasn't a massive MFA attack surface. Now, how many times are we reading in the news that criminals are developing things like MFA fatigue attacks or MFA bypasses, as those technologies have become more ubiquitous across the market? So we can take that as a discovery in and of itself about AI attack surfaces: we need to be really cognizant of what we have deployed internally, because you are a beautiful and unique snowflake within the threat landscape, and no one's perception of the threat landscape is going to be the same, depending on your mission as a company and the technologies that you have.

John: And we need to understand that to fully understand the attack surface of AI. Hopefully that made sense. That was a long-winded way of saying: you probably already have an AI attack surface, especially in your security tools. But the chances of it being highly targeted are minimal compared to something like your AWS infrastructure.
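The model-weights backdoor John mentions earlier is worth making concrete. A minimal sketch, assuming a pickle-based checkpoint format (the class name and payload are invented for illustration; a real backdoor would launch a beacon or reverse shell instead of a harmless string):

```python
import pickle

# Illustrative sketch, not from the episode: many model formats (e.g.
# pickle-based checkpoints) can execute arbitrary code at load time.
# pickle reconstructs objects by calling whatever __reduce__ returns,
# so an attacker who controls the "weights" file controls what runs
# the moment you load it.
class PoisonedWeights:
    def __reduce__(self):
        # A real backdoor would spawn a reverse shell here; this
        # stand-in just evaluates a harmless string so the effect is visible.
        return (eval, ("'code ran at model load time'",))

blob = pickle.dumps(PoisonedWeights())  # the poisoned "model file" an attacker ships
result = pickle.loads(blob)             # merely *loading* it executes the payload
print(result)                           # -> code ran at model load time
```

This is why safetensors-style formats and model provenance checks matter: the attack needs no vulnerability in your platform, only your willingness to load an untrusted file.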

Chris: Yeah, and that's a great segue into my next point, which is AI within your security tools, because we're starting to see that more. How would you say that AI is changing the tools that are used for security purposes, and what risks are introduced there?

John: We are at what I would call an inflection point in cybersecurity. And it's one of the only times we actually may have the upper hand as a disruptive technology comes out. The thing that generative AI and large language models do extraordinarily well is data distillation, data summarization. What do you think is one of the biggest challenges in modern enterprise SOCs right now?

Chris: Too much data.

John: Bingo. Right? Think about the threat actors and ransomware, right? Why was ransomware so successful? They came up with one attack path that could rinse and repeat against a lot of different organizations, because of the decisions we made in the late '90s about how we deploy enterprise technologies. But they were largely successful because they were able to hide in the noise. We did this report that basically asked: how long does it take an enterprise ransomware attack to happen, from beginning to end?

John: And in that study, what we found is that incident response analysts were finding more evidence of the attackers in existing security tooling year over year. So if we take a step back, what does that mean for us as defenders? It means we were buying things, deploying things, and those things were generally working: they were setting off alerts, waving alarm bells. What we were missing was the detection and response, most likely because we have too much data. Now we have a technology, available to anyone, that can take piles and piles of disparate data and say, give me the TL;DR of that.

John: So AI for security is a once-in-a-lifetime moment. If we do this right, we can actually change the rules of the game to say, okay, criminal, you can't hide in the noise anymore, because I can tell my chatbot, what is the story across these thousands of alerts? What does that actually mean? Or what do you predict it's going to turn into? That's great, and I think that's really going to turbocharge the analysts. We don't have enough security people. We've heard about the talent shortage, but the reason we have a talent shortage, I believe, and we did a study about the mental health of incident response analysts right after the pandemic.

John: Turns out, not all that awesome. It's a hard job, right? You are sifting through a lot of different data, and you're just grinding. Right. And I think that extends to our SOC analysts. What we can do is use this technology not to replace humans, but to turbocharge humans and make sure that humans are doing valuable and rewarding work rather than slogging through a heap of alerts every single day.

Chris: So you’re saying using existing technology to help parse and automate a lot of that work.

John: Yeah. The human brain is incredibly good at taking different sorts of data and making a logical decision based on a pattern. What computers are great at is collecting, processing, and distilling data down into a story. So we can put these together, and we've actually found this great jet stream of performance where we can finally solve the problem. I mean, how long have we been talking about alert fatigue?

John: If you've been working in security for any number of years, we've been talking about alert fatigue. We finally might have a solution to it. This is pretty exciting.
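The distillation step John describes, collapsing piles of alerts into a per-host story before any summarization model even sees them, can be sketched in a few lines (the alert fields and data are invented for illustration):

```python
from collections import defaultdict

# Illustrative sketch (field names are hypothetical): group disparate
# alerts by host and order them in time, turning thousands of isolated
# events into short "storylines" -- the kind of pre-processing you would
# do before asking an LLM for the TL;DR John describes.
def build_storylines(alerts):
    by_host = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        by_host[a["host"]].append(f'{a["ts"]} {a["rule"]}')
    return {host: " -> ".join(events) for host, events in by_host.items()}

alerts = [
    {"ts": "09:01", "host": "ws-12", "rule": "phishing link clicked"},
    {"ts": "09:04", "host": "ws-12", "rule": "macro spawned powershell"},
    {"ts": "09:09", "host": "dc-01", "rule": "unusual LDAP enumeration"},
    {"ts": "09:07", "host": "ws-12", "rule": "credential dump attempt"},
]
for host, story in build_storylines(alerts).items():
    print(host, "::", story)
```

Even this trivial grouping makes the attack path on `ws-12` readable at a glance, which is exactly the noise reduction a summarization model would then build on.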

Chris: So what are your thoughts on organizations leveraging those commercially available products versus proprietary LLMs?

John: Yeah, here's the thing. A lot of people, including IBM, are doing a lot of red teaming work on how you attack AI. Right? Again, let's go back to the cloud example. How do you guys think most cloud compromises happen? Any guesses? Social engineering. Boom. Misconfiguration. Right? Exposed S3 buckets, credentials hanging out there, spinning up an EC2 instance and letting it hang out in the wind, right? And then it pivots through.

John: That is the cloud, and not even just the cloud, conceptually. Let's talk about just AWS or Azure. When you have security or networking as code, you're one asterisk away from an attack surface. It is very complicated, there are so many different permissions, and it's really hard to tell what the effective permissions are on any asset. So it's not the control plane that is being targeted in these attacks; the attacks are manifesting themselves based on how we deploy these technologies. We can predict that the same thing is going to happen with AI.

John: Even though it's a fancy new technology, it's still going to need storage, it's still going to need APIs, it's still going to need a platform to run on, and you're still going to have to authenticate to it. So the best thing we can do is ask: where is our attack surface likely going to manifest first? Probably in not protecting the data, not protecting the API, not protecting the model through least privilege, effective authentication and multi-factor authentication, and data cleanliness and protection. So is there anything inherently wrong or bad about using whatever technology?

John: Probably not, because the real thing we have to figure out first, and the one thing I really, really appreciate about IBM, is that they're coming forth to say: here is an approach to securing AI that can be applied to any technology, because you're protecting the data, you're protecting the model, and you're protecting the platform. That's what we have to do first, before we start thinking about the art of the possible with how AI can be attacked through some super mechanism. We have to take a logical approach, and it's been proven over and over and over again that security fundamentals will save us.

Chris: Yeah, and you continue to hear these threats come through. And a lot of times people that are outside looking in, they can’t really differentiate between the credible near term threats versus what I call the hype threats or the threats that really don’t mean anything except they’re just really clickbait. What key threats have you seen that security teams should be aware of and defend against today versus the hype threats?

John: This is why it's so important to understand our adversaries from a particular point of view, as the beacon or guiding light for our security strategy: the goals and the objectives of the attacker. And the goals and objectives of the financially motivated threat actor, which is the ubiquitous attacker type across all private and public sectors. Right? Espionage is certainly a thing, and if you're highly targeted, it's a whole different thing. But we are all a target for a financially motivated threat actor, right? So that's something we can all bank on.

John: Their goals and objectives largely haven't changed: their goal is to get money through some sort of cryptocurrency, and they do that through an extortion-based attack, either through encryption or data theft. Right? So we can take all of the fancy things, the Terminator malware that you read about, FraudGPT or whatever you want to call it, make up whatever you'd like, right? Those can change, but the goals and the objectives don't change. So our security strategy doesn't need to fundamentally change.

John: Let's just say tomorrow someone develops some AI malware that bypasses every EDR on the planet, right? Okay, we've been dealing with EDR bypasses for five years, so we already kind of know how to handle that. The telemetry doesn't change. We do things like threat hunting, network segmentation, privileged access management, data security, all these things that are meant to mitigate that risk at the end of the day, to stop them from encrypting our systems or stealing our data, right? So it doesn't change.

John: What we hear a lot about with AI, from an AI threat point of view, is what? ChatGPT, for what sort of attack? Boom. Phishing attacks. Phishing attacks are going to go through the roof; they're going to write great emails. Okay, bad news, guys. If you zoom out, phishing has been going up for years. We are already at the element of scale. You can already automate an entire business email compromise attack against a targeted organization, with no technical expertise, without using a large language model.

John: You can go online to a marketplace platform and say what kind of attack you would like to run, and then you pick and choose the attack type. I would like to target Office 365 credentials here, generate my emails, I want to host this in AWS, create the infrastructure for me, I want to do an MFA fatigue attack. So I add that onto my little cart, I check out at the end, and I hit go, and it automatically builds the infrastructure for me. It crafts the emails, sends the emails, captures the credentials, and gives them to you. At the end of the day, that already exists.

John: So we're already there. But a phishing attack is still a phishing attack. If someone clicks, someone clicks; the strategy, again, remains the same. Now we may be responding to more, or better, phishing emails, and we may get a higher click rate, but fundamentally the detection and response stays the same. Our control mechanisms to keep the blast radius down stay the same. So my advice is: when you read something online about an AI-generated threat, take a step back and distill it down for yourself.

John: How does this change their goals and objectives? What I'm worried about is an advanced technology like this enabling them to carry out an attack that we don't have a strategy for, right? Let's say that they move away from data extortion, and instead some super smart guy develops an incredibly charming AI model, more charming than me, and says, I'm going to convince your AI model to make decisions on my behalf, and now I'm going to manipulate the stock market.

John: We don't have a security strategy for that; then we can start panicking, right? But at the end of the day, let's stop guessing and just assume that they'll use AI, but they're still going to try to do it through ransomware or data extortion. Hey, maybe we just need to get better. We already know the game plan.

Chris: Do you think organizations need to alter the security awareness stance for their employees, in terms of the sheer scale or volume that may be coming at them, or even the sophistication of spear phishing now being more prescriptive to the target?

John: Yeah, it can't hurt, right? The days of the old Nigerian prince, poorly worded emails are gone. The emails are going to be great; they're going to be slick. We did a study where we had our social engineers run a phishing campaign against one written by ChatGPT, and it was close. Our social engineers still had a higher click rate, but it was close, within a few percentage points. Right. So the days of the old crappy emails are probably gone.

John: The training needs to be updated for things like thread hijacking; people have to be aware of that. You can't even trust that you're getting an email from someone in an existing conversation. But training aside, it all stays the same. That's the thing. Fundamentally, I don't want to blame Susie in accounting for a compromise because she clicked on something, right? It's not her job to secure the network. It's our job to secure the network.

John: I also don't want people to be so scared of using something like this because they don't understand the technology. A lot of the time when I talk to security professionals, the first thing they're worried about with using something like ChatGPT is data leakage, right? They say, I'm not going to put anything in there, because then it's lost and anyone could access it. So if you put your Social Security number out there, that means it's lost; that's a data leakage incident. And I've had people say that to me.

John: And I think this is such an important training exercise: how do we highlight how these technologies actually work? I'll go ahead and put my Social Security number into ChatGPT, because that's not how large language models work. My name and my Social Security number would have about the same correlation as my name and "garbage truck," or any random word like "zebra," because the models are designed to predict the next word based on how often they see a pattern.

John: So if I let my Social Security number slip out there, it's going to register as close to zero, because it'll have about the same connection to my name as any other word I'm not connected with. I use that example as a way for security people to quantify the risk to themselves: what is actually the risk of ChatGPT or these technologies? Because we have to embrace them. Like I said, this is our opportunity.

John: And so we need to create policies and procedures and training that are prescriptive enough to get people comfortable with using them in the right way and not scared of them. We should be using them, and we need to understand how they actually work so that we can come up with a governance strategy that makes sense to enable this, not prohibited.

Chris: And so, beyond ChatGPT and commercial LLMs, now you're starting to see organizations come out of left field with AI products, and you're starting to see, I...

John: I host AI in my home network; the WireGPT is sick. Okay, now it takes two weeks to get a response, but you can ask a...

Chris: Question, but it’s airtight. But no, even mainstream security products now are starting to incorporate AI into their products as well. So when you look at threat detection, you look at response automation and vulnerability analysis. What AI capabilities or tool sets do you feel bring the most security value? Right now.

John: Great question, and it's going to take me a minute to think about which one. I think automation is going to be the immediate one; we're going to see immediate value in automation and the offloading of human repetitive tasks onto a model. We're going to see that as an immediate gain in productivity. Going forward, though, what I'm very interested in is threat detection, and large language models like ChatGPT would be terrible at threat detection.

John: That's not what that kind of model is designed for. Do you know what kind of models are really good for threat detection? Weather models. Think about when we get those notifications of a hurricane developing off the coast of the United States, right? And then you get that cone of probability. As it collects more data, it says: with some probability, given the data that we have, we expect it's going to make landfall, maybe in Boston, maybe Nova Scotia, maybe Charleston.

John: And as it collects more and more data, that cone gets more and more finite, and it's able to predict where it's going to go based on all of the information it has. Behaviorally, we can look at threat detection the same way: okay, this user or software is behaving in this way, and we project that it has a certain probability of doing something malicious based on that. Let me go collect more data from other sources, feed that back into the model, and give you a new prediction.

John: And as it goes along, it's going to be able to say, I'm going to take this and develop a new detection by forecasting what we believe its behavior is going to be in the future. So I don't know which is going to be best. What I care about the most are the humans, right? I have done that job. I have been in the SOC. I've done incident response. And so I'm more excited about seeing those people being taken care of.

John: But I am excited about the possibility of threat detection using forecasting models.
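The "cone of probability" John describes maps naturally onto Bayesian updating: each new observation tightens the estimate. A minimal sketch, where the base rate and per-observation likelihood ratios are made-up numbers, not tuned detections:

```python
# Hypothetical forecasting-style detection sketch: update P(malicious) as
# evidence accumulates, using Bayes' rule expressed on odds.
def update(prior, likelihood_ratio):
    # posterior odds = prior odds * likelihood ratio
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

p = 0.05  # invented base rate for this kind of behavior
# Each observation's likelihood ratio: how much more likely the evidence is
# under "malicious" than under "benign" (values are illustrative).
for evidence, lr in [("off-hours login", 4.0),
                     ("new admin tool on host", 3.0),
                     ("outbound transfer spike", 6.0)]:
    p = update(p, lr)
    print(f"after {evidence!r}: P(malicious) = {p:.2f}")
```

Run it and the probability climbs from 0.05 toward roughly 0.79, the "cone" narrowing with each observation, which is the hurricane-forecast intuition applied to a user or process.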

Chris: So would you say that that's the one capability, a little bit further out right now, that excites you the most?

John: No. You know what I'm stoked about? The idea of fully immersive deception. We have the capability, or we have the technology now. We've been using honeypots and canary tokens and all these kinds of things, but there's no reason we can't go further. I was reading this book, and it was talking about misinformation and disinformation, which I knew you were going to ask me about. Actually, forget it, I'll send it to you. Put it in the show notes, guys.

John: Which, regarding AI, now, that's something that's scary. I don't know anything about that; don't ask me. I don't know anything about psyops, so don't ask me that stuff. But what it said was that we live in a new time where your reality is no longer based on your experience; rather, it's based on what you pay attention to. And when I read that, I thought, well, why do we have to keep playing by the same rules of the game?

John: We have a technology right now, multiple different technologies, especially with the cloud, where I can create an alternate reality that the attackers have to go through, and I can use an AI model to control a network and say: yeah, this is ibm.com, you have access to it. Let me spin up some fake data; we have some fake users. Go ahead, burn all of your infrastructure, burn all of your implants, because you think you're operating in the real deal.

John: Burn all your backdoors. And then we just collect that information for free, because we have an AI that's modeling it, doing the detections, doing the triage. We don't have to put any human effort into it. We take all that intel, and now they have to go back to the drawing board. You've got to think about this like a competitive business, because it is a business, right? If you're going against a company that you want to take out of the marketplace, one of the things you can do is drive up their costs. So I can impose costs on them by saying: for every successful attack, you have to come up with five implants.

John: You've got to build five times as much infrastructure, you've got to have five times as many people on it. And now the revenue starts shrinking, the profit starts shrinking, and eventually it gets so expensive that they say, we're not doing this anymore; we're going to move somewhere else. Now, I don't think people are going to stop doing crime, but we can at least say: you've got to come up with something new.

John: You've got to go innovate. Innovation costs money. Innovation gives us time. And that's what I'm most excited about: I think we have a technology with which we can take deception to the next level.
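One small, concrete slice of the deception idea John is describing is the honeytoken: plant a fake credential that no legitimate process should ever use, and treat any appearance of it as a high-confidence tripwire. A hypothetical sketch (the key format and log lines are invented for illustration):

```python
import secrets

# Hypothetical honeytoken sketch: the bait looks like an AWS access key ID
# so it seems worth stealing, but it is fake and unused by anything real.
def make_honeytoken():
    return "AKIA" + secrets.token_hex(8).upper()

def scan_logs(lines, token):
    # Any line containing the planted token is, by construction, an attacker.
    return [line for line in lines if token in line]

token = make_honeytoken()
logs = [
    f"GET /s3/finance-backup key={token}",          # attacker used the bait
    "GET /healthcheck key=AKIA0123456789ABCDEF",    # unrelated traffic
]
print(scan_logs(logs, token))
```

The economics are the point: the defender spends almost nothing, while every tripped token forces the attacker to burn tooling and start over, which is the cost-imposition strategy John lays out.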

Chris: I mean, how fast do you think that's going to...

John: I mean, this is all speculation; it's just based on what I'm interested in, but I don't think that's more than five years away. Yeah.

Chris: All right, I want to hit the kill switch on AI just for a second, because we've talked about the AI-specific threats, but organizations still face escalating cyber risks outside the scope of AI. So what guidance can you provide for organizations that want to be more proactive about reducing their risk in the general threat landscape, outside of AI?

John: You guys are really lucky we didn't make this a drinking game where every time we said AI, you had to drink; there'd be a bunch of people sleeping out there. Okay, great question. And I like it, because it asks what we can practically do to live in this world. You guys heard about MOVEit, right? The MOVEit MFT mass exploitation breach. One of the things I talked about at... sorry, sorry. One of the things I talked about at Black Hat this year was MFTs, and why were they a thing?

John: Because if you look back, before MOVEit it was GoAnywhere MFT, and before GoAnywhere it was Confluence. And it was like, okay, well then what was before that? Then it was ProxyShell and ProxyLogon and Log4j. These mass exploitation events happen, and every time they happen we seem to be caught off guard and have to take it on the chin. But every time these mass exploitation events occur, we can go back and look at the data and say, well, they did the same thing right after.

John: So: different technologies, different vulnerabilities, same behaviors afterwards. Why are we waiting for a mass exploitation event to build proactive security measures around our very valuable technologies? I didn't know what an MFT was until they got popped. I'll own that; that's on me. But if you think about it, that's a highly valuable target for a financially motivated threat actor. Look at their goals and objectives: steal data. What does an MFT have? Valuable data.

John: Right. That should have been at the top of our list of things we need to control. So why am I not proactively going out and saying: all right, they're going to do extortion, whether that's operational extortion through encryption or data theft extortion. Where is my most valuable data? How do I protect it against the things I know they're going to have to do? They have to move laterally to it; they have to escalate privileges.

John: If you look at the SolarWinds attack, the most sophisticated supply chain attack we have seen, no one could have seen that coming. But if you look at how it was detected: they ran whoami as a child process of the service binary. Why weren't we looking for that in Exchange? Why weren't we looking for these known behaviors? So one of the things we can do proactively is some self-introspection: what do I need to do to be a successful business, and what's valuable to me? What are the levers of pressure that could be pressed against me in an extortion-based attack?

John: And let's start building in protections and detections based on how we know they operate, so that we don't wait for an exploitation to happen before we learn about a new technology.
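The known behavior John cites, a recon utility like whoami spawned as a child of a long-running service binary, is easy to express as a detection rule. A hypothetical sketch (the watchlists and event fields are illustrative, not a vetted rule set):

```python
# Hypothetical behavioral detection sketch: service binaries should not be
# spawning interactive recon tools, regardless of which CVE got the
# attacker in. Names below are illustrative examples only.
RECON_TOOLS = {"whoami.exe", "nltest.exe", "net.exe"}
SERVICE_PARENTS = {"solarwinds.businesslayerhost.exe", "w3wp.exe"}

def flag_suspicious(process_events):
    return [e for e in process_events
            if e["image"].lower() in RECON_TOOLS
            and e["parent"].lower() in SERVICE_PARENTS]

events = [
    {"image": "whoami.exe", "parent": "cmd.exe"},   # an admin at a shell: benign
    {"image": "whoami.exe", "parent": "w3wp.exe"},  # a web worker doing recon: flag it
]
for hit in flag_suspicious(events):
    print("ALERT:", hit)
```

Because the rule keys on post-exploitation behavior rather than any specific vulnerability, it covers the next mass exploitation event too, which is exactly the proactive posture John is arguing for.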

Chris: One aspect I look forward to is the IBM X-Force Threat Intelligence Index, which will be coming out soon, right?

John: February.

Chris: February.

John: Okay. Yeah.

Chris: Just talk to us a little bit about that report, what it consists of, and how folks here can access that.

John: So the Threat Intelligence Index is basically our year in review: what have we seen over the last year, and what key takeaways are we observing in real-world incidents, so that you can make better decisions about how you protect yourself going forward. What are the adversaries doing? What are they interested in? And then what can you do to protect yourself, based on real-world data? It's all based on our incident data.

Chris: Awesome. And then that’s just accessible on the website?

John: Yeah. There'll be links that go out, but you can just google it. Google the IBM X-Force Threat Intelligence Index; it'll come out in February.

Chris: Awesome. So you are based in Pittsburgh, but you travel often?

John: Yes.

Chris: For... You mentioned Black Hat and other security conferences throughout your travels. Since this is BarCode, I need to ask you: what's the coolest bar that you've ever been to?

John: Should I pander to the crowd and say Passyunk Avenue in London? Which is actually legit. If you're in London, it's actually a legit Philly bar. Is it? Yeah, it's really good; the steak and cheese is good. But the coolest bar that I've ever been to is this place, and I'm a pub guy, it's my type of vibe, this place called The Grand in... and I spent like a week there, you know?

Chris: Yeah, he’s been there.

John: Great live music. It holds a very special place in my heart. It’s like a magic, magic moment. Nice.

Chris: All right, so I just heard last call here. You have time for one more?

John: Yeah.

Chris: If you decided to open a cybersecurity-themed bar...

John: Oh, jeez.

Chris: What would the name be and what would your signature drink be called?

John: Well, I guess I'd call the bar Zero Day. And then the drink would be a mystery: you'd have no idea what's in it, and no idea how hard it's going to hit you.

Chris: I was going to say it could take you down in one sip. I love it.

John: Yeah.

Chris: Awesome, man. Well, before we go, can you let the live audience here, and also for those that are listening to the program, find and connect with you online?

John: Yeah. So if you want to be influenced at a nauseating level, you can find me on LinkedIn. I post a lot, but I try to put out as much helpful information as I possibly can. We want to be good members of the community, so we do put out a lot of content for free about what adversaries are doing. You can find all of our X-Force content at securityintelligence.com/x-force, and then go to IBM Security's homepage to find out what we're doing from a products and services point of view.

Chris: Awesome. John Dwyer, I appreciate it, man. Thank you for stopping by and sharing your knowledge. And thank you, IBM, for sponsoring this. And thanks to everyone here who came out tonight. Hope you all enjoyed it. Take care. Be safe.

John: Thank you.