It’s a BarCode NCSAM/Halloween special, where I speak with an established author and iconic security professional who is no stranger to disguises, deception and duplicity – Ira Winkler! We discuss security awareness, his time in the NSA, Secure Mentem, his new book “You CAN Stop Stupid”, and some of his insane espionage expeditions that make James Bond look like 006. The virtual bartender social engineers a scary good Dracula Margarita.
SYMLINKS
LinkedIn Account
Twitter Account
Personal Website
Secure Mentem
You CAN Stop Stupid
Corporate Espionage
Safe-House (Milwaukee Spy Bar)
DRINK INSTRUCTION
DRACULA MARGARITA
1 oz Tequila
1/2 oz Triple Sec
Red Wine
Fill glass with ice. Add tequila and triple sec, then top with red wine.
CONNECT WITH US
http://www.barcodesecurity.com
Become a Sponsor
Follow us on LinkedIn
Tweet us at @BarCodeSecurity
Email us at info@barcodesecurity.com
Chris Glanden 01:46
In the spirit of Halloween/Cybersecurity Awareness Month, Barcode brings you a special episode featuring a security icon who is dedicated to raising security awareness, while at the same time no stranger to disguises, duplicity, and deception. Ira Winkler is an established author and president of Secure Mentem, a company dedicated to the human aspects of security, and is considered one of the world’s most influential security professionals. He has been dubbed a modern-day James Bond by the media, due to the espionage simulations he has performed against some of the world’s largest and most high-profile organizations. Ira has designed, implemented, and supported security awareness programs at organizations of all sizes, in all industries, around the globe. Ira, welcome to the bar, my friend.
Ira Winkler 02:37
Thanks for having me.
Chris Glanden 02:39
So, I guess let’s start with discussing how you got into security. Did something specific trigger your interest, or was it something that evolved over time?
Ira Winkler 02:47
It’s one of those things where I never thought this would be the career I have, in most ways. Growing up, I always wanted to be an astronaut, a veterinarian, or a spy. Through a sequence of bizarre — and I don’t want to say bizarre — events, and by default, as opposed to being really great academically and being recruited into my veterinary career as I was assuming, I ended up, through a series of strange events, at the National Security Agency. I took a test for an organization I really didn’t know much about, because it kind of sounded cool, and did well, and I ended up working there. And then, pretty much — you would think espionage is spying, but as I tell people, and as I wrote in my first book, espionage is more the land of Dilbert than the land of Bond. Over the years I just ended up doing some really cool stuff, but I just thought of it as computer stuff. Then one day somebody came to me and said, can you make a few phone calls? And three days later I had control over one of the world’s largest investment banks through a social engineering attack.
So anyway, that was one of those weird things. And then, as a fluke, while I was doing all that, one time somebody came to me and said, we’re working on this investigation — and it turned out to be the infamous Vladimir Levin Citibank investigation that I got pulled into. I ended up doing that as well.
Chris Glanden 04:24
That’s wild. That’s certainly an unorthodox path to take to break into this industry. Now, you’ve performed some absolutely insane and dangerous social engineering expeditions. You mentioned taking over a bank, and in one particular case, I understand you were able to steal some highly classified nuclear reactor designs.
Ira Winkler 04:46
Yeah, that was sadly easier than it should have ever been. So, after I wrote about how to take over a bank — it was called the seminal work in social engineering — people started coming to me to do weirder and weirder things, and I ended up… how would I phrase this? I basically ended up just doing these espionage simulations, and that led to some really high-profile things. One of the weirder ones: I had a friend who was talking to the CSO of a Fortune 5, Fortune 10 type of company, and he was frustrated, because he had hired pretty much all of the big four firms to come in and do penetration tests. They all came back, two weeks and $100,000 later, and said, “We have full control of your entire network.” He went to the CEO, and the CEO said, “I have been one of the most profitable companies in the world while vulnerable. I am one of the most profitable companies in the world, and I am vulnerable. I will be one of the most profitable companies in the world.
So what?” So then he’s like, “I wish I had something like this.” By that time, I had written my first book, “Corporate Espionage,” and my friend showed it to him and said, “I can get him in here.” Anyway, I ended up doing one of my espionage simulations for that Fortune 5 type of company, which included, obviously, some nuclear-reactor-related things. I ended up hacking into their organization, just basically walking around talking to people, finding all the critical data and everything like that, and it was anticlimactic. I mean, I stole the nuclear reactor designs within an hour of being on the site. There was obviously some preparation to that. But by that point in time — after I did that, they’re like, let’s go get some HR data, and I’m like, “Whatever,” and they’re like, “You’re not even excited anymore.” I go, you know, at some point, when there’s no challenge, it doesn’t even matter anymore.
So, I mean, I’d love to say I’m highly skilled. But I’ve come to believe that penetration tests can be generally worthless if you’re not having an impact, if you’re not driving a change. They’re not hard if you’re just finding vulnerabilities that anybody could find. The fact of the matter is, people love to think these criminals are geniuses, but really, most of the time, all they have is the ability and the willingness to do bad things, and they’re going against a population that is just — I don’t want to say unskilled and unable to stop them. They’re just ignorant of the problem, which lets these people who are mediocre in their skill be incredibly successful.
Chris Glanden 08:11
It could take minimal effort?
Ira Winkler 08:13
It takes a willingness. [Gotcha added by Chris] I will say it takes a willingness and a focus, if that makes any sense at all. Because everybody says all these hackers are geniuses — and this isn’t the first time it’s actually happened — but take the Twitter hacker, for example. Everybody was saying, “Oh, here’s this 17-year-old mastermind,” and everybody’s like, let’s give him a job. I’m like, why? Because he was a good liar? Because he spent more time on Twitter’s security than Twitter did? I mean, unfortunately, I hate to say it, but that’s not exactly a hiring point. You’d have somebody who has nothing better to do thinking, “Hey, I’m going to make a few hundred thousand dollars by going ahead and making some phone calls, and yeah, I’ll invest a few weeks into it.”
You know, like somebody who can invest a few weeks and figure out who are the best people to lie to is some super genius? As opposed to — well, gee — here’s the other part that people don’t acknowledge. There’s a completely different dynamic to stopping something, to preventing the problem, than to actually accomplishing the attack. A hacker has to find one way in. But it’s not just a matter of the hacker having found that one way in.
So therefore, let’s just patch that and we’re secure? The reality is that a security program has to say, “Hey, I’m not worried about the one way in. I’m worried about how I can create a secure environment that is sustainable, given all the restrictions that I have.” And, for example, this hacker telling people “you need to better educate your staff” — that’s like, well, no shit, Sherlock. It’s more like: we need to figure out who has access to these systems. How should we implement better multi-factor authentication? How can we do better things? How should we limit the people who have this access? How do we put secondary controls on them? It’s not just a matter of telling people how to do things or telling people not to do something. It’s a matter of creating an environment that does it right by default. I use the example: it’s not hard to break a light bulb. It’s like, “Oh, you broke that light bulb really well. Now go ahead and fix it, or go ahead and make that light bulb.” Or you have somebody who stabbed someone — it’s like, “Wow, they stabbed them really well. Go make them perform the surgery to fix what they stabbed.” That’s really the best analogy I see.
You know, for cybersecurity and hiring a hacker: just because you know how to break something, or stab at a system to cause damage, doesn’t mean you know how to go in and fix it. Fixing it is much more involved than breaking it, and fixing it in a sustained way is the more critical point.
Chris Glanden 11:20
In an organization, training exercises such as phishing campaigns are designed to train employees to stop clicking on links that could lead to infection. When a security awareness test fails — and I’ve personally witnessed cases where an organization will enforce extreme disciplinary measures, to the degree of terminating the end user’s employment — who would you say is to blame for the failure? Is it in any way the fault of the organization’s lack of a solid training program, or is the user’s lack of knowledge to blame? And if failure, or consecutive failures, of a simulation occur — not even an actual attack — where do you draw the line?
Ira Winkler 12:05
You know, in my book “You CAN Stop Stupid” — everybody should buy it, it’s awesome, available for pre-sale now; there goes my chief plug — I talk about a principle from safety science called a Just Culture. A Just Culture means that you have properly trained your users, and you have also given them the resources and the ability to do things correctly. That’s the critical factor: if somebody is properly trained and still violates something, yes, you blame them. Let me give you an example.
So, let’s first take an easy example. You have a bus driver. You train the bus driver, and you give the bus driver a safe bus. But maybe the bus is not safe — maybe all of a sudden the wheel falls off, because the wheel was improperly maintained, and the bus crashes. Can you blame the bus driver for that accident? Assuming, of course, that the training doesn’t say “every day before you go out, you inspect the bus to ensure it’s safe,” the bus driver did everything they should have, but the bus still failed. You can’t blame the bus driver for the bus falling apart.
On the other hand, let’s say you have a properly trained bus driver with a safe bus, but the bus driver is on his cell phone and crashes. Yes, you blame the bus driver. That’s the distinct difference. And let me give you an example from an actual penetration test I did. I talk about this every so often. There was one time I was doing my espionage simulation, and it was me and a hacker type with me. I told him, here’s what we’re going to do: we’re going to go in the front door during the morning rush hour with everybody else. There’s a receptionist in the lobby — just ignore the receptionist. Just keep walking with the flow with everybody else. We walk in, and we end up going up the elevators.
Even though there’s a swipe card, everybody pretty much swipes everyone onto every floor. No problem getting upstairs. We get to a floor and find an empty room. I pick up a phone, call the operator, and say, “Hi, I’m the CIO. Who do I send my contractors to to get a badge?” I get transferred around, and some woman finally says, send them down to the front desk. Me and my accomplice go down to the front desk. I say, “Hi, we need badges.” She’s like, “I saw you two come in and I tried to stop you.” I’m like, “Oh, I’m sorry. We just…” She’s like, “Oh, you looked busy. Don’t worry about it.” I’m like, “Oh, we’re sorry.” Anyway, she calls over a guard and tells the guard these gentlemen need badges, and the guard says okay and takes us to the guard room where they print up badges. They took our picture, and while we’re sitting there waiting for these badges to print, the guard says, “What do you guys do?” I go, “Oh, we do computer stuff.” Then the guard says, “Do you guys need access to the server room?” I go, as a matter of fact, we do. So anyway, we got access to the server room, waited till lunchtime, and ended up going down there. The room was completely empty, but the primary domain controller was left logged on with no screen lock. We added a new admin user to the domain, and then, pretty much as you can guess, we had control over the entire network. Now, three weeks later, I get a call — and this was after I submitted my report.
Three weeks later, I get a call from a person claiming to be the facilities manager, and he goes, “I’m the facilities manager. I need to know the name of the guard who gave you the badge.” I’m like, what? And he repeats himself. I go, “Here’s the deal. I’m not giving you the name of the guard who gave me the badge. If you want the name of the guard, you can ask the CIO, who’s the only person I report to, because frankly, you might be lying to me about who you are.” He’s like, “I’m not lying.” I go, “I don’t care. I’m still not giving it to you, even if you are who you say you are.” And then he’s like, “Okay.” And then I go, “By the way, if the CIO asks me for the name of the guard who gave me the badge, I will give him the guard’s name. However, I will also tell him that the fact that the guard gave me the badges [inaudible 16:16] — the fact that you don’t know who that guard is, is infinitely worse.” Absolutely. And the reality was, I had no idea which guard gave me the badge. You know, I could describe him physically; I didn’t bother with that. And the reason was, the guard wasn’t doing anything wrong.
The process for that organization, from everything I could tell, was this: the wonderful receptionist, who everybody loves and trusts, tells the guards who gets access to the server room, or who gets access to the building, and the security staff do whatever she says. And the fact that a guard can just randomly assign special accesses throughout the building — that’s the fault of the security manager, because there’s no approval process. Much like with a badge, you should have an authorization chain.
There was no authorization chain for who’s supposed to give out access to critical facilities. And, you know, I could go on, but do you blame them for that? No, I can’t blame the guard for that. I blame the facility manager for not having processes in place and not having training in place — if there were even processes to train on. Likewise, you can threaten to fire a user, but the question is: was that user provided, as I described before I went on this long tirade, a Just Culture, where they were given every opportunity and every resource required to do it right?
Chris Glanden 18:00
What’s ironic about that story is that you are an outsider playing the role of an insider, and then after the fact explaining to them what proper process should be, which is something that a true insider should have already known.
Ira Winkler 18:15
Well, I mean, that’s one of the things. Now, do I look at it that way? Here’s the thing — and I always have this other thing that I do — whenever I go into a company, I always try to steal from the CEO or the CEO’s admin assistant, because nobody will ever blame them should something go wrong. And why should they try to make a scapegoat out of anyone? It’s like, okay, you first. There was one time, for example — this is another story, and it was bizarrely true, almost surreal — I was told the CEO was involved in some personal lawsuits, which I can’t discuss, and they said, we need you to stay away from anything having to do with the CEO.
So, in order for me to stay away from, especially, the technical side, I needed to know the IP addresses for the CEO’s office suite. So what I did was go up to the CEO’s office suite with a company logo shirt on and a clipboard — he had a team of executive assistants — and I go to one of them and say, “Hi, we’re doing a quick security assessment. I just need to double-check that your antivirus software is up to date. Can I please just do a quick check on your system?” And she’s like, “Sure.” So anyway, I just sit down at the system, bring up a command window, type ipconfig to get the IP address of the system, and then I think, okay, we’ll stay away from that class C.
So anyway, then the woman was like, “Well, what about all these other computers here?” I go, “Well, honestly, if your computer is like this, all those other computers should be the same.” She’s like, “Well, why don’t you check? It’s not a problem.” She has all the other people get up while I sit down and type ipconfig. And then she’s like, “Well, what about Mr. So-and-So’s computer?” I’m like, “I really don’t want to bother him,” and she’s like, “Not a problem,” and she tells him, and the guy gets up out of his office chair, and I sit down at his computer. While I’m there, his email was open, and I just decided, what the hell — I sent a message to the CISO I was reporting to, and I just wrote, “Hey, you’re doing a great job. Take the rest of the day off,” and hit send. Then I turn around to the woman as she comes back and say, “Looks like his system is secure as well, thanks so much,” and I leave. And the CISO was like, “Ira, are you going to tell me why the CEO all of a sudden cares about my well-being?” I go, “You’re not going to believe it.”
But if I didn’t do something like that, it would have looked weird. And it’s like, what are they going to do — blame or, you know, fire that woman for having a potential bad actor sit down at the CEO’s computer? I mean, they should, if they’d fire anybody, in theory. But the reality was, they were never going to fire her, and the fact that I did that — even though it didn’t make it into the report — kind of protects everybody else.
Chris Glanden 21:32
Right, and hopefully they learn from that and are able to recognize it during the next exercise. So, in most of these social engineering campaigns, you are essentially playing a spy. Does that feel like a job to you? Because hearing it, it really sounds cool, and you need to have the skills of a Hollywood actor slash psychologist slash sociopath. So it seems like there’s definitely a high level of complexity involved, but it’s fun and challenging at the same time?
Ira Winkler 22:05
Well, it’s definitely on the awesome side. I’m sitting there, and frankly, I’m trying to do the job and act professional, but there are times when I’m sitting there thinking, oh my God, I can’t believe this is working. At the same time, it’s just my mindset. You know, I’m being paid to embrace this character. Yes, I’m being paid to act — I have done some acting in the past and things like that. But it’s just a matter of… yeah, I’m playing a role, and I take it seriously. The fact of the matter is, any sociopath can kind of do the same thing. All it takes, to a large extent, is somebody who can put any sense of guilt aside and say, screw them. A sociopath thinks, well, I’m smarter than they are; they deserve it if I trick them. When I approach it, I do put my guilt aside, because I’m thinking: I need to know what I’m going to find, because I need to figure out the best way to help the organization.
So, it does take that mindset. I once worked with a woman — I should say, I started working with her — and at one point I needed a woman to call somebody, because that’s who they were expecting, and she just couldn’t, for lack of a better term, lie to somebody that quickly. So she wasn’t the right person. One of the guys I work with is a former Navy SEAL, and he is the paragon of virtue. He can do it, I guess, if he’s mentally prepared, but he’s just not a good natural liar — which you can’t fault the guy for. He has so many other valuable skills, but unless he’s prepared for it, he’s not going to sit down and be able to quickly lie to somebody.
On the other hand, you have other people like my friend [name not clear 24:04], a former Russian colonel in the GRU, and I would trust his perception with my life. I mean, this was a guy — we were once doing an assessment, I was doing an assessment, and we walked out of the office with the former director of security, a former Secret Service agent, and Stan [not clear 24:24] walks out with me. He’s like, “I don’t trust this person.” I’m like, a former assistant director of the Secret Service — what do you mean you don’t trust him? Anyway, the next day we find out Stan was 100,000% right. The guy was a scumbag. Anyway, I’ll leave it at that.
Chris Glanden 24:45
So that could be attributed to the human intel gathering culture of the GRU, which is what most organizations are striving for now — creating that retrained mindset in their employees to identify red flags. I’ve always been fascinated by magic and illusion, subconscious thinking, the power of persuasion, and I’ve found that approach often runs in parallel to cybersecurity, specifically awareness efforts. It’s challenging enough to get one person to believe in and begin practicing secure behavior, but the mental trigger for each individual is going to be different. So how do you effectively build that culture, that mindset, at scale?
Ira Winkler 25:28
Here’s the problem, and I think one of the fallacies — one of the biggest problems security awareness efforts have, and again, I highlight this in my book “You CAN Stop Stupid”; there go the other two plugs. But anyway, you look at it, and the problem is that people treat users, with regard to cybersecurity, in a completely ignorant and naive way. Think of it this way: in most organizations, what happens if somebody doesn’t fill out their timecard? They don’t get paid. What happens if they don’t submit…? Like, I travel, and if I’m working for a large company, either as a consultant or a direct employee, if I don’t submit each and every receipt, I’m not going to be reimbursed.
So, if I have a $3,000 airfare, hotel, and meal expense report, and I leave off a $4.63 receipt for a Frappuccino, that whole expense report is kicked back for that $4 expense. Why? Because that’s how they have trained the organization to do it. And why? Because they’re concerned about financial fraud during travel, and over a period of time they’ve said, hey, how are we going to ensure the appropriate charges are made? We want receipts, and if we don’t get all the receipts we want, we kick it back. Why? Because that’s how people become responsive. Now, does anybody say, “That’s not fair to the user”? Does anybody ever say, “Oh well, he was a nice guy, and we really want to reimburse him; it’s only $4, just forget about it”? No! You need approval to not have the receipt, even if it’s a $4 receipt.
So why, in security, do we not have that same mindset, where appropriate security is a must? You know, we can’t presume every right action that somebody might theoretically take or not take. But we should have expectations for security the same way we have expectations for everything else, and not realizing that is the failure of the security organization. It’s the primary failure of security awareness people, because what they’re doing is portraying their security efforts as a should: you should have a unique password, you should turn off your computer at the end of every day. But what happens if it’s enforced as a must? You’re then changing the culture. The reason the culture exists — you know, a culture is essentially the representation of how all the individual behaviors combine.
So, when I was at NSA, everybody wore a badge. Because everybody wore a badge. Yes, you were not allowed in the building if you weren’t wearing a badge, but theoretically there’s nothing stopping you from just putting it down on your desk. You don’t, because everybody else wore the badge constantly — that was just how things were done. When I consulted for an airline, I was shocked to see everybody wore their badge, despite the fact that it was otherwise the most insecure organization. Why? Because that’s the culture of an airline. Because everybody in an airport has to wear a badge.
So, everybody in the whole airline wore their badge, and it was just the way it was done, because that was the culture, and it was enforced because it was made a must. Now, the other part is that a lot of security programs are teaching people how not to do things wrong, as opposed to how to do things right, and that’s another focus people really need to have: how to do things right. I use the example — it’s a little dated now — of too many people being trained to be Elmer Fudd, on the lookout for the wascally wabbit, where every time somebody is on the phone, the person is taught to think, “Is that a hacker trying to trick me?” They shouldn’t be worried about who’s on the other end of the phone. They should be thinking: okay, I have a call, and they’re asking for certain types of information. I don’t care who it is. How do I do my job correctly? How do I not distribute information improperly? And so on.
Chris Glanden 30:05
In terms of enforcement, and transitioning into the technology side: are there any tools or techniques out there to help automate the reduction of the human impact?
Ira Winkler 30:16
Yeah, again, that’s really the whole principle of “You CAN Stop Stupid,” which is that we’re applying safety science to the concept of cybersecurity — to what I call user-initiated loss, because that’s the critical concept, going to what you’re saying. The whole book is on user-initiated loss, because the problem is not that a user is unaware; the problem is that a user is presented with the opportunities to initiate loss. The real question should be: okay, how do I, ideally, stop the user from being in the position to initiate loss?
So anyway, you figure out, okay, how do I protect the user from risk? Take phishing messages, for example, because those are easy. For phishing messages, the first way to stop a user from initiating a loss is to have DMARC, for example — don’t let spoofed messages come in. Then, assuming that’s not the problem, you have email servers with email filters, and the filters weed out likely attacks. Then, once a message gets delivered to the user, the user’s system should have its own anti-malware, and that should potentially weed out attacks as well. And then, once the user is actually looking at an attack that wasn’t weeded out — because the technology failed — you should have a user experience that guides the user, in theory, to do the right thing. Guiding the user would be, for example, saying “this domain looks unusual,” or “this link does not go where it says it does.” And then you have the awareness.
This is where awareness comes in. Awareness tells the user: okay, how should you generally determine whether a message looks legitimate? And let’s say the user goes ahead and makes the wrong decision. Then what should happen? The response to a phishing message is usually an attempt to download malware, an attempt to give up credentials, or being sent to an unsafe website. So for downloading and installing malware: if you set up the system so that the user doesn’t have permission to download and install malware — they’re not given admin rights, for example; their rights are limited — that path is covered. If the attacker is trying to get the user to send out sensitive information, saying “I’m the CEO, I need you to reply back with financial data,” there should be data leak prevention software that stops the user from sending that out. And there should also be web content filters that stop the user from going to unsafe websites and giving up credentials or sending out other information. Like you’re saying, there’s a whole technology environment. Again, it’s implementing safety science in the field of cybersecurity, applied to user actions.
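One of the guardrails Ira mentions — a user experience that warns “this link does not go where it says it does” — can be sketched in a few lines. This is a toy illustration, not any vendor’s implementation; the function names and the matching heuristic are assumptions of this sketch:

```python
import re
from urllib.parse import urlparse


def looks_like_domain(text: str) -> bool:
    """True if the visible link text itself resembles a URL or domain."""
    return bool(re.fullmatch(r"(?:https?://)?[\w.-]+\.[a-z]{2,}(?:/\S*)?",
                             text.strip(), re.IGNORECASE))


def extract_domain(url: str) -> str:
    """Pull the lowercase hostname out of a URL (scheme optional)."""
    if "://" not in url:
        url = "http://" + url
    return (urlparse(url).hostname or "").lower()


def link_is_deceptive(display_text: str, href: str) -> bool:
    """Flag anchors whose visible text names a different domain than
    the one the href actually points to (a classic phishing tell)."""
    if not looks_like_domain(display_text):
        return False  # plain words like "Click here": nothing to compare
    shown = extract_domain(display_text)
    actual = extract_domain(href)
    # Allow legitimate subdomains of the displayed domain.
    return actual != shown and not actual.endswith("." + shown)
```

A real mail client would run a check like this over every anchor in a rendered message and decorate suspicious ones with a warning, rather than blocking outright.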
Chris Glanden 33:30
Yeah, so there are definitely compensating controls there.
Ira Winkler 33:33
Well, now, if you are relying upon the user — I see so many awareness companies say, “Make your user your last line of defense,” “conquer user error,” and all that sort of stuff. That is so, so stupid. If your whole security program relies upon the user as your last line of defense, you have failed as a professional, fundamentally. Because here’s the thing, and a lot of people don’t want to talk about this in the awareness space: a user can do things that create damage because they are unaware, or because they’re rushed and have other things conflicting with doing things right.
They might be aware, but they might have accidentally clicked the wrong button. Or they could be malicious. It doesn’t matter. When I talk about user-initiated loss, why the loss is initiated doesn’t matter. It just matters that it happens, and you need an environment that protects against everything from lack of awareness to malice.
Chris Glanden 34:34
Click, click, boom — game over. It doesn’t matter if the sniper hit you with the kill shot or if it was self-inflicted; the end result is the same. Now let’s consider the progression and advancement of AI and user behavior analysis technologies interacting with the human element. Does it help organizations, or does it hurt them? Can this tech be leveraged to create more intelligence around social engineering and phishing? Where do you see it as a negative?
Ira Winkler 35:05
Okay, so there are a couple of different things, and I’m going to answer a few different ways by telling you what’s theoretically available. So, for example, you have UEBA — user and entity behavior analytics — and those types of tools start telling you what’s normal and what’s abnormal, and try to stop things that look abnormal for a given user. They’re building that user’s profile and trying to determine, based upon the user’s past behavior, is this likely the user, or is it somebody who’s compromised the user — and simultaneously, is the user a risky person in general?
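The UEBA idea Ira sketches — learn each user’s normal, then flag deviations — can be illustrated with a toy profiler. Real UEBA products model many signals at once; this hypothetical sketch uses only login hour, and its thresholds are arbitrary assumptions:

```python
from collections import defaultdict
from statistics import mean, pstdev


class LoginBaseline:
    """Toy UEBA-style profiler: learn each user's typical login hour,
    then flag logins far outside that personal baseline."""

    def __init__(self, min_samples: int = 10, threshold: float = 3.0):
        self.history: dict[str, list[int]] = defaultdict(list)
        self.min_samples = min_samples   # observations before judging
        self.threshold = threshold       # deviations-from-mean cutoff

    def record(self, user: str, hour: int) -> None:
        """Add one observed login hour (0-23) to the user's history."""
        self.history[user].append(hour)

    def is_anomalous(self, user: str, hour: int) -> bool:
        """True if this login hour deviates sharply from the user's norm."""
        hours = self.history[user]
        if len(hours) < self.min_samples:
            return False  # not enough data to judge yet
        mu, sigma = mean(hours), pstdev(hours)
        sigma = max(sigma, 0.5)  # floor so a rigid schedule can't divide by ~0
        return abs(hour - mu) / sigma > self.threshold
```

The key point, as in the conversation, is that “abnormal” is defined per user: a 3 a.m. login is routine for one person and a red flag for another.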
So, you potentially have that. Then, from an awareness perspective per se, you also have tools like CybeReady. CybeReady is primarily a phishing simulation company, and behind it they have an AI engine that sends out phishing messages tailored to each and every user. In other words, once a user has demonstrated a level of proficiency, they can upgrade the sophistication of future phishing messages for that user specifically, and that allows awareness to be tailored. Because one of the problems when you do these phishing simulation campaigns, as well as awareness in general, is that to the average user, a lot of it is an insult to their intelligence.
So, you see all these vendors saying, we’ve got phishing down to 1%, maybe even zero. That’s kind of crappy, and the only way you do that is by sending users the same phishing messages, or the same basic messages, again and again and again. That’s like saying I’m going to give somebody a PhD in computer science because they’ve taken Computer Security 101; we’ve got them to the point of getting 100 on Computer Security 101, and therefore we’re going to give them a PhD in computer science.
No. What happens, for example, is you send somebody to Computer Science 101 and maybe they get a B, but they’re still allowed to move on to 102, and maybe they get their next B, and then 201, and so on into more advanced classes. Have they ever gotten a 100? No, but they have sufficient knowledge to move to the next level of progression. That’s what a good phishing campaign should be. Because if you ever get down to zero, it shows you’re not increasing the base knowledge level; you’ve just gotten everybody to the same pathetically low base of knowledge.
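The 101-to-102 progression described here can be modeled as a tiny leveling rule: a user advances to harder phishing templates once they pass the current level, rather than being drilled on the same easy lure forever. The pass rate, level cap, and campaign numbers below are hypothetical.

```python
PASS_RATE = 0.8   # "a B", not 100%, is good enough to move on
MAX_LEVEL = 5     # hardest template tier in this toy model

def next_level(level: int, caught: int, sent: int) -> int:
    """Advance one difficulty level when the user spots enough simulated lures."""
    if sent and caught / sent >= PASS_RATE and level < MAX_LEVEL:
        return level + 1
    return level

level = 1
for caught, sent in [(9, 10), (8, 10), (5, 10)]:  # three campaign rounds
    level = next_level(level, caught, sent)
print(level)  # 3: advanced twice, then stalled where training is still needed
```

Under this rule a user who aces the easy lures never sees them again, and a stalled level tells the program where additional awareness work is actually needed.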
Chris Glanden 38:03
No, that makes perfect sense. So, start with the obvious “this is ransomware.com” and then level up from there. Now, do you see phishing campaigns that mirror real-time attacks, or basing simulation efforts off of threat intel, as beneficial?
Ira Winkler 38:20
Well, again, that’s kind of what a CybeReady-type product does. But you also have other products; all these different awareness companies have different strengths. So, another company in the awareness and phishing simulation space is Mimecast. Mimecast’s strength is that they provide an integrated suite of products around phishing and user-related error, and part of that includes doing threat intelligence on incoming phishing messages.
So, one thing the Mimecast phishing solution does, as an example, is take actual attacks in progress, de-weaponize them, and then send them out as a phishing campaign to see how users would do with a real attack. Now, you might say, well, we’ve already weeded out the attack, so why bother? The reason you want to do it, especially these days with most people working from home due to the COVID situation, is that people likely don’t just use their work email; they have their home accounts, and those might not have the same quality of email filtering that the work environment does. You want to ensure they get trained, and you also want to see how your company would have done in the case where the real attack got through, so you know what your awareness program was lacking and what to improve for the future.
Chris Glanden 39:56
You hit on all of these points in your upcoming book, You CAN Stop Stupid. How can our listeners get it? Would you mind giving us a little more information about the book?
Ira Winkler 40:06
Of course, I would be happy to. “You CAN Stop Stupid” is available for pre-order on Amazon, or just go to tiny.cc/stupidbook and you’ll get to the Amazon link. At the same time, for my base footprint, most people can reach me on LinkedIn, Twitter, or Facebook. I’m not hard to find; there are only two Ira Winklers that I know of in the technology space, and the other one is a woman. And then my website, IraWinkler.com, is currently being updated, but hopefully at some point somebody’s going to hear this and it will finally be ready.
Those are the best ways to reach me, and also my company, if you need any help on the human aspects of security, or just about any other aspect of security as a whole. You can reach me through securementem.com, and “mentem” is M-E-N as in Nancy, T-E-M as in Mary. Secure Mentem actually translates to “secure mind,” but somebody was domain squatting, so that’s how we ended up with “mentem.”
Chris Glanden 41:14
I actually prefer Secure Mentem. So, Ira, it looks like it’s last call here, so I have one more question before you take off. If you were to open a cybersecurity-themed bar, what would the name be, and what would your signature drink be called?
Ira Winkler 41:29
You know, what would the name be? Okay, I’ll have to give you two answers, because I can’t think of which is best so quickly. I’ve got to admit I do like Secure Mentem being “secure mind,” I’m not going to lie. I do like that. At the same time, I might call it Stopping Stupid as well, because in a bar, I guess, that’s one of the places you create stupid. And then the drink, it sounds so cliché, but it might be called The Social Engineer, which frankly I don’t like because it is too cliché, or The Human Spy.
Chris Glanden 42:25
I like that.
Ira Winkler 42:26
Although there is a place called the Safe House in Milwaukee, which is a really cool spy-themed bar. So, they might have taken my better ideas.
Chris Glanden 42:36
And would you require ID to get in?
Ira Winkler 42:38
Or a password. If you actually look up the Safe House, that Milwaukee bar-restaurant type of thing, on the internet, it’s kind of on the cheesy side, but it is unique and fun.
Chris Glanden 42:57
That’s cool. I have to look that up.
Chris Glanden 42:59
Thanks, Ira. I appreciate you taking the time to share some of your amazing stories and insight. I look forward to reading your upcoming book, You Can Stop Stupid, and I wish you the best of luck.
Ira Winkler 43:10
Thanks so much. I appreciate it.
Chris Glanden 43:11
Sure. Thanks. Take care.