In this conversation, Chris Glanden and Matt Canham delve into deepfakes, cognitive security, and the implications of AI technology for human agency. They explore what distinguishes cognitive security from social engineering, the vulnerabilities that arise as AI continues to evolve, and how cognitive security can be integrated into existing frameworks such as an extended OSI model, highlighting why cognitive attacks matter for both humans and AI systems. The discussion then turns to the need for security practitioners to adopt a systems-thinking approach, the implications of AI and direct neural interfaces for security practice, the questions raised by DeepSeek, insights from the Cognitive Security Institute's weekly meetings, and emerging trends in cognitive warfare, before concluding with a thematic drink inspired by the complexities of the field.
TIMESTAMPS:
00:00 – Introduction to Deep Fakes and AI Technology
02:28 – Understanding Cognitive Security
09:58 – Differentiating Cognitive Security from Social Engineering
19:05 – Exploring the OSI Model and Cognitive Security Layers
21:48 – Bringing Security Back to Earth
24:26 – The Role of Cognitive Security in Modern Threats
25:02 – AI’s Impact on Security Practices
30:36 – DeepSeek and Its Implications
33:47 – Insights from the Cognitive Security Institute
41:28 – Emerging Trends in Cognitive Warfare
45:43 – The Complexity Cocktail: A Thematic Conclusion
SYMLINKS:
Dr. Matthew Canham – Home – https://www.canham.ai/
The official website of Dr. Matthew Canham, showcasing his expertise in human–AI integration, cognitive security, and updates on his latest projects and research.
Research – https://www.canham.ai/research
A dedicated section highlighting Dr. Canham’s research initiatives and academic contributions in the field of human–AI integration and cognitive security.
LinkedIn – https://www.linkedin.com/in/matthew-c-971855100/
Dr. Canham’s professional networking profile where you can learn more about his career achievements, collaborations, and thought leadership.
Cognitive Security: Exploring the Human Layer w/ Dr. Matthew Canham | CSI Talks #1 – https://youtu.be/OGmvoj5Dj_A
A YouTube video where Dr. Canham elaborates on cognitive security and human–AI integration, aligning closely with the conversation’s focus on evolving security threats.
Cognitive Security – Army Cyber Institute – https://cyber.army.mil/Research/Research-Labs/Cognitive-Security/
An official U.S. Department of Defense page describing research into cognitive security—protecting decision-making under adversarial conditions. This resource underscores the growing institutional focus on the subject.
Apple’s Mind-Blowing Invention: AirPods That Can Read Your Thoughts – https://digialps.com/apples-mind-blowing-invention-airpods-that-can-read-your-thoughts/
An article that examines Apple’s patent for AirPods designed to detect brain signals.
This episode has been automatically transcribed by AI, please excuse any typos or grammatical errors.

Chris Glanden: Welcome to Barcode. I’m your host, Chris Glanden, and today we have a very, very close connection to the podcast, Dr. Matthew Canham. For those that don’t know Matt, he is a former FBI special agent, a research professor at UCF, and a leading expert in human-AI integration. He is the coauthor of the forthcoming book Synthetic Media, Deepfakes, and Cyber Deception: Attacks, Analysis, and Defenses, and serves as executive director of the Cognitive Security Institute, a nonprofit focused on evolving security threats in human and artificial systems. Matt, welcome back to Barcode.

Matt Canham: Thank you, Chris. Thanks for having me again.

Chris Glanden: Anytime. If you don’t mind, I think we should level set on a key term that we’ll be discussing throughout this conversation, and that key term is cognitive security. Let’s just start there. Would you mind defining, from your perspective, what exactly cognitive security is?

Matt Canham: Well, let me start from first principles here. I think we’re all familiar with physical security. Physical security is protecting physical assets. It uses things like locks or seals or other access controls to prevent unauthorized access or the unauthorized withdrawal of assets. A lot of the principles developed in physical security were later transferred over to cybersecurity as people were learning how to protect information assets, so cybersecurity is focused on information and systems. Now, cognitive security, as I’m thinking about it, is protecting agency. Specifically, what I mean is that somebody should have informed intentionality. If somebody believes that they are giving money to Brad Pitt to help him with an upcoming kidney surgery, then they should actually be helping Brad Pitt with his upcoming kidney surgery. If that is a scammer behind a deepfake impersonating Brad Pitt, then that scammer is robbing the victim of their agency. That person is no longer making an informed, intended action. They’re being deceived, and they’re taking an action that, with full disclosure, they probably would not take. The reason I’m using this term agency is that we’re very, very quickly moving into a world where cognitive security does not just apply to humans. I know that’s primarily the direction it’s applied now, but people are already beginning to use AI agents, and I think in very short order we’re all going to have our own personal AI agent. Louis Rosenberg, who you’ve had on your show, talks about this: we’re going to have, what is it, the ELFs, the life assistants, the digital elves.

Chris Glanden: The digital elves.

Matt Canham: And for as important as we think cognitive security is for ourselves, if our digital elves get turned against us, that’s going to leave us in a very bad place. So cognitive security, as I conceive of it, is protecting the agency of humans and AI to make sure that they can make informed, intentional actions.

Chris Glanden: Got it. Then how would you differentiate cognitive security from another key term that most of us are more familiar with, which is social engineering?
Matt Canham: All right, that’s a great question, and it’s something I get asked often. I think this is very important, because something I hate is when people just come up with new terms for the same old thing. What I would say is that social engineering refers to a cluster of TTPs, tactics, techniques, and procedures, that are used to manipulate humans, although, arguably, social engineering attacks have been applied against AI as well; there have been documented cases of this. But again, it comes back to this idea that cognitive security is not just about humans. I’ll illustrate the difference with an example from the movie WarGames. In the movie, David, who’s played by Matthew Broderick, manipulates school staff to put himself into a position where he has access to passwords that are written down and kept in a desk. That is an example of a social engineering technique: he’s using a technique to gain access to something he wouldn’t otherwise have access to by manipulating human beings. Now, later on in the movie, he is able to, and spoiler alert, sorry, the movie’s 40 years old...

Chris Glanden: And for those that haven’t seen it...

Matt Canham: It’s definitely worth watching. Yes. He is able to reverse engineer a backdoor password that was created by the creator of a system that he’s trying to gain access to.

Chris Glanden: Definitely.

Matt Canham: He believes that this designer is dead, so there’s no way for him to contact the designer of this system. But he also believes that this designer has left a backdoor password that will give him access. So he goes into this huge effort to learn about the individual, reverse engineers that password, and gains access to the system. This is not something that I would consider to be social engineering, because he did not manipulate anybody to get access to that password. He reverse engineered the cognition of the person who developed that system. It’s a very different type of attack. And in fact, this is something that we catalog. It’s an open source offering by the Cognitive Security Institute: the Cognitive Attack Taxonomy, a catalog of over 360 different types of cognitive attacks, vulnerabilities, exploits, et cetera. What Broderick’s character did was essentially an inference attack. He was able to put together pieces to infer the password that gave him access to that system through a data vulnerability, which was the information that was available about that designer. By exploiting that data vulnerability using an inference attack, he was able to reverse engineer the password and gain access to the WOPR system. Maybe in the show notes I can put links to the entries for both of those attacks. Let me say a little bit more about cognitive security, where this is going, and why I think this is important at this time. I mentioned that the Cognitive Security Institute has this open source product called the Cognitive Attack Taxonomy, and what this taxonomy does is catalog cognitive vulnerabilities, exploits, and TTPs. I also mentioned earlier that cognition is not limited to humans. My background is in cognitive neuroscience, and when I was learning about cognitive science it was from a very interdisciplinary perspective. Obviously we have humans, but we also have machines that are processing information.
And in a very abstract sense, you can think of cognitive systems as information processing systems, and cognitive agents as semi-encapsulated information processing systems that are able to take in information from the environment through sensors and act on the environment through actuators. I know that this is a little bit in the weeds, but the reason I’m making this assertion is that when we start to look at a cognitive system through this lens, it really opens up the possibility of what a cognitive system can be. A cognitive system can be a single neuron. It could be an organism, a human or an animal. It could be a sociotechnical system like an airplane cockpit, where you have two humans and a dashboard, which is a form of external memory, so that memory system is distributed between the pilot and the copilot. When we consider the cockpit as the system, it can take in information from the outside world through sensors, the airspeed indicator and so forth, and it can actuate by exerting control over the airplane’s rudder and elevators to act on its environment. That’s a cognitive system. This can scale all the way up to a nation state and all the way down to a single neuron. How does this all tie back in? Well, several years ago, and I have not really been able to figure out who came up with this idea first, whether it was Bruce Schneier or Ian Farquhar, around the same time they independently proposed an extension to the OSI model. Most people in this audience are probably familiar with the OSI layers one through seven, and they proposed that a layer eight be added for humans and human systems, a layer nine for organizational policy and organizational systems, and a layer ten for legal and regulatory systems. When we start to think about security as securing cognitive systems, not just humans but cognitive systems, then we have things like ChatGPT that exist at layer seven. I asked ChatGPT; it told me it exists at layer seven, and I’m taking it at its word. So we have layer seven cognitive attacks and vulnerabilities. At layer eight, we have things like the social engineering attacks we discussed already, that whole cluster. We also have other things. There’s something called the Stroop test, where the task is to name the color of the font of each word as quickly as the words are presented. You might see the word "cat" in blue font, and you say blue. Or you see the word "car" in orange font, and you say orange. The problem comes when the word "blue," spelled out B-L-U-E, appears in orange font: because of those contradictory inputs, it causes people to significantly slow down. How is that used in a cognitive security context? I haven’t confirmed the veracity of this story, but the rumor is that during the Cold War, color words were presented in Russian, in Cyrillic, and this was used by US intelligence agencies to screen out native Russian speakers, because it would slow them down. Even if the delay was imperceptible, you could still time it, and you would see a consistent timing difference when they were presented words that spelled out different colors in Russian, whereas a native English speaker with no knowledge of Russian wouldn’t be slowed down at all.
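The timing effect Matt describes is easy to demonstrate. Below is a minimal, hypothetical sketch of a console Stroop task in Python; in a real experiment the words would be rendered in their actual colors, and the word list and trial count here are illustrative only.

```python
import random
import time

# Minimal console Stroop task: respond with the FONT color, not the word.
# In a real experiment the word would be rendered in the named color;
# here the color is just stated in the prompt for illustration.
COLORS = ["red", "blue", "green", "orange"]

def run_trial(word, font_color):
    """Present one trial, return (response_time_seconds, correct)."""
    start = time.perf_counter()
    answer = input(f'Word "{word.upper()}" shown in {font_color} font. Color? ')
    elapsed = time.perf_counter() - start
    return elapsed, answer.strip().lower() == font_color

congruent, incongruent = [], []
for _ in range(8):
    font_color = random.choice(COLORS)
    # Half the trials are congruent (word matches font color).
    if random.random() < 0.5:
        word = font_color
    else:
        word = random.choice([c for c in COLORS if c != font_color])
    rt, _ = run_trial(word, font_color)
    (congruent if word == font_color else incongruent).append(rt)

# Incongruent trials should show a consistently longer mean response time,
# the same signature the Cold War screening story relies on.
if congruent and incongruent:
    print(f"mean congruent RT:   {sum(congruent) / len(congruent):.3f}s")
    print(f"mean incongruent RT: {sum(incongruent) / len(incongruent):.3f}s")
```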
And again, this is an example of something that’s completely not social engineering, but it falls within cognitive security in humans.

Matt Canham: Another example is a strobe attack. I think in 2009, and I’m not sure about the years, and then again in, I think, 2018, the website, and later I think the Twitter account, of the National Epilepsy Foundation, and I might have the organization wrong, I can get you the reference, was compromised, and the attackers caused the site and the account to flash a strobe light. This is what’s known as a strobe attack. Again, this is cataloged in the Cognitive Attack Taxonomy, the CAT. The purpose of this flashing was to try to induce a seizure, so this is trying to induce a physical effect through a perceptual system by activating something in the victims’ epileptic condition. Again, an example of a cognitive attack that has nothing to do with social engineering. I’ll give you one more example for humans: the P300 guilty knowledge test. This is where you put an EEG sensor net on somebody’s head and show them visual stimuli. As you cycle through different pictures, for some of them we’ll see something called an amplified P300 wave. The P stands for positive, and 300 means 300 milliseconds, so roughly a third of a second after the onset, after a picture has been shown to the person, they will show an amplified P300 wave. What this can potentially be used for, and again this is somewhat controversial, there’s debate about it, is to show someone different pictures and ask: are you familiar with this person? Regardless of what that person says, they’re going to show an amplified P300 response to faces that they recognize versus strangers. And this is not something that can be inhibited, because by the time you recognize somebody as familiar and say, "I don’t know that person," that P300 has fired. It’s come and gone. Again, I would argue this is violating cognitive integrity. It’s an attack on cognition, nothing to do with social engineering. Those are layer eight examples. At layer nine, we saw last year the very dramatic supply chain attack where multiple pagers, and later walkie-talkies, exploded in the Middle East. A classic layer nine exploit for supply chains is that an attacker poses as a vendor and sells products at a very heavily discounted rate. This exploits policy that encourages a reduced budget: for a department that has a policy to buy the cheapest thing, the least-cost equipment, this is a vulnerability that can be used to exploit supply chains, because you can use it as a way to introduce malware, or in this case dangerous products, into that supply chain. That would be a layer nine vulnerability. Layer ten is law. Because of the way AI is progressing, and the rate it’s progressing, I think things are going to start getting very strange very quickly, and I don’t think we’ve even begun to see what that looks like. Bruce Schneier has written somewhat about this already: we exist in sociotechnical systems, social and technical systems. Well, something that I’m starting to play with a little bit is using LLMs to look at established law. What we can envision, I think, in the near future is that LLMs are going to be able to operate almost like software scanning or code scanning tools, but on legal code, not software code.
And when we get to that point, if they’re able to scan for vulnerabilities, then maybe they can understand where to inject exploits or where to exploit those vulnerabilities. An even scarier possibility, I think, is that an LLM or a series of cognitive agents may be able to identify how to put together vulnerabilities in legal code. You can imagine regulations where there’s a piece of innocuous language put in one place and a piece of innocuous language put in another place, and looked at by themselves, they don’t look like much of anything noteworthy. But when you put the two together, they can be used in a very malicious way. Modern legislation is so big and complex that often the lawmakers themselves are not actually reading the entire thing. I think the Affordable Care Act was very close to, well, I think it was in the tens of thousands of pages, and to expect somebody to actually read all of that before voting on it is unrealistic. You can almost imagine a malicious actor using cognitive agents as a prosthetic or an aid: a certain piece of language is promoted by one legislator, somebody else introduces language through another legislator, they never see each other’s little add-ins, and now you have loopholes or other things that can be exploited in ways that we might not think about.

Chris Glanden: Interesting.

Matt Canham: That’s layers eight, nine, and ten as an extension to the OSI model. And because of the way AI is progressing, I think we’re going to see more of a blending of these things. They’re not going to be as discrete as we’ve traditionally thought.

Chris Glanden: Well, thanks Matt, that was all great intel, but I want to bring this back down to earth. What does this mean for security practitioners? How is this useful to us?

Matt Canham: Well, I don’t think it’s much of a surprise to say that the world is becoming more complex, but what that means for security practitioners is that we need to think about things in terms of systems much more than we do now. Coming back to the example I gave from the legal perspective: how many SOCs out there do you think are thinking about what their legal department is doing? I know that there is interaction to some extent, but I’ll give you a very concrete example that we saw recently, after the SEC enacted its regulations on reporting breaches. I can’t recall exactly when that was, but it was some time ago. What we then saw was threat actors weaponizing that. They would breach a company, inject ransomware, hold its data hostage and ransom it, but as an extra incentive, an extra lever on that company, if the company did not pay within the specified time, these threat actors, these criminals, were reporting those companies to the SEC for not reporting the breach within the window they were prescribed. This is why thinking about these systems full stack is becoming, I think, very important. And this is going to affect everything: public relations, but also investor responses. Having a much more holistic, systems perspective on security, I think, is critical. Somebody we hosted recently was Dr. Jessica Barker, who I think has been on Barcode as well, and something I really like about her approach is that she thinks about security from a very holistic and systemic perspective.
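Circling back to the legal-code scanning idea Matt described a moment ago: below is a toy, entirely hypothetical sketch of what pairwise scanning of statutory provisions might look like. The score_interaction stub stands in for an LLM call that would rate how exploitably two provisions combine; the provisions, threshold, and scoring logic are invented for illustration.

```python
from itertools import combinations

def score_interaction(a: str, b: str) -> float:
    """Hypothetical stand-in for an LLM call that rates (0.0-1.0) how
    exploitably two legal provisions interact. Real scoring would send
    both texts to a model; this stub just keyword-matches for the demo."""
    text = f"{a} {b}".lower()
    return 0.9 if "self-certify" in text and "exempt" in text else 0.1

# Invented provisions: each looks innocuous alone, but s101 + s214
# together let an entity escape any independent audit.
provisions = {
    "s101": "Covered entities may self-certify compliance annually.",
    "s214": "Entities with current certifications are exempt from third-party audit.",
    "s302": "Compliance records must be retained for five years.",
}

# Scan every pair of provisions and flag high-risk combinations,
# the legal analogue of a code scanner flagging interacting bugs.
for key_a, key_b in combinations(provisions, 2):
    risk = score_interaction(provisions[key_a], provisions[key_b])
    if risk > 0.5:
        print(f"review {key_a} + {key_b}: interaction risk {risk:.2f}")
```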
That’s what I think this all means for security practitioners, and it’s where cognitive security can be an add-on to what they’re already putting into place.

Chris Glanden: Yes. Why would you say cognitive security is an issue now? Is it because of the surge in AI, or do you feel it’s more than that?

Matt Canham: Well, AI is definitely playing a huge role. I would argue that AI-powered autonomous or semi-autonomous systems, both digital and physical, are playing a huge role in this now, and that alone is something I don’t think anyone is going to really escape from. I think it’s going to impact everybody.

Chris Glanden: Hmm.

Matt Canham: Part of the reason it’s going to have such an impact is that we’re seeing increasingly persistent digital environments. What that means is you’ll have people in Africa, let’s say, who may not have electricity or running water, but they’ve got a mobile device, and they use that mobile device to engage in global commerce. Even if you don’t see it, there’s a constant digital presence almost anywhere in the world now. You combine that persistent digital environment with AI, and now you’ve got a very powerful combination. But why now? Why is this a concern now? I think the reason this really needs to be a focus now is that we are moving toward direct neural interfaces. These things are always five, ten, or twenty years out, until they’re not. We’ve already seen, through Neuralink, the first human trials, with a patient, I believe quadriplegic, who had a direct neural interface implanted and is now able to play World of Warcraft and other games online using no motor movements whatsoever, controlling the interface purely with thought. As we see the emergence, and potentially the popularization, of direct neural interfaces, brain-computer interfaces, that is going to take cognitive security to a whole new level. Now, when it comes to AI, I mentioned the AI assistants earlier, and this is a whole other dimension, because we can now start to envision a world where we have direct neural interfaces and we have our cognitive agent assistants, and we can start to run into things like the principal-agent problem. The principal-agent problem is that an agent outside of yourself may have motivations that are different from your own. Where this can become a problem is something like Anthropic’s research on sleeper agents, where you have an agent that is acting on your behalf but then all of a sudden starts behaving maliciously.

Matt Canham: This can be really dangerous when AI does it, because it can do it in a very subtle, low-detectability way. Think of a gambling aid that’s supposedly helping to steer you toward the games that will be most profitable, and imagine a slot machine that pays off, say, 49% of the time. If a slot machine pays off 49% of the time, it’s going to feel like you’re winning, but over time you’re going to end up behind; you’re going to lose to that machine. We can very easily conceive of a world where an AI agent that you trust is actually working against your best interest, and it’s going to be very, very difficult to detect. And just a quick note: this principal-agent problem is not limited to AI agents.
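The 49% machine is worth making concrete. Here is a quick simulation; the fixed one-dollar stake and even-money payout are assumptions added for illustration, not details from the episode.

```python
import random

def play(spins: int, p_win: float = 0.49, seed: int = 1) -> int:
    """Simulate a slot machine with a $1 stake per spin that pays $2
    back (an even-money win) with probability p_win. Returns net dollars."""
    rng = random.Random(seed)
    bankroll = 0
    for _ in range(spins):
        bankroll -= 1          # stake the bet
        if rng.random() < p_win:
            bankroll += 2      # even-money payout on a win
    return bankroll

for spins in (100, 10_000, 1_000_000):
    print(f"{spins:>9} spins -> net ${play(spins):+,}")

# Expected value per spin is 2 * 0.49 - 1 = -$0.02: the player "wins"
# nearly half of all spins, which feels like winning, yet the bankroll
# drifts steadily downward, exactly the dynamic Matt describes.
```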
I mean, you could ask some of Bernie Madoff’s clients, and they would very quickly be able to describe how an agent, an investment manager, was not acting in their best interest. This is not a new problem, but it’s presented in a new way.

Chris Glanden: I’d love to get your thoughts on DeepSeek and where that plays into this, if it does at all. And again, depending on when this gets released, things could change by tomorrow, but what are your thoughts? Does it play into our conversation at all? We can touch on it or skip past it, whatever you think.

Matt Canham: No, 100%, I think DeepSeek is a really interesting twist on things for a lot of different reasons. I’m not going to get into the technical evolution of it, because that’s outside my area of expertise, but where I think a lot of the security evaluation of DeepSeek has been very interesting is that it performs very well, or seems to perform very well, on certain benchmarks. But when you start to ask questions that are sensitive to the CCP, the Chinese Communist Party, or potentially not favorable to the CCP, the model shies away and very quickly tries to steer the conversation in other directions. It won’t answer. I have seen some people get it to respond, but only after some jailbreaking has taken place. I think this is a preview of the direction these things are going, because we already talked about the principal-agent problem and AI agents, and we’ve already seen political preferences, not just in DeepSeek but in other models. There was a paper that came out late last year, I think in the Proceedings of the National Academy of Sciences, where the researchers actually mapped out the political biases of different models. That is one example. But Louis Rosenberg, on an episode from, I don’t know, two years ago, I think he hit the nail on the head back then.

Chris Glanden: Hmm.

Matt Canham: You’re walking down the street with your own digital elf and it is steering you toward different commercial opportunities. For example, you’re looking for a place to eat and it steers you toward this restaurant versus that one. Why? Did this other restaurant happen to have some sort of in with the company that’s running your particular AI agent? Right now these things are very blunt objects, but where this is going is that SEO, search engine optimization, is probably going to go out the window. What’s going to replace it is that we’re all going to have our own AI agent assistants, and those things are going to be leading us around by the nose, and we’re not going to know it.

Chris Glanden: They’ll be unmarked. We talked about that before: where is the responsibility going to fall in terms of disclosing that it is pointing you in a direction because it is a paid advertisement? Because if not, that could get really, really scary. You run a nonprofit called the Cognitive Security Institute, as I mentioned, which holds weekly online meetings, and I’ve been to several of them, which are phenomenal, by the way. But I’m curious, what have been some of your biggest takeaways from these weekly meetings?
Matt Canham: Well, first I’ll plug that in a few weeks’ time, depending on when this is released, we’ll have somebody presenting from OpenBCI, and we’ll have a neural talk. I think that will be our first. As for things that have been very surprising from presentations: these meetings follow the format of somebody presenting on a topic, usually an expert in the area. We take those presentations and package them up into videos on our YouTube channel, link in the description, but we have a discussion afterwards that stays private within the group. One of these episodes, which is actually going to be released on our YouTube channel this week, is by an individual who goes by the handle randomwheeler, obviously a pseudonym. He infiltrated online Telegram scammer groups, and I thought the way he did this was very interesting. He created sock puppet accounts to act as victims, and when he was brought into the group and asked to prove his bona fides, that he was actually a legitimate scammer with the skills and the willingness to scam people, he went out and scammed himself through these different personas to establish his credibility. The most shocking thing I learned from that presentation was that some of the higher-end online groups actually have counselors who will help the scammers work through the psychological distress they experience knowing that they’re scamming victims out of their money. Oftentimes scammers are scamming the elderly, and these elderly people have limited or no income, and the knowledge that they are taking critical resources away from people who may not be able to replace them causes psychological distress in the people enacting these scams. I have to confess, that thought had never occurred to me prior to Random’s presentation, and when he explained it in the after-discussion, I actually stopped him. When we look at the general population, only about 1% are actual psychopaths. If you take criminals, you may boost that up to maybe 10%. That means roughly 90% of the scammers who are doing this feel some level of empathy or guilt for what they’re doing to their victims. That was completely surprising. Another recent one, I think this was actually this year, but I’ll throw it in anyway: Isaac Hathaway demonstrated his DeepFake OS. This is a red-teaming toolkit, open source, and he was able to demonstrate real-time video chat with an audio deepfake as the voice. This is significant because it’s very difficult, in a real-time interactive video chat like the one we’re having now, to sync up the audio deepfake voice of, say, Tom Cruise while also impersonating the video of Tom Cruise and getting the lip movements and everything to match. If you look closely there is some discrepancy, but he is very, very close to having cracked that. I thought that was very interesting.

Chris Glanden: That’s wild, and I’m hoping to have Isaac on soon and have him demonstrate that.

Matt Canham: It is really, really cool what he’s done. A couple of other episodes that we’ve had: Dr.
Sean McFate talked about how private companies, mining companies, I think, are specifically most guilty of this, are actually using cognitive warfare techniques against local populations to prevent uprisings against some of the mining operations they’re running in other countries. These operations introduce toxicity into the water and other pollutants, and people are rising up against them, so the companies are employing cognitive warfare TTPs, and people who are experts in this area of warfare, to suppress local uprisings. This is not something I had really thought about, how PMCs, private military contractors, may play a role in cognitive warfare moving forward, but this is something he talked about. Perry Carpenter gave examples of a system he has where he can inject statements into something that an LLM is saying. He has scenarios where an LLM will take on the persona of a virtual kidnapper, or will try to voice phish somebody. In fact, he was one of the competitors in the John Henry competition at DEF CON last year, which pitted a human visher against an AI visher, and the AI visher, the voice bot, lost that competition by just a hair’s breadth. The humans actually gave the AI their trophy and said, look, you guys really deserve this. Perry Carpenter was on that team. We hosted him about a month after that competition and he demonstrated these injects. You have this voice bot that is trying to elicit someone’s password or login credentials, and all of a sudden it starts talking about what you would like on your cheeseburger. And he would respond back, "That’s kind of a weird thing for somebody from technical support to be asking me." It was very interesting to see how the models would try to reset or adapt to something that really did not make sense in what they were saying. This, I think, alludes to a much bigger, emerging issue, which is machine psychology. This is something I’m becoming very interested in: these AI systems have behavioral patterns that are observable and in some cases predictable, and we can start to apply psychological principles to them. Now, I am in no way saying that AI is necessarily sentient, conscious, or behaves the same way that humans do. What I am saying is that they operate according to a certain predictable set of patterns that in some cases mimic humans and in other cases are very different. By starting to map out these operating principles, we can develop a kind of machine psychology. Anyway, what he was demonstrating was essentially machine cognitive dissonance, something that happens when you have two things that contradict each other and somebody has to make sense of them.

Chris Glanden: Interesting, and he has a new book out too, FAIK.

Matt Canham: FAIK, F-A-I-K. Yes. I highly recommend that anyone who’s interested in this field get it. I think I’ll shout one more out, and that is a graduate student by the name of Dalia Manitova, who gave a presentation on how she was able to reconstruct the organizational structures of online Russian scam groups based on their Telegram communications. Subordinates will communicate differently with superiors than they will with peers, and over time you can start to extract these linguistic artifacts. From these linguistic artifacts, you can then reconstruct things like organizational structures. She gave a presentation on that, which should be going out on YouTube sometime in the next few weeks. Also very interesting.
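To make the hierarchy-reconstruction idea concrete: a toy sketch in which a hypothetical per-message deference score, standing in for real linguistic analysis of hedging, honorifics, and phrasing, is aggregated to infer who outranks whom. The names and scores are invented.

```python
from collections import defaultdict

# Toy sketch: infer hierarchy from chat logs. Each tuple is
# (sender, receiver, deference_score), where the score is a hypothetical
# stand-in for a linguistic model rating how deferential a message is.
messages = [
    ("ivan", "boss", 0.9), ("ivan", "boss", 0.8),
    ("boss", "ivan", 0.1), ("olga", "ivan", 0.7),
    ("ivan", "olga", 0.3), ("olga", "boss", 0.9),
]

deference = defaultdict(list)
for sender, receiver, score in messages:
    deference[(sender, receiver)].append(score)

def mean_deference(a, b):
    """Average deference shown by a toward b (0.5 if no messages)."""
    scores = deference.get((a, b), [0.5])
    return sum(scores) / len(scores)

# If A is consistently more deferential toward B than B is toward A,
# infer that B likely outranks A.
people = {p for pair in deference for p in pair}
for a in sorted(people):
    for b in sorted(people):
        if a < b:
            ab, ba = mean_deference(a, b), mean_deference(b, a)
            if ab > ba:
                print(f"{b} likely outranks {a} ({ab:.2f} vs {ba:.2f})")
            elif ba > ab:
                print(f"{a} likely outranks {b} ({ba:.2f} vs {ab:.2f})")
```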
Chris Glanden: Nice. What else would you like to tell our listeners about CSI, including how and when to tune in?

Matt Canham: Well, we are posting videos every week, and you can go to our YouTube channel; I’ll put a link in the description. For now, that’s probably the best way to find us. We also have our website, which is cognitivesecurity.institute. We are currently organizing a few in-person meetings. We had an in-person meeting last year in Las Vegas that coincided with the Black Hat and DEF CON conferences, and we’re looking to do similar things in the future. I would say, if you have an interest in this stuff, reach out. We don’t accept everybody as a member; we do some screening because of the nature of the things we’re talking about. But if you’re interested, reach out and you can apply to become a community member. We are growing by leaps and bounds. We grew over 250% over the last year, and we’re definitely going to bust that number this year; we’re already on track to break it within a few months. This is probably a good place for me to mention that I’ll be presenting at RSA 2025, and I can give you the details. I didn’t tell you that yet.

Chris Glanden: Nice. Congrats, I didn’t know that. Now I have to book my RSA ticket then, is what you’re telling me.

Matt Canham: Yes, absolutely, please. It’d be great to see you there. What we’ll be doing there is leading a lab, a two-hour lab, and it will introduce the concepts of cognitive security. We have a cognitive security framework that we’re going to introduce there, teaching people basically how to apply it to an organization. That’s roughly the first hour. In the second hour, we take what we’ve learned in the first hour and run through a tabletop exercise, applying it to scenarios with cognitive attacks embedded within them. That’s what we’re doing now. Like I say, we’re growing like crazy, and there’s more to come, definitely.

Chris Glanden: I encourage everyone to at least connect with you online. Again, I’ve been involved with CSI, I think, since you started it, and I try to listen in as often as I can. And I will say that what you’re doing is differentiated. There’s nobody else really talking about the topics that you talk about, or bringing on guests with the level of expertise that you have on. I learn something every time I tune in. I would love to see that continue growing, and you welcome any involvement that people are willing to put out there, so I’ll point everybody your way to help continue that evolution.

Matt Canham: Thank you.

Chris Glanden: Matt, you know what time it is? Last call. You’ve been on the show before, and I already have your signature drink info in the database, so I don’t need that from you again. But I will ask you this: if you added a new cognitive security themed drink to your bar since the last time we talked, what would you call it and why?

Matt Canham: Let’s go. Well, first, let me say that I think I want a new bar.

Chris Glanden: You want a new bar? It’s been a while; you probably want to upgrade.

Matt Canham: Yes, absolutely. And as a hat tip to Winn Schwartau, the new bar will be called The Institute. And the drink will be called The Complexity, or maybe just Complexity. Do you want me to go through the ingredients?

Chris Glanden: Just Complexity. Please do.
I’m writing all this down.

Matt Canham: All right, well, it’d be an ounce of cognac, an ounce of Amaro Nonino, three quarters of an ounce of green Chartreuse, a half ounce of sweet vermouth, a quarter ounce of Luxardo maraschino liqueur, two dashes of Angostura bitters, a dash of absinthe, and a lemon peel. There you go.

Chris Glanden: I see why it’s called Complexity.

Matt Canham: And we’ll include the instructions. Yes, I actually created this drink in combination with my AI assistant. I said, I need a drink that’s complex, and this is what we came up with. And when you come out to RSA, you and I will each get one of these. We’ll see how it turns out.

Chris Glanden: Alright, bring the instructions with you and we’ll just hand them to the bartender and say, look, there you go. We’ll have to validate that. Well, Matt, thanks so much for stopping by once again. I hope I can make it to RSA and see you outside of CSI. Where can listeners find and connect with you online?

Matt Canham: LinkedIn, and I’ll put my LinkedIn in the description. That’s the best place to connect with me, and please, I’d love to hear from you.

Chris Glanden: Thanks, I appreciate it. You take care.

Matt Canham: Thank you, you too, Chris. Good seeing you again.

Chris Glanden: You too.