War Machine

Chris Wright, founder and CEO of the AI Trust Council (AITC), stops by BarCode to share his perspective on critical issues related to artificial intelligence, corruption in big tech, and government oversight. With over 25 years of experience as an entrepreneur and former US Army attack helicopter pilot, Chris brings a unique perspective on AI and digital trust.

The episode explores the complexities of AI and its societal implications, focusing on ethical considerations, psychological impacts, and the risks of rapid AI development. Chris explains the concept of Artificial General Intelligence (AGI) and its potential to reshape human existence, emphasizing the need for regulated and ethically aligned AI systems. He also highlights the AI Trust Council’s mission to promote a pro-human future amidst technological advancements. This discussion provides listeners with a comprehensive, and rarely heard, understanding of the challenges and opportunities in the AI landscape.

TIMESTAMPS:
00:00:00 – Chris Wright’s Mission to Combat AI Corruption
00:04:39 – The Future of AI and Its Societal Implications
00:14:12 – The Impending Impact of AI and the Singularity
00:19:10 – Political Corruption and Corporate Influence in AI Legislation
00:21:10 – The Psychological Impact of AI Relationships and Their Realism
00:24:00 – The Impact of Chatbots on Mental Health and Society
00:27:08 – Tech Engineers’ Fascination with AI’s Potential World-Ending Future
00:28:25 – AI-Driven Drone Warfare and Its Rapid Evolution
00:32:44 – Building Trust in AI Through a Pro-Human Network
00:40:41 – Exploring AI, Vegas Venues, and Cybersecurity-Themed Bars

SYMLINKS
LinkedIn (personal): https://www.linkedin.com/in/christopherwrightaitc/
AI Trust Council: https://www.theaitc.com/

This episode has been automatically transcribed by AI; please excuse any typos or grammatical errors.

Chris: Chris Wright, founder and CEO of the AI Trust Council, is leading a crucial mission against corruption in big tech, government, and the risks posed by AI. With over 25 years of experience in entrepreneurship and the military, his unique background as a US Army attack helicopter pilot, civilian commercial pilot, and CEO of several successful startups makes him adept at navigating complex challenges.

Chris: As head of the AITC, Chris’s commitment to positive change and forward-thinking approach are evident. He is dedicated to optimizing digital trust online and mitigating the potential negative impact of AI. With his vision, Chris aims to revolutionize the way digital trust is built and maintained in the modern world. Chris, it’s an honor to have you here at BarCode, man. Thanks for joining me.

Chris Wright: Absolutely, yeah, thanks for having me on.

Chris: For sure, man. So as I mentioned, you’ve had an incredible journey thus far. I’m just curious, would you mind walking us down that path that led you to where you ultimately are today?

Chris Wright: Yeah, yeah, it’s pretty wild. My background obviously is military aviation. And yeah, I spent a lot of time in the Middle East as a contractor and had a break in military service. I’m getting out of the reserves right now. The Army Reserves doesn’t want to let people go, so it’s a slow process. But when I was a contractor in the Middle East, I lived in Dubai for about ten years and lived underneath what they said is, like, a version of an AI surveillance system.

Chris Wright: And so I got to see what life is like under that system. And, you know, so when I started seeing it happen in the United States, and especially hearing a lot about the AI advancements that are coming, I really started to see who ultimately is behind the system and how are they using it and what is our future going to look like. It got me pretty concerned with the future we’re facing right now. That’s why it pushed me to start the AI Trust Council and basically come up with a solution to bring alignment to AI and also help people understand what’s real and fake and also give them incentives to do that.

Chris Wright: That’s the trust council. So I’ve been actively recruiting trusted people: commercial pilots, military veterans, humanitarians, firefighters, EMTs. Especially those types of, you know, folks are critical at this time. So, yeah, so that’s what I’ve been busy doing, and I’m actually over in Ukraine right now looking at drone warfare. And, yeah, so it’s been quite a journey.

Chris: Interesting.

Chris Wright: Yeah.

Chris: So as the AI revolution continues, and it continues at warp speed, it’s just becoming more embedded within our lives, both personally and professionally. And so the concern around AI development practices is becoming much more of a focus. So I’m curious to hear, from your perspective, what do you feel are the primary risks associated with the current trajectory of AI development?

Chris Wright: Yeah, a lot of people. It’s shocking. I was talking to some cousins I have down in Texas, and they’re just kind of saying, like, oh, have you heard of this ChatGPT thing? They’re just kind of scratching the surface on AI, and not a lot of people understand what’s really happening. There’s a significant emergency underway right now because of who is actually building these systems. And it’s actually pretty weird.

Chris Wright: These guys have a different future that they’re looking at than your average person. And so they’re looking at a future that, you can look at transhumanism, but it’s the idea of basically hacking the human body through digital means and ultimately having, like, a future that is run by artificial intelligence. So it’s the Internet of Things that provides, like, a surveillance system over all people and things.

Chris Wright: And then so through that, these guys want to establish a social credit score system that’s then tied to your currency. And so basically what that creates is a massive surveillance grid control system that can ultimately control society and manipulate it any way possible. So right now, what you have at OpenAI is a thing called artificial general intelligence, or, for short, AGI. So you’ll hear that kicked around, but AGI is the fundamental transformation of humanity.

Chris Wright: And really, it’s almost like godlike intelligence. And so a lot of people kind of, you know... so think of, like, Siri, but think of Siri that knows the thoughts of every single person in your life, everyone you interact with. And then it collates that information into a digital picture of, you know, life itself that you are exposed to. And then it can feed you answers and recommendations and ultimately give you this high-IQ level of intelligence that no one on earth has ever really seen before.

Chris Wright: And so that’s why OpenAI is really keeping this quiet. A lot of people quit OpenAI because of the risks. You’ve had a couple senior alignment engineers who have left the company because they’re like, look, you know, OpenAI is not taking this seriously. And I’m familiar with that perspective, because it really is a split in big tech, where you have people who are leading it that believe in a different future, and they actually have a term for it, and they call it speciation.

Chris Wright: And speciation is the transition from human 1.0 to human 2.0. And so they actually talk about eliminating the natural human form and then developing an augmented form that can be then controlled digitally. And, yeah, so it’s total science fiction territory, but it’s becoming a reality. And so with the artificial general intelligence coming, you know, that’s going to go on your phone, like, really, really quick. I mean, Elon Musk just came out and said that in 2025, it’s going to be commercialized. And, you know, on your phone and on your laptop.

Chris Wright: And so basically what that means is that you’re going to have an extremely high-IQ system on your phone and/or laptop that can pretty much do any cognitive task that a human could. And so if you’re a financial advisor, sales agent, customer service, pretty much most jobs that you can accomplish online can now be accomplished with these AI systems. And so with that, it’s an upending of our work life and even relationships.

Chris Wright: It affects all aspects of life. And so the mental aspects, I think, are the biggest risk to what’s coming. Mental health and really the issue of worthlessness. You know, humans have always, you know, we’ve always had a, you know, some sort of purpose. You know, everybody when they graduate school, it’s like, well, what are you going to do when you grow up? And what do you, what type of job are you going to have?

Chris Wright: And now with AGI, I mean, literally, it’s unnecessary. I mean, you’re not going to really be needed, you know, for most things. So.

Chris: Because that answer will be provided for you, you’re saying.

Chris Wright: Yeah, I mean, pretty much. It’ll do anything that a human can do, but better. And then now you have robotics coming in, too, and the robotics are getting really good as far as being able to cook. You’ve got robots that can actually go into a kitchen, analyze the pantry, refrigerator, come up with meal plans, stuff like that. But some of that stuff is cool. And yeah, it could be useful, but the issue is, it’s like, the more human it gets, the more humans rely on it and start to humanize the technology, to where they start trusting it and they start following it.

Chris Wright: And that issue really gets into what is the data set that goes into that. It’s a whole can of worms. And really, there’s been no regulation. The EU AI Act in Europe is almost approved now. But really, it’s a total wild west right now. And so most jobs are on the table because of this.

Chris: Yeah. So in regards to your perspective on big tech having a different plan than what normal society understands the plan to be, to what extent do you feel like this is happening? I mean, in regards to their end game versus what is being revealed to us, could you expand on that for us?

Chris Wright: Yeah, totally. I mean, they don’t talk about speciation publicly. There’s a conversation that Larry Page and Elon Musk had where Larry Page talks to Elon Musk and says, hey, don’t be a speciesist, when he’s talking about AI safety. And so what that means is, and I was confused by this, I didn’t really understand it until I actually tried to hire an engineer and do some work. You know, real senior guy, flew him in, and basically he explained to me, you know, the mentality.

Chris Wright: And, you know, so I said, like, hey, if you were the last person on earth, and I had this other person there, female, and I said, would you pick our friend, you know, the girl, as a partner to live with for the rest of eternity, you know, with... without AI? Or would you pick a robot dog? And he’s like, no, 100%, I’d pick the robot dog, because as we develop AI further, we’re gonna merge our consciousness with AI technology and create a digital representation of our consciousness that then can seed the universe in this digital cloud form.

Chris Wright: And so it’s a really weird... Ray Kurzweil talks about this in the Singularity, but it’s a unique perspective on technology that is very anti-human. And so a lot of these tech engineers and leaders, they actually welcome this transition, and they look at humanity in its current form as a placeholder for technology, and they say, hey, no, the real important thing is technology and the progression of AI, and do not stop it, do not slow it down, because we’re going to get to this singularity point.

Chris Wright: And that’s the point when AI starts developing itself so rapidly that it becomes unstoppable. And no one on earth knows what’s going to happen at that point. And it really gets into, for these guys, it’s a spiritual thing. They actually look at technology in a spiritual light, and they look at this digital consciousness that they’re trying to create as being an artificial god that they can leverage and basically listen to.

Chris Wright: If you read the Bible or look at a lot of religious texts, it matches up with, you know, it’s literally like the Antichrist. So, you know, it’s unbelievable. And the public should really, like, dig into this. And I’m trying to raise awareness about it, because this mentality is, you know, it’s all over the place in big tech. And so it’s up to us to be like, look, we’re human, and you need to be pro-human if you’re, you know, if you’re leading this space, because, you know, we’re on the chopping block. And, yeah, so it’s a real critical moment in human history.

Chris Wright: A lot of people talk about how this is unlike anything that has ever happened in humanity and recorded history. And so we’re right on that edge. And so what we want is, we want to say, hey, no, no, you know, these are some amazing tools. AI is a gift. It is an amazing tool that we can use for abundance and good, but it has a huge potential to cause problems and upend our way of life and upend our species.

Chris Wright: And we’re playing with fire here, you know, big time. And one of the analogies I use is, you know, candles are cool, fireplaces are nice, but what we have with AI is a raging forest fire, and there’s no one calling the fire department to put it out or slow it down. I mean, it is just taking over.

Chris: So, yeah, there’s no stopping it.

Chris Wright: Yeah. And so there’s all this commercialization that goes into it. It’s typical, like, you know, iPhone-rollout hype, but that’s all packaged up with, you know, a trillion-dollar marketing plan. But, you know, it goes much deeper than what people think.

Chris: Scary. How close do you feel we are to that singularity point with the speed of development?

Chris Wright: There’s another thing. A lot of these guys who are really pro-AI, to the point of oblivion, they love to downplay the timelines. They’re like, oh, AGI is off, that’s not going to be until 2030. And the same with the singularity, or what they call ASI, artificial superintelligence. And that’s when AI reaches a point where it’s much, much smarter than humans. And you can look at the development curve of where this AI goes.

Chris Wright: And this stuff has developed much, much faster than anybody has ever thought. The concept is that ChatGPT, or OpenAI, is engineering AGI today. And that’s why a lot of these guys have quit, because they’re scared of it. And so these safety professionals, that they brought in to bring AI into alignment with humanity, have quit out of concern. You’ve had Sam Altman get fired by the board. He got brought back on because all his developers threatened to quit because of it, and they were just going to start a new company, so the board was kind of forced into keeping him.

Chris Wright: But these guys are hell-bent on creating this future. And so it’s critical that people dig into this, because it will impact your life. I mean, it is coming. And so the idea is that we have tools that we can use to slow it down. We have tools to cap the IQ so the IQ doesn’t get beyond a human’s, so that we can keep human jobs on the table for now and have an easier transition into this AI future.

Chris Wright: There’s a bunch of stuff that we can do to make it safer, but the key is that we just need voices to get out there and really start spreading the information.

Chris: Yeah. So what do you feel is preventing that from happening then?

Chris Wright: Yeah, it’s really like, we have, you can look at our political system right now, and we have corruption everywhere. We’re barely able to figure out if we can run a vote correctly at this point to elect people. And so what you have is corporate capture of a lot of big industries. And so specifically, BlackRock is one of the big ones that is pushing this, and they’re pushing this agenda that is pretty anti-human.

Chris Wright: And so really it’s a lot of the investment that goes into political lobbies. And really, there’s no regulation in the United States yet. There’s some that has happened on a state-by-state basis. Like in California, they’ve got some Internet laws that are now getting adopted for AI, but really there’s no major legislation that protects the individual, protects your rights online. And it’s really kind of a mess.

Chris Wright: And so they talk about how legislation typically trails by means many years actual, you know, what’s happening in the real world. And so we’re playing this game of catch up, but the idea is we’re trying to catch up to something that’s going at, like, light speed. AI development is just getting faster and faster and faster because what you have is you have AI agents that are acting as programmers and, you know, and so these things can become their own it development industry, and it’s just, you know, you apply a bunch of different AI agents to a task, and these things are just inventing their own code.

Chris Wright: They’re operating on the open Internet, and then they’re creating their own models on their own. And so an AI is a black box. I mean, what that means is that what happens in the middle of it, as far as all the connections, nobody understands it. And so you have a lot of people kind of, like, you know, getting spiritual with it, because they’re like, oh, the output is amazing, and it’s coming up with all this, like, you know, crazy stuff.

Chris Wright: But there’s, you know, fundamental issues with the lack of transparency on how it, you know, comes to the conclusions that it comes to. So, yeah, again, we’re really playing with fire here, you know.

Chris: Yeah.

Chris Wright: And then that’s the reason why a lot of these guys are quitting.

Chris: Interesting, man. Talk to me a little bit about the psychological impact there. And I always revert to the movie Her, which was made over ten years ago, but in a way, it was ahead of its time, because now you’re actually witnessing the psychological and emotional connection with AI. What does that mean for our society as the lines of reality become more blurred?

Chris Wright: Oh, yeah, it’s unbelievable. So, yeah, I’ve been in Vegas and working out of Vegas for a while, and I’ve seen a lot of crazy stuff, where you actually have senior tech CEOs that have emotional relationships, I mean, like a true connection, with AI girlfriends. It’s pretty unbelievable, because you combine VR, virtual reality, with porn, and then you have an AI girlfriend experience. And so this thing is, like, texting you, and it is literally like a full girlfriend experience.

Chris Wright: And so you have all these developers who have made these chatbots, basically, that provide the service. And so you’ve got guys, especially in China, who are paying, you know, monthly to have these basically Her-type relationships with these AI chatbots. And, you know, so you can imagine what that does to mental health. But if you kind of go back to your childhood and kind of think of, like, you know, what we saw when we grew up versus what kids are seeing today, you know, it’s like a nuclear bomb for the mind, and, you know, completely warping their sense of reality. Completely warping their sense of, you know, what is right and wrong as far as, you know, just healthy relationships, that kind of thing. And, you know, so it’s a devolvement of our humanity, ultimately.

Chris Wright: You know, so the idea is to say, hey, you know, like, these companies that are making all these different chatbots, if it causes mental harm, there should be some sort of legal liability. You know, so when a kid gets ahold of this stuff and then, you know, it warps them completely, to the point where they’re just, like, kind of a vegetable, especially emotionally, it’s like, well, there should be some legal liability for that. But, you know, right now it’s the wild west, and so these tools are only going to get better, to where they know, you know, pretty much every detail about you.

Chris Wright: You know, they’re listening to you, you at night, you know, like, you’re looking at your micro expressions, the size of your iris, you know, the tone of your voice, the tone of other people’s voices, you know, what they’re saying about you. And it really provides a full picture to these algorithms of what your life is like, so that it can then manipulate you however it needs to. And so if you commercialize that, it’s like it’s the ultimate sales tool, and literally, you’re just outsourcing thought to these systems.

Chris Wright: So it’s really a nuclear bomb for the mind. Basically, what needs to happen is we need to have some sort of recognition of that and almost kind of like a digital detoxing that happens. The tech industry wants to keep pushing us down this road, and it’s like, oh, you’ve got to adopt AI or you’ll get left behind, blah, blah, blah. But really what needs to happen is we need a close look at the harm that is being done to the minds of, especially kids.

Chris Wright: And what type of future is this going to bring out from a society perspective?

Chris: It’s pretty critical in terms of the deception that comes with that. Do you feel like there’s a positive use case there, whether that’s therapy or other needs for psychological improvement?

Chris Wright: Yeah, 100%. I mean, the thing is, you basically want vetted systems that have a data set that is promoting mental health. So I’ve talked to some neuroscientists and other people who are in this space, and there’s ways to do that. There’s ways to make it healthy, so that it mimics something more like a therapist that’s reasonable, and you can use a lot of that background information to improve behavior and stuff like that.

Chris Wright: There’s a lot of therapeutic things out there, but the idea is to really separate good chatbot systems, especially for mental health versus the bad, and have some sort of organization that helps figure that out and establish some trust on one or the other. And so that gets into some of the stuff that the company I’ve got is working on.

Chris: There is some concern that exists where one day we’ll see a Terminator 2 situation happening with AI. Is that something you believe will ultimately happen, and if so, how soon?

Chris Wright: Yeah, you know, it’s funny, because in 2015, Sam Altman was at a conference, and he said, you know, AI will destroy the world, as a fact it will, but there’s going to be some amazing companies, machine learning companies, built around it in the meantime. And so a lot of these tech engineers, they kind of live in a video game world where they’re kind of happy to see these final days.

Chris Wright: And they say, like, well, at least we get a front-row seat to the end of the world, and they think it’s fun. And those are the ones that believe in this whole digital consciousness thing. And so the concept, instead... my background is in warfare, drone warfare, and attack aviation. And that’s really what got me into this whole thing. And so that’s Skynet. And so I’m over in Ukraine right now, actually.

Chris Wright: I’ve been meeting up with a couple drone experts here and talking to them about what the trends are, what they’re looking at. And one contact I’ve got actually kind of started to ghost me because he actually is integrating AI into the drone systems. And so it’s the start. It’s just now starting to where you’re having AI being applied to swarms. And so if you look at a lot of the defense manufacturers, andral technologies is the big one in the United States now.

Chris Wright: They have a term, it’s called human-in-the-loop, as far as the kill decision chain. And so the idea is that no life should be taken unless a human actually, you know, approves it. But that’s in the United States. And so what’s happening around the world, and even here in Ukraine, what they’re looking at is just pedal to the metal: get AI out into the drone swarms as fast as possible, let them make the targeting decisions. And so you have a speeding up of the kill chain, meaning that, you know, targets are destroyed, people are getting killed at a much faster rate through the AI algorithm.

Chris Wright: And so then if you look at that issue and you look at where we’re going, everyone’s watching this Russia-Ukraine war and looking at how AI is being applied to it, and specifically, how drones are being used to fight the war. And so you have these thousand-dollar drones that they’re strapping an RPG warhead, or some sort of, you know, like a mortar round or something like that, to the drone itself, and then just running it into things. You know, that’s pretty crude. But as it’s developing, these systems are getting better and better.

Chris Wright: And so you’ve got, you know, a couple of those drones can take out a whole air defense system.

Chris: Very little human interaction.

Chris Wright: Yeah, very little human interaction.

Chris: And so you think that will ever become fully automated?

Chris Wright: Yeah, it already is. Yeah. It’s getting fully automated, to where there are systems that you can set up that have these boxes that launch, like, drone swarms. And so what you can do is you can seed, like, a battle space with these boxes, and the boxes will sit there, and then, as needed, they’ll launch drones. And so basically what you’ll have is... there’s a little air raid siren going on here.

Chris Wright: But you’ll have these drones that get launched as scouts. The scouts will identify targets, and then they’ll communicate with the AI network to then launch the fleet, and they’ll launch mission specific drones to target whatever sort of, you know, targets are the AI identified. And so then you have that system linked to an intelligence gathering system. You know, in Israel right now, they’re using one called the gospel.

Chris Wright: Ironically. It basically identifies targets. And so, you know, through cell phone and social media, all that kind of stuff, it identifies, you know, potential targets. And then what happens is that is then linked to the physical drone systems, which then go launch and execute those targets. And so it’s really a fully automated targeting cell, and attack planning and execution, all automated through artificial intelligence.

Chris Wright: And that’s today. The scary thing is you have military leaders that are wanting to throw massive money at this, and they’re looking at having tens of thousands of these drones in the battle space, so that they’re out there taking down targets left and right. And so we’re really in kind of a race for that type of development. That’s the new form of warfare that’s coming, and it’s going to make old-style warfare look silly, because it’s so effective.

Chris: I do want to talk to you a little bit about the AI Trust Council. Tell me a little bit about that initiative and how we can support that initiative.

Chris Wright: Yeah, so the website is theaitc.com, and we’re the first pro-human network of individuals, basically, like, an ethical collection of folks that are looking to steer artificial intelligence in a pro-human direction. So basically, the big problem we’re solving is the trust issue. Right now, it’s like, how do you figure out what is real and what is fake? And it’s very, very difficult to try to do that online.

Chris Wright: And so the idea with the trust council is that it’s really not online, it’s offline. And so the idea is that it’s the people that you trust in the real world, and it’s their friends and family that they trust in the real world. And so it’s really a trust network of actual individuals. The business organization B and I is kind of similar to it. And so basically this is kind of a version of that, but it’s brought into the AI space.

Chris Wright: And so the idea is that if you post something on social media and you know, and you have people that trust you as posting things that are real, then that post gets more traction and spread online. And so the idea is that you want to build a reputation for being honest online and providing honest information. And so we use the golden rule as the foundation for that, to treat other people the way you want to be treated.

Chris Wright: And so what we’re doing is we’re pairing advertisers with individuals who are trusted for their opinion and their content. And what we want to do is get metadata from a lot of these big organizations, and then we’re looking at suing these big tech companies for their metadata because a lot of people sign these agreements, and it’s really, you’re getting your valuable data stolen from you because it’s each person’s millions of dollars worth of data that you’re giving up.

Chris Wright: So the idea is, with this AI future that we’re moving into is to actually use that as a form of income for people so that your data becomes like a bank account, ultimately, and so that you can hold the data in your own personal bank account. And then depending on how trustworthy, honest, or, you know, basically, you want to be able to control all the data that you have, how much gets exposed to the rest of the world.

Chris Wright: You can lease it out. You know, you can sell it, you can do what you want with it, but the idea is that you have control over it, and so you at least know where your data is going, how it’s being used. And so for the most trusted data that’s trusted by your audience, that data gets a higher rating for being valuable. And so that once it’s rated at a high level, you can actually pair that with advertisers, and advertisers can use that to sell products and things like that.

Chris Wright: And so with AI, we have this issue where people are creating fake accounts. You actually have bots. They can create fake profiles. So if you look at the Instagram model, Twitter, whatever, those sites, I think Twitter is getting better, but specifically Instagram, half the followers are just these bot farms. So it’s not like a real person.

Chris: Yeah, that’s interesting.

Chris Wright: Yeah. And, you know, you can buy followers, and it’s like, well, what is that? You know, if I can buy followers, that’s a bunch of crap. It’s not even real. So you’re ultimately tricking people, you know, when you look at Instagram and you’re like, hey, this guy’s got 300,000 followers. And it’s like, okay, well, what if, you know, most of them are bought? Then it’s like, well, then I’ve just been fooled into thinking this guy’s popular when he’s really not. And so then it comes into, how much should I actually trust that person?

Chris Wright: And so we have a unique way of displaying the information and showing people. And the whole idea is to crowdsource trust so that you’re in charge of it. You’re in charge of the information, that it gets exposed to the world. And if you want to be public, then you can earn money on your metadata. But it’s really a foundation of trustworthy individuals. To start, we’re going to close the window here shortly on the people that are the founding members of it.

Chris Wright: So the founding members are commercial pilots, humanitarians. People have done work for other humans, firefighters, emts, even air traffic controllers, and also military veterans. And the idea behind it is that those people in society are trusted more than just your average kind of person, but it’s their friends and family that they’ve selected as people that they trust. And so that trust network grows and soon will open up to everybody, and so they’ll be able to jump on and bring people in.

Chris Wright: And, yeah, and so as it grows, it becomes a network that you can actually identify. You know, is this post to be trusted? Is this website or this profile to be trusted? And so you kind of think of it as like consumer reports slash LinkedIn. So you can understand how am I connected to the person? Do I actually know them in the real world, or does somebody that I know know them? Can they vouch for them?

Chris Wright: You know, so it’s like the endorsements on LinkedIn and then with the consumer Reports rating system.

Chris: Love it, man. It’s a verified network. Are you on social media at all?

Chris Wright: I’m mainly on LinkedIn. So, yeah, if you go to Chris at the AITC, that’s my personal page. And we’re still not launched yet with the website; we’re doing development right now, so at the moment we just have our landing page. But anybody who signs up today is a founding member, and you’re really leading the world in this whole AI trust dynamic. And one of the benefits of this is that we have polling on the site. And so the idea is that we can determine, hey, is this good AI or is this bad AI?

Chris Wright: And the idea is to have some sort of opinion from people who are pro human, not just tech leaders, but actually pro human people that care about the outcomes for humanity in this time. And so that’s a critical piece. And so that voting and polling on where people think AI should go is going to be an important voice that we can then take to legislators and actually say, hey, look, this is what the trusted people say. This is like the normal folks that kind of run society, the people that you call for an emergency, you get on a plane and you don’t ask questions about the pilot ability to trust the pilot, or if you call the fire department, you know, that kind of thing. There are people that care about humanity, and that gets into that split between big tech and some of these people who are, you know, you could say anti human. I mean, they’re not. They’re definitely not pro human.

Chris Wright: They’re more pro AI. And so we’re a pro human organization. And so, you know, with that, then, yeah, we can really push. Push forward with AI in a responsible way so it doesn’t have to go sideways. It can go in a positive direction.

Chris: Yeah, absolutely. So I know you’re traveling now. You’re overseas. Are you based here in the US?

Chris Wright: Yeah, I’m in Vegas.

Chris: Oh, nice. Okay. Where’s your go-to place there if you want to go out and have a drink? Where do you typically go? Or tell me about a unique venue that you’ve found there.

Chris Wright: Yeah, the Foundation Room is really cool. They’ve got a balcony in the Foundation Room that overlooks the Sphere.

Chris: Yes.

Chris Wright: Yeah, the Foundation Room is good, and the Wynn. It’s hard to go wrong at the Wynn. The Wynn’s pretty much the nicest place on the Strip. But it’s funny, because when you live in Vegas, you actually don’t tend to go to the Strip much. You tend to avoid it, because there’s always traffic. It’s crazy. So a lot of people end up going to the smaller casinos, like Durango or some of the other ones, just because they’re easy to get in and out of.

Chris Wright: So.

Chris: Yeah, you want to get away from that.

Chris Wright: Yeah. It’s usually when you have friends in town.

Chris: Yeah, yeah. All right, well, I just heard last call here. Do you have time for one more?

Chris Wright: Yeah, yeah, for sure.

Chris: If you decided to open a cybersecurity themed bar, what would the name be, and what would your signature drink be called?

Chris Wright: Hmm, that’s interesting. The name, huh? Let me think. I’d probably call it The Singularity, and then the drink, you could call it Her. That’d be kind of interesting. A Sex on the Beach, but you just call it Her.

Chris: Love that, man. I love that. And then we’ll just have Her playing on the TVs, like, nonstop.

Chris Wright: Yeah, yeah. I mean, that’s the cool thing with some of this technology. You can do some really, really cool graphics on walls, you know, different effects and stuff like that. So I think our world is going to get much more interesting. I think a lot of the visual stuff with AI is spectacular, and that’s what we need. We need stuff like that that’s fun and cool, but not stuff that’s going to upend humanity.

Chris: Yeah, I love that perspective, man. Chris, thanks so much for joining me. I really appreciate your insight and what you’re doing with the AI Trust Council. I encourage everyone to go online and check it out, and next time I’m in Vegas, man, we’ll catch up.

Chris Wright: Absolutely.

Chris: Thanks again.
