Unreal

Izzy Traub, an innovative entrepreneur at the intersection of film and AI, has moved from pioneering visual effects in the movie industry to the front lines of AI software development. With qualifications from UCLA and the University of Texas, Izzy co-founded Inspira with his COO and father, Benny, where they have filed patents on computerized productivity systems. His expertise in managing large remote teams and pushing the boundaries of AI in VFX illuminates new possibilities for modern workflows. Izzy shares his journey from an early fascination with green screen magic to his pioneering role in adapting deepfake technology. He provides insights into how deepfakes are disrupting the film industry and ignites discussion on the consequences of this powerful technology, from ethical implications to its rapid integration into advertising and beyond, painting a thought-provoking picture of AI’s burgeoning role in content creation.

TIMESTAMPS
00:00:16 – Introduction to deepfakes and their impact on perception
00:02:19 – Background in film and visual effects
00:11:23 – Interest in AI and learning coding
00:14:02 – Increase in deepfake inquiries and major deals
00:16:53 – Responsibility of AI developers in shaping the ethical advancement of deepfake tools
00:20:23 – Simplifying the deepfake production process
00:24:30 – Concerns about AI’s impact on the filmmaking process
00:26:42 – Narrow application in AI leading to powerful outcomes
00:31:28 – Lack of identity safeguards for actors in the entertainment industry
00:35:03 – Potential benefits of actors adopting deepfake technology
00:39:55 – Potential impact of deepfakes on politicians and lawmakers
00:41:00 – Potential for real-time deepfakes and their applications in scams and fraud
00:44:30 – Company focus on predicting behavior, implementing AI managers, and automating high leverage tasks
00:50:23 – Benefits of a hybrid approach combining AI and human management
00:51:53 – Utilizing AI for detecting user behavior anomalies and insider threat detection

SYMLINKS
VFX LA (company): https://vfxlosangeles.com/
Sin City (movie): https://www.imdb.com/title/tt0401792/
After Effects (software): https://www.adobe.com/products/aftereffects.html
UCLA Extension (educational institution): https://www.uclaextension.edu/
Ender’s Game (movie): https://www.imdb.com/title/tt1731141/
University of Texas (educational institution): https://www.utexas.edu/
SSRN Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4739430#
Inspira AI (website): https://www.inspira.ai/
Han Barbecue (restaurant): https://www.yelp.com/biz/han-bbq-burbank
Scum and Villainy Cantina (bar): https://scumandvillainycantina.com/
Buena Vista Cigar Lounge (bar): https://www.yelp.com/biz/buena-vista-cigar-club-beverly-hills

This episode has been automatically transcribed by AI. Please excuse any typos or grammatical errors.

Chris: Izzy Traub has a unique background that spans the competitive challenges of the movie industry to the cutting edge of AI software development. After studying film at UCLA and founding the visual effects house VFX Los Angeles, Izzy did pioneering work in the area of deepfakes and automation of the VFX pipeline using artificial intelligence. As AI emerged as a bigger portion of the VFX business, Izzy returned to school to obtain a certificate in data science from the University of Texas and co-founded Inspira with his COO and father, Benny, filing their first joint patent application on computerized productivity systems in 2021.

Chris: Izzy’s pioneering AI work in the entertainment industry, coupled with his experience managing large remote teams, has unearthed rich insights into AI’s potential to optimize modern workflows. Izzy, welcome to Barcode.

Izzy: Hey, thanks so much for having me, man.

Chris: Of course. So, Izzy, please tell us about your background. You know, how you broke into the film industry and ultimately started VFX?

Izzy: Yeah, well, it goes back. Let’s see here. I was 14, and my parents had sent me to film school on Vancouver Island. It was a little week-long summer camp where you go and make films, and they taught you everything about it. And obviously all of our films sucked. But while I was there, they told us about the movie Sin City and how it was shot on a green screen, and there was a really popular tool at the time, which is even more popular now, called After Effects.

Izzy: And After Effects was kind of the entry-level visual effects tool. If you wanted to get into compositing, that’s what you would use. And so I saw clips of Sin City. I didn’t see the full thing at summer camp there, just clips on YouTube, and I was hooked. I mean, it was just unbelievable. I don’t know what it was. It was the magic of watching the before and afters of the green screen, and then the swipes, and then the background was put in, and it just floored me.

Izzy: So then when I got out of the little summer camp film school there, I went back home, my dad got me After Effects, and I basically just spent every waking moment that I could, when I wasn’t in school, learning visual effects. And then I got into 3D animation, and then I literally ended up dropping out of high school at the beginning of 10th grade just to focus on visual effects.

Izzy: And so I started doing that and then got a couple of certificates at UCLA Extension. I’m from Canada, so that’s how I was able to even be in the country for the first couple of years. And then I went back to Canada and got my first job at a studio, working as a motion graphics artist for a movie called Ender’s Game. My contract was about two and a half months, but I learned an incredible amount about the VFX pipeline and what it really took to be a really good visual effects artist. And I kind of realized that I would never be a really great visual effects artist.

Izzy: Like, the guys who were with me were just so much better. So anyway, flash forward a couple years. I had started a business basically doing animated videos. I had hired a bunch of people for that, I was doing full-time sales, I had a little marketing engine going, so I had a proper little business there for about three and a half years. But the marketing system I had, which was basically cold email, was hitting too many spam lists and just wasn’t working anymore. So the business basically died because I didn’t have a sustainable marketing funnel in place. But then a couple years after that, long story short, I started VFX Los Angeles.

Izzy: And I had done a feature film prior to that where I VFX-supervised like 200 visual effects shots. The movie itself did okay. It actually still brings in money to this day. I was about 22 years old when I directed it. But it was the visual effects, the pipeline management, supervising that many shots and hiring that many artists over the course of, you know, whatever it was, six months or something like that. Obviously, I was kind of funding it as I went, so it was here and there. But those 200 visual effects shots enabled me to launch the visual effects company.

Izzy: And what was the name of that movie? Natural Vice. It’s on Paramount Plus right now. Yeah, it kind of moves around. It was on Hulu for a while, and then it got sold to Paramount Plus. And I think you can also see it on Amazon Prime, I believe. Anyway, long story short, that was how I got into visual effects professionally. And I did a lot of rinky-dink projects, obviously, over the years of doing visual effects here and there.

Chris: When you first started VFX, who were you marketing to or who were you doing business for? Was it film? Was it ad agencies? What type of work did you do when you first spun up VFX? And how long ago was that?

Izzy: Yeah, good question. So, VFX LA went through two metamorphoses. It used to be Vanity Effects; that was the first name. And I had gotten my first slew of clients, which was film. I got a couple of feature films, and then I got a few commercials, and then there were a couple of projects that went really south, and they were just costing so much money. I ended up having to tell the client, hey, listen, I just ran out of money. You asked for way too many revisions here, and we just keep going in a circle. I’m not going to charge you for the work that we haven’t done, but please pay me for the work that you have approved, and we’ll go our separate ways. And they were like, okay, no problem. And then I closed down the business for about a year off of that one client.

Izzy: Yeah, well, technically it was two. Two clients that really did not go very well. The VFX pipeline is really complicated, and I didn’t have the expertise to be able to get some of those projects done, and they just went south, man. What can I say?

Chris: So that was VFX version one.

Izzy: Version one, yeah, exactly. Then about a year, year and a half went by, and I ended up starting VFX Los Angeles. I was like, you know what? I’ve got to have another stab at this. I redid the website, I learned a lot more, I did a lot more research, and I just got quite lucky. We did a short film for this guy, Lee Whittaker, who’s an excellent director, and a really big action director as well.

Izzy: And he had this film called Amy, which is now The Vigilante. We did a short film of his back in the Vanity Effects days, and then when VFX LA went live, he ended up hiring us for his feature film. And so we did, I don’t know how many visual effects shots, it was close to 200. It was like 180, 190, something like that.

Chris: Is that standard for a feature film, to have that many? Like, I’ve never counted them as I’m watching.

Izzy: No, totally. And you probably wouldn’t notice most of them. I would say most of these films have around 50 visual effects shots, and most of them are going to be invisible. Like, it’s going to be a set extension or removing a logo or something like that.

Chris: Minor.

Izzy: Yeah, yeah. But then you get a film like The Creator, and every shot is a visual effects shot, like a thousand of them or whatever.

Chris: Interesting.

Izzy: Yeah. Anyway, I kind of ended up getting involved because I was following where the world was going in terms of AI. Me and my dad were actually having quite a few conversations about it. And I watched this video on machine learning. This was about 2021, so I think it was actually about three and a half years ago now. It was just a five-minute video, and it walked through the different steps of machine learning. It was very dry. It was like data preparation, data transformation.

Izzy: Build your model, then look at the output and compare the different models. It was literally that. I don’t know what happened to me, but at the end of the video, I was like, oh my God, this is just the coolest thing. It was the first time I’d experienced that level of joy and excitement about something since I had seen Sin City.

Chris: Okay.

Izzy: It just totally gripped me. So immediately I’m like, man, I’ve got to learn how to code here. So I signed up for Codecademy and then enrolled in school. And after the nine months, they had pretty much covered all the basics. The only thing they didn’t cover was time series forecasting, which I ended up having to learn after I was out of school.

Chris: But did you know at that time that you wanted to integrate AI into the VFX platform, or was this just something that interested you?

Izzy: At the time, it was basically just something that interested me. And I realized, if I don’t get involved in this now, I may really miss the boat. I don’t know why; I just had that kind of fear. And also, visual effects is really cool, but you don’t really get to invent products. You’re inventing illusions, which, from a societal point of view, I don’t think is that important.

Izzy: Like, it’s cool stuff, but it’s basically like selling a can of Coke. It’s not really helping anybody, and I can’t say that I would really get the fulfillment I was looking for from doing visual effects. I got a lot of fulfillment from building systems, from watching systems that I would build or help build get run by other people. For some reason, that was unbelievably fulfilling to me. Hiring people and then watching them execute a plan, I just absolutely loved it.

Izzy: Which is probably why autonomous agents are so exciting to me, because essentially you’re building a system and then having the agent run it. I mean, it’s very cool stuff.

Chris: Were doing FX, it was like you had a script that was very defined and your level of creativity was limited, whereas now you have much more creative reign.

Izzy: Totally, totally. It’s very much like Legos. All I can say is that about a month before I finished school, the doors just kind of opened, and we started getting deepfake inquiries just out of nowhere. Nobody had ever requested it. Nobody had even thought of it. It was not on people’s minds at all. This was about two years ago.

Izzy: And so we ended up doing our first project, and then another, and then another, and right out of the gate, because there was almost nobody doing it, we landed some pretty major deals, like an app with Jennifer Lopez, and the movie Weird: The Al Yankovic Story with Daniel Radcliffe. And that was really, really cool. And then the next year, 2023, saw a monumental shift in the market, where our main business became deepfakes. Like 50% to 60% of the leads that would come in were all deepfake. It just happened out of nowhere. I have zero explanation for it. People just all of a sudden saw that the technology was there and started to be like, okay, maybe we can use this.

Izzy: And now we’ve been seeing this growth happen for deepfakes. Because there aren’t a whole lot of providers, we got in really early, and we just started developing the technology, really getting our hands dirty, figuring out, okay, how do you shoot this stuff? How do you cast for this person? So now at VFX LA, we also do voice cloning, and we’ll do lip syncing for translation work. So if somebody’s speaking in English and they need to be speaking in French, we can do the AI translation and a voice-to-voice conversion, so now they’re speaking in French, and we replace the mouth as well. And it looks very, very good.

Izzy: And that’s a recent thing that we’ve gotten into. Properly, technically, we started developing it six months ago, but I would say we made some monumental shifts over the past couple of months, and especially in the last 30 days we’ve made some real breakthroughs, just technically, in how to make it that much more convincing.

Chris: Very cool, man. So last year I had the opportunity to interview you for the Inhuman documentary, which, ironically, we filmed at the height of the 2023 writers’ strike, which in part was to address protections against the perceived threat of AI in creative work. As an expert navigating the potential, but also considering the ethical challenges, of deepfakes, what responsibility do you feel AI developers have in shaping the responsible advancement of these tools in creative industries?

Izzy: That’s a very, very good question, and really not an easy one to answer, but I’ll do my best. What I’ll say is that, probably more than expected, a good portion of the machine learning experts out there that specialize in deepfakes are also kind of artists, because it’s not just straight machine learning where you’re developing these neural networks. There’s also a large amount of artistry involved. I mean, it’s really complex stuff. It’s not like, hey, just take all this data and feed it into a model, and it comes out great.

Izzy: Far from it. The amount of manual, meticulous artistry required to produce a really convincing deepfake is quite elaborate. And so most of the developers that I have met are really conscious about the types of projects that they take on. They have a really strong moral framework, probably more than most industries that I’ve seen. They don’t want to do any project that could potentially be used to hurt somebody, which I think is a great thing. That’s obviously my personal stance too: we don’t want to hurt people, we don’t want to ruin people’s lives, because the technology absolutely has the ability to do that.

Izzy: And so I think as a developer, you probably don’t want your technology to get out into the wrong hands. You want the work to be used for good, or at least, maybe not good, but at least entertainment. And I really do think the responsibility largely falls on the individual who’s using the technology. You could turn anything into a weapon, obviously, and I’m not going to use the gun trope.

Izzy: Well, guns can be used for hunting or for killing people, but, genuinely, I could take a fork and go stab somebody with it, and now that’s a weapon. So with deepfakes, for one, it takes a high degree of skill right now, and that may change in the future, but right now it takes a high degree of skill. And two, I do think that, at least in the deepfake community, there’s a really good culture around people not wanting it to be used for bad. At least among the really high-level guys.

Izzy: Most of the concern around deepfakes right now, yeah, there’s misinformation, but the majority of it is pornography. Probably 95% of it is being used for porn.

Chris: Yeah. During that conversation we had last summer, we also spoke at length about the technical process of deepfake creation. And you just mentioned that it’s a very intricate process, a very technical process, and it’s a lengthy process too, because you had walked me through it. So for the sake of this conversation, if you don’t mind, would you simplify that and talk us through, at a high level, the deepfake production process?

Izzy: Yeah, absolutely. The first part is you need 20 to 30 minutes of training footage. Essentially, all that means is more or less a locked-off camera, ideally evenly lit on the face, and a medium shot, so not super close, not super far. You need enough data in the face to feed into the neural network. And once you’ve got that training data, then you’ve got to clean it up and go through some post-processing.

Izzy: You feed it into a model, and it outputs something that hopefully looks more or less like the individual you’re trying to replicate, fit to a target individual. And then you go into the compositing process, where you’re making little tweaks here and there. You’re blending it into the face. You’re maybe keeping the eyes. You’re looking at what looks weird here, what looks weird there. Sometimes a cheekbone might look just quite odd, so you then have to go back and create a new model just for that cheekbone.

Izzy: So then you’re compositing that, and now you look and go, oh, actually, the guy’s mouth is kind of looking like this. And you’re like, why is that? So you look back at the training footage and realize the guy’s got a bad habit of holding his mouth like that, but the target maybe doesn’t want to look like that. So then you have to retrain the model just for the mouth, for those particular sections. So by the end of the output, you might have used five to six different models that you’re combining into one.

Izzy: And that’s one of the reasons why this stuff takes so much time. Just training a base model right out of the gate will oftentimes take about two weeks. You can even get it done faster sometimes, depending on what GPUs you’re using.
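
For the technically curious: Izzy doesn’t specify the architecture VFX LA uses, but the classic face-swap idea behind many deepfake tools is a shared encoder with one decoder per identity. The shared encoder learns pose, expression, and lighting, while each decoder learns one face. Here is a minimal, illustrative PyTorch sketch under those assumptions, with random tensors standing in for the aligned face crops that the 20 to 30 minutes of training footage would provide.

```python
# Illustrative only: a minimal shared-encoder / dual-decoder face-swap model.
# Random tensors stand in for aligned face crops; this is not VFX LA's pipeline.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened face crop; real models use convolutional nets

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 512), nn.ReLU(), nn.Linear(512, n_out))

encoder = mlp(IMG, 128)       # shared: learns pose, expression, lighting
decoder_src = mlp(128, IMG)   # learns the face being replicated
decoder_dbl = mlp(128, IMG)   # learns the on-set double's face

params = (list(encoder.parameters()) + list(decoder_src.parameters())
          + list(decoder_dbl.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

faces_src = torch.rand(32, IMG)  # stand-in for the training footage
faces_dbl = torch.rand(32, IMG)

for step in range(100):  # real training runs for days to weeks on GPUs
    opt.zero_grad()
    loss = (loss_fn(decoder_src(encoder(faces_src)), faces_src)
            + loss_fn(decoder_dbl(encoder(faces_dbl)), faces_dbl))
    loss.backward()
    opt.step()

# The swap: encode the double's performance, decode it as the replicated face.
swapped = decoder_src(encoder(faces_dbl))
```

The last line is the actual swap: encoding the double’s performance and decoding it with the replicated person’s decoder. Real pipelines add face detection and alignment, masking, perceptual losses, and the per-region retraining and compositing Izzy describes.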

Chris: Got it.

Izzy: Okay.

Chris: But to get that target individual, how difficult is it to find the person who aligns with the intended deepfake? I mean, that’s a critical component, correct?

Izzy: Very much so. It is not easy at all. When we’re casting for a project, we’ll go through hundreds of people, and we test. We’ll build a preliminary model to test. There are basically two things we do. We’ve got this software called Helix Scan, which is fine-tuned for the features on an individual’s face that matter for a deepfake. So, the distance between the eyes, for example, the distance between the mouth and the eyes, or the distance between the upper lip and the nose.

Izzy: So let’s say there are 200 or 300 applicants for actors. We basically feed them through this software, and it automatically ranks them. Let’s say we’re doing, I don’t know, Tom Cruise. It’ll find the guy who looks most like Tom Cruise, and it may not be who our eye would pick. In fact, our eyes are consistently very unreliable for casting the right individual when it comes to replicating somebody with a deepfake.

Izzy: So the software is very helpful. It basically gives us a shortlist of five to ten people. Once we have those ten people, we take our preliminary model and fit Tom Cruise’s face to each of them to see who looks the most like him. And through that, now you’re using your eyes, and you can make a judgment call, like, okay, this guy’s going to look the best here.
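
As a rough sketch of the kind of geometric ranking Izzy describes: Helix Scan is proprietary, so the landmark names, coordinates, and feature set below are invented for illustration, but the general recipe of comparing scale-normalized landmark distances is straightforward.

```python
# Hypothetical sketch of ranking lookalike candidates by scale-normalized
# facial landmark distances. Not Helix Scan; all values are made up.
import numpy as np

def features(lm):
    """lm: dict mapping landmark name -> (x, y) pixel coordinates."""
    eye_gap = np.linalg.norm(np.subtract(lm["left_eye"], lm["right_eye"]))
    eye_center = np.mean([lm["left_eye"], lm["right_eye"]], axis=0)
    eye_mouth = np.linalg.norm(eye_center - np.asarray(lm["mouth"]))
    lip_nose = np.linalg.norm(np.subtract(lm["upper_lip"], lm["nose"]))
    # Normalize by eye gap so the score ignores image scale.
    return np.array([eye_mouth / eye_gap, lip_nose / eye_gap])

target = {"left_eye": (100, 120), "right_eye": (160, 120),
          "nose": (130, 160), "upper_lip": (130, 185), "mouth": (130, 200)}

candidates = {
    "actor_a": {"left_eye": (98, 118), "right_eye": (158, 119),
                "nose": (128, 158), "upper_lip": (129, 182), "mouth": (128, 199)},
    "actor_b": {"left_eye": (90, 110), "right_eye": (170, 112),
                "nose": (130, 175), "upper_lip": (131, 200), "mouth": (130, 220)},
}

t = features(target)
shortlist = sorted(candidates,
                   key=lambda name: np.linalg.norm(features(candidates[name]) - t))
print(shortlist)  # closest geometric match first
```

In practice, the landmarks would come from a face-analysis library run over candidate headshots, and the shortlist would then go through the preliminary-model test Izzy mentions.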

Chris: Gotcha.

Izzy: Anyway, so that’s basically how a deepfake is done. And that casting process is so important, and it’s definitely not easy to get right. If you have a very limited amount of time, you’re basically just going by your eye, and that’s not what you want to do with this sort of thing. You want at least two to three weeks to properly cast an individual to get the best results.

Chris: Interesting. So I’m sure that you’re aware of this, but recently Tyler Perry paused his $800 million studio expansion due to AI concerns. Have you seen this?

Izzy: Yeah. Okay.

Chris: Particularly OpenAI’s Sora and its capability to create realistic scenes. Do you feel like this is a valid concern? And do you think that AI will be able, or maybe already is able, to generate synthetic filming locations, et cetera? I think his ultimate concern was eliminating certain jobs that are critical in the filmmaking process. So, yeah, just curious to get your thoughts on that.

Chris: And also if you’ve had the opportunity to see or experiment with Sora yet.

Izzy: Yeah, that’s a good question, man. I’ve tried to get access, and I just can’t. I’ve been through all the sites, and they’re onboarding key individuals to help test, like VFX artists and creative artists and whatnot. I’ve been trying to get access to it to play around, but I haven’t been able to. So I don’t know if they actually are onboarding people. Maybe they’re just saying that. I really have no idea.

Chris: So he’s basing his concern basically on what we’re seeing.

Izzy: I think so, man, and I don’t think he’s wrong. If it was only a $5 million expansion, then yeah, obviously, just do the expansion; it’s not going to take over that fast. But $800 million? It might take two or three years for him to complete that, and within three years we are going to be in a completely different environment when it comes to this sort of stuff. In all honesty, I thought something like Sora was going to come out three years from now.

Izzy: Yeah. When I saw it come out, I was like, oh my God, this is profound. And it’s obviously not perfect. There are a lot of different flaws. It’s not cinematic quality, and you’re also going to have the same resolution output issues. It’s probably not going to be able to do 4K out of the gate, but eventually it will, and eventually it’s going to be perfect. And for probably a little while yet, you’re still going to need to clean up the footage.

Izzy: And getting really specific stuff is hard, just like anything with AI, man, any kind of machine learning. I mean, one of the fundamentals of machine learning is that a narrow, narrow, narrow application equals the most powerful outcome. So rather than, okay, I want to predict a million different things here, I’m going to predict one thing. I’m going to predict this single variable, and I’m going to do everything in my power to create a data source that is good enough, that has enough variables in it, that I can predict this one single outcome.

Izzy: So with time series forecasting, for example, the farther out the prediction you’re trying to make, the worse the performance is going to be, as a general rule. There are obviously exceptions to that rule, but as a general rule. Then you get large language models. A large language model is technically doing time series forecasting, where you have a sequence of words and you’re predicting the next word or the next sequence of words that the model deems is going to come after it.
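
That rule of thumb is easy to see numerically. In this illustrative sketch (synthetic data, no real model), the best possible forecast for a simple AR(1) process decays toward the mean as the horizon grows, and the error climbs accordingly.

```python
# Synthetic illustration: forecast error grows with horizon. For an AR(1)
# process x_t = phi * x_{t-1} + noise, the best h-step forecast is
# phi**h * x_t, and its error rises toward the series' overall variance.
import numpy as np

rng = np.random.default_rng(0)
n, phi = 5000, 0.9
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

for h in (1, 5, 20):
    pred = (phi ** h) * x[:-h]           # forecast made h steps earlier
    mae = np.mean(np.abs(x[h:] - pred))  # error at that horizon
    print(f"horizon {h:2d}: MAE {mae:.2f}")
# MAE climbs from roughly 0.8 at horizon 1 toward the unconditional
# level of the series as the horizon grows.
```

The same intuition carries over loosely to language models: the further ahead you try to predict, the more uncertainty compounds.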

Izzy: But when it comes to large language models, let’s just use those because people are most familiar with them. The most profound applications that can come out of a large language model are still going to be in the most niche application you can think of, because there are going to be far fewer hallucinations, it’s going to be far easier to control, and you’re going to be able to generate data for that model that is very specific. It’s much easier to create guardrails for it.

Izzy: When you have a model that can do a whole bunch of different things, well, obviously ChatGPT is amazing, but the thing still hallucinates. You wouldn’t want to have it do one specific job in a company, for example. It’s a great assistant, it’s a great tool. I use it all the time. It’s totally awesome. Just like with the image generation or Sora, it’s really going to come down to what the niche applications are that this model can be used for, and then constructing a pipeline and products around it that will be used for that one particular thing.

Izzy: And then you might have a whole bunch of use cases utilizing that one model. That’s my general theory around product development utilizing AI models, but maybe not everybody would agree with that. Certainly you’re going to see the most powerful outputs when you think of it in those terms. Even with Sora, even though it’s an amazing model, for it to really start taking jobs...

Izzy: There’s a lot of engineering that will have to come into play in order to develop niche applications for that particular model. And that might be difficult to visualize, because it’s such an outstanding thing. You look at it and it’s like, wow, this thing can do everything. But when you actually want to get it to do very specific things, that’s when the flaws are going to come forth. It’s going to be like, oh, I don’t like this guy’s single strand of hair right here, or his hair is a little bit too long. I want the identical scene that you just produced, but let’s shorten his hair a little bit.

Izzy: Now watch the model try to produce something that you’re actually happy with. It’s going to be very, very difficult. And then again, congruency between characters. The list goes on. It’s not an easy thing to productionize. So I don’t know if it was the right move, but it’s probably actually pretty wise from a business standpoint, when you’re investing $800 million.

Chris: I was going to say, 800 mil, and how long that’s going to take. And then consider the pace we’ve seen from AI technology up until this point.

Izzy: Exactly.

Chris: If you look at that, you have to be concerned.

Izzy: Absolutely.

Chris: It would give me pause too, me too.

Izzy: I wouldn’t even think about it.

Chris: And not even being in the tech industry, you can see it coming. You can still see it coming for sure.

Izzy: Yeah, I agree. So I think it was very wise that he did that, for sure.

Chris: So beyond that, what are your thoughts on the future of digitally replicating individuals, like actors and actresses, in the form of deepfakes? And are there any identity safeguards individuals in the entertainment industry specifically can use to either prevent or allow their image from being used, whether in film, YouTube, or other forms of content?

Izzy: Well, right now, there’s unfortunately not a whole lot. I think maybe one of the biggest forms of protection is adoption. A lot of actors are running away from it right now. I haven’t met a single actor that has a model, a neural network model, built of them that they can use and license out to other companies. Not a single one. I’ve even talked about it with a couple of big agencies, and nobody has any interest.

Izzy: It doesn’t mean that they’re right. It just means that they’re terrified, and they are running away from it rather than running into it. And I think if you don’t get in front of it, you can’t protect yourself. The moment there’s money to be made, that’s the moment laws are going to be created, because right now there are pretty much no laws that protect against somebody creating a deepfake of me, making me say a bunch of horrid shit, and then posting it on YouTube.

Izzy: Like, there’s nothing. There’s literally almost nothing. I think California has one law, at least as of a few months ago (maybe it’s changed a little bit; I know there are some discussions going on), where you can’t do a deepfake of a politician within 30 to 60 days of an election. Something like that. That’s the only law. That’s it. And they’ve got another one in Texas.

Izzy: I think revenge porn using deepfakes is illegal there, or something like that. But beyond that, you know, there’s not much.

Chris: Very limited, very limited.

Izzy: And what I can say is that people are going to adopt deepfakes in ways that we probably can’t even think of. That includes weaponization of it. That also includes entertainment, commerce. What I’ve seen since the beginning of 2023 is just a massive influx of people wanting to use this technology, but not knowing anything about it, for one, and not knowing how much it costs. So a lot of the time, somebody would come in, and we’d have to say, well, this is very difficult work. It’s highly technical.

Izzy: Everybody who works for us on deepfakes is not cheap. It is very highly paid work, and it’s not cheap and it’s not easy and it’s not fast. This stuff is very, very difficult, basically pure engineering. So there’s not a whole lot that somebody can do right now. But personally, I think people should get in front of it. Say an actor gets in front of it: they start updating their model every single year, they’ve got all of their data in a secured infrastructure, and they control it. They’ve got contracts around their likeness with studios and commercial agencies, and their agents can go and pitch this to different people. Now, all of a sudden, one, the actors have a brand-new revenue stream.

Izzy: Two, it makes it possibly a little bit more affordable to get Leonardo DiCaprio in your commercial, if that’s what you want. And if you get injured or you get sick, a digital double can still go in your stead on set. So it’s also a beneficial insurance plan for unexpected things that can happen to an individual who needs to show their face on screen.

Chris: While protecting you, too.

Izzy: While protecting you. Exactly. So, you know, right now there’s not a whole lot that can be done. But like I said, adoption helps with protection. And when nobody’s adopting it, it’s very difficult to predict.

Chris: How cognizant do you feel Hollywood celebrities are of deepfake attacks in the current state of affairs? I mean, I’d assume there’s more awareness now, with these high-profile cases being publicized more. But I’m just curious, with you being in that industry, if you’re more aware of how celebrities are perceiving it. Is it a growing concern, or is it perceived as just one-offs?

Izzy: I mean, it’s definitely a growing concern. That was largely what the whole industry shutdown with the SAG negotiations was about. They were talking about deepfakes, and, in my opinion, they didn’t have the right people. They had a bunch of people that I heard on podcasts, computer scientists, talking about deepfakes and the risks of deepfakes and how dangerous it is for the industry, without actually having any experience doing it themselves, or with very little knowledge of what actually went into the process.

Izzy: Everybody just made it seem like it was this big magic thing where you feed a bunch of people’s faces into a neural network and out it spits a deepfake. And it’s like, what are you even talking about? So I just don’t think people are aware. I don’t think people even want to learn. I don’t think people want to get in front of it until there’s forced adoption, which eventually will happen. I don’t know exactly what that’s going to look like, but eventually it will happen.

Izzy: I can already tell you now that much bigger companies, big, big companies, are starting to use deepfake technology in ways that I had never even thought of. We’re working on a few projects right now for a couple of large companies, and these companies are going all out. They want to make this a thing. They want to utilize deepfake technology in the majority of their productions, or at least use it in their marketing campaigns, in the case of one particular company we’re working with.

Izzy: They sell products and that sort of thing. I won’t mention the name, but they want to implement deepfakes and AI voice clones and all sorts of stuff into their whole marketing campaigns. And they’re a massive company. And we keep getting more and more inquiries. We’re talking with lots of businesses right now, and we’ve worked with lots of businesses already. The technology has been around for a while, but I would say its uses are only now becoming known.

Izzy: People haven’t really known how to use it. And now it’s like, okay, I can replicate anybody’s voice; what does that actually look like? What can I use that for? Okay, I can replicate anybody’s face; well, let’s actually use it in a commercial. We can’t afford this guy over here, but we can afford his likeness. So let’s go negotiate for his likeness, and we’re going to put him in a commercial.

Izzy: We’ve already done that before, and that’s the type of stuff they’re starting to do. So I would say it’s going to be forced adoption, because companies are going to want to do it, and eventually the market will just squeeze and force people in. Or something really bad is going to happen, where somebody creates a whole bunch of models and, I can’t even fathom how it would be used, but it’s either going to take a crisis or a monetary explosion to get laws put in place or to get people to actually take action.

Izzy: And I definitely have seen a big change in people’s willingness to adopt the technology, but I would say in the acting realm, it’s definitely not great. And I understand why; I’m not criticizing them.

Chris: And beyond the acting realm, you have politicians and other individuals that are prime targets for an attack. So with it being an election year, it will be interesting to see what happens when lawmakers are involved. There may be a stronger call for urgency, dude.

Izzy: Absolutely. Well, it also makes me wonder, not to be a conspiracy theorist here, but why haven’t laws been put in place? It’s been around for a little while. There are already deepfakes being done of politicians, whether it’s audio or visual. That stuff is already going on. So why aren’t they doing anything about it? That’s very interesting. I don’t know.

Izzy: I don’t know. The only thing I can think of is that it’s intentional. It’s been around for long enough, and damage has already been done because of it. I don’t know.

Chris: Yeah. What does it take? You know what I mean?

Izzy: Yeah, yeah, absolutely.

Chris: So what are your thoughts on real time deepfakes? Is that something that you see evolving more?

Izzy: Definitely, and frankly it’s something we’ve only played around with. We’ve developed a bit of tech here and there, but not a whole lot. I can’t say we’ve put a bunch of money into it, because there’s a company called Metaphysic, and they’ve basically got the market on real-time deepfakes. And so, with the market that we’re in right now and the type of stuff that we’re asked to do...

Chris: I can’t imagine the compute power that requires.

Izzy: Yeah, totally, totally. It’s massive. And I’m personally not an expert on real-time deepfakes. Members of our team are, though, and they’re really, really good, so I’ve had quite a few conversations with them about it. It’s trickier. Especially because you’re limited by the angle of the face. The moment I turn my head, like a 90-degree turn or a 45-degree turn, it’s going to bug out. And so I think the application of real-time deepfakes is probably going to be most prominent in scams; the majority of the money is going to be put towards schemes and fraud.

Izzy: There was that thing that just happened in Hong Kong, where the CEO called up this employee, and there were twelve other people on the call, and it was a real-time deepfake that convinced the employee to set up a wire for $20 million or something like that. Wild. That’s where the money is going to be going. You know what I mean?

Chris: Yeah.

Izzy: So it’s not going to be, like, awesome productions. Who’s going to pay $20 million to do a real-time deepfake? That’s where the money’s going to go. So anyway, that’s partly why we haven’t focused on it. But the moment there’s an opportunity, we’re set up to engage with it. But, you know, I’m not an expert in it.

Chris: And real-time video deepfakes are way different than audio deepfakes. I mean, you see how accurate audio deepfakes can now be with tools like ElevenLabs. So I can’t imagine it taking too much longer to have a tool produce a high-quality audio deepfake in real time.

Izzy: No, not long at all. I mean, we’re probably only a few months away from that. I do know the cartels actually put a decent amount into this. Well, I don’t know the actual numbers, but what I do know is that they have developed some pretty capable technology, at least based on some of the scams I’ve heard coming out of there, where they’re doing real-time audio deepfakes, acting like somebody’s child or something like that: you know, I’m hurt, I’m being held hostage, yada, yada, yada. When in reality, the kid’s at home and there is no problem. So I’ve heard of some of these schemes going on, and because there’s so much money to be made there, I don’t know who they’re hiring for this stuff, but there are developers that will develop stuff like that. Which obviously is not good, but that is largely, I think, where the technology is going to be developed.

Chris: Yeah, crazy. Okay, let’s talk about Inspira for a minute. (Izzy: Sure.) Inspira provides augmented productivity via artificial intelligence. Talk to me a little bit about Inspira and how that came to be.

Izzy: Good question. So me and my dad have started a few different businesses in the past. And over the past eight or nine years, together we have hired about 2,000 people, technically a little more than that. I personally hired anywhere between 500 and 800 people. And so we’ve had a wide range of experience, because almost every single one of them has been a remote worker.

Izzy: Sorry, when I say we have a wide range of experience, what I mean is we’ve had a lot of things happen: people don’t show up for work, they’re working on somebody else’s project that we’re not paying them for, they’re billing a client of ours when in reality they’re billing somebody else’s client, or there’s outright fraud in some capacity, whatever it is. So we first created basically just a time tracker, to get people to track their time. It had some security measures so that if some suspicious activity was going on, you could investigate.

Izzy: And then that turned into, well, how can we incorporate AI into this? Then it turned into this: we have developed cognitive frameworks for large language models, for conversational agents, and now these agents manage people. So it’s taken this massive turn. We didn’t really expect that to happen; it just sort of happened organically. So right now, we basically help companies transform their business with AI. The first thing we look at is human behavior.

Izzy: We’re looking at what sort of human behavior it would be beneficial to predict. So, for example, one of the things that we can predict is whether a person is going to show up for work the next day. You could predict how many hours they’re going to work tomorrow, or how many hours they’re going to work next week. Different predictions like that. It can change based on the company’s needs, but that’s something that we can do.
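
Inspira hasn’t published its models or features, so purely as a hedged illustration: a prediction like “will this person show up tomorrow?” could be framed as a binary classifier over behavioral features. The feature names, coefficients, and synthetic data below are assumptions, not Inspira’s actual system.

```python
# Hedged illustration only: attendance prediction as binary classification.
# All features and data are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(7.5, 1.5, n),   # hours worked yesterday (assumed feature)
    rng.exponential(10, n),    # minutes late today (assumed feature)
    rng.poisson(1.0, n),       # no-shows in the past 30 days (assumed feature)
])
# Synthetic labels: lateness and past no-shows raise the odds of a no-show.
logit = -3.0 - 0.1 * X[:, 0] + 0.05 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
tomorrow = [[6.0, 45.0, 3]]    # short day, 45 minutes late, 3 recent no-shows
print(model.predict_proba(tomorrow)[0, 1])  # predicted no-show probability
```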

Izzy: And then the second pillar is we look at what aspects of employees’ behavior you want to encourage them to keep doing, and what they’re doing that you don’t want them to do anymore. Then we engineer these conversational agents that will coach them based on whatever it is you do or don’t want them to do. And they’ll pop up based on specific triggers. So if a person’s late for work, an AI manager is going to pop up on their screen and say, hey, Izzy, why are you late? You’re 45 minutes late.

Izzy: What’s going on? Is everything okay? So we’ve got managers that will do different things like that. We just released a study, a white paper on SSRN, that anybody can take a look at. It’s called, let’s see here, now all of a sudden I can’t remember the name of the paper. Oh, yeah: Evaluating the Impact of Artificial Intelligence Versus Human Management on Modifying Workplace Behavior.

Izzy: So in that study (it’s a pilot study, so it’s not with a whole bunch of people), we found that AI was pretty much just as effective as a human being at getting an individual to change their habits. And that’s very promising. So that’s the second thing: AI managers. And the third thing that we do is agents that will actually complete tasks.

Izzy: So we look at your highest-leverage tasks that could be automated, and then we develop agents that will complete those specific tasks. So those are the three things: one, predicting behavior; two, implementing AI managers to either completely replace management or assist human managers; and three, selectively picking out high-leverage tasks that can be automated by autonomous agents. I think right now every company can and should be doing that.
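
To make the second pillar concrete, here is a toy sketch of the trigger-then-coach pattern Izzy describes, paraphrasing his lateness example. The event schema, threshold, and llm_reply stub are all hypothetical; a real system would call an actual LLM with a coaching prompt built from the company’s rules.

```python
# Toy sketch of the trigger-then-coach pattern: a rule fires on a behavior
# signal and hands context to a conversational agent. All details hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClockInEvent:
    worker: str
    minutes_late: int

def llm_reply(prompt: str) -> str:
    # Stand-in for a real LLM call driven by a coaching prompt.
    return f"[AI manager] {prompt}"

def on_clock_in(event: ClockInEvent) -> Optional[str]:
    if event.minutes_late < 30:   # assumed trigger threshold
        return None               # no check-in needed
    return llm_reply(
        f"Hey {event.worker}, you're {event.minutes_late} minutes late. "
        f"What's going on? Is everything okay?"
    )

print(on_clock_in(ClockInEvent("Izzy", 45)))
```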

Izzy: And if they don’t, I mean, if two years go by and your competitors have implemented all of these things, you’re going to be left in the dust. There’s really no two ways around it. So, like I said, getting in front of it as early as possible is a way to protect yourself. But that’s the technology we’ve developed right now, and that’s kind of what we do.

Chris: That’s awesome, man. And what’s the website so that listeners can go out and check it out?

Izzy: Yeah, it’s inspira.ai. So basically just Inspira AI.

Chris: Love it, man. Yeah, I love what you’re doing.

Izzy: Thank you, man.

Chris: I’m looking at this report, and it seems like the AI and human hybrid approach provides the most benefit.

Izzy: Yeah, it does. So that was what was very interesting. AI and human together had significantly better results than just human or just AI alone, but AI alone did perform on par, or just slightly under.

Chris: Slightly. I mean, it’s fractional.

Izzy: Yeah, totally. So if you have a business, you would ask yourself, okay, because it’s such a marginal difference, does it really matter? Think of the gains you’re going to get from, not replacing this individual, but freeing up this individual’s time through AI so they can focus on more important things.

Izzy: The cost-benefit there is very good. And then, of course, AI and human working in tandem produced by far the best results.

Chris: Yeah, interesting. And for me, as someone in the cybersecurity realm, I see value in utilizing this tool for detecting user behavior anomalies or insider threat detection. So I’m just curious, have you ever considered approaching the tool’s capabilities from that standpoint?

Izzy: Well, it’s something that we want to get into. We haven’t right now, though, to be perfectly honest. But it’s 100% there. It was actually since meeting you that I started thinking about it. We have our list of use cases that we’re working on, and I do have a couple for HR: keeping track of employees that may go off the rails in terms of their social media, or just a general intelligence framework where you’ve got agents interacting with reams of data. So I’ve got a couple of ideas revolving around there that would be good for HR, which could maybe turn into, okay, this individual is a security threat because of X, Y and Z. But what you’re saying is absolutely somewhere we’d love to take the software eventually.

Chris: Yeah, let me know, man. I can help you with that.

Izzy: Please, man.

Chris: Yeah, I definitely think there’s a use case there for it.

Izzy: Cool. I love that, man.

Chris: So I’m going to switch lanes here for a second.

Izzy: Okay.

Chris: When we met up, dude, I will never forget the place that you took me to. I probably had the most massive beer I’ve ever had. Do you remember that?

Izzy: Oh, yeah. It was insane.

Chris: The mug, like, dude, it was heavy. Like, my arm hurt after drinking that beer. But it was great, man.

Izzy: It was a great place. Yeah.

Chris: And so, yeah, that was near you in North Hollywood. So I’m just curious, in your opinion, what’s the best place, the most unique place, you know, bar slash restaurant slash lounge, to unwind in the North Hollywood area?

Izzy: Yeah, man. So I don’t go out a whole lot, but every week to two weeks, me and my girlfriend go and get Korean barbecue, and that’s like a two-hour event. I remember one time we went, like, three days in a row. That was not a good thing. But it’s called Han Barbecue, in Burbank, and it’s totally awesome. It’s all you can eat, obviously, but they also have sushi that’s really good.

Izzy: But the quality of the meat there is just so darn good, man. It’s great. And then if you’re looking for drinks, there’s this place in Hollywood called Scum and Villainy Cantina. We went there a couple of times in January. It’s a Star Wars-themed cantina, and it is so legit. It’s epic. You walk in and you’re in Star Wars, you’re in the cantina.

Izzy: Very, very cool.

Chris: That sounds really cool.

Izzy: It really is. So those are my two favorite places at the moment. There’s also one other place in Beverly Hills, a cigar lounge. It’s called Buena Vista Cigars, or Buena Vista Cigar Lounge, something like that. And it’s super, super classy. I don’t even know how to describe the theme of it, but it’s very cool. And there’s a bar in there too, and cigars. So it’s a great spot to go and hang out.

Chris: All right, well, I just heard last call. You got time for one more?

Izzy: All right, I do.

Chris: If you decided to open a cybersecurity themed bar, what would the name be, and what would your signature drink be called?

Izzy: All right, so I’ve got one. If it was gonna be a cafe, the Turing Cafe. (Chris: Okay, nice.) The drink, Neural Nectar, and it’d be some sort of chai latte. (Chris: Oh, I love that, man.) That’s what I got.

Chris: I’m not gonna lie, man. That’s good. So, Izzy, thanks so much for joining me today, man. I really appreciate the time and having you share your knowledge with us.

Izzy: Thank you so much for having me, man. It was a blast.

Chris: Take care.
