6: Contentyze with Przemek Chojecki

I have the privilege of speaking with an AI trailblazer and a member of Forbes 30 Under 30, Przemek Chojecki. We discuss “Contentyze”, a platform he created that aims to fix the inefficiencies in journalism with automated content generation. We also talk machine learning, deepfake technology, and where AI and cybersecurity intersect.

SYMLINKS
LinkedIn Account
Twitter Account
Medium Blog
Contentyze
Explainable AI
OpenAI’s GPT3
Vocal Synthesis
WaveNet by DeepMind
AlphaGo by DeepMind
Obama Deepfake
edX platform
Robot Bar

DRINK INSTRUCTION
THE ANOMALY
4 oz Peanut Butter Whiskey
1 1/2 oz Cranberry Juice
Combine all ingredients. Pour over ice into a whiskey glass.

CONNECT WITH US
Become a Sponsor
Support us on Patreon
Follow us on LinkedIn
Tweet us at @BarCodeSecurity
Email us at info@barcodesecurity.com


This episode has been automatically transcribed by AI, please excuse any typos or grammatical errors.

Chris Glanden 01:32

With me at the bar today, direct from Poland, is Przemek Chojecki, an AI entrepreneur with a PhD in mathematics and a member of Forbes 30 Under 30. He completed his PhD in Paris at UPMC and then became a research fellow and a lecturer at the University of Oxford. Currently, he focuses on building Contentyze, a content generation platform which he founded and where he serves as CEO. Przemek, thanks for stopping by BarCode.

Przemek Chojecki 02:32

Thank you and pleasure to speak with you, Chris.

Chris Glanden 02:36

So, I want to kick things off by getting to know a little bit more about you and your journey into AI. And then talk to us about what led up to the inception of your company, Contentyze.

Przemek Chojecki 02:47

Sure, I’m happy to share my story. My background is really academic. I thought when I was in my 20s that I wanted to be a professor and spend my whole life doing pure mathematics. And basically, that was my life until I turned 30. I guess that was the breaking point, more or less. So, I did my PhD in pure mathematics in Paris, then I moved to Oxford to become a research fellow, still doing mathematics. And around that point, when I was 27, 28, I started thinking I’d like to make a change, and I’d like to have more impact on the world, and pure mathematics is not enough to do that. So I switched, but it was a very slow process in the end. I switched from pure mathematics to machine learning and AI, and I went from academia to business.

So, for the past few years, I’ve been developing and working on different startups, or collaborating with different companies and enterprises on various AI-related projects. And basically, since the start of this year, I’ve focused entirely on building Contentyze into a full company, hopefully. As for the origin story: the company itself is, as you said, a content generation platform. We use a bunch of machine learning models to generate content at scale. And the reason for doing that, for me, was that I was always interested in literature, I was always interested in writing. I had a couple of blogs myself, and I’d written a couple of novels even before switching to business.

So, from my perspective, doing something with content and doing something with writing was a dream come true. I could combine my writing skills with my mathematical and machine learning skills and do something with them together, and that’s how Contentyze itself came about.

Chris Glanden 04:38

Excellent. Are you generating content? Or are you providing the model for other organizations to generate content?

Przemek Chojecki 04:44

Actually, both. Most of the users are using Contentyze as a SaaS platform. You can just sign up at contentyze.com, provide a headline, and based on that headline you get a whole text, or a draft of a text. So, you don’t even have to write anything yourself apart from the title of the text. That’s the goal in the end. But on the other hand, the larger companies that I work with have templates in mind for the kind of content they want to create. That might be related to different financial statements or different FAQs. This is really templated and fully scripted, but they need to gather the data from different sources and merge it together into something better, something which is much easier to digest in the end and to make decisions with.

Chris Glanden 05:36

Got it. Now, with me being in the cybersecurity field, I’m personally curious about where AI and cybersecurity intersect and where that will ultimately take us. Without a doubt, AI technology is a game changer, not only for generating this type of dynamic content; it’s also changing the landscape for cybercriminals and for the organizations implementing defensive controls. Have you seen advancements in the cybersecurity/AI space? If so, what have you seen, and where do you see it going?

Przemek Chojecki 06:15

Yeah, I think the most common application of AI right now, from the moment AI actually took off, around 2012, has been anomaly detection. And by anomaly detection, I mean looking for the outliers in different transactions. Let me phrase the problem in a way that makes the application apparent.

So, for example, in banks, where you process millions of transactions on a weekly or maybe even a daily basis, you need to look for the outliers in order to spot those players, those agents, who might actually be the bad actors, because there might be some kind of fraud going on. And in order to do that, you start classifying your customers, you start classifying your users, and then you look at those classes and look at who is actually the outlier from those classes. That was the early application. Trying to understand what the anomaly is and who the outlier is, it’s really a broad and deep subject. But this is the common use case of AI and machine learning, and it will be so for the next few years as well, because as we progress with technology, with more computing power, more things will be possible.
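The outlier-spotting idea described here, profile what “normal” looks like, then flag whatever falls far outside it, can be sketched with a toy example. The transaction amounts and the z-score threshold below are invented for illustration; real fraud systems model far richer behavioral profiles than a single mean and standard deviation.

```python
import statistics

def flag_outliers(amounts, z_threshold=3.0):
    """Flag transactions that sit far outside the typical profile.

    A stand-in for real anomaly detection: model "normal" behavior
    (here just the mean and standard deviation of amounts), then flag
    anything more than z_threshold standard deviations away.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > z_threshold]

# Twenty routine card payments between $20 and $50, plus one odd transfer.
transactions = [20.0 + (i % 7) * 5 for i in range(20)] + [9800.0]
print(flag_outliers(transactions))  # only the 9,800 transfer is flagged
```

In a real pipeline the "profile" would come from clustering or a learned model per customer segment, which is the classification step Przemek describes, but the flagging logic is the same idea.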

Also, if you deal with such a huge scale of operations, like millions of transactions per day, then in the end you need a lot of computing power to really observe what’s going on. So that’s the standard application. And on the other hand, what we have, and this probably overlaps more with what I’m doing currently, is content generated at scale for bad purposes.

So, for example, you could generate fake news, you could create Twitter bots with bad intent in mind, and you can do a plethora of stuff with that, which in the end brings you some kind of political benefit. With those kinds of systems, there’s an ongoing combat between trying to spot them and, on the other hand, the hackers, or whatever you call those kinds of groups of people, who want to have political influence, maybe not in the best way. There’s an ongoing combat between the good guys and the bad guys over who’s got the better technology, who can detect whom, and whether you can influence public opinion by doing those kinds of things.

Chris Glanden 08:51

Interesting. And from what you have seen, who has the edge on that? Is it the attacker?

Przemek Chojecki 08:57

I guess the attackers have the edge, and they will always have at least a minimal one. The reason for that is, with AI at least, it’s actually easier to construct than to deconstruct. There is a whole area within the community called explainable AI, with the goal of trying to understand why AI is taking certain decisions in certain categories.

And it’s really hard to understand sometimes, because you have these convoluted functions within those AI algorithms. And if you think from the cybersecurity perspective, what the good guys are doing is trying to deconstruct what the bad guys constructed, and that’s a much harder problem than actually constructing something. Even without AI, it’s always been easier to create a virus than to detect it and defend against it.

Chris Glanden 09:58

Absolutely. And it’s a lot different reverse engineering malware versus reverse engineering, you know, an AI-generated attack.

Przemek Chojecki 10:06

Yeah, exactly. There’s an additional layer of complexity to that. Constructing malware was already difficult, and with AI on top of that, it’s becoming even more difficult than it was before. So that might be problematic in the long run. But on the other hand, while we might have bad actors trying to, for example, flood the system with fake information, people are also getting more cautious about what they see on the internet. So, I think there’s a natural defense mechanism within ourselves, which is getting better with time.

Chris Glanden 10:47

Yeah, and some of the adversarial tech that’s out there, the sort of cutting-edge tech that I read about, is just mind-blowing. One article I read not long ago talked about OpenAI releasing what’s called GPT-3. Have you heard about this?

Przemek Chojecki 11:04

Yeah, sure, of course.

Chris Glanden 11:05

And apparently, it’s a new language model trained with almost 200 billion parameters, or something like that. And it’s super fast, capable of doing programming, designing, and even holding conversations about politics or the economy. So, what do you think about this tech and its capabilities?

Przemek Chojecki 11:23

So, this is a perfect example, because GPT-3 in the end is just a language model, which means that it’s learned to predict the next word, the next sentence, and all those marvelous applications are done by other people who build on top of that. For example, to maybe give your audience more context on GPT-3: what you can do is translate plain English sentences into code really easily. You can just say, for example, “create for me a website with a green button which would say Sign Up and take me to this-and-this website,” and you get HTML code that you can use right away. And it’s similar with things like mockups for applications, different designs, maybe SQL code, stuff like that.
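As a concrete illustration of the plain-English-to-code exchange described here: no API is called below, and both the instruction and the HTML response are hand-written stand-ins for the kind of output a model like GPT-3 might plausibly produce, not captured from the model itself.

```python
# The shape of the interaction: an instruction in plain English goes in,
# and usable HTML comes back. The response is an invented example.

instruction = ("Create a website with a green button that says Sign Up "
               "and takes me to https://example.com/signup")

generated_html = """<!DOCTYPE html>
<html>
  <body>
    <a href="https://example.com/signup">
      <button style="background-color: green; color: white;">Sign Up</button>
    </a>
  </body>
</html>"""

print(generated_html)
```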

So, it’s more like a super smart interface which does things for you. And again, it’s like a knife, in the sense that it’s just a tool, and you can use it how you want to use it. GPT-3 by itself is just a great tool, but the commercial applications depend on how you envision using it. And the same goes for using algorithms like GPT-3 for bad purposes.

So, for example, you could use GPT-3 to generate spam content and then send that to millions of people. On the other hand, someone can use the same GPT-3 to actually detect spam content and defend against it. So, this has two sides: with this kind of knife, you can both attack and defend yourself.

Chris Glanden 13:11

The GPT-3 war.

Przemek Chojecki 13:12

Yeah, exactly. But I’m a huge fan of GPT-3. Some of the algorithms we actually use on Contentyze are very related to GPT-3. I won’t go into technical machine learning details, but there’s a common streak between those and what OpenAI did. And there are other players here, of course, Microsoft and Google, doing similar stuff as well. GPT-3 is really mind-boggling.

But from the perspective of the actual algorithm, it’s nothing new, in the sense that these are the same type of algorithms that were already part of GPT-2; it’s just that the computing power used is much larger, and the dataset used to train those machine learning algorithms is much larger as well. And this is probably a common thread in other applications of AI: it’s not only about algorithms, but also about who’s got more computing power. So, in the end, I think enterprises might have the better edge in defending themselves, because they probably have much more access to computing power than the bad actors, especially those from smaller organizations.

Chris Glanden 14:22

That makes sense, and my next question was going to be about the computing power. I can’t imagine this is something where someone could just go online and download a program, and I assume there’s no SaaS-based offering right now that provides that computing power. It seems like it would be an expensive venture to go through.

Przemek Chojecki 14:41

Yeah. So, for example, if I were to do something like, I don’t know, spamming millions of people, I could probably do that; in my head, I know how to do it. But it would really be quite an adventure, in the sense of the computing costs and then doing everything in a correct way. If you think about this and compare it to the malware from the 90s, when you had those simple viruses which just went viral over email, it’s a completely different layer of sophistication that you have to add on top.

So, the hackers who can do those kinds of things need to be much better educated and much more sophisticated with the whole technology stack in order to pull off things like that. But on the other hand, if they really pull it off, they can affect a much larger group of people than before. So, there are pros and cons to using AI for those kinds of things: you can affect many more people, but it’s much harder to pull off.

Chris Glanden 15:44

Exactly. Okay. And text-to-speech models are another one. I was on YouTube the other day looking at vocal synthesis, and you can actually hear it, and you honestly would not know it’s fake. I think I saw Arnold Schwarzenegger reading Hamlet; if you didn’t know it was a YouTube video, it could really be deceiving. Have you gotten to see or hear any of those types of voice-trained models? And how difficult is it to do that? Does it take the same amount of computing power as something like GPT-3?

Przemek Chojecki 16:19

Yeah, to be honest, it takes much less, and it’s much better understood. There’s a model called WaveNet, done by DeepMind already, I think, two or three years ago, and it’s pretty much available on the web. So, if you know what you’re looking for, you can do it yourself. I mean, you still need computing power, but much less; for those kinds of models, you probably won’t spend more than $1,000 per month to buy the necessary infrastructure on the cloud.

So, it’s not inaccessible. And there are already bad actors using those kinds of technologies. I’ve read about a fraudster impersonating a Chief Executive Officer’s voice and demanding a money transfer. That was in 2019, I think, and it got caught in the end. I don’t remember whether the transfer went through or not, but still, it already happened. So, there are people who are already trying to use this kind of technology for bad purposes.

Chris Glanden 17:22

I think of that as deepfake audio, where detection would be almost impossible if you had enough audio to use. And I guess that audio would be easier to obtain for a CEO or CSO versus gathering video of them to produce a deepfake video.

Przemek Chojecki 17:41

Definitely, you’re right. Especially with public people, like CEOs or politicians, it’s much easier to gather all the data from YouTube videos, from TV, and so on. So those people are at risk when it comes to those kinds of technologies.

Chris Glanden 17:57

So, on the topic of deepfake videos, would you briefly be able to talk us through what a deepfake video is and how it’s generated?

Przemek Chojecki 18:08

Sure. So basically, a deepfake video is a video which was generated by AI to show a person who might be real but who never actually acted in those circumstances. Probably the most famous right now is a deepfake of Obama. You can Google that and go on YouTube to see the video: there’s Obama talking, and it looks really real. And it’s already from, I guess, three years ago, and this technology is really getting better with time. As for the way it’s generated: it’s basically done by what are called GANs, generative adversarial networks, which were introduced about five years ago. The idea is very simple at its core: you have basically two AI algorithms, one of them trying to generate something fake, and the other one telling whether it is fake or not. The first algorithm is just trying to deceive the second one. And once the first algorithm is pretty good at deceiving the other one, then you’re probably good to actually use it for your commercial applications, or whatever it is.
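The generator-versus-discriminator loop described above can be sketched as a toy GAN on one-dimensional numbers instead of images. Everything here (the target distribution, the single-layer models, the learning rates) is invented for illustration; real deepfake systems use deep convolutional networks and vastly more data, but the adversarial training structure is the same.

```python
import math
import random

random.seed(0)

# Toy GAN: real data ~ N(4, 1). Generator G(z) = a*z + c tries to mimic it;
# discriminator D(x) = sigmoid(w*x + b) tries to tell real from fake.
a, c = 1.0, 0.0          # generator parameters
w, b = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(3000):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * zi + c for zi in z]

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    gw = gb = 0.0
    for x in real:
        s = sigmoid(w * x + b)
        gw += (1 - s) * x
        gb += (1 - s)
    for x in fake:
        s = sigmoid(w * x + b)
        gw -= s * x
        gb -= s
    w += lr * gw / (2 * batch)
    b += lr * gb / (2 * batch)

    # Generator: gradient ascent on log D(fake), i.e. fool the discriminator.
    ga = gc = 0.0
    for zi, x in zip(z, fake):
        s = sigmoid(w * x + b)
        ga += (1 - s) * w * zi
        gc += (1 - s) * w
    a += lr * ga / batch
    c += lr * gc / batch

fake_mean = sum(a * random.gauss(0.0, 1.0) + c for _ in range(1000)) / 1000
print(round(fake_mean, 2))  # should land near the real mean of 4
```

Scaled up to networks that output pixels, this same loop is what drives deepfake generation: the generator keeps improving until the discriminator can no longer tell its output from real footage.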

So, this is how those kinds of algorithms work. And to be honest, they’re pretty well understood right now, but maybe they’re not yet good enough to use in films or movie productions, because once you know what you can look for, you will always find those so-called artifacts. Those are small things which will tell you right away that this piece of content is generated.

So, for example, if you look at the hair, especially just above the ears, or at small details in the eyes, you can sometimes see differences between those generated videos and the real ones. But these kinds of techniques are getting better, and it definitely will be a problem very soon. On the other hand, they might also be useful to, for example, restore old movies, or have actors from old Hollywood play in new movies. I think it’s still in ongoing production, but I guess James Dean is going to appear in a new movie, I think next year, thanks to those kinds of techniques. Oh, interesting.

Chris Glanden 20:29

Do you see any benefits of deepfake videos outside of just being purely entertaining to watch? I always think of the negatives, but I’m trying to find a good use case.

Przemek Chojecki 20:39

Yeah. So, I mentioned this James Dean movie; I think that’s pretty cool. And in general, if you think about, what was the name of that movie, there was this movie on Netflix with Al Pacino? Oh, yeah. The Irishman. So that used deepfake techniques, but it wasn’t an entire deepfake. They didn’t deepfake the actors; what they did is make them look younger, through different AI techniques. So that’s also deepfake, but not exactly the same technique, and it’s being done for the purpose of, you know, making the movie look better, or in this case making it more coherent, because those actors were supposed to be younger at that period in the movie.

So, they made it like that. And I guess the movie industry is a big place where those kinds of techniques can be really useful on a commercial level. On the other hand, you also have the whole entertainment sector, with news anchors, or YouTubers in general doing vlogs for the purpose of presenting news. That’s definitely a good thing, I would say, because you can enhance human performance. You can make it easier; in the end, you can make the movie, or the presented news, more polished.

So, for example, right now we are talking, and you can edit everything afterwards, right? But with a video it would be much harder, because your audience could see that, for example, you make the [inaudible 22:21], or that the lips are not synced with what I’m saying, stuff like that.

Chris Glanden 22:24

Gotcha. Yeah,

Przemek Chojecki 22:25

Thanks to those kinds of techniques, you could make those cuts in a way which, again, is both a good thing and a bad thing. Because you could put in words that weren’t spoken, but on the other hand, you can make those edits to make the whole show look better, smoother.

Chris Glanden 22:41

Yeah. But I’ll tell you what, I wish I had an AI tool that would do this editing for me, because it’s a lot of legwork.

Przemek Chojecki 22:47

Oh, yeah. The same kind of techniques would, in the end, allow you to do that. I think we’re not there yet, but definitely there should be something possible within the space of editing videos and editing podcasts automatically.

Chris Glanden 23:05

I would love that.

Przemek Chojecki 23:06

Yeah, I would bet two years maximum before there’s a SaaS product doing exactly that.

Chris Glanden 23:12

Interesting. Yeah, there are many use cases I can think of from the cybersecurity angle. You know, you have the evolution of ransomware, so you could even use this tactic for extortion, or for bringing down an organization as fast as you would with any other cyberattack. Why don’t we see deepfake technology more in the attack landscape? Is it primarily the cost and complexity that go into producing one, and just not having enough material on hand?

Przemek Chojecki 23:41

Well, that’s a good question. To be honest, I don’t know, because I’m on the good side, basically. But you definitely need more technological sophistication. So, I guess there might be a problem in hiring the right people, for the bad actors, because people who are interested in and capable of doing those kinds of things with AI and machine learning algorithms are super hard to hire even for Google, Facebook, and Microsoft. It must be even harder for the bad actors to hire them.

So, I would see this as the primary reason, because other than that, the technology is there. There’s plenty of open-source technology that you can use, and the computing power is there, because you can use Microsoft Azure, Google Cloud, Amazon AWS, or whatever else you prefer. So that’s not a problem, and the costs are really low right now. I would primarily think it’s because of the sophistication you need in order to enter this market, which might be a deterrent to the bad actors, which is a good thing. But the problem is, once they get in and are able to do that, I think they can do a lot of harm with those kinds of techniques.

Chris Glanden 25:03

Sure. And those bad actors are often looking for a quick drive-by; they want to get in and out, and they target as many people as they can. So, like you said, with the sophistication required, and maybe the lack of simplicity involved in creating these videos, it’s just not available to them right now. But as the technology evolves, maybe an off-the-shelf tool becomes available, and maybe you’ll start to see it more. What would you say about deepfake detection in terms of tooling? I know detection has been a field of research for several years now, but are you aware of anything right now? And how far have we come toward having a truly accurate way of detecting?

Przemek Chojecki 25:45

Yeah, that’s a good question, but actually there’s no tool I’m aware of that can say with 100% certainty whether something is generated or not. With text, it’s super hard to say whether a text is generated or not; there are a couple of groups of researchers working on that, but that’s definitely a wide-open problem. With images and videos it’s a little bit easier, also with voice, but I’m not aware of any tool that you can take off the shelf and start using in your organization. So that’s the problematic part, the thing you already mentioned: there’s no SaaS offering for those kinds of products, and the projects I’m thinking about right now are more tailor-made for particular use cases. There’s nothing that you can just take off the shelf and start using right away.

So, from the point of view of, say, a secretary in one of those organizations, there’s no programmatic way to check whether a caller is fake or not. That person would have to double-check with her boss, or check the recording, to make sure it’s really coming from the real person in the end.

Chris Glanden 27:03

Got it. So, it comes down to just being aware and having a process. [crosstalk]

Przemek Chojecki 27:07

It’s the same with messages on Twitter. I’m not sure if you’re aware, but I think something like 60% of all traffic on Twitter is bots. That’s a huge number. And to some extent, apart from the most obvious cases, most of the time you can’t really tell whether a given profile is a bot or not. So, I would think about this in a different way: in the end, maybe it’s not such a bad thing whether it’s a bot or not. What really counts is the purpose. If the purpose of a given bot, a given algorithm or agent, be it human or non-human, is good, it’s fine. I mean, if it’s for the betterment of humanity, then it’s great. What we want to do is catch the bad actors, be they human or non-human.

So that’s the principal problem here. Because if you just look at the fake news itself, fake news can be generated by AI, but it can be written by people as well. And that’s what’s being done: you have those farms of trolls, people who are just hired to write defamatory articles, for example, or to spread misinformation, go on social channels, and spread that fake news even further. This can be done by humans as well. So, the distinction between using AI here or using people is really not as important as the distinction between whether the purpose of a given thing is good or not.

Chris Glanden 28:46

You can almost clone someone’s personality if you have enough information, even just looking at social media, right, and being able to scrape posts. I’m sure there are AI engines out there that could clone a profile and post things that the real person may be thinking of posting, ahead of them. I mean, I’m sure it’s getting to that level of intelligence.

Przemek Chojecki 29:08

Oh, definitely. Definitely. Well, to be honest, I would like to have something like that for myself, to not have to post on social media every second day, and just have some kind of algorithm to do that for me.

Chris Glanden 29:22

Exactly.

Przemek Chojecki 29:23

That’s actually a fun story to share. I tried running a Twitter bot for myself, like my alter ego, a month ago, but unfortunately it was banned by Twitter because it reposted too much tech-related content.

Chris Glanden 29:39

Interesting. Okay, so Twitter does have a detection there for bots.

Przemek Chojecki 29:44

Oh, yeah, definitely. I mean, all the major platforms, Twitter, Facebook, LinkedIn, Google with the content you put on YouTube, Reddit also, all of them have different measures in place to counteract those kinds of things. But still, once you have the control measures, there are people who are trying to circumvent them. It’s a game, in the end, of who’s ahead and who’s doing what.

Chris Glanden 30:12

Is it looking at things like post frequency or terminology use, things like that? I’m sure there are a number of different criteria and algorithms it’s using to detect those bots.

Przemek Chojecki 30:23

Yeah, exactly. You can look at the location a given person is writing from, the IP address, and also what’s being written, how often, and so on. But then it kind of goes deeper, into trying to analyze the sentiment behind those tweets, whether that’s spreading hate at a massive scale. That’s also possible. But to some extent, you get to the point where you have to decide what you allow on your platform and what you don’t allow, because bots are basically trying to behave like humans. You’re trying to [inaudible 31:03]; from the perspective of the bad actor, you want to have bots which behave pretty much like humans, more or less, but which might spread the bad information.
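The signals mentioned here, posting frequency, repetition, and so on, can be combined into a crude score. The features, weights, and thresholds below are invented purely for illustration and bear no relation to any platform’s actual detector, which would use many more signals (IP, location, sentiment, network structure) and a trained model rather than fixed rules.

```python
from collections import Counter

def bot_score(post_times_sec, post_texts):
    """Crude bot-likelihood score from posting frequency and repetition.

    Returns a value in [0, 1]; higher means more bot-like. Only a sketch
    of the idea, not a real platform's detection logic.
    """
    score = 0.0
    # Signal 1: inhumanly rapid posting (average gap under a minute).
    gaps = [b - a for a, b in zip(post_times_sec, post_times_sec[1:])]
    if gaps and sum(gaps) / len(gaps) < 60:
        score += 0.5
    # Signal 2: heavy repetition of identical content.
    most_common = Counter(post_texts).most_common(1)
    if most_common and most_common[0][1] / len(post_texts) > 0.5:
        score += 0.5
    return score

# A profile posting the same link every 10 seconds looks strongly bot-like.
times = [0, 10, 20, 30, 40]
texts = ["buy now http://spam.example"] * 5
print(bot_score(times, texts))  # 1.0
```

As Przemek notes next, scores like this only catch behavior; they say nothing about whether the account’s purpose is good or bad.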

And the reason that is possible is because you have humans who are bad actors too, meaning they spread hate online, they spread misinformation. So again, with social media, it’s less about catching whether it’s a robot or not; it’s more about catching the purpose behind writing a piece of content. And it doesn’t matter whether that’s a human or some kind of algorithm being used to write it.

Chris Glanden 31:37

And the audience really should be aware that this technology exists and not fully rely on Twitter, because I’m sure there are some that get through. With the sheer volume of bots out there, they’re not going to detect everything. So it’s important for users of these platforms to understand that this type of technology does exist.

Przemek Chojecki 31:58

Yeah, definitely. I mean, awareness is the key here.

Chris Glanden 32:01

Yep, absolutely. So, is there a solution out there to ease the fear of those anticipating the I, Robot attack, if you’re familiar with the movie I, Robot, where the machines come to life and take over? Not necessarily from a machine-takeover standpoint, but from an intel-driven attack. I would say that most consumers out there look at AI in a good way, as consumers of products. But for those that have that fear of a takeover, what would you say to help ease that fear a little bit? Is there something you can say?

Przemek Chojecki 32:40

Well, it’s a hard question, because I don’t think I can, in a sense. The thing I can say is that this situation is similar to the fears people had, I guess, during the Cold War with atomic bombs; it’s exactly the same situation. I think that on the level of ordinary people, there’s probably nothing to be scared of, because those kinds of technologies require much more sophistication, and the computation costs are still considerably higher than just pulling off an ordinary scam online. There’s no danger in being a target of those, unless you’re a public figure, a CEO of a company, or a politician; then there might be reasons to target you, and those techniques can be used for that.

So, I wouldn’t be scared. If you’re not a public figure, there’s nothing to be scared about, I guess, because there’s no real benefit to doing that versus something much simpler. There are plenty of scams already online that people should be aware of, like stealing your ID even without using AI, or getting you to give your credit card information in the wrong place. Those kinds of things are much easier to pull off from the perspective of a bad actor. I would be more scared about being a public figure in the coming years, because you can definitely use some of those techniques to target these people and maybe try to extract a vital part of their wealth or have influence over them.

Chris Glanden 34:30

So high profile targets.

Przemek Chojecki 34:32

Yeah, definitely. I also wouldn’t be scared about the I, Robot scenario, because in order to pull something like that off, you would really need great resources. You would basically have to be either a country or a really established terrorist group to do something like that, and it still probably wouldn’t be the best way to attain your goals, whatever your goals are as a bad actor. The problem is, the costs of implementing AI are still much higher, especially from a technical perspective, with the hires you have to make and the people you have to attract to do that for you, so it’s not an easy option. And there are much easier options to attain the same goals.

Chris Glanden 35:22

I think we really need to perfect the AI versus AI model in the future in terms of detection and defense.

Przemek Chojecki 35:30

Yeah, definitely. I mean, from the perspective of large enterprises, banks, insurance companies, I would definitely invest in AI in order to detect anomalies, or to do something even harder in order to counteract those potential attacks. So being part of the research community is really important here, because you can learn what people are working on and prepare for the next couple of years.
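[Editor’s note: to make the anomaly-detection idea concrete, here is a minimal sketch of statistical anomaly detection over a metric like hourly login counts. The data, threshold, and function name are illustrative assumptions, not anything from the episode; production systems would use far more sophisticated models.]

```python
import statistics

def find_anomalies(values, z_threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# Hypothetical hourly login counts with one obvious spike at index 6.
logins = [12, 15, 11, 14, 13, 12, 90, 13, 14, 12]
print(find_anomalies(logins))
```

The same pattern, flag what deviates sharply from the baseline, is the core of the enterprise use case described above, just with richer features and learned models instead of a single z-score.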

Chris Glanden 36:03

Are there a lot of opportunities there for AI research?

Przemek Chojecki 36:06

You mean, for the researchers or like?

Chris Glanden 36:09

Yeah, I guess for the community in general. Like I said, it’s something that I’m interested in, and occasionally a LinkedIn post comes by that grabs my attention; I think I sent you one not long ago that was really around AI security. Those types of things really grab my attention, and to my knowledge, there aren’t many courses in the space yet. So for myself, and maybe for others out there who are interested in the advancements of AI, what would be your recommendation for a path of learning?

Przemek Chojecki 36:41

Oh, yeah, I mean, that’s a great question, because I’m not aware of any single course on AI in cybersecurity available on Coursera, Udemy, or any other platform. If you want to just jump into machine learning and data science, there are plenty of courses, from IBM, Google, and other big organizations, to top universities like Stanford, Harvard, and so on. But the problem is they’re very general purpose, and I’m not aware of more cybersecurity-oriented courses that you can just pick up to learn exactly what you should study in order to defend your organization and be able to implement security measures.

So that’s a niche still to be filled, but maybe the reason for that is, if you look at AI, AI is machine learning, those learning algorithms, and it’s still a pretty new field; it basically started to boom again in 2012, as I said before. So maybe the reason for the lack of content in this particular niche of cybersecurity plus AI is the short history of AI itself in this [inaudible 38:01] iteration, because there were previous iterations of AI in the 60s and 80s.

But those used different techniques and much less computing power, which is the reason AI back then didn’t really take off. So, answering your question, I can’t really answer it well, meaning I don’t have really great courses to recommend that you could take on, for example, Coursera. There are probably some things at Stanford right now, and you might be able to access something online. I would also go to the edX platform, I’m not sure if you know them. It’s run by Harvard or one of those schools, and they do courses online, most of them for free. So it’s super easy to pick something up, and I would search for cybersecurity over there.

Chris Glanden 38:52

You know, I hadn’t seen that one. If I can find the link, I’ll post it on the Barcode website underneath the episode notes and point some listeners that way. You’re right, I did come across it, and I believe there may be some AI and ML type courses there.

Przemek Chojecki 39:10

Yeah, there’s definitely something there. There’s no established, classical cybersecurity-plus-AI course you have to take, but there should definitely be something.

Chris Glanden 39:19

I agree. I think that we need to start incorporating AI into the cybersecurity curriculum.

Przemek Chojecki 39:25

Yeah, definitely. I just started thinking about all those viruses on steroids: if you take the viruses from the 90s and power them with AI, they can quickly become much more powerful and much harder to spot and extract. I can think of different applications, but as we discussed, it’s much harder to do that at massive scale right now than it was before. So maybe there’s some time for people to learn about the potential dangers before it actually happens, because I think we’re still at a point with AI where it’s pretty early in the game.

I mean, this revolution has been going on for almost 10 years, but it’s still early in the sense that there’s plenty of research going on, plenty of open roads, and the computing power is changing a lot from one year to another. Things like GPT-3 are a great example of that, or AlphaGo from DeepMind, or plenty of other applications, and those are very recent, basically just the last two years. So we’re probably still not at the point of real peril; there’s nothing really, really dangerous going on right now. But in a couple of years, we should definitely be more cautious about how AI interacts with what we do online, and how it can breach security at the organizational or private level.

Chris Glanden 41:02

As it evolves, I’d definitely like to keep in contact with you and always get your take, because you are definitely on the front line of cutting-edge AI research. Where can our Barcode listeners go to find out more about what you’re up to at Contentyze or any other projects you may be involved in? What is your social media footprint as well?

Przemek Chojecki 41:23

Yeah, so basically, the places I’m most active are LinkedIn, where you can find me, and my blog on Medium. If you Google my name, Przemek Chojecki, on Medium, you should be able to find my blog, where I usually write about technology-related issues.

Chris Glanden 41:46

Excellent reading. I believe that’s where I came across some of your articles, and I’ll get that link posted as well. So, this is last call here at Barcode, and I have one final question for you that will definitely involve a large amount of processing power. If you opened a cybersecurity-themed bar, what would the name be, and what would your signature drink be called?

Przemek Chojecki 42:08

Okay, that’s a great question. So, if I opened a cybersecurity bar, with a signature drink... Oh, well, you caught me off guard here. Actually, I have it. The bar would be called the Transformer Bar, like Transformers, and the drink would be called Megatron.

Chris Glanden 42:29

There you go.

Przemek Chojecki 42:31

And the reason for that is the machine learning models behind GPT-3 are called transformers, and one of the similar models is a model called Megatron from Nvidia.

Chris Glanden 42:47

That’s awesome. I really like that. I would go.

Przemek Chojecki 42:51

I wanted to add this [inaudible 42:52] and some relation to AI as well.

Chris Glanden 42:56

Now, when you walked into the bar, would AI be incorporated where your drink is already made for you?

Przemek Chojecki 43:02

I love that. Actually, I was thinking about having a place that would be run by AI, so you don’t need any employees; you can walk into an empty bar and have the drink served to you.

Chris Glanden 43:17

That would be cool. Just as a side note, I was in Vegas for Black Hat a few years ago, and I was walking by a bar, and I think it was called Robot Bar. It wasn’t really AI, but you ordered your drink on an iPad, and there was a robot behind the bar that would grab the drinks from the ceiling, off of a grid format, make your drink for you, and then send it out on a conveyor belt to your table. And there was no one else in the bar.

Przemek Chojecki 43:46

Oh, that’s right. Yeah.

Chris Glanden 43:48

It was pretty crazy. You know, with AI facial recognition, imagine just walking in somewhere: “Okay, you come here every Friday night, and you order this nine out of 10 times. We’re going to have it ready for you.”

Przemek Chojecki 44:01

Exactly, exactly. On the other hand, it’s giving up your privacy to some extent, right? You have to give away data about yourself. Something like that would probably already be possible in China, and I bet they have bars like that at massive scale, where they can predict your mood and what kind of drink you’d like to have at a given time. But the problem with that is giving away information; it’s the usual problem with social media platforms like Facebook and LinkedIn. It’s great that they can suggest all those things to you, but at the same time, there’s the risk that your data is out there.

Chris Glanden 44:39

It feels invasive.

Przemek Chojecki 44:40

Yeah, exactly. But anyway, the bar idea is really great. I would love to have something like that.

Chris Glanden 44:47

We’ll get there. Well, thank you for your time today, and for sharing your insight and explaining a lot about what AI is, what it does, and some of its capabilities. I’ll be sure to post those links on the website so our listeners can keep in tune with what you’re doing and the knowledge you share. I really appreciate you coming on.

Przemek Chojecki 45:07

Thank you. That was really a great conversation.

Chris Glanden 45:09

Take care.

Przemek Chojecki 45:10

Thank you. Bye.
