BarCode

BarCode @ Barcode 2 LIVE

August 16, 2024 Chris Glanden Episode 100

In this milestone episode, Chris reconnects with old friends at the bar, reflecting on his journey from starting a humble podcast to launching a thriving security firm. The episode sets the stage for the live event in Vegas, where Chris is joined by an impressive lineup of experts, including George Gerchow, Justin Hutchins, Len Noe, Chris Wright, Matthew Canham, and Izzy Traub.

The panel dives into a series of thought-provoking discussions centered on AI's far-reaching implications, from ethical dilemmas and security concerns to the dangers of deepfake technology. Industry icon George Gerchow also opens up about the deeply personal story behind the X Foundation, highlighting the critical issue of fentanyl poisoning awareness.

As the conversation unfolds, the experts engage in a compelling exploration of AI's future, its societal impacts, and the evolving relationship between humans and technology. The episode highlights the importance of forward-thinking leadership in guiding us through this transformative shift.

TIMESTAMPS:
00:04 - From Bar Talk to Episode 100: A Podcast's Journey
04:17 - AI's Impact on Job Automation and Cybersecurity
09:41 - A Father's Heartbreaking Story and the Mission of the X Foundation
17:29 - The Future of AI: Security, Ethics, and Human Impact
28:43 - The Complexities and Ethics of Creating High-Quality Deepfakes
34:05 - The Future of Humanity and AI Integration

SYMLINKS
BarCode Security: https://barcodesecurity.com/
BarCode (LinkedIn): https://www.linkedin.com/company/barcodesecurity/
X Foundation: https://xfoundation.org/
Barcode Burger Bar (Las Vegas): https://www.barcodeburgerbar.com/
ThreatLocker: https://www.threatlocker.com/
Exploit Security: https://www.exploitsecurity.io/
Ironwood Cyber: https://www.ironwoodcyber.com/
Sevn-X: https://www.sevnx.com/
The Language of Deception: Weaponizing Next Generation AI: https://www.amazon.com/Language-Deception-Weaponizing-Next-Generation/dp/1394222548/
TED Talk - Fentanyl Poisoning: https://www.youtube.com/watch?v=z651z4pfMZs
Time Magazine Article - Fentanyl Crisis: https://time.com/6277243/fentanyl-deaths-young-people-fake-pills/
OpenAI (Stargate Supercomputer Project): https://sidecarglobal.com/blog/an-overview-of-microsoft-and-openais-ambitious-vision-for-the-future-of-ai-supercomputing
AI Trust Council: https://aitrustcouncil.org/
VFX Los Angeles: https://vfxlosangeles.com/
Inspira AI: https://inspira.ai/
PsyberLabs: https://psyber-labs.com/

DRINK INSTRUCTION
Keep it 100
1 oz Captain Morgan 100 proof
1/2 oz Coffee Liqueur
1/4 oz Simple Syrup
1 1/2 oz Espresso
Add all ingredients to a shaker and shake. Strain into a coupe glass.

CONNECT WITH US
www.barcodesecurity.com
Become a Sponsor
Follow us on LinkedIn
Tweet us at @BarCodeSecurity
Email us at info@barcodesecurity.com

This episode has been automatically transcribed by AI, please excuse any typos or grammatical errors.

Chris: Yo, Tony! Danny boy! Damn, it’s been a while since I’ve seen y’all in here.

Tony: Damn right it has. I was beginning to think you had forgotten about this place. So what have you been up to?

Danny: Yeah, Chris, last time we saw you, you were strictly a podcaster. Now you’re walking around like you own this shit.

Chris: Well, y’all won’t believe it, but the podcast turned into something way beyond my own expectations. You know, I ended up launching my own firm, where we now offer security services to organizations. And I just did a documentary on AI weaponization called Inhuman.

Tony: Yeah, I’m not gonna lie, that’s pretty impressive for a little idea we started here in this bar four years ago. We really had no idea what we were getting into, did we? As a matter of fact, we started this right here at the bar. On a cell phone.

Chris: Yeah, that’s exactly how I remember it. You know, some major tech upgrades, and 99 episodes later, here we are. I literally had no idea that it would develop into this. I couldn’t have done it without you guys. We started this thing without knowing where it would lead us, and now just look at it.

Danny: Hey, don’t forget about me. I poured a few cocktails in this establishment. And don’t forget BoozeBot, too. That robotic bartender almost stole our jobs.

Chris: For a while, man. I could never forget BoozeBot. But listen, you guys were always the core of the show that helped shape it into what it is today. Oh, shit. I’m actually late for my flight to Vegas. I’m performing episode one hundo live. Dude, I’m pumped.

Tony: Ah, shit. Well, congrats, bro. But before you take off for Vegas, let us make you one more special drink for this special occasion. We call it keep it 100.

Chris: Nice. Make it quick, though, bro.

Tony: It’s easy. You just add 1 oz Captain Morgan 100 proof, a half ounce of coffee liqueur, a quarter ounce of simple syrup, and an ounce and a half of espresso to a shaker. Shake it and strain it into a glass.

Danny: That’s the spirit right there. Hey, guys, here’s to episode 100. And to whatever comes next.

Chris: Cheers, fellas.

Tony: Chris. Now go kick some ass in Vegas.

Chris: You know it.

Tony: We’ll see you next round.

Announcer: Welcome to Barcode, the cocktail-powered podcast that explores the technology, personalities, criminals, and heroes shaping modern security across the globe. Tonight, we’re live in Sin City, baby. Yes. Episode 100 is finally here. Please welcome your host, Chris Glanden.

Chris: Welcome to Barcode at Barcode 2, episode 100, live from Vegas. I want to thank our venue, Barcode Burger, voted best burger in Las Vegas. And I also want to shout out our premier sponsor, ThreatLocker, as well as our affiliate sponsors, Exploit Security, Ironwood Cyber, and Sevn-X. So with that: with deep expertise in security, compliance, and cloud computing, and over 20 years in IT and systems management, my next guest has built agile security teams and is a sought-after speaker on cloud security and operational security, including a TEDx talk. He is Head of Trust at MongoDB, co-founder of the VMware Center for Policy and Compliance, an IANS faculty member, and a board advisor, including for BarCode Security.

Chris: He is also the CEO of the nonprofit X Foundation. Please make some noise for George Gerchow. Yo, thanks for being here, man. It’s good to see you.

George: You too, you too. But hey, thank you for the warm welcome, man, like that just. I appreciate it.

Chris: So I mentioned that you serve on multiple advisory boards, including BarCode Security, and a lot of people are vying for your time. So I need to ask you, you know, why us?

George: First and foremost, I always start with the human side of shit. And this man was the first person that ever had me on a podcast to talk about fentanyl poisoning and some of the stigma around some of those sorts of things. You know, every time everyone wants to interview me, it’s always about trust. And lately it’s about motherfucking AI, which I know we’re gonna talk about. I’m good with that. But it was the first like really human conversation that I had in this space in public like that ever in a long ass career. Cuz he called me old about five times.

George: So it was the human aspect first. And then I started seeing some of the services you guys were delivering out there, and just a lot of integrity: the education, the media, you know, the upcoming documentaries. I mean, it’s amazing.

Chris: Thank you brother, I appreciate that, man. So I do want to switch to AI and I know you love talking AI. You know, you’ve been in the industry for a long time, you have a deep understanding of strategic system implementation from, you know, seeing it year after year after year. And now we’re all seeing enterprise organizations implementing AI into their workflows. In your opinion and from past experience, what have been some lessons learned on the front lines to achieve optimal business impact and what would be a tactical approach for organizations implementing AI within their enterprise environment?

George: Yeah, no, I mean, I think it’s a fair question. For me, there’s something controversial that I’ve always said to everyone that’s worked with me, which is: automate yourself out of a job as fast as you can. And it freaks a lot of motherfuckers out. Like, it completely freaks them out, because instinct just tells you, “then I’ll be out of a job.” In reality, by doing that, you add so much value by moving on and taking on something a little bit more strategic.

George: And I lean into developers a ton. I like watching developer habits, how they behave, and if you can, like, work with them, figure out the best ways to learn from them. Here comes AI: they’ve been doing this shit for a while, a while longer than a lot of cyber people have, in different ways, but especially when it comes to coding. So if you can automate the mundane tasks, you can move on to things that are huge and strategic. And I’m gonna piss some people off, and I got some Mongo people here, so they can hold me accountable.

George: Thank you, by the way. It means a lot to see you guys. The reality is that you will replace a lot of human functions. No one else wants to talk about that shit. They want to talk about the copiloting and everything else. Fuck, give me a month and I’ll probably replace a whole GRC department with it. What?

Chris: What?

George: Automated compliance by AI. Get the fuck outta here.

Chris: Let’s go. So, from your perspective, will AI replace human functionality?

George: I think in some cases, yeah. But I think it all starts off with bare bones type stuff. You know, researching alerts, all the shit that we’ve been talking about for a while, writing queries based off of human language, not having to know JSON, Regex, everything else. I think those are some skill sets that in some organizations are really missing, you know? And a lot of them are huge corporations that aren’t armed with those kinds of skill sets.

George: And so this will allow them to augment their teams, you know, quite a bit, so that when they do get some talent, they’re not stuck doing that kind of fucking shit, working on tickets all day long and everything else, you know?

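What George describes, writing queries from human language instead of hand-crafting JSON or regex, maps to a simple pattern in practice: hand the analyst’s question to a language model with a system prompt that pins down the output dialect. A minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name, prompt, and Splunk-style output dialect are illustrative choices, not anything George’s team actually runs:

```python
# Minimal sketch: turn an analyst's plain-English question into a log query,
# so nobody has to hand-write regex or JSON. Assumes the `openai` package
# (v1+) with OPENAI_API_KEY set in the environment; the model name and the
# SPL-style output dialect are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You translate plain-English security questions into Splunk-style SPL "
    "queries. Return only the query, with no explanation."
)

def nl_to_query(question: str) -> str:
    """Translate a natural-language question into a search query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,        # deterministic output for repeatable triage
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(nl_to_query(
        "Show failed admin logins from outside the US in the last 24 hours"
    ))
```

In a real deployment the generated query would be reviewed or sandboxed before execution, which is exactly the augment-then-escalate pattern George is arguing for.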
Chris: So let’s switch from tactical AI to tactical MMA, and let’s talk about the parallels there, because I know you’re a die-hard MMA fan, as am I, as are many others in the field. I know that Tim MalcomVetter wrote an article on this years ago, but lately I’ve been thinking about the principles of tactical MMA and how they relate to AI, like adaptability, like strategic planning. Those aspects could run in parallel to the approaches organizations should take when implementing AI.

George: I actually see a lot of commonalities between tactical AI and cybersecurity and MMA. I was fortunate enough to go to their nutrition center this week and train, and it was fucking crazy how much human-assisted technology exists in that fucking place. I mean, these fighters are lasting longer and getting more talented and focused because of human-assisted technology. And it’s no different than our industry, healthcare, everything. Everything is moving more and more into that, so it’s pretty cool.

Chris: All right, man, so regardless of weight class, who’s your favorite fighter at the moment?

George: Alex Pereira. That dude is a savage, man. He is a fucking savage.

Chris: All right, man. Well, listen, dude, I know you’ve been through some shit, man. Your leg’s fucked up. Tell me what happened to you yesterday, man.

George: Okay, so this VC was kind enough to invite about ten of us to go to the nutrition center for the UFC. And I was sitting there thinking, fucking 7:30 in the goddamn morning, by the way, to get a limo to go from there to the center. Whole different conversation when you’ve been out till 3:00 in the morning at my age. Anywho, I get there and I’m like, this is gonna be lightweight. You know, like, they’re gonna show us around. Forrest Griffin, he’s gonna take us on a tour.

George: We’ll do a couple things and that’s it. Fuck, no. From the minute we walked in there, they started. Like, this is the warm-up: ten minutes of fucking calisthenics. I was getting ready to have a goddamn heart attack, and I live 9,000 feet above fucking sea level. I thought I was gonna die. And then it became hitting bags, and then hitting each other. My knee is all fucked up because I got carried away, being a small man, and kneed this guy two times my size right in the fucking liver, and then apologized.

Chris: Damn, dude, you’re lucky I wasn’t there, bro. ’Cause I would have put you in a kimura.

George: You probably would have. Do any of you know who Jessica Andrade is? We rolled with her for a second, and she fucking tied me up like a pretzel.

Chris: Well, all I gotta say, man, is they’re lucky I was not there, man. I would have. I would have took care of that for you, man.

George: Next time, next time.

Chris: All right, man. Well, listen, man, I do want to shift to one last conversation point, you know, which is incredibly important: the X Foundation. And, you know, if you don’t mind, for those that don’t know you personally and don’t know your story, would you mind telling the audience here your story and the meaning of the X Foundation?

George: So this was the piece I was talking about. The first time I got a chance to talk about it in a cybersecurity setting in public was the Barcode podcast, which was amazing. On March 11 of 2021, I came home from a beautiful snowboarding day. A storm was coming in, so you’d figure I’d probably want to stay in the mountains, but I wanted to come home because I hadn’t seen my son in a few days. And I came home like any other day.

George: He had just gotten done lifting weights. We had dinner together. He went and played basketball that night, came home, said, “I love you.” “Love you. Good night.” Then we Snapchatted for a while into the night. That night, while I was sleeping, a couple buddies came over. There’s a good friend of his here tonight, by the way. Austin, right there in front.

Chris: Shout out, Austin. Thanks for coming in, man.

George: Yeah. And so anyway, I go to bed that night, and the next morning, he was found dead in my home. A kid that was there that night had split a pill that looked like a Percocet and given my son half, and it turned out to have a lethal dose of fentanyl in it and killed him on the spot. The other kid lived. You know, he went to the hospital, but he had a tolerance to street scripts. So after being in the fetal position for months, and I mean months, you know, I still kept on working because I was trying to look for distractions.

George: I was an empty nester overnight. On Christmas Eve, I was freaked out, because what do you buy your dead son? You know, 17 years of Christmas being such a joyous time had turned very solemn. And so what I did was, my daughter and I talked, and I remembered something that stood out on his death certificate. It said: fentanyl poisoning toxicity. And so we kept on talking about that term, and we started thinking about the fact that someone gives you what looks like a vodka Red Bull, which this is, but if they put fentanyl in it, and you have no idea what’s coming at you, anything that contaminates it, that’s a poisoning. That’s murder.

George: So we started educating people. We started a nonprofit, a 501(c)(3). And it was based on three things. The first one is the difference between poisoning and overdose. I’m sorry, but every time they say overdose, people write people off. And that’s bullshit, you know? There are kids who unintentionally will take an Adderall or take a Xanax, and it turns out to have fentanyl in it, a lethal dose. And so that’s the first thing we focus on. The second one is the stigma.

George: The stigma is bullshit. You know, like, I lost family members over my son. I overheard someone, a sibling of mine, the morning he died, when I’m completely freaked out, talking shit about how they think it was drugs. I had never even heard the word fentanyl before, ever, before that day. And so I lost that sibling, you know. I never speak to that person again. And I hate holding hate in my heart, but it’s just true.

George: And then the last one is, we take anything that we generate and put it to work. We’ve been lucky enough; I’ve done a TED talk on it, and as Chris was saying, we’ve been featured in Time magazine and by the BBC, because any parent, for those of you that have kids, you can imagine how a pill can make its way into your home. So anyhow, those are the three big things we’re doing. We’re paying for kids’ athletic fees in neighborhoods where they don’t have an opportunity.

George: And just trying to save as many people as we can and save lives. 99.9% of the time, I’m a mess, you know? And you, sir, were, like, so kind to me when we walked in and we exchanged stories about our numbers, you know? So I really appreciate you all listening. I miss him. I love him. But let’s spread the awareness, and we can save some lives together. So thank you.

Chris: Give it up for George, man.

Chris: Thank you so much for sharing that story with us. You know, I’ve grown to know you and love you as a person over the years. And just know, man, we are here for you, bro. Please let the listeners know how they can help support you and support the X Foundation.

George: So, first and foremost, like, ask me about him. I love to talk about him. A lot of people are uncomfortable. I get that. But any chance I get, I like to revisit who he was as a human, because a lot of times, the thing my daughter and I worry about is that the foundation focuses on the end, and so we have to bring him alive. That’s why it’s so important to see this kid here, because that’s some of the people that he rolled with, you know. The next thing is just talk to your loved ones.

George: Talk to them about never taking a pill or a substance from an unknown source like that. You know, you’re playing Russian roulette. And even, I’ll say this too, it’s controversial, but if someone wants to do a fucking line of cocaine, let them do a line of cocaine. For it to be cut with fentanyl is bullshit. So always have testing strips on you, and Narcan. Yeah, and we distribute that shit by the ton. And if you want more information, just look up our website.

George: I can’t remember it right now. Too emotional. I’m sorry. But yeah, those are probably the three big things you can do.

Chris: So listen, man, I appreciate you beyond words. And the website is xfoundation.org. Please hit up that site and support this cause. You know, it goes way, way, way beyond security. Thank you. Stay tuned. You’re not going to want to miss this next segment.

Stephen Hawking <Voice>: The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.

Chris: Justin Hutchins, aka Hutch, is an innovation principal at Trace3 and a prominent figure in cybersecurity, risk management, and AI. He authored The Language of Deception: Weaponizing Next Generation AI. Len Noe, known as the cyborg hacker, has ten digital chips implanted in his body, allowing him to infiltrate phones and access secure locations with a simple wave of his hand. Dr. Matthew Canham, co-founder of PsyberLabs, is a former supervisory agent with the FBI.

Chris: He has over 21 years of experience in human technology and security research. Chris Wright, founder and CEO of the AI Trust Council, brings his unique perspective on AI corruption in big tech and government oversight. Last but not least, Izzy Traub, CEO and co-founder of Inspira, has made significant strides in film and AI. He is known for pioneering deepfake technology and AI-driven automation. Let’s make some noise for our AI panel tonight.

Chris: Thank you all for being here. I’m going to make this easy and pull the trigger on this flamethrower towards you all with one single question that will light shit up. How would you assess the current state of AI security? And I’ll let whoever wants to take that, take that.

Chris W: Yeah, yeah, I can take that first question. So, fundamentally, humanity from this point forward is going to be different than anything we’ve ever known before. And so to take AI lightly at this point is almost criminal. You know, there’s a lot of people making a lot of money off artificial intelligence right now, and there’s a lot of people that are making money without considering the safety implications and the downside impacts to actual human beings.

Chris W: A lot of these leaders are so enthusiastic about AI, to the point that they’re willing to exterminate humanity in order to achieve this AI future. And so they’re actually planting the seeds of an AI future now that is absent from humanity. And so people need to understand that and understand that there is a fundamental difference between people that are pro human and people that are anti human. And sadly, there’s people in the tech community that are not pro human, meaning that they are not looking at the human impacts. And they actually think that we’re going to merge our consciousness with AI in order to become immortal. And so they’re actually excited for that future.

Chris W: They’re not excited to keep humans safe. And actually, you know, AI is a gift. We can actually transform society with AI in a positive direction, but it takes leadership in a positive way. So, yeah, we’re at a critical point, and people need to understand what that looks like.

Chris: Chris Wright, thank you, sir. Hutch, what are your thoughts on this?

Hutch: Okay, so my perspective on AI security is probably no rosier. It’s pretty bleak. I think that we are entering a very challenging time. When we talk about AI security, we often talk about some of the personal challenges related to it: the impacts it has on us if we’re scammed, or if a deepfake is used to damage our reputation, something like that. So we talk about those personal issues.

Hutch: We frequently talk about the corporate side of security as well. But what we don’t talk about frequently is the societal impacts. And we really are in the middle of essentially a scaling arms race, where we’ve got Silicon Valley acting absolutely recklessly by continuing to compete for profits, building increasingly capable systems at a far greater rate than the speed at which we’re able to effectively align and adjust the outputs of these systems to be consistent with what we want them to do.

Hutch: So, I mean, we see this arms race continuing. We’ve got, of course, OpenAI and Sam Altman, who is now committing with Satya Nadella to the hundred-billion-dollar Stargate supercomputer. We’ve got Google DeepMind committed to the same. There was a point not too long ago where Sam Altman was talking about trying to acquire $7 trillion for the future of AI. And that’s a nearly unfathomable amount of money.

Hutch: I mean, we’re talking about pretty much more than the entire cost of our 20-year war in Afghanistan. So I think that we really need to start taking a more holistic perspective on the impacts, not just on our companies, not just on the individuals around us, but really on our society as a whole.

Chris: Len, as a transhumanist, I’m interested in hearing your perspective.

Len: You know, I want to key off of one of the points that you made in terms of how AI is actually affecting humans and individuals. One of the points I want to call out is the lack of consistency, basically the wild west of AI right now. And what I’m talking about is more the biomedical integrations. I mean, we’ve got things like brain-computer interfaces, smart prosthetics.

Len: One use case that I can point to shows what I mean by this. There was actually a case in Britain where a woman had a neural implant put in that was able to detect oncoming seizures. This was an AI-based medical implant that actually improved her ability to live a normal life. It gave her, I want to say, between 30 seconds and a minute and a half of warning before a seizure.

Len: This was actually done as a human trial. And at some point, the biomed company that was pushing this out either went bankrupt or discontinued the product. So when we’re looking at AI security, I think we need to take into account, especially in biomedical terms, how removing these types of things from someone can affect them long term, physically as well as psychologically.

Len: And we have things in the Linux world like long-term support. I think when we’re talking about AI security from a biomed perspective, we need to include that long-term support for anyone that’s involved with any type of beta trial or has actually integrated AI and technology into themselves.

Chris: Interesting. So Matt Canum, hit me with your thoughts on this.

Matt: I’ve heard people talk about AI as being sort of like a new tool, right? And the thing that’s concerning about that is the autonomy aspect of AI and the fact that it can have agency, right? A hammer doesn’t just decide to get up and start pounding nails, whereas an autonomous AI system has that ability. And I think that’s one of the things that’s really underestimated in terms of humanity because I heard that at some point as well.

Matt: I think that there’s a lot of potential with AI and humanity. And when there’s a lot of potential, there’s a lot of potential for it to go really well, and a lot of potential for it to go really badly. And today I was actually inspired to have an optimistic conversation with someone. So I’m going to go with optimism for a change of pace here and say that I think there is a lot of potential to do real good here.

Matt: These platforms, these. These models that we’re developing, I think they have a chance or an opportunity to actually make us more human in a way that we really are before. And what I mean by that, and let me explain, I think that these models have an opportunity to actually help us understand how to relate to other human beings in a way that we’ve never had that opportunity before. And something that I think is also very underappreciated is at right now, in this time in history, we are in a severe spiritual deficit.

Matt: Regardless of what you believe about the hereafter or anything else, human beings crave meaning, right? And without that meaning and that sense of purpose, you get things like tent cities in Portland with 30,000 people who are dying on fentanyl. And I think that AI actually has the potential to help us through this crucible that we’re going through, to transcend and become a more enlightened species. So there you go, Chris.

Chris: I agree.

Matt: Fuck you. There’s my happy, happy version.

Chris: I appreciate that, man. I do want to get Izzy’s take on this. As a Hollywood filmmaker and deepfake artist, you know, outside of our industry, is security a concern for you? And when you’re creating deepfakes, do you consider security implications?

Izzy: Yes, absolutely. That’s a good question. You know, we’re very careful with the type of clients that we take on. Most of who we work for are, you know, celebrities or people who own the rights to their likeness, but not always. Sometimes we’re working for a studio who owns the likeness of somebody else, or another production that has licensed the likeness of somebody but can’t afford to actually put that person in a commercial.

Izzy: So they hire us to cast the digital double. They hire us to, you know, build the model of the individual. And you see them in the commercial, and you’d have no idea that it wasn’t them. So, I mean, I can tell you now, I think for a lot of the studios, the interest in deepfakes started in 2022, and it’s only grown, fairly exponentially, since then. In 2022, you know, we had some inquiries into the matter.

Izzy: Completely changed. 2024 completely changed. Now everybody’s thinking like, okay, deepfakes aren’t just going to be used for defaming people or making somebody say things that they shouldn’t have said and trying to blackmail them or extort them, but now they’re realizing that it’s actually a proper filmmaking tool that can be used to tell stories. So in terms of security, it’s not talked about a lot, I’ll tell you that right now. And there’s not a lot of laws that govern it.

Chris: Yeah. Talk to us about the actual process of creating a deepfake.

Izzy: Good question. So, if you want to create a really convincing deepfake, there are two things. One, you have to cast really, really well. We have our own software called Helix Scan, which is basically facial recognition software, but fine-tuned specifically for casting a person that will match a deepfake. It calculates, you know, the distance between lip and nose, cheekbone width, eye size, the distance between the eyes, because all of that stuff really matters, even the nose. So that’s the number one thing. If you cast wrong, it’s just never going to look believable.
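Helix Scan itself is proprietary, but the geometry Izzy describes (lip-to-nose distance, cheekbone width, eye spacing) is easy to sketch. A toy illustration only: the landmark names, ratios, and scoring below are invented for illustration, and the (x, y) landmark coordinates are assumed to come from some off-the-shelf face-landmark detector.

```python
# Toy sketch of geometry-based deepfake casting: score how closely a candidate
# actor's facial proportions match a target's. All landmark names and ratios
# here are invented for illustration; Helix Scan's real feature set is unknown.
import math

Landmarks = dict[str, tuple[float, float]]  # name -> (x, y) pixel coordinates

def dist(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def proportions(lm: Landmarks) -> list[float]:
    """Scale-invariant ratios: dividing by inter-eye distance makes the
    comparison independent of image resolution and framing."""
    eye_span = dist(lm["left_eye"], lm["right_eye"])
    return [
        dist(lm["nose_tip"], lm["upper_lip"]) / eye_span,      # lip-to-nose
        dist(lm["left_cheek"], lm["right_cheek"]) / eye_span,  # cheekbone width
        dist(lm["nose_tip"], lm["chin"]) / eye_span,           # lower-face length
    ]

def casting_score(target: Landmarks, candidate: Landmarks) -> float:
    """1.0 means identical proportions; lower means a worse casting match."""
    t, c = proportions(target), proportions(candidate)
    mean_err = sum(abs(a - b) for a, b in zip(t, c)) / len(t)
    return 1.0 / (1.0 + mean_err)

if __name__ == "__main__":
    target = {
        "left_eye": (100, 100), "right_eye": (160, 100), "nose_tip": (130, 140),
        "upper_lip": (130, 165), "left_cheek": (85, 130),
        "right_cheek": (175, 130), "chin": (130, 210),
    }
    # The same face shot at twice the resolution: the proportions, and
    # therefore the score, are unchanged.
    candidate = {k: (2 * x + 50, 2 * y + 10) for k, (x, y) in target.items()}
    print(f"casting score: {casting_score(target, candidate):.3f}")  # 1.000
```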

Izzy: Two, actually building the model itself is incredibly laborious. It’s very expensive. Most of what you see out there is done on very low-resolution footage. You know, you see a politician saying things that they didn’t actually say, and the footage is shit. Like, it looks like crap. It’s super low quality. All of a sudden, when you’re dealing with high-definition footage, the amount of manual labor that goes into this stuff is substantial. I don’t think people realize that. You output a model.

Izzy: You spend anywhere from two weeks to 30 days to build a really, really high fidelity model that can infer very well on the individual who you’ve cast. But then the amount of post processing that we have to do is, like, probably anywhere from 50% to 60% of the job. Like, and we were talking to, it’s specialists in compositing just deep fakes. Like, you can’t just take just anybody and just slap it on.

Izzy: And I found that the best, the most convincing deepfakes utilize practices where human beings are involved the most. So, you know, you could have me as Keanu Reeves, but if I can’t act like Keanu Reeves, it’s going to trigger something in your mind that, okay, that’s not quite right. So the casting plays such a monumental role in the whole process. It’s really overlooked, and it’s very challenging. And the compositing aspect of it all is very challenging. I mean, we go in and we edit a tooth for 100 frames, editing the teeth.

Izzy: Okay, sure. We build a model specifically for the mouth, but then when we go and composite, it could take us two weeks of, like, three people working 10 hours a day just to get the mouth right. For what? Single shot, like, the work that goes into it with high quality footage is. Is drastic.

Chris: Yeah, I appreciate that, Izzy, because that’s a perspective that we don’t typically hear. Matt, did you have a comment there?

Matt: Yeah, so I just finished teaching a course at Black Hat on using deepfakes in social engineering attacks. And what Izzy was saying about the actor that’s powering that deepfake: absolutely critical. And I think this is interesting, because the attackers not only have to have those social engineering skills, they have to have the acting skills. They have to be able to take on that character and really drive it home. But the other thing that we found is that if you can tell that potential victim a convincing story, so that they believe that story, then they will start to fill in the gaps and the errors that you’re talking about, and they’ll justify it in their minds, like, oh, the reason that thing looks a little pixelated is because of the connection in the video chat or something like that. They’ll explain stuff away.

Chris: I appreciate that, man. Okay, y’all, so last question. Will humanity eventually become a minority in society?

Len: I want this. I want this. I don’t think we will become a minority. I think we’ll become a hybrid. I think it’ll become a blending of the technology with the human being. Call it, you know, something along the lines of Shadowgate, potentially, or basically a digitally symbiotic relationship with technology moving forward. So I really see the idea, you know, if you’re having a problem with your arm, of being able to replace body parts with mechanical versions as the next step of human evolution.

Len: And in doing so, we can actually increase our productivity. Let’s say I’m working at some kind of foundry. I can replace a human arm with an arm that can exert multiple tons of pressure and is impervious to heat. So I don’t think that we’re going to become the minority. I think at some point, the singularity is actually going to be the blending of both the human and the machine, not necessarily independent of each other, but symbiotically together.

Chris W: Yeah, I think, you know, when it comes to AI, it brings up, like, a spiritual question about which pathway we go down. Do we go down the digital pathway, or do we go down the analog pathway? It’s a fundamental split. And even within the analog pathway, you can look at spirituality: you know, you can go down the Satanism route, you can go down the godly route. But depending on where this goes in a digital sense, it can lead to human destruction.

Chris W: And so this is a time when all humans need to be aware of the risks of AI but also help lead the future, because the future that we create today is going to set the foundation for the next thousand years of how these technologies are going to be used. And so what we’re trying to do with the AI Trust Council is say, hey, look, there are ways that we can control this. You know, we created the Stuxnet virus, which was able to shut down the nuclear centrifuges in Iran inside a hardened bunker, air-gapped, no Internet. So we have the ability to impact remote places, bunkers, whatever, with technology.

Chris W: And so the thing is, we can control AI. It’s a false narrative to say that we cannot, or that somehow the cat’s out of the bag and it’s game over and we just have to ride the wave and see what happens. It’s like, no, we’re humans and we’re in charge. We’re in charge of this planet, you know? And soon that statement will be controversial, because you will have AI systems that are, you know, in the realm of a billion IQ that will be in leadership positions over governments, over police, over military.

Chris W: And so it’s up to the actual human beings here on the planet to say no. Like, this is what we want. We’re gifted with this opportunity today, and it’s extremely important for us to take it very, very seriously. Let’s lead. Let’s carry AI into the future. Let’s live a life of abundance and joy. It doesn’t need to go to some dystopian nightmare. It could be awesome. And that’s what I’m looking forward to.

Hutch: Yeah. So I think we speculate a lot about what the future holds, how good or how bad this could potentially be. And when asked what the future holds, I think the truthful answer for anyone is: we don’t know. Anybody that says they know what this thing is going to look like in one year or five years or ten years is lying or making significant assumptions.

Hutch: There’s a possibility, and we’re already seeing speculation as to whether or not we’re starting to hit those diminishing returns on investment as we continue to scale AI. So there’s a possibility that we don’t get much more advanced than what we’re seeing right now. There’s also the possibility that, if you continue to follow the trend line, things are going to become drastically different, and the rate of change is going to become so rapid that for us as humans, adapting to that rapidly changing culture is going to become harder and harder.

Hutch: And I feel like we’re already seeing that. I’m a parent and have a 14-year-old son, and he’s starting to think about what tracks he wants to take in high school and what he wants to do with his future. And it is really hard right now, as the parent of a teenager, to confidently give him guidance as to what makes sense in terms of where he focuses his interests or learning. So, yeah.

Hutch: I wish I had answers. I have depressing stories more than anything, but I do think that the future is gonna look strange, and I think that we need to start having the hard conversations to make sure that we’re prepared for that.

Matt: Yeah, those were really great points. And something that I’m asked about quite a bit in a cybersecurity context is: can we legislate our way out of this? And I don’t really think that we can. I think what we’re probably more likely to see is AI versus AI. And some of the companies that I’ve been working with are already starting to look at this as a means of social engineering defense.

Matt: So I think a lot of us probably hate telemarketers, and there are bots now that will actually answer your phone and talk to telemarketers and occupy their time. And the ironic thing is that a lot of these telemarketers are themselves powered by AI. So you have AI versus AI, and I suppose it just sort of recursively goes all the way down. The other thing that I’m very concerned about right now with AI specifically is, like you were speaking to, the degradation of the data. We’re starting to see diminishing returns in some of the large language models, because they’re being trained on data out there on the Internet that is becoming increasingly polluted by data generated by AIs.

Matt: Being a little entrepreneurial, I see an opportunity there, in that in the previous ten years, data was the oil, right? And now I think it’s actually going to be good-quality, human-created data that becomes really valuable. Deepfake dashboard dot com, everyone. We have high-quality, human-coded data for you.

Chris: Yeah. So, Matt, I do want to quickly ask you about AI inbreeding, because that was something that we spoke about the other day, which is really interesting. Can you explain that?

Matt: Well, yeah, it’s exactly what we were talking about, right? You have AI that is being trained to emulate humans, but the data that is feeding that model is itself AI-generated. And you keep iterating that over a number of cycles. And with the rate of growth being what it is with AI, it doesn’t take very long to generate those cycles. And yeah, you’re gonna get essentially what you just said.

Hutch: So there’s a research paper called “The Curse of Recursion,” where they actually prove that exact problem. They take a model, train a new model on the output of that model, and continue to do that in a cycle. You start with a simple form of model collapse, where the model starts losing the tails of the distribution. But ultimately, as you continue doing this, you get to total model collapse.

Hutch: And of course, if we continue down this path, what we see is all the models that we built everything on top of for the last five to ten years suddenly starts failing because these models, and it may actually accelerate faster than five to ten years. But, but yeah, we end up in a situation where we built everything on top of these models that are now no longer sustainable because we can’t distinguish human data from AI data, and it just continues to feed itself in an incestuous, inbreeding type of way.

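The collapse Hutch describes can be reproduced in miniature: fit a simple model to data, sample from the fit, refit on the samples, and repeat. A toy sketch with a Gaussian standing in for the generative model; the sample size and generation count are arbitrary choices made so the effect shows up quickly.

```python
# Toy demonstration of recursive model collapse: each generation "trains"
# (fits a Gaussian) on the previous generation's output only. Finite sampling
# loses the tails of the distribution first, and the fitted spread drifts
# toward zero over generations -- a miniature of the effect the Curse of
# Recursion paper demonstrates for large generative models.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # generation 0: "human" data

for gen in range(1, 501):
    mu, sigma = data.mean(), data.std()       # "train" the model on current data
    data = rng.normal(mu, sigma, size=100)    # next generation sees model output only
    if gen % 100 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.4f}")
```

Run it and the fitted standard deviation decays across generations: the rare events in the tails disappear first, then the whole distribution degenerates, which is exactly the human-data-versus-AI-data problem Matt and Hutch are pointing at.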
Chris: Thank you so much for the knowledge tonight. Before you sign off, I’ll go down the line here, but let the listeners know where they can find you and connect with you online.

Chris W: Sure, yeah. I can’t thank you enough for putting this whole event together. It’s pretty epic. But yeah, you can look up the AI Trust Council at aitrustcouncil.org. Basically, we’re transforming the future by putting together a council of people that are of high trust, where we can actually weigh in on where AI goes. We can say, hey, this is good AI, this is bad AI. And we’re hoping to influence the future of this technology and actually hold big tech accountable for a lot of the harm that they’re causing. So, yeah, I can’t thank you enough. I really appreciate it.

Len: You can find me directly through CyberArk, or you can find me on LinkedIn. Pretty simple.

Hutch: So, Len forgot to mention he’s also part of a podcast. That’s true. We co-host a podcast together called Cyber Cognition, where we talk largely about topics related to futurism, so definitely check us out there. For myself, you can find me at sociosploit.com, and my book is The Language of Deception: Weaponizing Next Generation AI.

Matt: Canham.ai

Izzy: Yes, you can find my company at vfxlosangeles.com. My LinkedIn is Izzy Traub. And you can find out more information on my other company, which is called Inspira AI. We’ve been working on Inspira for two, two and a half years now. We’re focusing on workplace productivity and AI autonomous management, and we’ve been working on that pretty heavily. We’ve spent a lot of money on it, and we’ve seen scary results, honestly. I mean, right now there’s a company with 80 employees using it.

Izzy: And in some cases, we’ve seen human output increase by 70% through AI management: being able to detect bad productivity, basically being able to organize all that data and act on it, and have very hard conversations and confront people based on, you know, whether they’re late for work. I mean, we’ve seen crazy stuff comparing AI management versus human management; we’ve done tons of studies on this stuff. We have several PhDs who work for us and help us conduct the studies and everything. So that’s one part of it. And then the other part of it is the actual task automation, but that’s all based on data. I mean, there’s no point in automating anything unless you have data to know what to automate.

Chris: Yeah, man, good point. So, thank you all again. This was a phenomenal conversation. Take care and be safe.

Chris: Hello?

Tony: Yo, bro, it’s time to wake up. Get your drunk ass up. How was Vegas?

Chris: Yo, man, it was good. You won’t believe what’s coming next.

Tony: Well, we’ll be here waiting to hear all about it. You know where to find us.

Chris: Yeah, man. Alright, I’ll see you soon.





