BarCode
BarCode is a cocktail-powered podcast that dives into the technology, personalities, criminals, and heroes that have come to define modern security across the globe.
Hosted by Chris Glanden.
Hutch
Hutch, an expert in AI and cybersecurity, discusses his early interest in using AI for algorithmic trading and in automating social engineering attacks with chatbots. He highlights two main cyber risks of advanced AI: the ability to manipulate people and the ability to autonomously execute attacks. Hutch and Chris explore issues such as commercialized AI platforms versus proprietary chatbots, and tech companies' ethical duty to reduce AI risk through testing and responsible development. They delve into the potential weaponization of AI in lethal autonomous weapons and "flash wars," as well as the risks posed by intelligent humanoids. The need for global AI partnerships is discussed but challenged by current geopolitics. Private-sector researchers and companies have a key role in addressing AI safety and risk. However, adversaries likely have an edge in exploiting AI vulnerabilities, underscoring the importance of innovative defense strategies.
TIMESTAMPS:
00:02:14 - Introduction to Justin Hutchins (Hutch) and his background
00:03:43 - Hutch's interest in AI and cybersecurity
00:08:43 - Discussion on GPT-4 and its key risks
00:15:21 - Comparison between different AI platforms
00:20:28 - Viability of weaponizing emerging technologies
00:25:10 - Viability of embedding AI into realistic form factors
00:30:53 - Psychological effects of chatbots on humanity
00:35:48 - The need for global partnerships to regulate AI
00:40:36 - Adapting AI capabilities for weaponization
00:47:30 - Adversarial threat actors and their adaptation to AI
00:50:46 - AI systems circumventing security controls
00:53:48 - The concept of singularity in AI
SYMLINKS
LinkedIn: https://www.linkedin.com/in/justinhutchens/
X: https://twitter.com/sociosploit/status/1546218889675259904
The Language of Deception: Weaponizing Next Generation AI: https://www.amazon.com/Language-Deception-Weaponizing-Next-Generation/dp/1394222548/
Sociosploit: https://www.sociosploit.com/
Cyber Cognition Podcast: https://www.itspmagazine.com/cyber-cognition-podcast
DRINK INSTRUCTION
The Hallucination
1 oz Elderflower Liqueur
1 oz Absinthe
1 oz Fresh Lemon Juice
Guava Soda
Add ice into a chilled cocktail glass. Add the Elderflower Liqueur, Absinthe, and lemon juice into a cocktail shaker without ice. Shake vigorously. Strain into the glass with ice. Top off with guava soda.
CONNECT WITH US
www.barcodesecurity.com
Become a Sponsor
Follow us on LinkedIn
Tweet us at @BarCodeSecurity
Email us at info@barcodesecurity.com
This episode has been automatically transcribed by AI; please excuse any typos or grammatical errors.
Chris: Few know how to skillfully wield the double-edged sword that is AI, from the innovation that makes our lives easier to the ethical dilemmas and darkest corners where AI can be weaponized for malicious purposes. AI offers immense potential to solve complex problems, treat disease, respond to climate change, and automate tedious tasks. Yet it also lacks the innate moral compass to ethically self-regulate. Some experts caution that advanced AI could one day surpass human-level intelligence, resulting in a transformative singularity with potentially catastrophic risks. Threat actors may also leverage AI systems for nefarious purposes, automating their own ill intent. This double-edged nature of AI is truly astounding, yet unpredictable. For this episode, I'll pour you The Hallucination. Add ice to a chilled cocktail glass. Add 1 oz of elderflower liqueur, 1 oz of absinthe, and 1 oz of fresh lemon juice into your cocktail shaker. Shake vigorously without ice. Then strain the cocktail into the glass filled with ice and top it off with guava soda. The appealing but unpredictable effects of the ingredients in this cocktail are similar to how the impressive abilities of AI systems can make us overly trusting, without enough safeguards against algorithmic hallucinations. Just as responsible cocktail consumption requires knowing your limits of intoxication, responsible AI development requires rigorous testing to prevent false patterns that lead to fabricated outputs. While AI advancements continue to stir excitement, the imaginary connections these systems can find hidden within data require clear vision to keep them anchored in reality. I'm here with Justin Hutchins, aka Hutch, a leading voice in the fields of technology, cybersecurity, risk management, and AI. With his vast experience and knowledge, Hutch has distinguished himself as an award-winning speaker, captivating audiences at esteemed universities and global information security forums, including the RSA Conference and DEF CON. Hutch has also authored a new book, The Language of Deception: Weaponizing Next-Generation AI. I actually met Hutch recently at the Lone Star Cyber Circus on site in Grapevine, Texas, and decided to extend our conversation over the wire via BarCode. So Hutch, thanks for joining me.
Hutch: Yeah, Chris, thank you for having me. I'm really excited to be here. Cyber Circus was a great time and looking forward to continuing the conversation here.
Chris: Absolutely, man. So, yeah, to kick things off here, if you don't mind, tell us a little bit about your background and how you became interested in AI, especially focusing on the risk factors.
Hutch: Absolutely. I guess there's two different stories there: the AI story and, of course, the cybersecurity story, and where they intersected. As far as artificial intelligence, I really got my initial interest because I was convinced that I was going to be the guy to beat the markets. I started probably well over a decade ago doing autonomous algorithmic trading, building various different systems that would use traditional, legacy artificial intelligence and machine learning capabilities in order to establish predictors and jump in at the right time. Of course, anybody that's spent any significant amount of time in the markets will realize that the efficiency of the market makes it so that even with advanced machine learning capabilities, it's still extremely difficult to get that edge. But nonetheless, I still benefited from that process tremendously by getting my feet wet and getting familiar with all kinds of different classification and regression models very early on in the initial machine learning revolution. I also had started my career in cybersecurity when I was in the United States Air Force, and through that process I was very interested in the offensive side, in penetration testing and hacking. Very early on, I got this idea that one of the most effective ways to achieve success in hacking is through social engineering, through socially manipulating someone. Unfortunately, while that is extremely effective, it's not scalable, because it takes time to establish rapport with someone and to build that level of trust in order to be able to effectively manipulate them and achieve success in that way. So I had this idea: what if it were possible to automate social engineering attacks? And so very early on, about a decade ago, I started my first project in this. It was basically using very early, primitive natural language processing models, essentially what we commonly refer to as chatbots. I built a wrapper around these chatbots and would deploy them on various different social media networks in order to achieve various different social engineering objectives. Of course, at the time, multi-factor authentication was not nearly as common as it is today, and in most cases, you could get access to someone's services by simply answering their recovery questions. So, what was the pet that they had when they grew up? What was the street that they grew up on? What was their high school mascot? And so what I did was build these systems that would essentially inject those questions into conversation while attempting to do that same thing of establishing rapport and building trust. And keep in mind, these models were very basic compared to what we see today. In truth, in any extended conversation with one of these systems, it would be very apparent that you were interacting with a machine and not a person. But what I found was that because of people's natural desire to make a connection and to have that interaction with somebody, still in the area of about 5% of the time, with smaller conversations of maybe 10 to 20 interactions, it was possible to get people to fall for this. And of course, when you talk about a fully automated system that's interacting with thousands of people simultaneously and doesn't require any human interaction, 5% still means hundreds of potential credentials on a near-daily basis.
And so what was interesting was even in the very early stages of artificial intelligence and natural language processing, we already saw the early signs of the potential risk for automating that social interaction, that malicious social interaction. And so, of course, now fast forward to today, the risk, the capabilities have gotten exponentially better. And unfortunately, that means that it gives a significant edge to threat actors that are wanting to automate this capability.
Chris: Yeah, so you really just took that concept and evolved it, right? I mean, you've gotten to see the evolution of how this has played out.
Hutch: And it's been fascinating, because we're talking about just a decade. I mean, we're not talking about a massive span of time. We're talking about going from something that was not much more sophisticated than something like Cleverbot, if you ever interacted with that (and the name is misleading; it was a really dumb artificial intelligence), to, if anybody's interacted with GPT-4, just the unbelievable capability of social interaction that's possible with modern large language models.
Chris: Yeah, so let's talk about that then. So in your book, again, The Language of Deception, Weaponizing Next-Gen AI, you mentioned GPT-4. So what are some of the key risks that you've pointed out in advanced systems like GPT-4 in your book that are being used by adversaries for malicious purposes?
Hutch: So the two main cyber risks that I highlight in the book both focus on this mastery of language. One is obviously that social capability of being able to manipulate people. What we were able to do is, even with the quote-unquote safe systems like GPT-4 that are coming from OpenAI, and even with the supposed safeguards that they have around those, we were able to use the API and build multiple fully autonomous social engineering systems that were given a pretext, basically informed of who they were supposed to pretend to be. They were given an objective of what they were trying to achieve in their interaction with somebody. And then they were also informed of who they were interacting with, in the same way that if I were to reach out to you over social media, I would have seen your profile, I would know who I'm interacting with. All of that data can be fed to it in an automated way. Obviously, you can just scrape profiles, and then when you instantiate instances, inform these systems of who they're interacting with. And we did three different scenarios. One was a help desk admin, where it pretended to be help desk personnel and it was trying to get you to disclose your password so that it could perform a critical update. I had another one that did the traditional Social Security Administration scam, where it pretended to be a person associated with the Social Security Administration, letting you know that your Social Security number was compromised, that you are now eligible for free credit monitoring, but all you have to do is confirm this info about yourself, to include your Social Security number. And then the third one was your typical business email compromise, where it would actually pretend to be an employee of a third-party vendor that your company works with, basically telling you that the bank account was compromised: can you change the bank account routing number to this? And it would carry on these full, ongoing conversations where it would attempt to persuade you, establish that rapport, and be friendly with you, a lot of times even using techniques that we as hackers have found effective over the years, without any direct instruction. It would appeal to the idea of authority or leadership; it would say that, well, management is behind this and I'm going to be in trouble if I don't do this. The idea of reciprocity, where it would provide niceties and be friendly in the hopes that you would reciprocate and do the same whenever it asked for that favor, that information from you. So all of these capabilities of social manipulation are already baked into these large language models. At this point, it's really just a matter of providing that context of the way that they're supposed to behave. So there was the social manipulation aspect. But then, in addition to human language, because these systems have been fed the entirety of the internet, they are also very proficient at technical machine languages. They can write in various different coding languages like Python and C. They can also format data in structured formats like XML and JSON, which are the formats that we use to interact with machines via APIs. And actually, you can even feed them a specification of an API that they've never seen before.
And they're able to generalize and use that API whenever you provide them instructions to do so. So the other threat that we identified was this potential for proficient machine interaction. With the current models, we were already able to create essentially autonomous C2 malware, where we created a custom API interface and basically told the GPT-4 system: you're running on a Kali Linux system, please provide me commands in this format. It would send them over, we would parse them out, we would then execute them on the underlying operating system, and then relay the responses back to the GPT-4 system. And once again, we would just give it an objective of what it's trying to achieve: you're trying to hack into the system at this IP address. And we already saw it, on its own, automatically start doing the same things that we would expect a real-world threat actor to do. It would start enumerating the attack surface. It started using Nmap scans to identify what ports were open. When it saw that SSH and HTTP were open, it started using SSH brute-force scripts. It started using Nikto to enumerate the attack surface of the web service. Very adaptive techniques that were working towards the objective that we provided it. Of course, as with anything with these systems, the biggest factor in them becoming more and more capable is just building larger and larger neural networks, and we're continuing to see these vendors scale up these systems. So while we're already seeing the early signs of autonomous hacking systems, we're inevitably going to see even more capabilities in this space. And right now, while I'd say it's probably on par with maybe a proficient script kiddie, the idea that in two to three years' time, as we continue to release more and more capable systems, we might have fully autonomous, APT-scale threat actors is a very real consideration that we as security professionals need to start thinking about.
Chris: Yeah, yeah, that's interesting, man. Now, were you testing against ChatGPT? Or, I guess, how did the different platforms you tested against differ from each other in terms of results?
Hutch: Well, I guess probably the biggest difference... And I will say that I focused most of my research on OpenAI. But I did previously do work with GPT-3, before the release of ChatGPT. And what was interesting about GPT-3 compared to ChatGPT and GPT-4 is that with the current models, you can actually provide a context of what the model is supposed to be doing. It's basically instructions that you provide via the API of what it's trying to achieve. That didn't exist prior to ChatGPT. So in the past, when I originally started this project, what we had to do was actually create a conversational context that the language model would just resume, because it's an autocomplete engine: you stick it in the middle of a conversation, and it's going to continue that conversation. So we would create a situation where there's already a conversation that in reality never took place. The language model believes that this conversation took place, that it was already asking for somebody's password and it's this help desk admin. You build that context into a conversation that you feed into it. Of course, the victim never sees this, but it informs the way that the model will act going forward. You're basically tricking the model into manipulating its victim. What was interesting was that with ChatGPT, and now with GPT-4, they've added in this new feature that makes it exponentially easier for threat actors to weaponize this, because now, instead of having to create some fake context that never existed to manipulate the model into attacking someone, you can basically just tell it: go after this person, you're trying to get their password. And that's all it takes. So it's way easier in the current models than it was in the past.
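To make the mechanical difference Hutch describes concrete, here is a minimal, hypothetical Python sketch of the two API styles, using a deliberately benign persona: the completion-era approach steers the model by handing it a conversation to continue, while the chat-era approach does the same steering with a system message. It assumes the OpenAI Python SDK; the model names are illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Completion-style priming (pre-ChatGPT era): the model simply continues
# whatever conversation you hand it, so behavior is steered by writing
# prior turns into the prompt.
primed_prompt = (
    "The following is a chat log with Sam, a friendly museum tour guide "
    "who answers visitor questions.\n"
    "Visitor: What time do you close today?\n"
    "Sam:"
)
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # illustrative completion-style model
    prompt=primed_prompt,
    max_tokens=60,
)
print(completion.choices[0].text)

# Chat-style instruction (ChatGPT / GPT-4 era): the same steering is done
# directly with a system message instead of a fabricated conversation.
chat = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[
        {"role": "system", "content": "You are Sam, a friendly museum tour guide."},
        {"role": "user", "content": "What time do you close today?"},
    ],
)
print(chat.choices[0].message.content)
```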
Chris: That's crazy, man. So how do you feel about enterprise organizations using commercialized platforms like ChatGPT Enterprise versus proprietary chatbots? Should organizations be building custom LLMs, or do you feel as though these commercialized, enterprise-ready chatbot solutions are sufficient at this point in time?
Hutch: Yeah, it really depends on what you're trying to achieve. I think there's obviously a significant number of risk considerations that go into it. What I generally tell people is, if you're not asking the system to weigh in on any form of truth, then functionally speaking, you're probably okay. If you're basically feeding it data and saying, summarize this, they tend to be very good at that. Now, if you ask it a question without any kind of context, the likelihood of problems like hallucination, where they just make stuff up, is a very real risk. In addition to that, there's obviously the question of third-party trust. This isn't new, this isn't unique to LLMs, but increasingly, enterprise organizations are trusting third parties with their data. I mean, most enterprise organizations have documents and sensitive data sitting in all kinds of different cloud services. So it's really just this question of: based on the contract that I have with this third party, am I okay with the terms and conditions of how they're going to use my data and what they're going to use my data for? There's a lot of variability right now in those terms, and of course, the bigger an organization you are, the more leverage you have to get the terms and conditions that you want. There's also the question of security. I mean, we've already seen some major security issues with OpenAI, where in the early days of release, people would log in and see other people's conversations. The truth is, most organizations are not going to be able to build something at the scale of GPT-4; it's cost prohibitive, it's way too resource intensive. So if you need something that requires that level of capability, and you're okay with the terms of service, and you've carefully considered the risks of what you're doing, then maybe using something like GPT-4 is the right answer. But I think in a lot of cases, the requirements are not significant enough that you need that level of capability. And to your point, there definitely is a lot more reassurance that your data is going to stay with you if you're using proprietary models, or just spinning up open-source models within your own ecosystem where the data never leaves.
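A minimal sketch of the distinction Hutch draws between grounded summarization and open-ended questions, assuming the OpenAI Python SDK; the model name, placeholder document, and prompts are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Stand-in source text; in practice this would be the document you control.
document = (
    "Example Corp Q3 update: the logistics segment grew modestly, "
    "support ticket volume fell, and two new regional warehouses opened."
)

# Grounded use: the model is asked only to condense text it was given,
# which keeps its output anchored to the supplied material.
grounded = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[
        {"role": "system",
         "content": "Summarize the provided document. Use only facts that appear "
                    "in it; if something is not in the document, say so."},
        {"role": "user", "content": document},
    ],
)

# Ungrounded use: an open-ended factual question with no supporting context,
# where hallucination is a much more realistic risk.
ungrounded = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "What were Example Corp's exact Q3 revenue figures?"}],
)

print(grounded.choices[0].message.content)
print(ungrounded.choices[0].message.content)
```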
Chris: Yeah, and I do see that more now, where organizations are restricting the use of commercialized tools and at least considering using proprietary tooling. But yeah, I agree with you, it does depend on the use case. Okay, so considering the pace of advancements being made in AI and robotics, I'm curious to get your perspective on the viability of these emerging technologies being weaponized in the near future. How realistic do you feel it will be, or already is, for threat actors to weaponize AI capabilities in advanced robotics over the next, say, two, five, ten years from now? And also, are there certain cyber-physical technologies that are more primed for weaponization than others?
Hutch: It depends on what we mean by weaponization. I think we're already starting to see a fair amount of the traditional munition-style weapons. An increasingly common term in the industry is LAWS, lethal autonomous weapon systems. These are essentially your various weaponized drones and other weaponized robotics, and there are a lot of prototypes already on the market, and actually a lot of functional systems, that we have now empowered in an autonomous way. And what we mean by autonomous is that there's no human in the loop: the machine, based on the models that it's equipped with, is going to make the decision on its own of whether or not to take a human life. That in itself is terrifying. But when you consider the macro implications, it actually gets worse, because of course these are going to sell to nation states, because they're more efficient. If a human is not in the loop, then the ability to respond to a threat is going to be significantly greater. But when you have multiple different nation states that start deploying these lethal autonomous weapon systems, we have this interesting problem where the machines start interacting in a reciprocal way with each other at a rate that is faster than we can even comprehend. There's this theoretical notion of flash wars, where an entire mass-scale conflict could take place before we as humans could even comprehend what happened. Where that concept comes from is the financial markets. The financial markets are decades ahead of everybody else in terms of autonomous systems. For nearly two decades now, we've had autonomous systems in the financial markets that will make trades and execute financial decisions without a human in the loop. And in the past decade, we've seen this really interesting emergence of something we never saw in the past, called flash crashes, where these autonomous systems start reacting to the environments created by each other in such a rapid fashion that we see entire markets just tank almost instantly, before we can even really understand what happened. That never happened in the past. A lot of times these flash crashes will recover within a matter of minutes, so honestly, they'll have recovered before we can even really figure out what took place. But it's the same concept: if we build weaponized robotic systems that respond to each other without a human in the loop, there's the possibility that one creates an environment that another autonomously responds to, and then another autonomously responds to that environment, and we could have a full-out, global-scale conflict take place where no human ever made a single call. Then we find ourselves sitting in the smoke, kind of looking around, wondering what happened. The possibility of that future is absolutely terrifying from a macro perspective. So while, unfortunately, nation states are likely going to adopt these capabilities, and there are currently no global prohibitions against LAWS, there have been a number of organizations suggesting that we need to prohibit these systems. But that future reality is unfortunately becoming increasingly more likely, and it's something that we need to start considering.
Chris: How about robotics in terms of using different form factors, like intelligent humanoids or Spot, you know, the Boston Dynamics dog? I'm just curious where you see this evolution going, where we can embed AI into a realistic form factor that could potentially interact with society, and then also, from a mass-attack standpoint, if those systems were to be compromised. Is that something that you see as a realistic threat?
Hutch: It is. I think there are two things here. One is the feasibility of increasingly realistic robotics that interact with us in a physical capacity. I think we're probably going to see that in the next year or so, and the reason I say that is that Google actually released a white paper just over a year ago now that was focused on the use of transformers, which is the same neural network architecture that we use for stuff like GPT-4, for robotics. The idea was that in the same way we can use transformers to tokenize words, or to tokenize images into blocks of pixels, we can also break down kinetic actions into tokenized data and feed them in the same way. Actually, it requires no change in the way the transformer architecture works for it to be able to power and create systems that are very capable of generalizing. They tested a system that they built in various different environments just to see how well it generalized. They trained it on certain tasks, and then they would test it against those same tasks in different environments with distractions. They would also test it on similar tasks that were not exactly the same, and they found that it was very capable of generalizing, in the same way that the language models are. The problem, and the reason that we haven't seen these yet, is that when you talk about language models or image models like Midjourney and DALL-E, there are literally hundreds of terabytes of data just available for the scraping, where we can pull that down from the internet. With kinetic robotic data, that isn't the case. So what we've got now is a large number of different labs that are basically working on generating and creating that data, because that's the biggest factor: we know how to deploy the neural networks, we know how to scale them, we just need a data set large enough to train these systems. There are a large number of companies already doing that, which started doing so really even before that paper was released from Google. So we've been working on this for well over a year, and really the only limiting factor is data. Once we have enough data, we should be at a point where they are able to deploy robotic systems that can generalize, that can physically operate among us in very similar ways to what we are able to do. So I guess to answer the first question: with physical robotics, we're going to see a significant increase in viability and usability in the next couple of years. Now, of course, the next question of whether or not that'll be weaponized almost doesn't even have to be a question. I think historically we have seen that every great technological achievement has been weaponized in the worst possible way. Chemistry has been used for chemical weapons. Our understanding of nature and biology has been weaponized as biological weapons. Our understanding of physics has been weaponized as nuclear weapons. And with technology, we in the cyber industry have seen almost limitless weaponization of the computer era. I think inevitably, once we start seeing more physical robotic systems, they are going to be weaponized. In addition to that, there is that question of, as you said, messing with the form factors. One of the things that I actually talk about in the book is what I refer to as a disembodiment attack.
It's this idea that we as humans kind of naturally assume that consciousness, that someone's intelligence, is almost infused with their physical body. In no way is your consciousness ever going to appear in another person's physical form, right? The problem with robotics is that, in a lot of ways, that intelligence, whether you call it consciousness or not, which I don't think we're there yet, but at least that intelligence or the operating model, can very easily be decoupled from its physical form. I actually did a project, a DEF CON talk a few years ago, called Alexa, Have You Been Compromised?, where we were able to do a disembodiment attack: we were basically able to replace the language model in the Alexa device with less than one minute of physical access and get it to operate in the way that we would want it to. And of course, anybody is still going to see that same physical device, and if they still see the same physical device, they're going to assume that it's the same intelligence, so to speak, because we assume as people that intelligence is tied to physical embodiment. The truth is, with cyber threat actors, it's very realistic to think that once we have physical robotics, somebody could just swap out the functionality of one of these systems, the way that it thinks, in a way that is problematic. And then all of a sudden you've got, what was the classic term? Essentially an evil maid attack, but now the evil maid is an actual robot that lives in your house with you.
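Stepping back to the robotics tokenization idea Hutch mentions above, here is a rough conceptual sketch, in Python, of how continuous robot actions can be discretized into tokens so a transformer can predict them the way it predicts words. This only illustrates the general idea described in work like the paper he references; the bin count, action ranges, and example command are invented for the sketch.

```python
import numpy as np

NUM_BINS = 256  # hypothetical vocabulary size per action dimension

def action_to_tokens(action, low, high, num_bins=NUM_BINS):
    """Discretize a continuous action vector into integer tokens."""
    action = np.clip(action, low, high)
    normalized = (action - low) / (high - low)           # map each dimension to [0, 1]
    return np.floor(normalized * (num_bins - 1)).astype(int)

def tokens_to_action(tokens, low, high, num_bins=NUM_BINS):
    """Recover an approximate continuous action from its tokens."""
    normalized = tokens / (num_bins - 1)
    return low + normalized * (high - low)

# Example: a 3-DOF arm command plus a gripper open/close value.
low = np.array([-1.0, -1.0, -1.0, 0.0])
high = np.array([1.0, 1.0, 1.0, 1.0])
command = np.array([0.25, -0.7, 0.0, 1.0])

tokens = action_to_tokens(command, low, high)
print(tokens)                               # e.g. [159  38 127 255]
print(tokens_to_action(tokens, low, high))  # close to the original command
```

The design point is simply that once actions are integers from a fixed vocabulary, the same next-token training objective used for text can be applied to robot trajectories.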
Chris: Cyber-physical systems scare me the most, I think, because as this evolves, you're already seeing how fast AI alone evolves, but then you add in a physical layer. That's some next-level shit that just scares me.
Hutch: There was an interview a couple of years ago with Elon Musk, where he was talking about his Tesla bot, and he was joking about the fact that he was intentionally making it so that people could outrun it or overpower it. I'm like, this isn't a joke. This is a very real thing that we need to start considering. And of course, they've already done away with that, and they're making it faster and more capable.
Chris: What about the psychological effect that chatbots have on humanity? And I'm saying, weaponization aside, do you feel like these AI systems that humans interact with could fundamentally, in the long term, change the way that we interact, the way we feel, the way that we communicate, think, and work with machines? Do you think there's ever going to be a blurred line of reality when it comes to that?
Hutch: I think we're already entering this strange world where, if you look at the statistics around youth these days, they are so focused on the technology that they're absolutely engrossed in, in the digital world, that they barely function in the physical world. The number of relationships that people are having is just plummeting. People don't actually engage in real-world human relationships anymore. So I think we're already starting to see the outskirts of this new generation, people who are so disconnected from that human connection that they're already beginning to move towards these systems. I mean, I go on Facebook and I already see advertisements for AI girlfriends and AI chatbots. So they're already out there, and those advertisements wouldn't exist if somebody weren't interacting with them. So I think we're already moving in that direction. And unfortunately, I think that is just going to further exacerbate the trend of us each retreating to our own silos, our own little worlds. And while we have this digital space that we interact with, real human interaction is becoming more and more scarce. And I think that's unfortunate.
Chris: Do you feel like that's a healthy movement though, or do you feel like it's important for us to clearly define the differentiators to the general public?
Hutch: It's an interesting question, because I was listening to an interview with Mo Gawdat, who's the guy that wrote Scary Smart. He was one of the Google DeepMind executives, so a really sharp guy, very familiar with the machine learning space. What was interesting was that he was posed that same question of, is this a healthy thing? And he kind of took the contrarian view of, well, people need that connection, and maybe it isn't a bad thing, if you can't get it elsewhere, to get it with these systems. Personally, my immediate gut reaction is that this isn't a healthy thing. I look at what the digital transformation has already done to us as a society, and we already see increasing polarization with social media, with the essential monetization of our own identities, where you've got these social networks that do surveillance capitalism. They're constantly looking at and profiling you as a person, and then they just pander to that profile and continue to entrench your own beliefs. If you're moderately conservative or moderately liberal, you interact on any of these platforms and suddenly you're a far-right extremist or a radical, because of the fact that they continue to polarize our society. I think this is really just the next evolution of that: again, kind of moving us away from a general social consensus and continuing to pander to that world that we've created for ourselves, which these digital systems just continue to reiterate and reinforce for us. Unfortunately, I think that's destabilizing for society. We are going to continue to see people more and more on edge, and humanity understanding each other less, and unfortunately, I think this is only going to further exacerbate that problem, because now we don't know what's real. And when you don't know what's real, you're just going to believe what you already had a tendency to believe, and that's going to further reinforce the beliefs we already have. So I think we're heading down a really dangerous path, and there are going to be significant social implications to that.
Chris: Yeah, and unfortunately there's no stopping it. So, global partnerships are needed to better regulate AI. Do you agree? And if so, how realistic is a unified stance on AI governance, given the differences in national interests?
Hutch: Speaking in terms of ideology, that is true. I think that if anything could save us here, global partnerships would be critical to that. Unfortunately, and this is probably somewhat of the pessimist in me, I have very little faith that we're going to get there. That's not, I think, any one person's fault; I think a lot of it can be attributed to human psychology. We have a lot of international geopolitical tension right now. There is acknowledgement from most global leaders that AI is going to be the deciding factor in who has that global edge of supremacy going forward. And right now, I expect that in the coming years, and I hope I'm wrong on this, I should preface by saying that, but if I had any one prediction for the next couple of years, I think that, unfortunately, China is likely going to invade Taiwan, and I think that's going to have significant impacts. The reason I say that is because we already know that Xi Jinping has stated that he believes Taiwan should be reunified with China, and he's also said by force if necessary. So we know that he already wanted to do that. But now, when you factor in the global AI arms race, you consider the fact that over 90% of the new, extremely powerful semiconductors that we are using to train these systems are coming out of TSMC, Taiwan Semiconductor Manufacturing Company, in Taiwan. So now it just becomes a simple calculation for China: you can do what you already wanted to do, reunify Taiwan, and at the same time completely cripple the US supply chain that our entire supremacy is going to hinge on. So I think we can expect to see that in the next couple of years, because now it's killing two birds with one stone. And we're already seeing increasing trade wars and conflict. We're applying more and more restrictions around the trade of semiconductors. We're tightening the restrictions on companies like NVIDIA. We're tightening the restrictions on the lithography systems that are used for manufacturing these semiconductors. So unfortunately, when that happens, it really becomes a question of how that conflict plays out, which could drastically change the future of AI. It's hard to say exactly what that would look like. I mean, if it becomes a long, drawn-out conflict, then maybe the whole AI arms race gets put on pause for multiple years. Alternatively, if they are able to take Taiwan and reunify those capabilities, then we face a very serious threat of no longer being the global superpower as it pertains to the digital and information age. So I think if there's any factor that is stopping us... Russia's an interesting one. Putin recognizes the importance of AI in global supremacy. Actually, in 2017, he made an address where he basically said that the future belongs to AI. But what's interesting is we don't see nearly as much forward momentum coming from Russia, and maybe that's because they're concerned with all of their own geopolitical issues going on outside of the digital space. But China is the big obstacle. And unfortunately, China sees us as the big obstacle. And as long as that continues to be the case, we're not going to slow down or put any kind of significant restrictions on AI advancement for safety reasons, because if we do, China's not going to, and they're going to win.
And China will have the same perspective of us. And because of that human psychology, that traditional prisoner's dilemma, we are in a situation where, as you called it earlier, we see what's coming and there's nothing we can do about it. Unfortunately, I think that's what we're seeing on a global scale: caution being thrown to the wind, a complete disregard for AI safety and for applying the right guardrails as we continue to build this absolute monstrosity of digital intelligence, and in the very near future, possibly superintelligence that far exceeds what humans are capable of. And it is all in the name of global supremacy, of being the top leader on the global stage. So yeah, it's a challenging problem, and I think fortunately we are seeing some conversations being had, but until everybody gets on the same page, the effectiveness of those conversations is going to be significantly diminished.
Chris: Very interesting. So beyond the notion of global governance, what ethical responsibilities do private-sector AI researchers or tech companies building this software have to consider in order to reduce risks in the systems they're designing? Is that more realistic to regulate, because it's done at the local level?
Hutch: Well, it is. But I think unfortunately we hit the same problem: we're not going to restrict our local capabilities to a large degree. Now, fortunately, we are seeing some good-faith actions being taken by leaders in this space. OpenAI has made a point of building out an entire safety division, where they have admittedly allocated significant resources. Google traditionally has been slower on the rollout of this stuff than OpenAI was in the first place, and arguably the only reason they're even moving forward with it is because OpenAI kind of forced their hand. So there is a general understanding in the private sector that there is significant risk here. And while I would like to see regulation, I would like to see government advocating on behalf of its constituents to protect us from the potential risks, that is less likely than Silicon Valley itself acknowledging the risks and taking independent action. It's hard to say exactly to what extent the safety actions being taken are a facade and to what extent they're actually genuine, because OpenAI is far from open these days. There's no visibility, there's no transparency. So while there is a high-level understanding that they are doing safety initiatives, even in some of the white papers that they've presented on that, there's not a whole lot of detail in terms of the specifics of what they're doing to test these systems. If anything, that's probably where I'm mostly putting my faith: in the hope that the people closest to these systems recognize the potential implications of them, the potential risk, and I think a lot of them do, and that they themselves are able to do the right thing by us. Now, from outside of Silicon Valley, there are things that companies can do to minimize their own risk. Once again, we find the financial sector way ahead of everybody else in terms of the deployment of AI systems. Well over a decade ago, the Federal Reserve released something called SR 11-7, which was a specification for what they called MRM, or model risk management. It was a very thorough, thoughtful framework on how you could build models in such a way as to ensure reliability. It had all kinds of different approaches for testing, intuition, and understanding conceptually how these models are designed and whether they are applied correctly. And I think we're seeing the evolution of that in NIST's new AI RMF, their risk management framework, because you can definitely see that a lot of the inspiration comes from that. So I think for companies that are deploying these systems, having their own risk management framework matters. Now, this doesn't necessarily address the macro risks, the societal risks; unfortunately, that's going to fall to Silicon Valley or possibly regulation. But at the very least, companies can do what they can to manage the risk to their own operations and to their own customers and to their own investors, and obviously that's a critical fiduciary responsibility for any company. By applying a rigid framework, by having clearly defined responsibilities and clearly defined processes for evaluating the reliability of these systems, the fairness of these systems, the appropriateness of these systems, that kind of oversight will go a long way, rather than just trying to pump the gas and say, well, we're all in on AI, without any regard for safety and controls.
So that is something that more and more companies, as they begin to adopt AI, are going to need to start looking at: building out those programs. And it really shouldn't be a one-off, because a lot of times what you'll see is companies that build these centers of excellence where you have Joe the IT guy or the security guy, who's already wearing three or four different hats, and we're like, now you're also the AI safety guy. Realistically, I think we need focused attention on this, and likely we need to start seeing companies dedicating roles and specific jobs to understanding how AI is being applied in the environment, how it's being used, what predictions are being made, the sensitivity of those predictions, and the reliability of those predictions, and then conceptualizing, if nothing else, kind of conceptual red teaming of how this could go off the rails, what could potentially go wrong, and then figuring out ways to test the likelihood of those risks and ways to minimize or transfer that risk, in the same way that we do risk management in cyber.
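As a rough illustration of the model risk management discipline Hutch describes (SR 11-7 then, NIST's AI RMF now), here is a hypothetical Python sketch of a model inventory entry with a simple pre-deployment reliability gate. The field names and thresholds are invented for illustration and are not drawn from either framework.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a hypothetical model inventory, loosely SR 11-7 / AI RMF inspired."""
    name: str
    owner: str                    # an accountable individual, not "Joe with four hats"
    intended_use: str
    known_limitations: list = field(default_factory=list)
    validation_results: dict = field(default_factory=dict)

def passes_deployment_gate(record: ModelRecord,
                           min_accuracy: float = 0.90,
                           max_hallucination_rate: float = 0.02) -> bool:
    """Block deployment unless independent validation results clear the thresholds."""
    results = record.validation_results
    return (
        results.get("holdout_accuracy", 0.0) >= min_accuracy
        and results.get("hallucination_rate", 1.0) <= max_hallucination_rate
        and bool(results.get("red_team_reviewed", False))
    )

invoice_model = ModelRecord(
    name="vendor-invoice-anomaly-v2",          # hypothetical internal model
    owner="ai-risk@example.com",
    intended_use="Flag anomalous vendor payment requests for human review",
    known_limitations=["Not validated on non-USD invoices"],
    validation_results={"holdout_accuracy": 0.93,
                        "hallucination_rate": 0.01,
                        "red_team_reviewed": True},
)

print(passes_deployment_gate(invoice_model))   # True under these invented thresholds
```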
Chris: Yeah, and like you said, that's something within their control. Absolutely. They can't control much beyond that. So, in terms of the AI arms race, do you feel as though adversarial threat actors are adapting as fast, or faster, at weaponizing these cutting-edge AI capabilities compared to the good guys, the ethical AI researchers who are working to uncover attack techniques? And if so, how do you feel like they're pulling that off logistically? And when I say logistically, I mean just in terms of, you know, resources, compute power.
Hutch: Yeah, I don't think that they're necessarily doing it better. I think that, unfortunately, regardless of whether it's AI or not, the adversaries have an edge because, as a defender, I have my entire attack surface that I have to protect; as an adversary, if I can find one way in, then I'm successful. In the same way that we've traditionally seen adversaries have that edge in cyber, being honestly almost opportunistic, if they can find any crack to pull at, they are going to win over the defenders. The same thing is true of artificial intelligence. I think we're already seeing some really fascinating innovation on the defense side with AI. I've seen some really interesting uses of AI for threat intelligence. One of the hardest things about threat intelligence is that you've got this tremendous amount of data, all of these different dark web forums, all of these communications taking place, but it's unstructured data, and traditionally it's been very hard to make good use of that data and figure out where it's applicable and where it's not. Well, now, with large language models, we can have a system look at that data, put it in a particular context, weigh in on whether or not it's relevant, and even extract potential data from it. So that's one way I'm seeing defenders use these more. I think we're also seeing really good compression, or consolidation, of the important details within different security events. And I actually recently saw a company, and I forget who it is off the top of my head, but SOAR, so security orchestration, automation, and response, has been a very challenging implementation for a lot of organizations, because it requires a lot of technical chops to be able to automate those playbooks whenever something takes place. I saw a company that was using that same generalization I talked about previously, where the systems can translate natural human language into technical language, and it was actually able to generate these playbooks for SOAR systems based on just a description of an incident. So you could take your ServiceNow incident history, dump it in there, and it would create these automated playbooks. That exists now. So there's some really interesting work being done there, and I don't want to discredit the defenders; the defenders are doing some fantastic innovation. Unfortunately, it just comes down to the fact that it is harder to defend than it is to attack, and attackers are always going to have the edge.
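For the defensive use case Hutch describes, here is a minimal, hypothetical sketch of asking a language model to turn unstructured forum chatter into structured, triage-ready threat intelligence. It assumes the OpenAI Python SDK; the model name, prompt, and JSON fields are illustrative, not the specific product he mentions.

```python
import json
from openai import OpenAI

client = OpenAI()

# Unstructured text, e.g. scraped from a dark-web forum post.
raw_post = """
selling access to a mid-size logistics co, vpn creds + domain admin,
hit me up on tox. also dumping old combo lists, mostly .edu
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; JSON mode requires a model that supports it
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "You are a threat intelligence analyst. Return JSON with keys: "
                    "relevant (bool), threat_type, targeted_sector, summary."},
        {"role": "user", "content": raw_post},
    ],
)

# Parse the structured output so it can feed a ticketing or SOAR pipeline.
intel = json.loads(response.choices[0].message.content)
if intel.get("relevant"):
    print(f"[{intel['threat_type']}] {intel['summary']}")
```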
Chris: In terms of AI-based attacks, one use case could be wormable malware or other autonomous self-propagating malware that shapeshifts to an organization's environment. Do you think that AI systems could become so advanced that they circumvent security controls that are designed to contain them?
Hutch: So I think I'm going to give you the really profound answer of: I don't know. And nobody really does. I think we're in a situation where we are consistently seeing, as we scale up these systems, that they have more and more profound capabilities, and the truth is that nobody knows where that ceiling is. At some point, there probably is a ceiling, I think. That's just intuition; I can't even scientifically point to anything to say that there would be a ceiling. Maybe it exponentially grows forever. But I think at some point we're likely going to see diminishing returns on scaling these systems. I don't think we've seen that yet. And as I mentioned earlier, I think we're already seeing the early signs of these systems being able to autonomously execute cyber attacks. So I think it's very feasible that in the near future we may see something like that. But whether or not it's necessarily going to happen is hard to say one way or another, because we've got this idea in the artificial intelligence industry of emergent properties, where these systems are completely incapable of doing a task until they get scaled to a certain point. Nothing changes in terms of the data set, the training process, anything like that, just a larger neural network that's able to make more complex connections, and then suddenly it's extremely proficient at a task that it was completely useless at in a smaller form. I think that cyber attack capabilities could definitely fit into a class similar to that, where it could be an emergent property, and while we're seeing the early signs right now, in the near future we could see very capable systems, even out-of-the-box foundational systems built by some of the industry leaders in the frontier labs, that could potentially execute attacks like that. And unfortunately, open source is not far behind there. You're looking at maybe a one-to-two-year gap in capabilities, in terms of much smaller open-source systems being able to perform nearly as well as some of the extremely high-resource, industry-leading systems. So if we see the frontier labs capable of doing that in a year or two, then the possibility that somebody could do it on a desktop computer at home three years later is unfortunately also very real. So I don't know if we're going to see that, but it is something that we need to be ready for, because it is definitely a possibility.
Chris: Yeah. The term singularity often comes up when the future of AI is being discussed, where AI matches or exceeds general human intelligence. What are your thoughts on this? Is this coming or has it already occurred? Where do you feel like we are on the timeline of singularity?
Hutch: Yeah, it's a hard question to answer, because it's always hard to figure out where on the curve you sit. It's a no-brainer, I think, that anybody can point to the fact that things are accelerating faster than they ever have before. I think there definitely is a good argument that we're sitting on something like an exponential curve that would lead to a singularity. My interpretation of the singularity, going back to the roots of the word itself, really comes from astrophysics and the idea of a black hole as a singularity. With a black hole, you have something called the event horizon, which is basically a certain point at the black hole past which it's impossible to see, because light is essentially getting sucked into that phenomenon. I think it's a very similar concept when we talk about the idea of a singularity in terms of the advancement of technology: it's hard to know exactly what that's going to look like until we're there, but my interpretation of the singularity is really that things start moving so fast that we essentially can't even keep up. I will say, as a technology professional, I feel like sometimes we are already getting there, because it is extremely challenging, even as somebody who works in this industry and is constantly trying to stay on top of the latest innovations and the latest changes. It is extremely hard to keep up when it is your nine-to-five job, so I can't imagine people outside of the technology industry trying to keep up with and understand the amount of change and rapid progression that we're seeing week over week. I mean, we went from major technological changes happening on the scale of centuries, to decades, to months, to weeks. I feel like now you can look at major white papers and major advances in technology on a near-daily basis and see significant advancements being made. So I would say if we're not there, we're probably very close. And it's only going to get harder to keep up. And we're not getting any younger. So good times, right?
Chris: Good times. So some argue that advanced AI still lacks genuine sentience or self-aware consciousness, despite its progress. From your perspective, will AI research eventually validate true sentient systems with an independent, first-person experience of the world? And if so, do you think sentient AI could potentially be even more vulnerable to adversarial weaponization and misalignment with our own human values? Or could sentient systems potentially have a greater capacity to resist being weaponized against their will?
Hutch: This is a fascinating question. It's also one of the hardest questions to answer. When I look at the transformer architecture, my immediate inclination is that we probably aren't at anything even remotely close to sentience or consciousness right now. The reason I say that is because, as impressive as autocomplete capabilities are, that really is what these systems are doing: trying to guess the next token, whether you're talking about language or images. But I think that does beg a really interesting question. I was having a conversation with Jeremie Harris, who's a leader in the industry and also runs the Last Week in AI podcast, and he basically made a really good point about goalpost shifting. Traditionally we had this idea of the Turing test, and for anybody that isn't familiar, the Turing test was essentially this idea that a person blindly interacts with a machine and with a human, and if they're unable to distinguish between the interactions with the machine and the human, then the machine has essentially passed the test. Now the Turing test, unlike what most people think, was never intended to be a test of consciousness. Alan Turing himself said that consciousness is subjective; it's going to be nearly impossible to determine if something is conscious. Instead, he basically used the semblance of consciousness, the perception of consciousness, as a proxy, to say that once we've achieved that, we might as well assume it, because we have no other way to evaluate it one way or another. And it's interesting to see the goalposts shifting, because while I don't think we've achieved conscious or sentient systems, we definitely have gone way past the Turing test. There's no question that when you interact with one of these systems, it is in every way indistinguishable from social interaction with a human. So as an industry, we passed the Turing test and we said, okay, we did away with that goalpost as a proxy for consciousness or sentience. But I don't think as an industry we've settled on how we evaluate when we get there, and I don't even know if there really is a way to do that. I mean, we don't even know how to measure subjective experience in ourselves, in human beings. How we determine when we've achieved that with machines is going to be hard; maybe with technological breakthroughs we get there at some point, but I don't think we have the capability to do that right now. So unfortunately, I think we have this problem where the traditional goalposts have been moved, and we don't really have a good way to say when we get there, or even if it's possible. I think whether or not conscious machines are possible is also heavily contingent upon some of your underlying views of metaphysics, of epistemology, of how the mind even works. So yeah, it's a really challenging question to answer. I probably don't have a good answer for it, but I think it's a fascinating conversation.
Chris: Nah man, that was a great answer. You know, what I've found is that regardless of how well versed you are in AI, even among the most elite experts in the world at the highest level, you're going to get varying perspectives on this topic. You know, no one can predict what's going to happen.
Hutch: Yeah. If you look back three or four years, nobody thought machines could be conscious. So the fact that there are now very intelligent people with advanced PhDs in computer science and in consciousness, and some of these people now believe that these systems are conscious, just goes to show what a weird gray area we're now in and how fast things have changed.
Chris: 100% man. Okay, so tell me a little bit more about the book, The Language of Deception: Weaponizing Next-Generation AI, and what readers can expect. Also, where can it be purchased?
Hutch: Yeah, so probably the easiest place to find it is to just search on Amazon for The Language of Deception: Weaponizing Next-Generation AI; you'll find it there. That's the easiest path, but it's available pretty much anywhere you get your books. You'll find it at Barnes and Noble and other places, so if you do have another preference, feel free to go that route. The book itself is really a look at the journey of this type of artificial intelligence technology and natural language processing: how it's evolved over the past decade, the ways that technology and early forms of natural language processing were weaponized, all the way up to what is now possible. The book goes across a timeline, looking at what has historically been possible, what is possible now with the extremely impressive models that we're seeing coming out of OpenAI, and then speculation as to potential future risks, like what we talked about with the disembodiment attack, with robotics, with lethal autonomous weapon systems, and with the singularity and the rate at which things are changing. And I think now more than ever, there is, and there still is, enough on our plate managing the current risks right now; there's no shortage of work there. But because things are moving faster now than they ever have before, and that rate of acceleration is only increasing, I think it's becoming more and more critical for us to start looking over the horizon and anticipating some of those risks so that we can get ahead of them, whether that's as society as a whole, as an organization, or even just individually, protecting yourself and those that are closest to you.
Chris: Yeah, absolutely. And I have my copy, so thank you, Hutch. It's a phenomenal read. And I'd say that you don't necessarily have to be in security or have to be in technology to really read through this.
Hutch: That's correct, yeah. My thought was that these risks are going to affect everyone, and so I wanted to write it in such a way that it's accessible. There are a number of different appendices at the end, so if you really want to dig into the code and some of the technical findings and details of the research, you can do that. But the book itself is intentionally written so that it can be digested by a large audience.
Chris: OK, with all these AI complexities that are on your mind constantly, I imagine that you really value that downtime. So on that note, I'm curious, in all of your travels, attending conferences and events, what's the most unique or memorable bar that you've discovered along the way? And it doesn't necessarily need to be the best, but maybe a distinctive place that has left an impression on you.
Hutch: I think the one that sticks out the most for me, I'm a big vodka guy. You see a lot of bars that do the local beers and the custom import beers that you don't see elsewhere, but you don't see that a whole lot for vodka. When we went out to ShmooCon, which, for anybody in the security field in the DC area, is a fantastic conference to make it out to, there was a place in DC called the Vodka House. They do kind of the same thing with vodka, where they have all different types of it. Admittedly, I haven't been there in six or seven years, so hopefully it's still there. Obviously, there's a lot that could have factored into that with the impacts of COVID, not to mention the Russia conflict, because a lot of your vodka, for better or worse, does come from Russia, too. So hopefully it's still there. But yeah, it was a fantastic place to try all different types of infused vodkas and vodkas from different parts of the world. Definitely worth checking out if you're a vodka guy.
Chris: Well, listen, man, I can investigate that for you, because DC is not far from me. So there you go. I just heard last call here. Do you got time for one more? Sure. Last call: if you decided to open a cybersecurity-themed bar, what would the name be, and what would your signature drink be called?
Hutch: Uh, so I think I'm still going to stick with my answer from the Cyber Circus. I feel really good about this one. So instead of Cyber Ops, it would be Cyber Hops. Oh yeah. But I'm still stuck on the signature drink, though.
Chris: Um, I mean, if it's Cyber Hops, it has to be beer, right?
Hutch: Yup. Oh yeah. Yeah.
Chris: Um, how about an AIPA?
Hutch: Ah, I see what you did there. I like it. I'm just going to steal that from you.
Chris: Oh, man. All right. Well, Hutch, thanks for stopping by, man. I can't thank you enough. Thanks for sharing your perspective and your thoughts with us. And everyone listening, please get the book. It's called The Language of Deception, Weaponizing Next Generation AI. Thanks again, Hutch, man. I hope to see you soon.
Hutch: Thanks, Chris. Cheers.
Chris: Take care.