
Episode 23

Future AI Trends: Strategy, Hardware & AI Security at Intel

Steve Orrin


Description

In this episode, we sit down with Steve Orrin, Federal Chief Technology Officer at Intel Corporation. Steve shares his extensive experience and insights on the transformative power of AI and its parallels with past technological revolutions. He discusses Intel’s pioneering role in enabling these shifts through innovations in microprocessors, wireless connectivity, and more.

Steve highlights the pervasive role of AI in various industries and everyday technology, emphasizing the importance of a heterogeneous computing architecture to support diverse AI environments. He talks about the challenges of operationalizing AI, ensuring real-world reliability, and the critical need for robust AI security. Confidential computing emerges as a key solution for protecting AI workloads across different platforms.

The episode also explores Intel’s strategic tools like oneAPI and OpenVINO, which streamline AI development and deployment. This episode is a must-listen for anyone interested in the evolving landscape of AI and its real-world applications.

Show Notes
Resources

Intel’s Legacy and Technological Revolutions

  • Historical parallels between past tech revolutions (PC era, internet era) and current AI era.
  • Intel’s contributions to major technological shifts, including the development of wireless technology, USB, and cloud computing.

AI’s Current and Future Landscape

  • AI’s pervasive role in everyday technology and various industries.
  • Importance of computing hardware in facilitating AI advancements.
  • AI’s integration across different environments: cloud, network, edge, and personal devices.

Intel’s Approach to AI

  • Focus on heterogeneous computing architectures for diverse AI needs.
  • Development of software tools like oneAPI and OpenVINO to enable cross-platform AI development.

Challenges and Solutions in AI Deployment

  • Scaling AI from lab experiments to real-world applications.
  • Ensuring AI security and trustworthiness through transparency and lifecycle management.
  • Addressing biases in AI datasets and continuous monitoring for maintaining AI integrity.

AI Security Concerns

  • Protection of AI models and data through hardware security measures like confidential computing.
  • Importance of data privacy and regulatory compliance in AI deployments.
  • Emerging threats such as AI model poisoning, prompt injection attacks, and adversarial attacks.

Innovations in AI Hardware and Software

  • Confidential computing as a critical technology for securing AI.
  • Research into using AI for chip layout optimization and process improvements in various industries.
  • Future trends in AI applications, including generative AI for fault detection and process optimization.

Collaboration and Standards in AI Security

  • Intel’s involvement in developing industry standards and collaborating with competitors and other stakeholders.
  • The role of industry forums and standards bodies like NIST in advancing AI security.

Advice for Aspiring AI Security Professionals

  • Importance of hands-on experience with AI technologies.
  • Networking and collaboration with peers and industry experts.
  • Staying informed through industry news, conferences, and educational resources.

Exciting Developments in AI

  • Fusion of multiple AI applications for complex problem-solving.
  • Advancements in AI hardware, such as AI PCs and edge devices.
  • Potential transformative impacts of AI on everyday life and business operations.

Steve Orrin LinkedIn – Steve Orrin | LinkedIn

Intel – Intel Corporation: Overview | LinkedIn

The Data Scientist – Media – Data Science Talent

Transcript

Speaker Key:

DD Damien Deigh

PD Philipp Diesinger

SO Steve Orrin

00:00:00

DD: This is the Data Science Conversations Podcast with Damien Deigh and Dr. Philipp Diesinger. We feature cutting edge data science and AI research from the world’s leading academic minds and industry practitioners, so you can expand your knowledge and grow your career. This podcast is sponsored by Data Science Talent, the data science recruitment experts. Welcome to the Data Science Conversations Podcast. My name is Damien Deigh and I’m here with my co-host, Philipp Diesinger. How’s it going, Philipp?

PD: Good. Thank you, Damien. Looking forward to it.

DD: Great. So, today we’re delighted to be talking to Steve Orrin, who is the Federal Chief Technology Officer and Senior Principal Engineer for Intel Corporation. In his role at Intel, Steve orchestrates and executes customer engagements in the federal space, overseeing the development of solution architectures to address challenges in government enterprise, national security, and other federal areas of focus. Steve has successfully positioned Intel as a leading expert with respect to the application of technology in government. He has 

00:01:17

orchestrated projects for federal government customers centered on security, AI and inferencing, and edge ISR sensing. He’s also played a lead role in the funding and launch of the Intel Trusted Compute Pools architecture, a first-of-its-kind integrated security solution for trustworthy virtualization and cloud stacks. And in his 19 years at Intel, he has developed a forte for successfully building strong teams and playing a key role leading all facets of technology and strategy. Steve, welcome to the podcast. We are delighted to have you here with us today.

SO: Thank you, Damien and Philipp, it’s a pleasure to be here today.

DD: Awesome. Intel, of course, are a household name with over 50 years’ pedigree as a technology powerhouse, and this is clearly not the first era of change and tech disruption that Intel have been through. So, what parallels do you see with previous eras such as the PC era and the internet era?

SO: So, Damien, it’s really interesting to look at history and see how it informs today. We’ve gone through, as a society, multiple technology evolutions and revolutions over the last 50 years, with the coming of the microprocessor and the shift from mainframe computing to PC and client-server architectures. And the drastic shifts in both the technologies that enabled those transformations as well as the societal changes that came 

00:02:50

along. Today we go everywhere with our phones, but 20, 30 years ago, that was an unheard-of thing. We’ve seen the blowup of the internet starting in the mid-90s as it gained traction. And when you look at each one of those key inflection points, at the core of it was computing, processing, and the power of delivering the right compute for the right application, the right use case, and the right architecture. And Intel has been at the core of every one of those transformations. Some in the front of it, like with the PC revolution, the move to the internet and cloud.

Some a little bit behind the scenes, but the technologies that we all take advantage of were oftentimes either created by Intel or made to scale leveraging Intel technologies. A lot of people don’t realize that things like wireless really came to be because Intel put wireless technology into the laptop so that you had something that could connect to your Wi-Fi and to the broader networks and computing. USB was invented by an Intel fellow. These are the core technologies that really help connect us to each other, to the systems, to the applications, and they’re oftentimes built in. That really goes to the heart, I think, of that moniker that everyone knows: Intel Inside. And as we see the shift over the last several years, first to cloud and cloud architectures, with Intel being the foundation for 80 to 90% of the cloud computing out there, to now the shift towards, whether you want to call it data science, AI, or machine learning, really where 

00:04:23

data is driving a lot of the applications, the use cases and our overall interactions with each other and with computing. At its core, the question you have to ask is, well, how do I move that data?

How do I transform and process the data? How do I get the most out of it? How do I secure it? All those questions are being answered because of the technologies under the covers. And at the core, it’s all running on the hardware, and having capabilities inside that hardware that accelerate data movement and data processing gives the algorithms that we use for security, and the algorithms we use for inferencing, the power and scale to go to these trillion-parameter models. It takes a lot of hardware to really facilitate all of the benefits and opportunities that we see in AI and in the overall data science that is blossoming today. So, I think the thing to keep in mind is that at the core of every one of these technology revolutions, there is hardware, software, and architectures that enable these inflection points and these transformations.

And oftentimes, as they become pervasive, we take them for granted. We don’t think about AI when we try to change lanes in our car and the car tells us there’s something next to us, but there’s a little AI running inside your car letting you know you shouldn’t try to change lanes. We don’t even think about it as being AI. We see the ChatGPTs and it’s like, that’s AI. But really AI is 

00:05:50

everywhere. Every time you go online to try to purchase something and you’re getting recommendations based on your prior buying history, other recommendations, and your demographic information, there’s an AI running. And so, it’s really become pervasive. And the only way that happens is having the right technology, technology that can perform at scale and secure it, built into the infrastructure, to enable these kinds of amazing outcomes that we’re all starting to see.

DD: And in what ways has Intel’s legacy in hardware influenced its current approach to the AI era we’re in?

SO: Damien, it’s a really interesting question that you’re asking there, and to answer a lot of it you have to understand a little bit more about how AI is actually going to get deployed and used. A lot of people think that AI is all about the GPU or all about the application and the data science, and they are critical components. But one of the key things we see when we look at how AI is actually deployed, how it’s operationalized, is a recognition that AI has to run in all sorts of places. It runs in the cloud, it’s happening on your network, it’s happening at the edge, it’s now happening in your PC. And so one of the key things that we understand, and it’s based on the pervasiveness, or ubiquity, of Intel hardware from the edge to the network to the cloud, is looking at it from a holistic perspective and saying, what’s the right hardware for the use case, and how do you make sure that 

00:07:14

the right hardware giving you the right processing is available and able to do what you need to do at the right time.

And so, one of the things that’s really helped craft Intel’s strategy and how we see the world evolving is looking at that holistic view of edge, of network core, and of the cloud, on-prem in the data center as well as distributed amongst edge nodes, and making sure you have the right compute for that environment. And that means a heterogeneous compute architecture. It could be GPUs, it could be CPUs, Xeon-class or PC-class CPUs. It could be accelerators, custom AI accelerators or mathematical accelerators for natural language processing, so very specific ones. It’s having the right software that allows you to write your application and deploy it into the heterogeneous environment. It’s programmable FPGAs. A lot of people aren’t familiar with those, but these are programmable hardware components that allow you to deploy custom algorithms and then change them and tune them based on the use case.

And so, understanding that it is a heterogeneous architecture that actually services the AI revolution is really in the DNA of Intel: having the right kind of hardware, from the ones that run in your PC or right there next to the camera in the lamppost, all the way to the data center and the wide-scale high-performance computing environments. And it takes different hardware 

00:08:39

architectures to service those different parts of the AI story. And that’s at the core of Intel’s strategy, and at the core of how we go to market: providing a diverse set of AI capabilities from the hardware level. But one of the things we recognized a number of years ago is that it could be a real challenge to try to write your application or your AI model to work in all these environments. And so, Intel invested heavily in a set of software products.

One is called oneAPI and the other is OpenVINO. These are software layers that enable the developer and the application builder to write once, in an environment they’re familiar with, and then have that software layer compile their code for the different hardware architectures it has to live on. And so, it really gets you that moment of write once, deploy everywhere. And because we did it in an open way, in an open-source model, it means you’re never locked into one approach. You can take your code where you need it to go, you can go across cloud vendors, across PC manufacturers, you can go from a very tiny model out at the edge to a massive large language model, and your application development environment is going to stay constant for the developer, really easing the developer’s access to the power of the hardware that they have.
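To make that write-once idea concrete, here is a minimal sketch using OpenVINO’s Python API. The model file name is hypothetical and the input shape is assumed static; the point is that retargeting hardware is just a change of device string.

```python
# Minimal OpenVINO sketch: one script, retargeted by changing a device string.
# "model.xml" is a hypothetical IR model file.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# The same call compiles for "CPU", "GPU", or "AUTO" (let the runtime pick).
compiled = core.compile_model(model, device_name="AUTO")

# Dummy input shaped to the model's first input (assumes a static shape).
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])
```

The device string is the only thing that changes between an edge CPU and a data-center accelerator, which is the portability point being made here.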

 

00:09:54

PD: You already talked about the approach that Intel is taking. Can you talk a bit more about how it differentiates from competitors, or how it compares to other companies?

SO: So, Philipp, you ask a good question, and I think it’s important to go back to that term I used: heterogeneous. There are multiple different approaches to how people are going about AI today. And a lot of them are focused on the GPU. There’s a large market segment that’s looking at the GPU, and the GPU is an important part of the AI story. But I think the key thing is that our competitors in some ways are very siloed in the kind of capabilities they’re bringing. They’re bringing GPUs and trying to fit them into every use case, from the edge to high-performance computing and the data center and cloud. And one of the things we’ve seen in practicality is that it’s not one size fits all. So, on the one hand, GPUs have a role, and we have a GPU, NVIDIA and AMD all have GPUs, and it’s an important piece of the puzzle.

And on some of the very large language models, the GPU for training is the right architecture, whichever vendor you choose. But as you start to deploy your models into the world to do inferencing, and as we’ve seen with Generative AI, you never stop training. There’s two basic approaches. A lot of people are looking at the classic approach of just train everything in the big cloud. So, all the data has to shift back from the edge, go into 

00:11:15

the cloud to update the model, then push that back out. And while that may work for really cool demonstrations of the power of these large language models, it’s not a practical implementation when I need to be able to know, is my car going to hit that tree? I can’t wait for the cloud to tell me new information when it sees something new. I need real-time or near real-time response from AI.

I need to learn from the situation I’m in. And whether it be in government use cases or in industry manufacturing, you need to be able to respond to a new situation very quickly to not have an adverse effect. And so, that requires edge processing, and that means small SWaP: size, weight, and power. That’s a different architecture. You’re not going to bring a 400-watt CPU or GPU to the edge. And so, it’s understanding that you need to have that heterogeneity. So, we will have healthy competition at various different parts of that architecture. But one of the things that really gives us a unique position is that we’re one of the only companies that is in all of those environments, in the cloud, and at the edge, and in the network. And so, we have healthy competition in the cloud with the GPUs. We have healthy competition from a variety of vendors building Arm and ASIC kinds of products for the edge.

But there’s no one company other than Intel today that really has that diversity. I mean, the only really good comparison would be our classic 

00:12:39

competitor that we’ve been competing with for years, which is AMD, which has a GPU and has a heterogeneous architecture. And I think that’s where our footprint and software are going to help unlock and unleash the power of AI, because really the hardware is only as good as the software that you can run on it, and really enabling developers to get access to the hardware without having to become hardware experts is one of those key differentiators.

PD: So, you are saying basically that it’s architecture, and it’s taking a holistic approach to the problems. What are the most significant challenges that you’re facing at the moment in turning AI systems, or the visions behind them, into reality?

SO: So, I think there’s a couple of ways of answering that, Philipp. I think part of it is there’s a lot of learning curve going on in the industry today. Everyone sees AI and says, I want some of that. And so, they’re jumping on it and they’re trying it out in their labs, and they can get it up and running. It’s the transition to practice, the transition to live, which I think most organizations are struggling with. And I think a lot of the impediments we’re seeing in the industry’s adoption of these architectures is, how do I get it out of the lab and into the real world? And when I get it into the world, I realize that my lab did not represent the real world. I may have had a nice big GPU cluster, I may have access to the cloud; a drone that’s going out and 

00:13:58

looking at the forest and checking for diseased plants doesn’t have access to the cloud, doesn’t have access to high performance computing.

And so, I need to think differently about my architecture. And so, I think one of the biggest challenges for the industry, and the corporations, organizations, and individuals that are looking to adopt AI, is that scaling, that operationalization side of it: how do I make this real and have it work reliably? I think another one is going to be, and it’s starting to pile up, we’re seeing governments talk about it, we’re hearing it in forums: how do I secure AI? How do I trust this AI? And then from an architectural perspective, there is a lot of choice, and I think this is one of those areas where software is going to be a key differentiator: having a good ecosystem of software capabilities, of pre-trained models, of developer frameworks, to really ease the path as people learn and come on board with AI, to give them the most choice as well as the best path to market for taking things out of a cool experiment and turning them into something real that’s driving revenue, reducing costs, or serving their constituents.

PD: So, Steve, you explained a little bit about what the challenges are, and you highlighted that scaling is a big part of it. I think that’s something I hear a lot across the industry also. How about AI security? Do you see that as a major challenge?

00:15:19

SO: Philipp, you’ve hit on a key challenge that we’re going to see, especially as AI starts to grow and become more embedded into our business applications and our business operations and into our everyday lives: how do I secure this? And there’s a couple of ways to look at it. One is, how do I secure what we’ve deployed? So, the inferencing, the decision making, the weights, the models themselves. And there’s a lot of work being done, from startups all the way to large corporations and even in government forums, on how we best protect the AI. There’s a flip side to it, another skew on how you look at the problem, which is, well, how do I trust AI? Trust is more than just security. Security is absolutely critical, ’cause you can’t trust something if you can’t secure it. But even if I have a really secure AI, it doesn’t mean I can trust the information it’s providing me, because that’s more of a lifecycle understanding.

How was that AI trained? What data sets were used? Was there an implicit bias built in? Were there biases in the tunings that were done? And then post-deployment, have there been attacks like poisonings, or hallucinations that are happening in the AI? So, trust is a much harder thing to attain. It’s a lot harder to measure, can I trust that? I can secure and encrypt things, I can lock things down, I can use confidential computing to be able to lock down the runtime of an AI and make sure it’s absolutely protected. But it’s one of those, the old logic problem of 

00:16:51

garbage in, garbage out. How do I trust the AI is really where a lot of the conversation is coming in. And one of the approaches I’ve talked to people about, the approach I like to look at, is what I call the lifecycle approach: understanding that I need to get visibility into how that AI was built.

It’s not the same as explainable AI, because that’s a science in its own right, but it’s really about, can I get some attestation into what data sets were used, what were the environments it was trained on, what type of decisions were made during the tuning, who was involved? Because we find that it’s not one person building an AI from start to finish. You can have a variety of teams, different organizations from different locations, different backgrounds. You can have data sets that were curated 10 years ago that are now being used. And so, getting that visibility and that transparency is one of the foundational ways we’re going to get better trust into these AI systems. Because ultimately the question I want to ask is, A, can I trust it? But really, it’s, can I trust this answer I’m getting? And sometimes it is, can I trust that it’s giving me the right advice, or if it’s a real-time situation, can I trust the decision it’s telling me to make?
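As a rough illustration of the lifecycle evidence Steve describes, a builder could attach a tamper-evident lineage record to a model. The field names and signing scheme below are illustrative assumptions, not a standard:

```python
# Sketch of a signed model-lineage record: who built the model, on what
# data, with what known skews, so a consumer can make an informed risk call.
import hashlib, hmac, json
from dataclasses import dataclass, asdict

@dataclass
class ModelLineage:
    model_name: str
    training_datasets: list   # provenance of every dataset used
    collection_dates: list    # when the data was gathered
    known_skews: list         # e.g. age/gender/geography imbalances
    tuning_steps: list        # who tuned what, and when

def attest(record: ModelLineage, key: bytes) -> str:
    """HMAC over the record so it can't be altered silently."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

record = ModelLineage(
    model_name="triage-model-v2",            # hypothetical model
    training_datasets=["campus-scans-2014"],
    collection_dates=["2014"],
    known_skews=["18-35, predominantly male"],
    tuning_steps=["fine-tune 2023-06, team A"],
)
print(attest(record, key=b"builder-signing-key"))  # illustrative key only
```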

And so, being able to make those trust decisions requires evidence or artifacts that I can use to make an informed risk decision about whether or not the AI is trustworthy. A great example of this is healthcare. There are a lot of data sets out there that have been 

00:18:21

curated for a variety of use cases that are used in medical science and medical prediction and in a lot of things. But one of the interesting things you find when you go back and look at the data and ask, okay, what was the data set? And it really explains why we get some really weird outcomes on the other side when we try to use it for the broader population: at least in the US, a lot of the data sets that models were trained on were gathered from 18- to 35-year-old white males.

And when they asked the question, why is that? It’s that the scientist doing the data collection was on a campus and put out a notice: hey, I need an hour of your time to come in and get your body scanned or to submit to a variety of other tests to capture data. And it just turned out that the vast majority of the people that had extra time on their hands seemed to be 18- to 35-year-old white males, which is great for that population. But it becomes a real challenge when I’m turning the AI around and trying to provide results for a diverse population, a variety of different ethnicities, genders, age groups, US versus international. It totally becomes a problem because my dataset had this implicit bias. It’s not a malicious bias, but because it wasn’t a diverse set of data that went into training the model, my outcomes are going to be obviously skewed.

And ultimately the point isn’t that the data set is bad; it’s that the AI didn’t have enough diversity. And the question we have to ask is, well, how do I 

00:19:50

trust that when it’s giving me medical advice, it has the right diversity of data? That’s where having that transparency becomes critical. And so, the ways we’re going to secure AI and trust AI are, A, protecting it in its deployment, but B, getting transparency and attestation into the lifecycle and how it was built, so that I can make the best decision about whether this is the right AI for this situation. There may be really good use cases for biased AI, if you will, if your population matches what you were trained on. If you’re only ever going to do medical recommendations for 18- to 35-year-old white males, you may be okay with a non-diverse data set, but that’s not reality.

The other thing to think about when it comes to security is that it’s not a stopping point. It’s not a point in time: I secured my AI, I got my transparency, I deployed it, I’m done. AI is in some respects, I hate to use this term, but a living thing, an evolving thing, especially Generative AI and large language models. They learn from the prompts, they learn from the interactions, they learn continuously from new data. And so, one of the things that we recommend and that people are looking at is continuous monitoring of the AI: validating that your responses are as expected, and that means running a standard set, and even some deviated sets, of inputs and prompts to check the queries, to see, am I starting to get deviation in my weights? Am I getting deviation in my 

00:21:19

responses? Is it leading to a hallucination? Has a tuning exercise that we’ve done caused some sort of forgetting to happen?

We have to continuously monitor, and update, and look at the model, and make sure it hasn’t gone astray, because like I said, it learns from everything it gets. And whether it’s malicious prompting, what we call prompt injection, and poisoning, or just the vast majority of the internet putting in all sorts of crazy stuff so that it ends up learning some weird stuff out there, it’s incumbent on us, when we want to trust AI, to know: is the provider or the organization that’s delivering this AI to us continuously monitoring and updating the AI to make sure it stays on track, if you will?
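A minimal sketch of that monitoring loop might replay a fixed probe set against the deployed model and flag drift from recorded baselines. The probes and the `query_model` endpoint below are placeholders, and a real system would use semantic rather than string similarity:

```python
# Replay a standard probe set and flag answers that drift from baseline.
import difflib

PROBES = {
    "What is our refund policy?": "Refunds are accepted within 30 days.",
    # ...plus deliberately deviated and adversarial prompts
}

def query_model(prompt: str) -> str:
    raise NotImplementedError("call the deployed inference endpoint here")

def drift_report(threshold: float = 0.8) -> None:
    for prompt, baseline in PROBES.items():
        answer = query_model(prompt)
        score = difflib.SequenceMatcher(None, baseline, answer).ratio()
        if score < threshold:
            print(f"possible drift on {prompt!r}: similarity {score:.2f}")
```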

PD: That makes a lot of sense. When talking about AI security, a lot of people also imagine bad actors, exploits, attacks, and so on. Does that play a role in your daily work?

SO: So, it does. This is a new area of threat. At this last year’s DEF CON conference in Las Vegas, there was actually an entire track and an entire village dedicated to AI hacking. And there were some really cool demos of the kind of things you could do to large language models. There was a lot of work on large language models and Generative AI, as well as even on some of the more classic CNNs and DNNs, the object 

00:22:32

recognition and natural language processing kinds of AI tools, where poisoning and prompt injection are a common attack vector. And I think that’s sort of the standard bar, but there are some more esoteric or well-designed attacks going after the model weights, understanding the model to be able to then create images or inputs that will fool the model.

One of the classic examples, I believe it was Purdue University, showed that if I understand how your model is set up, what weights it has applied, and the metrics it’s using, I can make changes to my appearance. There was one great example: if I had glasses with stripes on them, I could fool the AI, and it would no longer recognize me, to the point where sometimes it would no longer recognize me as a human. It would definitely miss the facial recognition, but you could actually trick it into not knowing I was a human. Another example from the early days was being able to put a sticker onto a stop sign and suddenly it couldn’t detect it was a stop sign. A lot of it comes down to a deeper attack, if you will, of understanding that model. So, that means stealing the model, analyzing it, looking at the data that generated it or what its weighting bias is, and then being able to use that against the model later.
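That family of attacks is usually demonstrated with adversarial perturbations. A minimal fast gradient sign method (FGSM) sketch in PyTorch, assuming white-box access to a differentiable copy of the model, looks roughly like this:

```python
# FGSM sketch: with white-box access (e.g. a stolen model copy), nudge the
# input in the gradient direction that most increases the loss. A small eps
# can flip the prediction while the image looks unchanged to a human --
# the "striped glasses" / stop-sign-sticker effect.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Hypothetical usage: adv = fgsm_attack(stolen_classifier, images, labels)
```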

And we’re seeing a lot more of that research, because at the end of the day, the goal is to bypass or skew an AI for a variety of reasons. I think there’s another area to think about when we talk about malicious use. 

00:24:00

Obviously, the AIs themselves are being targeted, whether it’s to embarrass them, to skew them, or to do malicious activity. But we’re also seeing the adversaries, the criminals, the cyber gangs, and the nation states use AI to prosecute their malicious scams. We’re seeing much better crafted phishing that’s being generated by AI. We’re seeing deepfakes being used to trick people into sending money or responding to a support call that’s not really from support. And we’re seeing AI being used to do information gathering, to understand what all the services and all the ports are, and to really speed the process of malware development. And so, as the adversaries are using these AI tools, we as defenders have to do a better job of adopting and defending against these kinds of attacks ourselves.

PD: And Steve, what role does hardware play in securing AI systems?

SO: At the end of the day, all of this runs on hardware. And there’s a couple of key properties that hardware provides. I’m going to break it into two camps. One is hardware can accelerate the things you want to do to protect AI. So, encryption: being able to encrypt your models and encrypt your data feeds, using hardware acceleration to accelerate the cryptography, the key management, and the protocols, allows you to turn on all those security bells and whistles without impacting performance. So, one of the baselines is leveraging the hardware acceleration that’s been built in, 

00:25:20

sometimes as long as 20, 30 years ago. Crypto acceleration has been in your commercial off-the-shelf hardware platforms, your Xeons and your PC clients, and it’s been available since 2010.
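As a small sketch of what taking advantage of that acceleration looks like from software, sealing model weights with AES-GCM through a standard crypto library picks up the hardware AES instructions automatically. The file names and in-memory key are illustrative; in practice the key would come from a key-management system:

```python
# Seal model weights at rest with AES-GCM. On recent Intel parts the AES
# rounds run on AES-NI underneath the library's OpenSSL backend, so the
# hardware acceleration is transparent to the application.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # illustrative; fetch from a KMS
nonce = os.urandom(12)

with open("model.bin", "rb") as f:          # hypothetical weights file
    sealed = AESGCM(key).encrypt(nonce, f.read(), None)

with open("model.bin.enc", "wb") as f:
    f.write(nonce + sealed)
```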

A lot of those features are already baked in, and much of the software stack takes advantage of them; you just need to turn them on. But the other area is understanding that at the end of the day, hardware is physical, it’s not virtual. And so, there are technologies like confidential computing, where I can use the hardware to lock down access control to memory and use hardware-based encryption of memory. So, think about the data security model we’ve talked about for years: data-at-rest security and data-in-transit security. Confidential computing is that last mile: data-in-use protection. And for AI, that is really where all the fun happens, in the actual inferencing, in the training, in the actual execution of the AI algorithms. Being able to put that into an encrypted memory container, where the memory itself is encrypted and the access to that memory is locked down by the CPU, is a capability that allows you to protect your AI even if you have malware resident on the platform.

Actually, it even protects you if someone physically walks up, puts a probe onto the platform, and tries to read the memory in real time: it’s all encrypted. And so, one of the key roles that hardware will play in securing AI is providing that safe place to stand, that secure, what we call 

00:26:47

a trusted execution environment, so that the execution of your AI engine is protected. Its algorithms are protected; the models, the weights, all of that can be protected, no matter whether it’s in the cloud or deployed at the edge where you don’t have guards with guns protecting it. You can use that hardware to protect your AI in its execution and its data. Ultimately, you also want to protect the responses, to make sure that no one’s trying to change them. So, it can give you that end-to-end protection that we’ve all been looking for.
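The attestation half of that story can be sketched too. Real TEEs such as SGX or TDX return signed hardware quotes verified against vendor certificate chains; the stand-in below only shows the shape of the decision a client makes before releasing model keys into an enclave, and every value in it is illustrative:

```python
# Illustrative only: release the model-decryption key only to an enclave
# whose reported measurement matches the build we expect.
import hashlib, hmac, secrets

EXPECTED = hashlib.sha256(b"ai-engine-v1.4 enclave image").hexdigest()

def release_model_key(reported_measurement: str):
    if hmac.compare_digest(reported_measurement, EXPECTED):
        return secrets.token_bytes(32)   # stand-in for the wrapped key
    return None                          # refuse unrecognized code
```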

PD: What are the challenges you’re facing when you’re integrating AI security features directly into hardware?

SO: So, I think it’s the same challenge that you’ll find anytime, Philipp, when it comes to integrating security. One is the tradeoff of security versus performance, which is always a classic problem when adding security. And so, one of the things that Intel has spent 30 years working on is how to introduce security features that don’t impact performance. And part of that is building those features into the hardware itself. Honestly though, the biggest challenge is twofold. One is the software stack: making it easy to integrate into the products and applications you already use. One of the key things we’ve learned, and really the industry has learned, is that the more stuff you have to do in order to take advantage of security, the less likely the downstream end user or customer is going to either 

00:28:05

want to do it or be able to successfully do it. So, removing friction is a key thing.

By doing that integration early, having the operating system, the cloud providers, and the security vendors themselves have that hardware integration before it gets deployed, then when you buy a software product, it’s already taking advantage of it, or when you go onto the cloud, it’s just a click of a button and you get confidential computing. It’s those kinds of integrations that Intel and its ecosystem are doing, both in open source and in commercial software, to ease that friction between adopting security and deploying it. And then the other is, a lot of times, lack of understanding or apathy. A lot of people just don’t realize that they have a lot of security at their fingertips if they just turned it on. And we see this in the security industry: the reason we still have security problems is oftentimes you have the right security capabilities, but you haven’t configured them, you haven’t flipped the switch to turn them on, you haven’t deployed them to all the different parts. And one of the challenges that all security professionals have to deal with is that we have to be right 100% of the time. The hacker has to be right just once.

PD: Right. What do you think are the biggest challenges in AI security over the next five to 10 years?

00:29:19

SO: I think as AI becomes more pervasive, some of the bigger challenges are going to be around data privacy. AI is a data engine. It is hungry for data, it consumes data, it lives on data. And the more data you put in, the more opportunity for exposure of that data. And once data gets learned, it’s very hard to unlearn that data. So, I think data privacy and security is a key part of that, but it’s bigger than just, can I secure the data? How do I protect the data throughout, going in and coming out? And so, that is going to be an ongoing concern, along with the understanding that, in a geopolitical environment, different countries have different determinations of what’s considered privacy and what’s considered good-enough security. Different industries, from their regulations, are going to have different standards.

And so, one of the key challenges of securing and trusting AI is trying to rationalize across all the different domains of trust and domains of regulation, to provide a solution that can service all those different environments. And the other thing to keep in mind is that AI is a tool. It’s a tool for organizations, industry, and governments to use for the betterment of their customers, their business, their citizens. It’s also a very powerful tool for the adversaries. And so, I think we’re going to continually see this cat and mouse, as they adopt technology much quicker than the legitimate industries do. The question is how to keep that balance and how to basically not have 

00:30:47

implicit trust. This is why a term that’s very popular today, called zero trust, is so important.

It’s changing that risk dynamic from trust-and-then-verify to don’t trust, verify twice, and then maybe trust. And it’s a different approach, because the adversaries always take advantage of the fact that there are just implicitly trusted things, whether it be identities, or credentials, or users that are trusted. What zero trust does is flip it on its head: nobody’s trusted. Your CEO isn’t trusted, and that’s okay. It’s doing the right things at the time of the transaction to figure out, can I trust just this transaction? That shift in model may help us get a better handle. And as we look at AI that’s constantly changing and constantly evolving, I think zero trust will play an even more important role as we look at how we leverage that AI, how we give it access to the things we want, but ultimately how we trust it, or in some cases decide not to trust that outcome.

And I think that’s one of the things we have to see: everyone just goes to ChatGPT, and if they get something really, really weird, they say, okay, that’s bad. But there’s a lot of in-between that’s gray, and there’s a lot of stuff that comes out of these AI systems that’s actually not true, but it looks good enough that we’ll think, oh, that must be true. And so, we have to take a zero-trust approach to AI and say, I’m not going to trust you until it’s proven that you can be trusted. And I think that shift is one of the big 

00:32:08

challenges of our time: how do I get that dynamic trust built into the use of AI?

PD: Makes a lot of sense. Yeah. You talked about regulations a bit already. How about organizations? What do companies have to do now to prepare for the future of AI security challenges?

SO: So, Philipp, I think there’s a couple of ways to come at that, but I’m going to land on one of the key things, and it’s not just the thing, it’s when you do it. And that’s data governance. A lot of people talk about data governance, and data governance is critical, but what oftentimes happens is that you start building your AI, you’re doing your dataset, and then all of a sudden, sometime down the road, you’re like, oh wait, we need some data governance. Okay, we’ll plop it on here. And that is a huge mistake. Data governance has to happen at the very beginning, really at the same time you’re starting to do your problem definition, because that problem definition will actually inform the data governance and vice versa. It will shape how you craft the discovery phase.

And so, data governance is critical, and what it does, besides giving you a framework for applying controls, is it can inform you if you’re in a regulated industry, or you’re getting data from a regulated industry, or your 

00:33:19

marketing people think someday you may want to go into a regulated industry. Having that built into your governance model will allow you to apply controls early so that you can be more agile downstream, so you haven’t built a whole system and then realized, oh my gosh, we’ve consumed PII, now we can’t use it, or we need to apply extra controls to it. That data governance framework is actually critical. And again, I use the word framework: it’s not a tool that you use once. It is an overlay on the whole process that constantly needs to be informed and integrated in.

And I think the other thing is not a technology; it’s about the organizational dynamics. As you’re building these AI solutions, I often talk about having a diverse team building them: not just the data scientists and the developers, the model trainers and IT, but the business unit owners who are actually going to generate the revenue or the benefit. Having legal and compliance involved from the get-go is a critical step, to make sure you’re both informed on the potential challenges and planning for them, but also so they’re informed of what’s coming out the other end, so that when you come out with your AI, they’re not like deer in headlights, like, oh God, we’re going to shut you down, we haven’t looked at this from a compliance perspective. Having the key stakeholders involved from the beginning is actually a recipe for success, time and time again.

00:34:38

PD: Yeah. What would you recommend to individuals and organizations to stay updated with the latest developments in AI security?

SO: Philipp, there’s so much going on, it’s hard to stay up to speed. There are a couple of good newsfeeds out there, and blog posts, and podcasts. Definitely one of the things I recommend is: listen, go out and search. Don’t rely on Wikipedia or things like that; actually go and listen to the people who are actively doing this stuff. Go to some of the forums. Government has got a lot of industry collaboration forums where they’re putting out documents that are meant to service the whole industry, with really good representation across hardware, software, academia, and cloud providers coming together to actually write about best practices for AI security. And I think the other thing, and this goes to the way I structure my teams, is reach out to the folks in your organization that are smart in an area you want to learn more about.

I surround myself with people who are absolutely smarter than I am, and I listen to them. And I’ll give you a great example. Six months before anyone had ever heard of ChatGPT, I had two of my engineers, one on my team and one in one of the matrix teams I work with, come to me individually and say, hey, I’ve got to tell you about this really cool thing. It’s called a transformer. It’s really cool science. I’ve got to explain to you how it works. And they walked me through it. It’s like, this is going to be 

00:36:04

the next great thing. And I’m like, cool, that’s awesome. I like the idea. And then fast forward six months later and you see ChatGPT go boom, and the T in ChatGPT is transformer. And I go back to them and I say, you guys knew. You guys knew. And they did. And that’s the thing: listening to the people who spend their time in that domain. In the industry, we call them bellwethers.

They’re the indicators of what’s coming. And so, augment what you can learn on the outside, from really good blog posts and from these industry collaborations, but also listen to your people who are steeped in the technology. They may not understand the business application of the technology, but they understand the technology, and that will help you as a business leader, as a data science leader, inform how you’re going to be able to adapt and evolve as these technologies are constantly changing.

PD: That makes a lot of sense. Yeah. Are there any common misconceptions about AI security that you would like to address?

SO: One is that you can just use AI to solve any security problem. A lot of people think, well, AI is this powerful tool. I’m going to detect the next advanced persistent threat that no one has ever seen before. And there’s a flaw in that fundamental statement. AI is built on data, that means it 

00:37:19

needs to train on a lot of data about how things happened to be able to make a prediction about how things will happen. If there’s only ever been one of these attacks, that’s not enough data to really train a good AI to detect the next one. And that’s been one of the fundamental challenges around using AI in cybersecurity. Everyone’s looking to use it to catch that one-time, really well-crafted, nation-state advanced persistent threat. And the reality is that’s not a good use of AI. And so, I would say that is one of the main misconceptions.

The other is the other side, which is, oh, I can’t use AI for security because I can’t trust it, or, I’ve got smart cyber hunters, they’re going to do that. And that’s another misconception, ’cause there is a place where AI is absolutely going to show real value. The way I look at it is, think about the day in the life of a cybersecurity professional inside an organization: 90% of their time is spent firefighting the hack du jour, a blip on the firewall, the ransomware phishing campaign that’s coming in. It’s the mundane, everyday stuff that happens all the time, or a new vulnerability was disclosed, so let’s go figure out what applications we have that have it and let’s go patch those applications. So, they’re in a constant firefight, a whack-a-mole kind of approach to security, and they get almost no time to deal with the really exquisite next-generation attack, because they’re always consumed by the day-to-day.

00:38:42

That is a place where AI can actually shine: in the automation and the application against these mundane, repetitive daily processes. A, we have a lot of data on what it means to search vulnerability databases, look at inventory and asset management systems, and correlate across them to be able to know, okay, I’ve got a new Log4j, I’ve got this web server that has it, okay, I have a patch from that vendor to deploy. That’s something that can absolutely be automated using AI and machine learning. And what that does is, if you can automate and use machine learning for 80% of what I call the stupid stuff, then your underfunded, overworked, and under-slept team of security professionals can focus on the 20% of really hard problems. That goes to the third myth, which is that AI is going to replace my job, whether it be in security or any other field.
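A toy sketch of that correlation step, with a made-up advisory feed and asset inventory (a real pipeline would pull from CVE feeds and an asset database), might look like:

```python
# Correlate a vulnerability feed against an asset inventory, emit patch tasks.
ADVISORIES = {"CVE-2021-44228": {"log4j-core"}}   # e.g. the Log4j CVE

INVENTORY = {
    "web-server-01": {"log4j-core", "tomcat"},
    "db-server-02": {"postgres"},
}

def patch_queue():
    for cve, affected in ADVISORIES.items():
        for host, packages in INVENTORY.items():
            hits = affected & packages
            if hits:
                yield f"{host}: patch {', '.join(sorted(hits))} ({cve})"

for task in patch_queue():
    print(task)
```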

And the reality is AI is a tool, and it’s something we should harness. It’s not going to replace your job, it’s going to augment you. It’s going to enable you to focus on the hard problems, which are actually the more interesting ones, the ones you really want to get up and go tackle. And it can take care of the 80%: the mundane, the repetitive, the manual processes that AI is really good at tackling. And that separation is actually where I think organizations will get the largest ROI from the application of AI for cybersecurity: not in the exquisite, trying to find 

00:40:06

that one-off attack, but in automating and getting better efficiencies at dealing with the window of exposure we’re all dealing with on the regular day-to-day kind of vulnerabilities.

PD: Yeah, makes a lot of sense. From your perspective, which industries would you say are most at risk from AI security threats?

SO: I think, as we’ve seen, almost any industry that’s adopting AI can be at risk from the threats. Obviously, the regulated industries, the ones that have value. The old question that was asked of a bank robber back in the 1800s was, why do you rob banks? And his answer was, that’s where the money is. The same is true today. So, the ones that have money, or assets, or regulated critical infrastructure, those are the ones that are obviously going to be more targeted. But one of the things we’ve also seen is that the motives behind the attacker can vary. It can be financial gain, it can be sowing chaos and disruption. It could be influence, it could be revenge, geopolitical. The motives are across the board. And when you have that diversity of motives, it means that the target space is much richer.

If it’s financial gain, well, they’re going to go after financial assets, or they’re going to go after large user bases to be able to get some financial gain. But if the motives are otherwise, then critical infrastructure, supply chains, 

00:41:25

logistics, those can also be ripe targets. And so, I think it actually doesn’t serve us well to ask, is there one industry more of a target than another? It’s really understanding that every industry could be a target, and it’s about your organizational risk appetite and risk posture to make the determination of how much security you need to deploy to meet your risk bar. And I think one thing that is an absolute mistake is to think, well, I’m not important, I’m not doing something critical, so I’m not at risk. And I think a really good example of this came during Covid, during one of the big ransomware attacks.

We all know about the Colonial Pipeline attack in the US on the east coast, but one of the ones that got a lot of news for a short time, and was really informative, was the JBS attack, the one on the meat processing in Australia. They got a ransomware attack, it shut down meat production, and there was a global shortage of meat supply. And I learned two really important things, I wouldn’t say cool, but important things, about that. Number one, no one is immune from attack. You couldn’t get a less sexy business than meat packing: there’s no money, it’s not critical infrastructure, it’s not energy, it’s not dams, it’s meat packing. So, it is the least sexy of the businesses or industries that you’d think would be a target. So, it showed that no one is immune. The other thing that was really informative is just how dependent we are on technology. Because again, 

00:42:51

you would think of a legacy kind of environment like meatpacking as not being technology dependent, and yet a ransomware attack took down the line, which shows that every industry, even meatpacking, is dependent on technology operating and not being shut down by ransomware.

So, back to your point, every industry is a target for different reasons. And the crucial question for an organization is, where do I fit on that risk profile? Do I have assets that are of value from a cybercriminal perspective? Am I servicing a critical infrastructure or a critical constituency that makes me potentially a target for activists or a nation state? Am I an important part of the global supply chain? So, it’s understanding where you fit in that risk profile and then applying the right controls to map to your risk. And that’s why things like the Cybersecurity Framework from NIST and the risk management frameworks are really important, because it’s not one size fits all. It’s understanding what the right controls are for my risk, for my environment, at this given time. And what zero trust adds on top of that is that it’s not a once-and-done; it’s a continuous process of reassessing risk.

PD: Makes a lot of sense. If someone is listening who wants to pursue a career in AI security, what advice would you give them?

 

00:44:09

SO: So, a couple of things. I would say number one, it’s a great career, because there’s going to be a lot of need over the next several years. Cybersecurity in general has been one of the professions that, no matter what’s going on in the world, is recession-proof, depression-proof; with geopolitics, there’s always something going on. You always need security. And as we look at AI, even though there’s a huge hype curve, it is definitely here to stay. The application of AI security, that intersection, is critical. I suggest two things. One, go play with these technologies. Get them in your environment, run a small large language model, and beat it up. Go try to attack it. Try to do prompt injection, try to get at the model weights, try to see if you can do memory sniffing. Attack it. Learn the techniques of the adversary, because that will better inform the kind of things you need to do as far as protecting it.

And you become more familiar with the breaking points, and the holes, and the gaps. Understand how an AI is built so you can better apply security throughout its lifecycle. So, number one, I would say, get your hands dirty. Number two is interact with your peers. Go out, like I said, DEF CON was one example last summer. Every major security conference is going to have an AI track, because it’s the topic everyone’s talking about. Go out and learn, and then network with your peers, because we’re all struggling 

00:45:30

together, and just like the adversaries collaborate, we work better in collaboration. And so, someone who’s starting out in their career: go out and network with people who are already living it, learn from them, and see how we can all collaborate together.

DD: I would like to take a brief moment to tell you about our quarterly industry magazine called “The Data Scientist” and how you can get a complimentary subscription. My co-host on the podcast, Philipp Diesinger, is a regular contributor, and the magazine is packed full of features from some of the industry’s leading data and AI practitioners. We have articles spanning deep technical topics from across the data science and machine learning spectrum. Plus, there’s careers advice and industry case studies from many of the world’s leading companies. So, go to datasciencetalent.co.uk/media to get your complimentary magazine subscription. And now we head back to the conversation. So, Steve, how is Intel using AI to improve its business?

SO: Well, I think, just like every organization, we’re adopting AI in a variety of ways. AI is being used in things like sales management, lead management, and marketing. I mean, we’re finding AI pervasive everywhere. I think some of the more exciting areas are in the research that’s being done on silicon design, using AI for optimization and performance improvements. We’ve been designing chips, like you 

00:46:58

said, for 50 years. And there’s a certain way of doing things, of where you lay them out and how you deploy them. One of the really cool areas of research that Intel and others are driving is taking an AI and applying it to chip layout, and learning that there may be efficiencies that are non-obvious, because an AI can look at all the different paths, and data flows, and the power requirements, and get better optimization of where you place things based on where power is delivered and where data needs to go.

And sometimes it’s also because the AI doesn’t only train on this chip’s architecture: every chip in the past, as well as how chips get used, is informing how chips get laid out, both in our core process as well as in custom work, finding more efficient ways of placing different parts of the chip architecture to get better performance improvements, or to reduce some of the errors in the development process by optimizing for the things we’ve learned in the past. But one of the really cool things is that that idea doesn’t just apply to chips, because at the end of the day, a chip is laid out with big complexes and then flows. This can be applied to a variety of other areas. And we’re already seeing some people look at fab designs, or the factory floor, at how to get better efficiencies in the flows through fabs when you use an AI to help augment or provide process improvements and efficiencies there.

 

00:48:19

I think we’ll start to see water treatment plants and power distribution hubs take advantage of similar AI approaches to get improvements, because a lot of those industries have not fundamentally changed the layout of the actual physical environment in probably 30, 40 years. And we will put new machinery into those environments. But I think what we’re seeing is AI can actually help improve throughput, reduce the amount of time from step one to step two, or pipe more through, by understanding a broader picture. We’re using AI across our organization to improve our development lifecycle processes, looking for inefficiencies in how we build our products and how we do document management. So, like every major organization, we’re looking at machine learning and AI to improve our business operations, our sales and marketing campaigns, as well as our core business of how we design chips. And so, it’s starting to be pervasive.

Obviously, because of the nature of AI, knowing that it’s not fully baked, we’re augmenting that with humans. It’s not like we’re saying, hey, AI runs the show, we’re going to go have tea. It’s really about how we leverage this AI tool to augment our teams across the organization so they get more out of their jobs and do better. And I’ll give you one example of where it’s already being deployed and we’re already seeing the benefits: little plugins to things like Teams and web chats 

00:49:42

where it can transcribe the meeting, set up meeting notes for you, and give you task items. I’ve used that tool. And one of the things is, it saves about a half hour of writing up the notes, but more importantly, it gives me the list of action items so that I can immediately send emails off and say, okay, this is what we discussed, go do it. And that speeds up our time. So, we’re seeing already some business improvements by just using little AI tools within the environment to do that transcription, and meeting notes, and executive summaries, and things along those lines. So, even the little tools are already providing value.

DD: And are you doing anything interesting that’s maybe beyond just individual productivity, potentially using Generative AI? Is there anything interesting happening there?

SO: I think there’s a lot of research into what Generative AI can do. I think it’s still too early to tell how it will be adopted into the manufacturing processes. One area of interest is using it, in a simulated sense, for fault or error detection. And so, that’s an area where I think it’s more of, we call it, an adversarial approach: looking at fault detection and then trying to see if you can build a better chip or a better process based on the simulations that you can do. Use the Generative AI to generate all the different permutations, and then have something else try all the different implementations and see how that

00:51:08

goes. So, I think that’s an area of innovation. It’s still not practical, it’s not ready for prime time. But the use of generative AI to reduce faults, to reduce errors, to increase yield is absolutely something that everyone in manufacturing, whether chip manufacturing or tractor manufacturing, is looking at: ways of leveraging that tool to, again, reduce errors, increase yield, and better the production of parts by using it early in the design and development process.
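
A minimal sketch of that generate-and-test loop, with both halves faked for illustration: generate_variants stands in for a generative model proposing design permutations, and simulate_fault_rate stands in for a real process simulator. Nothing here reflects an actual Intel workflow.

```python
import random

def generate_variants(base, n=50):
    """Stand-in generator: jitter each design parameter a little.
    A real setup would use a generative model to propose permutations."""
    return [{k: v * random.uniform(0.9, 1.1) for k, v in base.items()}
            for _ in range(n)]

def simulate_fault_rate(design):
    """Hypothetical simulator: pretend faults rise as parameters drift
    from a sweet spot that the optimizer does not know about."""
    sweet = {"voltage": 1.0, "clock": 2.0, "spacing": 3.0}
    return sum((design[k] - sweet[k]) ** 2 for k in design)

# Generate permutations, test them all in "simulation", keep the best,
# and repeat; the fault score should fall generation over generation.
design = {"voltage": 1.2, "clock": 1.7, "spacing": 3.4}
for generation in range(20):
    candidates = generate_variants(design) + [design]
    design = min(candidates, key=simulate_fault_rate)

print("best design:", {k: round(v, 3) for k, v in design.items()})
print("fault score:", round(simulate_fault_rate(design), 4))
```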

DD: How is Intel balancing innovation in AI with maintaining its position in its traditional business areas?

SO: So, Intel has always been an innovative company. We’re one of the largest spenders on R&D, over $9 billion a year. And I think one of the things to understand is that when we look at AI, we’ve been looking at AI, machine learning, and data science for a long time. It’s having teams that are dedicated, but also having the ability to cross-pollinate across organizations. So, having a central AI capability that’s driving our strategy, but also understanding that the cloud has different needs than the edge, which has different needs than the network. And so, having innovation teams spread throughout the organization that are driving innovation in their domain-specific areas and regions.

00:52:25

I have an AI team here in federal because we’re looking at the federal use cases for AI. And so, at its core, it’s how do you foster that innovation? And that really comes down to giving people the opportunity to go off and try things, giving them the space to focus on that problem, whether that be within their domain or having them be part of these sorts of strategic initiatives around AI. And so, I think Intel is probably not necessarily unique, but in the tech industry we’ve focused on innovation throughout our lifecycle. It’s really built into our DNA. And one of the other things that’s been built into our DNA is understanding that you don’t just keep all of your eggs in one basket.

And so, a lot of times we cross-pollinate. We have teams of people across organizations working together, so they learn from each other. And that’s built into how Intel goes to market. Many business units will collaborate on an end result: the software team and the data center team talking to the edge team and the network team, working together. So, AI is one of those horizontal initiatives that really touches almost every aspect of the company. And so, it’s having innovation, but also having the ability for all of our engineers and our managers to be involved in this transformation, as our CEO drives us toward the AI future that we all see.

PD: Is it possible for you to talk about any specific Intel products that were designed with AI security in mind?

00:53:46

SO: I think the thing to think about, and I can talk about some specific use cases of our technologies for AI, is that at the end of the day, AI is an application. It’s a use case, it’s a program that has inputs and outputs. And one of the things we’ve seen, and we’ve actually done some demonstrations of this, and many companies are already leveraging it out in the cloud, is confidential computing. I think that is one of the killer security technologies for AI. We’ve got a variety of ecosystem providers. You can go to Microsoft, and to Google, and to these cloud providers today and leverage confidential computing in their environment to protect your AI. And so, I think that is one of those really killer security apps, if you will, for AI: being able to protect it as you deploy it, to protect it no matter where it lives, in the cloud, at the edge, in the data center, and to provide that foundational security and attestation.

Because what confidential computing does, as I explained earlier, is give you that secure memory container and the hardware access controls for it. It makes sure only the right application can talk to it, but it also gives you an attestation, so you can verify that my AI is running in that environment before I interact with it, or before I send data to it, or before I unlock information from it. And so, building in both the confidential computing and the attestation is really one of those aha moments for AI. How do I securely deploy it and not have to do different things for different environments,

00:55:13

have one security mechanism that can go from the edge to the cloud? I think that’s probably one of the more killer apps that we’ve seen deployed at scale. And in the background there are a lot of really cool technologies coming down the pike, things we can do to better secure AI throughout its lifecycle.
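
As an illustration of that verify-before-you-send flow, here is a sketch in which an HMAC over a workload measurement stands in for the hardware-signed quote; real confidential-computing attestation (for example, with Intel SGX or TDX) verifies quotes against a vendor certificate chain, and every key and hash below is invented for the example.

```python
import hashlib
import hmac

# The measurement a challenger expects from the genuine AI workload.
TRUSTED_MEASUREMENT = hashlib.sha256(b"my-ai-model-v1.2").hexdigest()
VERIFICATION_KEY = b"shared-demo-key"  # placeholder for a real root of trust

def make_quote(measurement: str) -> dict:
    """What the (simulated) enclave hands back to a challenger."""
    sig = hmac.new(VERIFICATION_KEY, measurement.encode(), hashlib.sha256)
    return {"measurement": measurement, "signature": sig.hexdigest()}

def attest(quote: dict) -> bool:
    """Challenger side: check the signature, then the measurement."""
    expected = hmac.new(VERIFICATION_KEY, quote["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["signature"]):
        return False  # quote was forged or tampered with
    return quote["measurement"] == TRUSTED_MEASUREMENT

def send_data_if_attested(quote: dict, payload: bytes) -> None:
    """Only release data once the environment has proven itself."""
    if attest(quote):
        print(f"attested OK, sending {len(payload)} bytes to the enclave")
    else:
        print("attestation failed, refusing to send data")

send_data_if_attested(make_quote(TRUSTED_MEASUREMENT), b"sensitive input")
send_data_if_attested(make_quote("rogue-model"), b"sensitive input")
```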

One area of interesting research is the application of blockchain, or a secure ledger, to every step of the AI development: the dataset that was used, the labeling systems, the data governance, being able to put in controls and tie it all together into a chain that gives you provenance at the end that I can easily go check: okay, this AI for medical scanning has this provenance, and I know it came from the San Diego dataset and the New York dataset. I can get that information in a more trustworthy way. And so, that’s one area where I think we’re seeing innovation. At the end of the day, blockchain is not exactly the most performant application stack, so using hardware to accelerate it is going to be critical in order to scale it. But to your point, the technology we’ve got today that’s really transforming security for AI has got to be confidential computing.
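
Here is a minimal sketch of that hash-chained provenance idea: each lifecycle step (dataset, labeling, governance sign-off) becomes a record whose hash folds in the previous record’s hash, so later tampering breaks the chain. The dataset names echo the examples above; the rest is invented for illustration.

```python
import hashlib
import json

def chain(records):
    """Build a hash-chained ledger: each entry's hash covers the record
    plus the previous entry's hash, linking every step together."""
    prev = "0" * 64
    ledger = []
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        ledger.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return ledger

def verify(ledger):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = chain([
    {"step": "dataset", "source": "San Diego dataset"},
    {"step": "dataset", "source": "New York dataset"},
    {"step": "labeling", "tool": "labeler-v2"},
    {"step": "governance", "review": "approved"},
])
print("chain valid:", verify(ledger))
ledger[1]["record"]["source"] = "tampered"  # simulate tampering
print("after tampering:", verify(ledger))
```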

PD: Steve, is Intel actively contributing to the development of new industry standards when it comes to AI security?

00:56:32

SO: So, Philipp, that process, for the whole industry, is just beginning. And Intel has participants in a variety of standards, both open-source community standards as well as some of the more traditional standards bodies, working together to help craft better standards for AI security. There’s work happening at NIST right now, the National Institute of Standards and Technology in the US, around AI security that we are helping contribute to. There are industry standards bodies, the IETF and others, that are working on it. But we’re still at the early stages of having a standard. And as you well know, if I started a standard today, it could be three years before it actually gets adopted. A lot of what’s really fast-tracking today is the industry collaborations: working together as an industry to put out better guidance on how to secure AI, better ways to get that transparency/visibility.

So, I think we’re starting to see the beginnings of it, Intel and our community. And one of the things that we feel is very important is that it shouldn’t just be Intel and its ecosystem. Our competition, everyone, needs to come together. And when you look at many of these forums and these standards groups, you do get the diversity of hardware providers, software providers, and cloud providers, because we all recognize that one of the things we have to get right is how do we secure AI? How do we trust AI? And so, we’re seeing a lot of cooperation across what would classically

00:57:52

be seen as competitors, or frenemies, or coopetition, depending on the term you want to use. But when it comes to things like security standards, we all recognize it’s important for the industry and for the world at large to get this right. And so, we are all collaborating on open standards as well as ones that are more industry-specific, to help drive better and more trustworthy adoption of these technologies and tools.

PD: And maybe my last question, what are you most excited about at the moment in terms of AI, AI security, innovations, algorithms, whatever it is, new hardware?

SO: So, there’s a lot to be excited about right now. I will pick two things that I think are most exciting: one in the application space, one in the hardware space. In the application space, one of the things I’m seeing, and it’s a slow-moving trend, is that a lot of organizations have done an AI for one thing, then an AI for another thing. So, they’ve built these little silos of AIs across the organization, in different lines of business or different parts of the same business. And one of the things people are starting to recognize is, okay, great, I’m going to go do my business and I’ve got five different apps. So, one of the exciting areas we’re going to see over the next year or two is the fusion of those AIs, whether that be through foundational models that can go across multiple domains or through data fusion to drive the models in the first place.

00:59:12

Being able to enable the more complex reasoning kinds of use cases with AI, that’s, I think, the next wave. And if you look at the hype curve, we get a lot of really cool stuff, and then we start to get into the adoption phase. That’s when we start looking practically: okay, well, how do I actually do my business correctly? Not how do I fund a whole bunch of cool AI projects. And it’s at that point we start thinking about how do I build the right AI to actually answer the question I’m trying to ask, from a business or from a government perspective, and really take those individual questions that we tried out in the lab and scale them to be real. So, in the application space, we’re just on the cusp of that next wave of moving from the lab, from the cool experiment, to something more practical and integrated on the AI side.

On the hardware side, I think there’s some really cool stuff coming out, whether it be the next generation of our AI accelerators and the kinds of things they can do at reduced SWaP (size, weight, and power). Right now, one of the most exciting things, and I’m getting mine in a couple of weeks, is the AI PC. And a lot of people say, well, what is an AI PC? Think about it: it’s the laptop you’ve always had, with a built-in AI accelerator. And so, it’s about more than just really cool collaboration tools. Imagine being able to build AI that you can deploy at scale at the edge, whether it be a laptop or an edge device, but have it natively built for that environment versus

01:00:41

trying to figure out how to take something big in the cloud and stuff it into a laptop.

We’re going to start to see this transformation of AI being built on a PC, for a PC. And those kinds of AIs, I think, will actually transform a lot of our everyday life. Because let’s face it, we spend a whole lot of time in front of our computers, and there are so many different things they could be doing better. And it’s not all going to be gaming, though I know the gaming’s going to be cool. It’s going to be a lot of things: localized cybersecurity, localized data and document management, travel scheduling. There are a lot of things that AI could really unlock when we get it running, really powerful, on our laptops, and PCs, and our edge devices. So, I think that’s a cool wave. And the hardware just came out this year. Like I said, I’m getting mine soon, and we’re all excited about putting on our developer hats and figuring out what cool things we can do today. It’s going to open up a whole new set of opportunities.
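
As a taste of what local deployment on such a machine can look like, here is a short sketch assuming Intel’s OpenVINO toolkit (pip install openvino) and a model already converted to OpenVINO IR format; "model.xml" and the input shape are placeholders, and the "NPU" device only shows up on hardware that actually has one.

```python
import numpy as np
import openvino as ov

core = ov.Core()
print("devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

# Prefer the built-in AI accelerator if present, fall back to CPU.
device = "NPU" if "NPU" in core.available_devices else "CPU"

model = core.read_model("model.xml")  # placeholder IR model
compiled = core.compile_model(model, device_name=device)

# Dummy input matching a hypothetical 1x3x224x224 image model.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
output = compiled(x)[compiled.output(0)]  # run inference locally
print("ran on", device, "- output shape:", output.shape)
```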

DD: So, that concludes today’s episode. Before we leave you, I just want to quickly mention our magazine, “The Data Scientist.” It’s packed full of insight into what many of the world’s large companies like Intel are doing in relation to data and AI. You can subscribe to the magazine for free at datasciencetalent.co.uk/media. And Steve, thank you so much for

01:02:00

joining us today. It was so good talking to you. It was a brilliant conversation. Thank you so much.

SO: Thank you, Damien, and thank you, Philipp. It was a really interactive and engaging conversation today.

DD: And thank you also to my cohost Philipp Diesinger, and of course to you for listening. Do check out our other episodes at datascienceconversations.com, and we’ll look forward to having you on the next show.