Read the full transcript of AI expert Dominique Shelton Leipzig’s conversation with TEDx Producer Mark Sylvester at the TEDxSantaBarbaraSalon 2024 conference.
TRANSCRIPT:
Introduction to AI and Cybersecurity
DOMINIQUE SHELTON LEIPZIG: I’m actually a cybersecurity and data privacy partner in a global law firm. I focus my work and research on data strategy, helping companies, CEOs, and board members think through how to be amazing with technologies like AI.
MARK SYLVESTER: My first question kind of comes from this: on one hand, AI has been around since the fifties, right? The early fifties. On the other hand, people will say it’s really only been around for twenty-four months, since ChatGPT caught the world by storm and everybody started paying attention. How long have you been paying attention? Cyber for a while, I guess, but how about AI and cyber?
DOMINIQUE SHELTON LEIPZIG: Well, you know, it actually started much earlier, and you’re correct that what’s new here is generative AI. AI has been around for over fifty years, and we’ve dealt with it in terms of addressing privacy issues for a very long time. And if you look at California, which is always on the cutting edge of many things, we had our California Chatbot Disclosure Act in 2019, helping companies be transparent in their disclosures and letting people know that they’re interacting with an AI.
MARK SYLVESTER: Oh, so when I go on to the AAA website and there’s a little chat icon, chances are I’m dealing with a bot. And because I live in California, it’ll say you’re dealing with a bot in some clever way.
DOMINIQUE SHELTON LEIPZIG: Exactly. Precisely.
MARK SYLVESTER: Yeah. I actually prefer that. I get my work done a lot quicker when there’s a robot helping in that instance.
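To make the disclosure requirement concrete, here is a minimal sketch in Python of a chat session that identifies the bot up front. The greeting text and the start_chat_session helper are illustrative assumptions, not the statute’s required wording.

```python
# A minimal sketch of the bot-disclosure pattern discussed above: the
# assistant identifies itself as automated before the conversation starts.
# The exact wording is illustrative, not the statute's required text.

DISCLOSURE = ("Hi! I'm an automated assistant, not a human agent. "
              "Type 'agent' at any time to reach a person.")

def start_chat_session() -> list[str]:
    """Open every customer chat with the bot disclosure as the first message."""
    return [f"BOT: {DISCLOSURE}"]

if __name__ == "__main__":
    for line in start_chat_session():
        print(line)
```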
AI Governance and Brand Reflection
DOMINIQUE SHELTON LEIPZIG: The biggest governance issue is making sure that AI reflects the brand, that it reflects everything the company wants it to reflect, just like an employee would.
MARK SYLVESTER: So one of the things that I get asked a lot, because I’ve integrated my digital assistant quite a bit. I like Ethan Mollick’s definition; he calls it co-intelligence. People have said to me, aren’t you afraid of all of your writing being absorbed by the LLMs, by the robots? I know that big companies have rules against bringing in ChatGPT for that exact reason. How are they working around that, or what’s the solution?
DOMINIQUE SHELTON LEIPZIG: Well, there have to be policies, and really, it has to start from the top. Safe, responsible, and trustworthy AI is a tone that the CEO and the board can establish in an organization. Making sure employees know what to use, and what not to place into AI, so they’re protecting things like strategic plans or proprietary information, is really important for raising that awareness.
MARK SYLVESTER: So what I’ve heard is that you can have your data in your own large language model. We’re all getting used to all these new words, aren’t we? You can have it inside your company. Mine is too small, but I’m thinking of a large company; they can do that. And I’ll assume that IT professionals have figured out how to button that down so it’s secure.
Here’s the question. These computers that run AI are upwards of a hundred million dollars, right? That’s different from, you know, us deciding to switch vendors on our office tools. A hundred million dollars just for the iron. What are companies doing about that?
AI Implementation in Companies
DOMINIQUE SHELTON LEIPZIG: I want to talk about two different camps. There are the companies that are licensing the large language models to integrate into their businesses; many of them are entering into pilot agreements first and then enterprise licenses that last over three years. They’re creating their own applications sitting on top of the large language model. So think of, you know, Company X GPT, trained on the company’s own data, which is not shared with the large language model. The large language model underneath is like an ocean of data from the whole Internet, and the application sucks up those insights.
So that’s one camp. The other is where you’re creating the large language model itself: the tech companies that are part of the whole ecosystem, either creating the cloud services or the AI tool that has the algorithm, like an OpenAI. For those folks, significant capital expenditures are necessary to develop the product.
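To make the “Company X GPT” pattern concrete, here is a minimal sketch assuming a toy in-memory document store and a hypothetical call_llm stand-in for a licensed model’s API. It shows company data being injected into the prompt at query time rather than into the model’s training data.

```python
# A minimal sketch of an application layer sitting on top of a licensed
# LLM: it retrieves the company's own documents and adds them to the
# prompt at query time, so proprietary data stays out of the base model's
# training set. The document store and `call_llm` are hypothetical
# stand-ins for a real retrieval index and a vendor API.

INTERNAL_DOCS = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: standard orders arrive in 3 to 5 business days.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Toy retrieval: rank internal docs by word overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        INTERNAL_DOCS,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this to the licensed model's API."""
    raise NotImplementedError("connect to your vendor's completion endpoint")

def answer(question: str) -> str:
    """Build a prompt that carries company context alongside the question."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the company context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The point of the design is the data boundary: the licensed model sees the retrieved passages only inside a prompt, never as training data.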
MARK SYLVESTER: So how worried are executives, or how worried should we be? I almost want to split this into two camps: there’s corporate, and then there’s small business and entrepreneurs. How worried should we, and I’ll use the royal we, be that all of this is in the cloud? Say I have nothing on premise and everything’s in the cloud, even my own servers and my own protected things. Should I be worried about security?
Security Concerns in the Cloud
DOMINIQUE SHELTON LEIPZIG: Well, you know, security is always a concern, Mark, and this is why it’s so important that users of AI think about what they are putting into chatbots and other services, because no company, no business is immune from data breaches. Especially when we have tensions all around the world, national security concerns, etcetera.
MARK SYLVESTER: So let’s talk about data breaches. It feels like there are probably more than two kinds, but I think in simple twos. One is where they go in and they just take stuff, you know, credit cards and passwords and that kind of thing. You hear about those; you’re hearing about them less, but we still hear about them. How? I mean, I know some IT guys, and they’re very, very smart. How is it that those things are still happening? How is it that big tech hasn’t figured out how to put an iron door on data?
DOMINIQUE SHELTON LEIPZIG: So it’s not as simple as an iron door. What it involves, and I talk about this in my book, “Trust: Responsible AI, Innovation, Privacy, and Data Leadership,” is governance. Okay? The technology is, sort of, neutral. What it requires of all of us, our CEOs, our boards, the rank and file doing the work, is to be digitally savvy and aware of the risks that might arrive. Paying attention to, you know, not clicking on malicious links, not going to URLs or clicking on links when you don’t know the sender of the email. Those are simple things, but lapses in just that basic cybersecurity hygiene have resulted in huge losses for companies.
MARK SYLVESTER: Then the other one that is very troubling is the ransomware attack. For someone who doesn’t know what that is, why don’t you explain it?
Ransomware Attacks
DOMINIQUE SHELTON LEIPZIG: Yes. So ransomware attacks are becoming much more prevalent, and they continue to be a source of loss for companies. What it involves is a business owner waking up in the morning and, usually, getting a report from an employee that they are frozen out of their systems. They cannot access important information; all or part of their operating systems are immobilized. And instead of getting the entry screen to put in their passwords and get into the portals for work, they’re seeing, you know, sometimes a skull and crossbones, and instructions to reach out to a chatbot, obviously to negotiate a ransom to get access to their data. So this is a terribly disruptive practice, put together by criminals, and it is having a tremendous impact, especially on health care, financial services, and our critical infrastructure.
MARK SYLVESTER: Well, that’s the concerning part: health care, when we can’t run our ICUs, we can’t run the emergency rooms, we can’t dispatch; and then our financial systems. The fact that they can get at that, it almost… not almost. It feels like it’s more than a business problem. It feels like a national security problem.
DOMINIQUE SHELTON LEIPZIG: You’re absolutely right, Mark. Senator Schumer’s report on AI just came out today, and it touches exactly on this issue: national security, cybersecurity, privacy, all of these things are interwoven, and the governance necessary to operate generative AI is all the more acute because of the risks that are there. In addition to the cyber attacks and the disruptions, our American companies are often targeted, especially when there are tensions around the world, in ways that impact every aspect of our infrastructure. So yes.
MARK SYLVESTER: I’m guessing, now that I think of it, that there’s a whole class of industries that don’t report ransomware. I’m thinking of the military-industrial complex, you know? Lockheed and Boeing and those kinds of companies. We don’t hear about that, and it’s extremely troubling to me as a citizen that it happens. So, okay, Senator Schumer puts out a report. What kind of action can the government take? Because some people will say, well, I understand governance, but criminals couldn’t care less about our governance. So how does that help us?
Government Action and Governance
DOMINIQUE SHELTON LEIPZIG: Well, what governance does is make the enterprises and organizations that might be targeted more resilient. In terms of generative AI, for example, governance means risk-ranking the AI, treating low risk differently than high risk, and continuously, every second of every minute of every day, monitoring and testing output to make sure another risk isn’t materializing, say, that the AI is wrong. Okay? That’s very important. These are the sorts of things that take a little bit of attention, are not that expensive, but make a big difference in outcomes. And I’ll just leave you with this one last point. Data breaches cost our global economy $6.1 trillion, with a T, last year. We’re expecting that generative AI is going to bring $7 trillion, with a T, to our global economy. You can see how quickly these things can cancel each other out if we don’t pay attention, on all fronts. Which is why we’re having this conversation.
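A minimal sketch of the risk-ranking idea follows, with tier assignments and review policies that are illustrative assumptions rather than any regulatory standard: low-risk uses get lighter review, while high-risk uses get continuous checks and human sign-off.

```python
# A minimal sketch of risk-ranking AI use cases, as described above.
# The tiers and policies are illustrative assumptions, not a standard.

RISK_TIER = {
    "internal-drafting-assistant": "low",
    "customer-service-chatbot": "medium",
    "credit-decisioning-model": "high",  # decisions about people
}

REVIEW_POLICY = {
    "low": "periodic spot checks of sampled output",
    "medium": "automated checks on every response, daily review of flags",
    "high": "automated checks on every response plus human sign-off "
            "before any decision takes effect",
}

def policy_for(use_case: str) -> str:
    """Look up the monitoring policy that a use case's risk tier requires."""
    tier = RISK_TIER.get(use_case, "high")  # unknown systems default to strict
    return f"{use_case} ({tier} risk): {REVIEW_POLICY[tier]}"

if __name__ == "__main__":
    for case in RISK_TIER:
        print(policy_for(case))
```

The specific tiers will differ by company; what matters is that the strictness of monitoring is decided by risk, and anything unclassified defaults to the strictest tier.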
MARK SYLVESTER: I think I asked you what’s the idea the world needs to hear, and why do they need to hear it now? And clearly, you’re the person to help us understand this. I’ve been in software my whole life, and I’m thinking not of the developer side but of the IT side, the people whose job is keeping the trains running on time. What have you seen in the universities that are training our up-and-coming IT people? Are they able to keep up with these risks that are facing us, whether they go to work, you know, with the NSA or in private industry? I mean, are we keeping up?
Education and Model Drift
DOMINIQUE SHELTON LEIPZIG: Well, it’s exciting because, you know, I founded a digital trust summit at Brown University, and recently large language model experts and generative AI professors who have advised the White House and so forth were explaining to the students that AI governance is necessary. And a lot of what they are calling for addresses something that’s very important, which is model drift.
MARK SYLVESTER: Oh. I like new words, so I love that. What’s model drift?
DOMINIQUE SHELTON LEIPZIG: This is a phenomenon that is true of every generative AI model: it will drift, because it’s powered by the whole wide Internet, and we are generating 2.5 quintillion bytes of data per day. All of that is flowing in and adding to the Internet. So when we build applications on top of the generative AI models licensed from the tech companies, testing once a quarter is not enough, and testing once a month is not enough, because all of this roiling data is coming in with insights that will cause the model to drift. Recently, the CEO of a company woke up in January with the name of the company trending on X, formerly Twitter, because of a chatbot that had been working perfectly, helping thousands of customers 24/7 to know when their packages were going to arrive and so forth. Suddenly, a customer asked a very simple question: when can I expect my package?
And instead of getting the normal answer, it’s on this date, etcetera, the chatbot came out with an expletive-laden screed against the customer, then went on to criticize the company and blame it for letting go its real customer service representatives and leaving customers with this useless chatbot. That’s why this is so important. The chatbot was not cursing at customers when it started out; otherwise, it wouldn’t have been deployed. The issue was that there was no continuous testing, monitoring, and auditing of output, and no actual guardrail defining what an accurate customer service interaction needs to look like for that company.
And that’s worth it. When I talk about governance, that’s the type of thing that needs to be inserted into the AI, in code, so that the company can be alerted the very first time a chatbot starts, you know, cursing at its customers, for example.
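Here is a minimal sketch of that kind of output guardrail, assuming a toy pattern list and a hypothetical alert_company hook: every model response is checked before it reaches the customer, and the first violation both blocks the reply and raises an alert.

```python
import re

# A minimal sketch of the guardrail described above: every chatbot
# response is checked against an output policy before it reaches the
# customer, and the first violation both blocks the reply and alerts
# the company. The patterns and alert hook are illustrative assumptions.

BLOCKED_PATTERNS = [
    re.compile(r"\b(damn|hell)\b", re.IGNORECASE),           # toy profanity list
    re.compile(r"useless (chatbot|company)", re.IGNORECASE), # self-disparagement
]

FALLBACK = "Sorry, I can't help with that. A human agent will follow up shortly."

def alert_company(raw_output: str) -> None:
    """Hypothetical hook: notify the governance or on-call team immediately."""
    print(f"GUARDRAIL ALERT: blocked chatbot output: {raw_output!r}")

def guarded_reply(raw_model_output: str) -> str:
    """Release the model's reply only if it passes the output policy."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(raw_model_output):
            alert_company(raw_model_output)
            return FALLBACK
    return raw_model_output

if __name__ == "__main__":
    print(guarded_reply("Your package arrives Tuesday."))           # passes
    print(guarded_reply("This useless company fired its agents."))  # blocked
```

In production the check is usually a trained classifier rather than a word list, but the control point is the same: nothing the model generates reaches a customer unchecked, and the first anomaly alerts a human.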
MARK SYLVESTER: So the robots need to monitor the robots?
DOMINIQUE SHELTON LEIPZIG: I’m afraid so. Yes.
MARK SYLVESTER: So it’s like when you call and get a human and they say this call is being monitored, whatever their legal words are. It’s almost the same thing with the chatbots, for the exact same reason, in case one goes off the rails. Now, in that particular example, and I have a couple of minutes here, did somebody insert something to make it do that, or did it do that on its own?
DOMINIQUE SHELTON LEIPZIG: This is the way that AI works. It did it on its own, and then later, you know, it was easily prompted to write a haiku about how terrible it thought the company was, etcetera. It went on for so long that the customer was able to record it happening on his iPhone. And the first thing that the customer did, and I do believe in people here; I do think that people trust, or want to trust, companies with this technology. The first thing he did was write to the company to say, look, this is what I received, and I want to make sure others aren’t impacted.
But nobody responded for forty-eight hours. So in frustration, he posted it on X. What we want to do with governance is make sure that AI can be trustworthy to our customers and our business partners by taking the steps necessary to implement common-sense governance. Let me put it a different way, Mark. We do quality control and product testing on pretty much every major product of any company; they’re not going to let it out the door without that control. We’ve got to do that with AI, so that humans are controlling AI, not the other way around.
