
In the Age of AI, Trust is Key: Dominique Shelton Leipzig (Transcript)

Read the full transcript of AI expert Dominique Shelton Leipzig’s conversation with TEDx Producer Mark Sylvester at the TEDxSantaBarbaraSalon 2024 conference.


TRANSCRIPT:

Introduction to AI and Cybersecurity

DOMINIQUE SHELTON LEIPZIG: I’m actually a cybersecurity and data privacy partner in a global law firm. I focus my work and research on data strategy and on helping companies, CEOs, and board members think through how to be amazing with their technologies, like AI.

MARK SYLVESTER: My first question kind of comes from, on one hand, AI has been around since the fifties. Right? The early fifties. On the other hand, people will say it’s really only been around for twenty-four months, since ChatGPT caught the world by storm and everybody started paying attention.

MARK SYLVESTER: How long have you been paying attention to… I guess, cyber for a while, but how about AI and cyber?

DOMINIQUE SHELTON LEIPZIG: Well, you know, it actually started in… and you’re correct that what’s new here is generative AI. Right. AI has been around for over fifty years, and we’ve dealt with it in terms of addressing privacy issues for a very long time. And then, if you look back at California, which is always on the cutting edge of many things, we had our California Chatbot Disclosure Act in 2019. That helps companies be transparent about their disclosures and lets people know when they’re interacting with an AI.

MARK SYLVESTER: Oh, so when I go on to the AAA website and there’s a little chat icon, chances are I’m dealing with a bot. And because I live in California, it’ll say you’re dealing with a bot in some clever way.

DOMINIQUE SHELTON LEIPZIG: Exactly. Precisely.

MARK SYLVESTER: Yeah. I actually prefer that. I get my work done a lot quicker when there’s a robot helping in that instance. I’m curious. What is the biggest threat? I’m going to stay away from cyber for just a second. But what’s the biggest governance issue, if you will, for big companies and AI?

AI Governance and Brand Reflection

DOMINIQUE SHELTON LEIPZIG: The biggest governance issue is making sure that AI reflects the brand, that it reflects everything the company wants it to, just like an employee.

MARK SYLVESTER: So one of the things that I get asked a lot, because I’ve integrated my digital assistant… I like Ethan Mollick’s definition. He calls it co-intelligence. So I like that. I’ve integrated it quite a bit. People have said to me, aren’t you afraid of all of your writing being absorbed by the LLMs, by the robots? I know that big companies have rules against bringing in ChatGPT, and for that exact reason. How are they working around that, or what’s the solution to that?

DOMINIQUE SHELTON LEIPZIG: Well, there have got to be policies and, really, it has to start from the top. Safe, responsible, and trustworthy AI is a tone that the CEO and the board can establish in an organization. Making sure employees know what to use, and what to place into AI, so they’re protecting things like strategic plans or proprietary information, is really important to raise that awareness.

MARK SYLVESTER: So what I’ve heard is that you can have your data, your own large language model. We’re all getting used to all these new words, aren’t we? You can have it inside your company. I wouldn’t… mine is too small, but I’m thinking of a large company. They can do that. So I know that, and I’m going to figure that IT professionals have figured out how to button that down so that it’s secure.

Here’s the question. These computers that run AI are upwards of a hundred million dollars. Right. How are big companies… I mean, that’s different than, you know, us deciding to switch to a certain vendor for our office tools. A hundred million dollars just for the iron. What are companies doing about that?

AI Implementation in Companies

DOMINIQUE SHELTON LEIPZIG: So there are… I want to talk about two different camps. There are the companies that are licensing the large language models to integrate into their businesses, and many of them are entering into pilot agreements first and then enterprise licenses that last over three years. And they’re creating their own applications sitting on top of the large language model. So think of, you know, a company X GPT that’s trained on the company’s own data, which is not shared with the large language model. The large language model underneath is like an ocean of data from the whole Internet, and the application draws up those insights.


So that’s how that’s happening. And the other camp? Where you’re creating the large language model itself: the tech companies that are part of the whole ecosystem, either creating the cloud services or the AI tool that has the algorithm, like an OpenAI. For those folks, those are significant capital expenditures that are necessary to be able to develop the product.

MARK SYLVESTER: So how worried are executives or how worried should we be? I almost want to have two camps. There’s corporate, and then there’s small business and entrepreneurs and that. How worried should we, I’ll use the royal we, be that all of this is in the cloud even if I have my own servers and my own protected things? So I have nothing on premise. Everything’s in the cloud. Should I be worried about security?

Security Concerns in the Cloud

DOMINIQUE SHELTON LEIPZIG: Well, you know, security is always a concern, Mark, and this is why it’s so important that the users of AI think about what they are putting into the chatbots and other services because no company, no business is immune from data breaches. Right. Especially when we have tensions all around the world in national security, etcetera.

MARK SYLVESTER: So let’s talk about data breaches. It feels like there are probably more than two, but I’ll… I’ll think simple: two. One is where they go in and they just take stuff. You know, credit cards and passwords and that kind of thing.