The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego, founded by Leonard Lee and focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
The Beta AI Mentality (with Debbie Reynolds)
Debbie Reynolds of Debbie Reynolds Consulting LLC joins Leonard Lee of neXt Curve on the reThink Podcast to talk about the Beta AI mentality that is ruling the day as "AI companies" scramble to push experimental and beta versions of their AI products and services out the door. We are also seeing a scramble to drive adoption of AI by opting users into these features by default, with assumed consent or in the absence of it.
Leonard and The Data Diva parse through highlights of recent Beta AI deployments that consumers and organizations should be aware of:
- Apple's recall of Apple Intelligence News Summary (2:04)
- Apple Intelligence default opt-in problem (9:06)
- LinkedIn AI training on your engagement and content (12:50)
- Meta's new AI-driven personalization across Facebook and Instagram (18:40)
- The risks and impact of the DeepSeek model versus its AI service (27:47)
- The Chain of Thought (CoT) hack (36:40)
- OpenAI's agentic AI and its potential risks (39:23)
Connect with Debbie Reynolds at www.debbiereynoldsconsulting.com
Hit both Debbie and Leonard up on LinkedIn and take part in their industry and tech insights.
Please subscribe to our podcast, which is featured on the neXt Curve YouTube channel. Check out the audio version on Buzzsprout - https://nextcurvepodcast.buzzsprout.com - or find us on your favorite podcast platform.
Also, subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter.
Leonard Lee:Hey everyone. Welcome back to neXt Curve's reThink webcast series on security and trust, where we talk about the hot topics in the world of privacy and trust that matter. And today we're going to be talking about a lot of stuff. We have a lot on our plate. I mean, it's just the beginning of the year, and we're out of the gates with, I would have to say, just pure insanity. There's just so much to cover. But of course, as always, I'm joined by Debbie Reynolds, the Data Diva of Debbie Reynolds Consulting LLC. How are you, Data Diva?
Debbie Reynolds:I'm well, thank you, Leonard. I'm happy to be here. This is so much fun.
Leonard Lee:Yeah, and I love your scarf as always. It's just amazing, the many varieties that you have, and they just make you look fantastic.
Debbie Reynolds:This one is from Spain.
Leonard Lee:Really? Oh, wow. So this is like an international, a global collection.
Debbie Reynolds:It is. It is. Yeah, I have a thing. Yep. Yep.
Leonard Lee:So cool. Yeah, so, we have a lot of things that we're going to cover. Before we start here though, remember everyone to like, share, and comment on this episode, and remember to subscribe to neXt Curve's reThink podcast at www.next-curve.com as well as on YouTube. We're also on Buzzsprout, so you can subscribe there. Just look us up, neXt Curve reThink, then subscribe and listen, and you'll get your constant diet of the tech and industry insights that matter. And so with that, Debbie, let's get started. A lot of crazy stuff. The first thing I want to talk about is Apple, because for the longest time, I think we've supported them in their mandate and their mission to put privacy first, right? Trust and security. And we would argue that they've been the gold standard in the industry in terms of driving that mission for their customers and bringing that value through their products, their solutions, and their services. And yes, they've been challenged in that mission, but I think there's always been a sense of merit in terms of how they executed. Although now I'm starting to see some, let's call them, issues with the sustainment of that mission. In particular, we saw Apple sidelining its Apple Intelligence news summary feature because the BBC complained that the summarization feature was producing inaccurate summaries. So, what do you think?
Debbie Reynolds:Yeah, well, unlike here, in Europe they have laws about the accuracy of news.
Yeah.
Debbie Reynolds:So for them it is a big deal. And we know that these things can make mistakes when there isn't someone very vigilant watching them. We know that they can go off the rails and do different things. So I think pulling back the feature makes a lot of sense, maybe re-figuring how you do that. And you know this, you work with summarization tools, and they can be wrong in bad ways, especially in news, right? Where this is something people are really relying on for accuracy. So to me, this is just an example of why you either need to use different use cases for this stuff or really have more humans involved in that process,
Leonard Lee:right? Or maybe even not using it at all. Quite honestly, I thought that Apple would have better judgment and be a bit more responsible. In fact, I think this is a perfect example of irresponsible AI, where you're putting not only the user but society at risk of bad or inaccurate information for the sake of putting out a product. And quite frankly, I'm pretty disappointed by this. When we think about other features that they're introducing in their operating systems across the board, things like notification summaries and email summaries, I'll have to be very honest here: Apple Intelligence doesn't do things better than other, quote-unquote, tech companies out there that put out similar types of features. And I think this is where these companies really need to tread very carefully. Quite frankly, I thought Apple knew better, but apparently they don't. And maybe they should check themselves before they wreck themselves. Right?
Debbie Reynolds:Well, yeah, I agree. My thought about the way that Apple is trying to do Apple Intelligence is that they wanted it to be sort of a safe space for people who weren't comfortable going out into the wild and playing with all these other tools. For that, I think it's fine. But for those of us who are maybe looking for something a bit different, more robust, with more flexibility, it may not cut it. But maybe that wasn't the goal in the first place. I thought it was just kind of, let's socialize AI for people so that they don't feel like they're missing out, even though it may not be the best there is out there.
Leonard Lee:Yeah, and I think it does highlight a problem, as a lot of the conversation around consumer AI has gravitated, or is being pivoted, toward agentic AI while we have these types of reliability issues, right? These quality issues. These agentic AI frameworks that propose to personalize things for you, how much utility will they actually provide? And I think, especially when we start to get into agentic automation, these guys have to be really careful. Definitely you and I are going to keep a close eye on this dynamic, this new pivot, and how it's playing out. But I think we're seeing early evidence of how these on-device models are actually quite limited, and you really do have to be very careful. I would say the feature that has proven somewhat useful, even though I don't use it very often, is Image Playground, where I put together a Christmas card last year. That came out okay. What you're doing is prompting, feeding that context window, or at least it's feeding itself, photos of your family members and yourself, and creating cartoonish images that you can then use with tools like Adobe Photoshop or, what have you, PowerPoint to put together some content. That's kind of safe and harmless. I thought that's where Apple was going, but I think they might want to tread cautiously because they had to back this stuff out. So,
Debbie Reynolds:I think that's true. One of the things I don't like about the way that AI works, especially on devices, and
mm-hmm
Debbie Reynolds:personalization, right? The way the algorithm works, it infers things about you and assumes certain things that you would want or you would like, right? So an example, even though this wasn't Apple Intelligence: the feature that I absolutely hated on Apple, and I love almost all the features, was Memories, where they take random things from your photo album and say, hey, remember this, remember when your dad had cancer and you took a video of him in the hospital. It's like, you don't know what I care about or what I want to see, right? So I think if they're going this route, definitely tread lightly, because you may make people very upset about the things that you're assuming or inferring or trying to lead them towards while trying to be helpful.
Leonard Lee:Right. Yeah, maybe trigger a bad day, but yeah, totally something that you don't want to dwell on that morning. Right?
Debbie Reynolds:Yeah. It's like, yeah, your husband cheated on you and you looked at the text 10 times. Like, I don't really want to see that, you know? So yeah,
Leonard Lee:Well, okay, sticking with Apple. Here's another thing that I saw that didn't exactly make me happy. Apparently in iOS, and I assume the other device OSes, whether it's macOS or iPadOS, at least with the iOS 18.3 release, a lot of the AI functions are going to be opted in by default. So it's all turned on, and then you have to proactively go in and turn that stuff off. I thought Apple might learn from the Microsoft Recall incident. And I think this is like a Data Diva no-no, right? Can I qualify it as such? Right. This is not what we do.
Debbie Reynolds:I was disappointed in this as well. I have been an Apple lover since the first Mac came out, and obviously I take a critical eye at this, and I love most of the stuff that they do. I was very disappointed in this because this is the same company that, when they did App Tracking Transparency, opted everybody out of advertising and then we had to opt in, and we loved that, right? So obviously they know how to do this, but they chose not to do it here. And not everybody wants to use AI; not everybody feels like that's important, right? I probably don't use a fraction of the features that I have on my phone now, because I'm just a simple person, right? Maybe I don't need all the bells and whistles, and most people who are using their phones probably aren't using them to that extent anyway. To create something that makes them have to stop what they're doing and dig down into the settings and turn things off, especially this thing they did recently with Siri, where they had opted people in for Siri to share certain information from apps that you have on your phone, and you have to actually go into each app and disable it. I'm like, I know that they know better than to do this. So yeah, it's not customer-first, it's not privacy-first. My thing is that it wasn't really communicated well. What was communicated was, here are all these cool ways that we're going to protect your data and secure it so it's private. And it's like, but that's not the point. The point is I want control over my choices and not to have you opt me into things that I don't want. I can opt in myself.
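What Debbie is describing, consent off by default with the user flipping each switch, is easy to state in code. Here is a minimal sketch of privacy-by-default feature flags; all names are hypothetical and not drawn from any actual Apple or LinkedIn API:

```python
# Minimal sketch: AI features ship disabled, and only an explicit,
# recorded user action can enable them. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIFeatureSettings:
    notification_summaries: bool = False   # privacy by default: everything off
    ai_training_on_content: bool = False
    cross_app_personalization: bool = False
    consent_log: list = field(default_factory=list)

    def opt_in(self, feature: str):
        # Consent must be explicit, per feature, and auditable.
        setattr(self, feature, True)
        self.consent_log.append((feature, datetime.now(timezone.utc)))

settings = AIFeatureSettings()             # a new user is opted out of everything
settings.opt_in("notification_summaries")  # and opts in one feature at a time
```

The App Tracking Transparency rollout Debbie cites followed this shape: off until the user explicitly says yes.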
Leonard Lee:I think we're in total agreement here. And of course, Apple, if you would like to come on and explain yourself at any time, we're more than happy to speak with you. Because I think these are the things that can really damage a brand, consumer faith and trust in a brand. And once you've compromised that trust, it's really difficult to earn back. I have to be honest, I'm really starting to scratch my head about Apple, and that's not a good thing for Apple. I can always switch to something else, right? Hopefully the folks at Apple think a little bit more intensely about what they're doing here and subscribe to our recommendations on what privacy-first strategies are and how they can benefit your brand. So why don't we move on to LinkedIn. I know that this is a particular moment, if I can put it that way, that sort of triggered you, if not very much triggered you: that interesting notice that we got about the AI training on our information and engagement on LinkedIn. And of course, LinkedIn is owned by Microsoft. So, what's the deal, Data Diva?
Debbie Reynolds:Very concerning. Also, this is another one that people are super upset about, because someone just going through the settings found they were opted in to this AI thing that you have to opt out of. And the concerning thing is, especially in the U.S., if they opted us in to this thing, that means, let's say you were on LinkedIn for 20 years, they took 20 years of your stuff and put it into AI. And then in the U.S., depending on what state you're in, if you opt out, they don't necessarily have to erase any of that data that they've already gotten, right?
Yeah, they
Debbie Reynolds:may only have to opt you out. They may only have to say, well, going forward we won't do this, or we'll delete some of your data up to a certain point. But really, there's actually a lawsuit that was filed in California about this particular thing, and part of that is trying to figure out why they did it and what they're doing with that data. If it doesn't settle, I think we're going to get a lot more details about what's happening, or what has happened, with people's data. So people are very unhappy about that, as was I. It was funny, because when the person sent me the message, I'm like, I have recently gone through the settings, as I do periodically, and I noticed there were a lot more choices than there were before, and I was like, what's all this stuff? And this is one that I didn't see, and I see almost all this stuff. So if someone like me, who follows this very closely, can miss something like that, what would a regular user, a regular person who doesn't have the time or the headspace to do this, do?
Leonard Lee:Yeah, and again, I think we run into this problem where they opt you in by default, and you're participating without consent, with assumed consent on the part of the platform or service provider. And I think this is really where we start to have issues, also from a brand perspective. Again, we're just really surprised that Apple is adopting this approach too. Not providing consent compromises trust. If you're going to be using, or intending to use, our information and our engagement on a platform to train your AI or for any other purpose, consent, I mean, that's common courtesy, right? It's not like you're paying us to train your products and build your products, right? We should be compensated. I think that's a mindset that consumers should more broadly and increasingly have, and I think they will eventually. That movement is growing, because we have privacy advocates, like you and I, who are making people aware of what's going on, the nature of privacy and how it's being compromised in many regards for corporate purposes. But that consent mechanism has to be clear, has to be apparent, and should be accessible, and sometimes it's not, right? So.
Debbie Reynolds:I guess I'll go broader. I'm going to say agency, right? Control is what we want, right? Use whatever vernacular you want to express what that means,
Yeah,
Debbie Reynolds:but more control over how our data is handled. And so maybe I decide, okay, I'm fine with that, I'll opt in or whatever. But the farther these companies get away from being able to articulate how something benefits me as a user, the more suspicious I am of them doing stuff like this. Because, like, I don't care about any of that. I don't want any of that. So why do you opt me into that? How does that benefit me? It doesn't.
Leonard Lee:Right, and it also highlights the challenges with these models regarding the right to be forgotten, right? That becomes a really significant issue, especially as we look at these privacy regulations probably getting more entrenched in the future. And so I think these could very easily become headwinds for a lot of these, quote-unquote, tech companies that are trying to cater to global markets.
Debbie Reynolds:Yeah, well, the right to be forgotten is different than deletion. The right to be forgotten has a broader scope, and we don't have that in the U.S. We don't have a right to be forgotten, and even the deletion rights that are articulated in our laws, depending on what state you live in, may only cover stuff from maybe a year, maybe two years back. Beyond that, let's say you've been a customer of AT&T for 20 years, right? You can't say, forget me, for those 20 years. They'll forget you for two years, somewhat. They'll delete some of your stuff, but not all of it. That's the reason this LinkedIn lawsuit is going to be very interesting: if it goes forward, they'll have to really explain what deletion means,
Yeah.
Debbie Reynolds:what opting out means, how long they keep stuff, what they do with the stuff that they took, and what they took. You know, that's what everybody's concerned about.
Leonard Lee:Yeah, exactly. So let's move on to Meta. I guess the big news item has been that Meta AI can now use your Facebook and Instagram data to personalize its responses. What are your thoughts there?
Debbie Reynolds:Yeah. When you think of personalization, that should indicate to you that there's some privacy issue in there somewhere, right? Personalization just means, we're going to take more of what we know about you, and then we're going to try to use it in a way that gives us the right to have it, but then give you something in return, whatever that is. But as usual, a lot of these data exchanges are very asymmetrical, right? So it's like, okay, well, you like this color lipstick, or you like to eat at this restaurant, but then that data is packaged up and sold to someone else a thousand times over. So all you got was a recommendation, and what they got was more data to sell about you. It's always concerning. To me, the value exchange needs to be a little bit more, I don't think it'll ever be even, but it needs to be more compelling. I don't see a compelling argument for doing this personalization, especially if people don't want it. It looks like they're going to just turn it on and say, hey, I'm personalizing all this stuff, especially with a company like Meta, where they have different properties. People interact with these different apps because they want them for different reasons. Let's say someone has Facebook and WhatsApp. They probably don't want anything they're doing on WhatsApp to ever go over into Facebook. You know what I'm saying?
Yeah
Debbie Reynolds:That needs to be a consideration as well.
Leonard Lee:Well, yeah, and I think there is a question of how they designed their personalization engine around these large language models, these personalization AI models, right? I think that's a really big question, and how does it secure your privacy? How does it isolate your, I don't know if you call it data, it's embeddings, basically. How does it protect those embeddings? Because that's private information if these models know about you. If there hasn't been enough isolation of the data and the embeddings to institute privacy for those insights, those preferences, I don't know. I don't know how these things work. I don't know if they've published it, but I think they need to, and I haven't seen anything. So anyone from Meta, if you're willing to share, I'm happy to bring you on board to tell us how you are protecting people's privacy and instituting privacy-first principles in the implementation of your products and services. Me, I would turn this stuff off. I would use things that are on device before I would turn any of this stuff on. And I think there are architectures that would allow you to get personalization benefits without necessarily having to give up, you know, having this memory, if you will. So there's this lingering question of, well, how is it protected, right?
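To make the isolation question concrete: one plausible shape, a sketch only, not anything Meta has published, is to tag every stored embedding with the user who produced it and filter on that tag before any retrieval reaches the model.

```python
# Toy per-user embedding store: retrieval is scoped to one user's vectors,
# so one person's preferences can never surface in another's results.
# The store and its API are hypothetical, not any vendor's implementation.
import numpy as np

class ScopedEmbeddingStore:
    def __init__(self):
        self.rows = []  # (user_id, unit vector, text)

    def add(self, user_id: str, vector: np.ndarray, text: str):
        self.rows.append((user_id, vector / np.linalg.norm(vector), text))

    def search(self, user_id: str, query: np.ndarray, k: int = 3):
        q = query / np.linalg.norm(query)
        # Hard isolation boundary: only this user's rows are even scored.
        mine = [(float(v @ q), t) for uid, v, t in self.rows if uid == user_id]
        return sorted(mine, reverse=True)[:k]

store = ScopedEmbeddingStore()
store.add("debbie", np.random.rand(8), "prefers window seats")
store.add("leonard", np.random.rand(8), "prefers aisle seats")
print(store.search("debbie", np.random.rand(8)))  # never sees Leonard's rows
```

The design choice is that the user filter runs before similarity scoring, so cross-user leakage is ruled out structurally rather than by the model's good behavior.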
Debbie Reynolds:Well, I'll give you an example. Something happened to me this week, and this is a Gemini thing. I was invited to a meeting, right? I was not the host of the meeting; I'm just an attendee. And then this Gemini thing popped up, I guess it was on their system, so I pressed the button, and it says, oh yeah, I'm recording, for the transcript. So I guess they were like, oh well, Gemini is going to do this transcript and this summary or whatever. And the way the meeting went, I did my speaking and then I left the meeting. Later, I get a transcript of the meeting, and it has a summary of what I said, but then it has stuff that happened in the meeting when I was not there.
Leonard Lee:Oh, geez.
Debbie Reynolds:And I thought, who did this? Like, this is so bad. A lot of people in corporate America do meetings like that, where someone drops off and then maybe they talk about something else. Now you've created this huge loophole. And that's what concerns me when people say they want to personalize or whatever: they're not really thinking about who should know what. It's almost like spycraft, right? Need to know and all that other type of stuff. Like, I
Yeah.
Debbie Reynolds:don't see that being folded into a lot of these innovations. They're just like, oh, this is this cool thing, and we should share everything. It's like, no, that's not okay at all.
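The loophole Debbie hit has an imaginable mechanical fix: track when each attendee was present, and trim the transcript to that window before generating their copy of the summary. A toy sketch; the presence model and names are hypothetical, not how Gemini actually behaves:

```python
# Toy presence-aware transcript trimming: each recipient's summary input
# includes only what was said while they were in the meeting.
# Data model and names are hypothetical.
segments = [
    (0, "debbie", "Here are my thoughts on the privacy review..."),
    (5, "host", "Thanks, Debbie!"),
    (9, "host", "Now that she's gone, about the budget..."),  # after she left
]
presence = {"debbie": (0, 8), "host": (0, 30)}  # (joined_min, left_min)

def visible_segments(recipient: str):
    joined, left = presence[recipient]
    return [(t, who, text) for t, who, text in segments if joined <= t <= left]

# Debbie's summary is built only from minutes 0-8; the budget talk is excluded.
for t, who, text in visible_segments("debbie"):
    print(f"[{t:02d}m] {who}: {text}")
```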
Leonard Lee:Security is a really, really challenging issue with these things. And what I mean is things like RBAC, being able to control access all the way down to the embedding level, right? Whether it's RAG'd or it's a standalone model that you're interfacing with, these things are very, very difficult to do. And so architecture, how you designed it, where data is stored, ingested, and embedded in all these different possible elements of a, quote-unquote, AI platform application, these are things that really need to be asked about. I don't hear anyone asking these questions. They just assume that these things work. If you're an enterprise out there and you're a CIO, make sure you ask these questions, and make sure you are very clear on what your company is using in terms of external services, what you're actually bringing into the enterprise, and what enterprise-grade security and confidentiality protection looks like. Because I'm telling you right now, the fact that these things are not apparent and clear at the moment is one of the things that is really stymieing enterprise adoption of generative AI tools and applications.
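For a picture of what access control "down to the embedding level" in a RAG pipeline could look like, here is a minimal sketch of security-trimmed retrieval: every chunk carries an access-control list, and the retriever discards anything the requesting user cannot see before it reaches the model's context. The roles, chunks, and scores are illustrative, not any vendor's product:

```python
# Toy security-trimmed retrieval for a RAG pipeline: the ACL check happens
# at retrieval time, before any chunk can enter the model's context window.
chunks = [
    {"text": "Q3 layoffs plan", "acl": {"exec"}, "score": 0.92},
    {"text": "Cafeteria menu", "acl": {"exec", "staff"}, "score": 0.88},
    {"text": "M&A target list", "acl": {"exec"}, "score": 0.75},
]

def retrieve(user_roles: set, k: int = 2):
    # Trim first, rank second: an unauthorized chunk is never a candidate,
    # so the model cannot "know" it, paraphrase it, or leak it in a refusal.
    allowed = [c for c in chunks if c["acl"] & user_roles]
    return sorted(allowed, key=lambda c: c["score"], reverse=True)[:k]

print(retrieve({"staff"}))  # staff sees only the cafeteria menu
print(retrieve({"exec"}))   # execs see the top-scoring sensitive chunks
```

The hard part Leonard points to is that this only covers retrieval; anything already baked into model weights has no comparable trim.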
Debbie Reynolds:And not everyone needs to see everything, right? So there's a problem there. This actually came up with Recall, where they were like, well, let's just index everything on the computer regardless of the user, right? You're going to take all the different user profiles and scramble everything together, so whoever has access can see it all. But not everyone who's using that computer should see everything everybody else can see. So to me, this is a theme that I'm seeing: you're basically creating an unauthorized access issue that could be a harmful safety issue, like in a personal situation, or it could be a confidentiality issue
Leonard Lee:in a corporate setting. Right. Exactly.
Debbie Reynolds:I remember when companies were trying to do enterprise search, and it was bonkers. It was totally crazy because it was so bad. It's like, how did that get in there? Because you didn't secure your data right, that's how. So we still have those problems, and what we're doing is bringing in more complexity and more technology that's surfacing things. Maybe things that were hiding in plain sight no longer are, but we don't really have the security around that, or the barriers that need to be there, for people to use it or be confident in it.
Leonard Lee:Yeah, and we're not just managing content anymore, documents and information. This is knowledge, right? There's this layer that synthesizes a corpus of knowledge that is informed by content, by information, by data. And now you have to figure out, how do we security-trim this thing so that it only shows, and only acts like it knows, or refuses to respond to, certain folks based on access controls that have been defined, right? That whole exercise is very, very complicated, and I haven't really seen anybody out there, quite honestly, that's cracked the code for that. This is a huge gap at the moment. And again, anyone out there who claims to have a solution, please reach out. We would love to talk to you. So we'll put that challenge out there for folks. I think this is going to be a regular thing we do now: if you have a solution and we didn't know about it, give us a jingle, and we'll see if you actually have something real. You and I, we like to keep it real, right? So that's what we do. Let's move on to the next thing, which is, oh my gosh, the news item that has shook the world: DeepSeek, right? It's an AI lab out of Hangzhou in the Zhejiang province of China that has released a model that is on par with anything in the Western world. It arguably can outperform even GPT-4o, and it does this at a very, very competitive economic profile, meaning the cost of training was one-tenth of what it took to train comparable Llama models by Meta, and its inference costs are ridiculously low. So now the AI world is grappling with the advent of this model. But then also, I think people are going back to the basic question, or maybe ignoring the basic question, of what's the difference between a service and the model? Because there's a lot of talk about open source and how open source, blah, blah, blah. I think that's a topic for a different discussion; that has more to do with regulatory and export control type stuff, and some of these open-versus-closed parties wanting to claim bragging rights. But then there is the question of security and privacy, because we have TikTok. That issue continues to be a factor. It's an open issue, if you will, that the Trump administration is negotiating some sort of resolution on. But now we have this. I just read that DeepSeek's AI bot, or assistant, is the number one downloaded application on the Apple App Store. What are your thoughts here, and how should enterprises and consumers think about this?
Debbie Reynolds:Yeah, well, it's definitely got people's attention. On one side, we have a lot of talk about how much money these companies need, how much more money they need to be able to do this stuff,
Yeah,
Debbie Reynolds:how much more power and all this stuff, and then you have this group out of China doing similar things for a fraction of the cost, not even with the most advanced chips. So, I don't know, maybe necessity really is the mother of invention.
Leonard Lee:Yeah,
Debbie Reynolds:It's very interesting. But to your point, the model and the service are different; if you have a model, you can create a service, and I think they wanted to go that route. So yeah, I think these companies should be concerned. A lot of the talk, the Stargate stuff that we're talking about, it's all about energy, it's all about chips. We see the stock market go up and down based on who has the most advanced this or the most advanced that. And now you have a group that's not using the most advanced this or that and doesn't have a trillion dollars in their back pocket. I think they should be concerned, because companies don't want to pay an arm and a leg for AI, right? Having it at a cheaper cost lowers the cost of entry, and maybe it creates more situations where companies that wanted to leverage AI but couldn't because of the cost now can. But I think there's probably going to be some friction here, because this is out of China, right? Yeah.
Yeah.
Debbie Reynolds:You know, it reminds me of decades ago, when there were bans on Russian diamonds. Apparently Russia has tons of diamonds, and they didn't want those diamonds to get out to these other markets because it would make diamond prices go way down. I think we're probably going to start to see more of that on a global scale, where they're like, hey, you have to buy our expensive model because we're going to ban this other thing. But then, the Internet doesn't really have boundaries, so people are going to be able to use this stuff regardless. Really interesting.
Leonard Lee:Yeah, with the model itself, it can be deployed on device, or it can be deployed in a data center, and a service provider can leverage the model to provide a service. I think the big question mark with DeepSeek is their own hosted service, right? What are the privacy and security risks associated with that? There's no lack of concern in the United States at the moment about Chinese entities harvesting information, and maybe even encoding and embedding human profiles, personal profiles, into a model, or at least into an AI service platform that they're hosting. And I think after all these years, actually, two years, we haven't gotten really good at having a clear discussion about what that risk structure looks like. You're probably one of the only people I know of who has looked at this problem holistically. But the question of what safe AI looks like, not only from a consumer's perspective but from an enterprise perspective as they look at external parties providing these services, what does that look like? How do CIOs and tech and business leaders evaluate these companies, these services, and these applications? For consumers, it's almost a hopeless thing, but something has to be done. I know you're working on a lot of stuff in this regard, but things seem to be moving faster than human consideration, right? And faster than the things that can be instituted in terms of protective regulations and policies. I think that's a big gap right now.
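The model-versus-service distinction Leonard keeps drawing can be shown in a few lines: open weights run locally keep prompts on your hardware, while a hosted service ships every prompt to the provider's servers under the provider's retention policy and jurisdiction. A rough sketch using Hugging Face transformers; the small distilled model ID is illustrative, and an enterprise would vet the weights, license, and outputs before any real use:

```python
# Local inference with open weights: the prompt and output stay on this machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Summarize the key risks in our vendor contract:",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Contrast: sending that same contract text to the hosted chat service puts it
# in the provider's data center, governed by their privacy policy, retention
# rules, and jurisdiction, not yours.
```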
Debbie Reynolds:Yeah, I think the gap is that it's one thing to say we think there's danger if you use this tool from wherever, but what hasn't come out is why, and what that risk is, right? Part of choice, and maybe this goes back to my thing about agency, is for people to know what that risk is. Not articulating it is like me saying, don't touch, instead of, don't touch the oven because it's hot. You're like, well, why not? Well, just don't. I think it's dusting up people's curiosity more than it needs to. And it's going to be hard to convince someone that something is harmful if you don't explain how it's harmful and why. That's what's missing. It's almost cartoonish in a bad way. It's like the movie device where everything revolves around this one thing, the MacGuffin, right? We don't know what it is exactly; it's the light in the suitcase from Pulp Fiction. Unless you're going to open the suitcase and tell people what's going on, you can't stop them from going these ways, especially when it comes to money, right? We see a lot of these small and medium-sized businesses that want an edge. They want to be able to do stuff with AI, and they don't have a trillion dollars in their pocket to do it. So any company that can give people access to something they would not otherwise have had financially is going to have an edge, period, whether it's from Mars or Venus or China. It doesn't matter.
Leonard Lee:Yeah, and I think that question of how needs to be asked: how are these companies doing the things that they do, in particular in regard to privacy protection and security? We'll just have to continue to do the good work that you're doing, and hopefully this podcast is elucidating or highlighting some of the considerations and questions to be asked. But anyway, let's move on to the last topic that I know you want to chat about: chain of thought, and this new threat, or risk, I don't know exactly how you want to characterize it, but maybe you can share it with our audience.
Debbie Reynolds:Yeah, so there are some researchers that have been doing work on AI models that use this kind of chain-of-thought reasoning. Basically, they came up with a type of adversarial attack that can jump into the chain of thought and change or tweak things the model is doing, so that it gives an output that may be wrong in some way. And this can have huge implications depending on how companies are using these models to make decisions, right? We've all seen this in movies, where someone goes into some computer system and makes some tweak or change that no one was thinking about, and this can really happen. So as we see all these new architectures come up, and these new ways to manage data in these models, there are always going to be adversaries out there trying to manipulate them. And it's becoming easier for them to do it, because the cost of entry is much lower. So as companies are relying on data, maybe overly relying on data for models, they really need more humans in the loop to catch those types of things. But this is something that's really not on anyone's radar, because they feel like, oh, let's just put this thing in a model; we think it's better than a human, because humans suck and computers are better and all this stuff. So we have adversaries who can manipulate these things, and not in obvious ways. Maybe things that are more subtle, that play out over days, months, or years, as opposed to what people expect, which is that if someone comes in, they're going to take something. Maybe they want to come in and throw you off track, or push you in a direction you wouldn't have gone. I read these stories and I always think, well, what could they possibly do? What would be the risk here? It fascinated me that this particular attack is not head-on; it's kind of a sideways thing, you know? So yeah.
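The attack Debbie describes targets the reasoning trace between the model and whatever acts on it. One narrow mitigation you could imagine, integrity-checking the trace before acting on it, fits in a few lines. This is a toy simplification of the idea, not the researchers' actual attack or a complete defense:

```python
# Toy illustration: tampering with one intermediate reasoning step flips the
# decision, and an HMAC over the trace lets the pipeline detect the tampering.
import hmac
import hashlib

SECRET = b"pipeline-signing-key"  # hypothetical key held only by the pipeline

def sign(trace: list) -> str:
    return hmac.new(SECRET, "\n".join(trace).encode(), hashlib.sha256).hexdigest()

def verify(trace: list, tag: str) -> bool:
    return hmac.compare_digest(sign(trace), tag)

# The model emits reasoning steps, then a decision derived from them.
trace = ["loan applicant income: 85k", "debt ratio: 0.21", "risk: low"]
tag = sign(trace)

# An adversary flips one intermediate step between reasoning and action.
trace[2] = "risk: high"

# A pipeline that acts on unverified traces silently inherits the tampering;
# one that verifies first can refuse and escalate to a human in the loop.
if not verify(trace, tag):
    print("trace integrity check failed: route to human review")
```

This only catches tampering in transit or at rest; an attack that steers the model's reasoning as it is generated would need different defenses, which is part of why Debbie's call for humans in the loop stands.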
Leonard Lee:Yeah. Wow. I have nothing to add to that. Definitely something I need to look into. But one of the things I'm really looking forward to is RSAC 2025, because I think we're going to see a pretty significant attitude change in the cybersecurity community as they look at these new tools. Oh yeah, here's the other thing that came out: OpenAI fast-following Anthropic in coming up with an agent, right? One that can basically take over your computer screen, discover how something works, and then execute tasks. I don't think anyone has really raised the question of, okay, what are the implications from the perspective of a bad actor weaponizing this and using it as a tool to cause harm? A lot of these innovations look really cool. Some of them are not as novel as people think, but their ability to do harm is apparent; their ability to provide benefit, not as much. A lot of these are hypothetical. I think that's one of the things for business leaders: if you're a board of directors, stop putting pressure on your organization, and do a little bit more homework, because some of these technologies are not what they're cracked up to be, and some of these solutions are actually quite dangerous. So you might want to listen to your CISO, give them a little bit of credit, and maybe save yourself some headache, and some money, instead of investing in these experiments. Work with the CISO to come up with measures for safe adoption of these technologies, right? Because they're going to come anyway. Being first to implement a potentially dangerous application or tool within your enterprise may not exactly be the best thing for your organization, and for you as a consumer, it's not a great thing either. So,
Debbie Reynolds:Depending on what you want the agent to do, I still think a lot of the stuff people should be trying to do with these models should be the low-risk kind, where if it goes bad, it's not going to harm someone, right? It's not going to create a harm. But I hear people saying, oh, well, let's give this agent your credit card number, and Leonard likes the aisle seat on United Airlines or whatever. And as a cybercriminal, I don't have to infiltrate Leonard and his life; I just have to get a hold of this agent, and then I can book flights for him, I can book flights for me and my criminal enterprise and the places I want to go. Right? But then they'll say it was your fault, because you should have known you shouldn't have done this. So stay with the low risk, stay with the annoying tasks you want to automate. Really, a lot of what I think generative AI and these agents will be doing is things no one would care about. It wouldn't be on the front page of a paper, right? It'll probably be low-hanging-fruit automation. For people who don't know how to automate stuff already, it may just be an easy way to automate a couple of things, and that's fine. It depends on what it is. But don't give it your credit card and the code to unlock your car and stuff, because the cybercriminals are like, hey, this is so cool, we can do theft in an automated
Leonard Lee:way. Yeah, absolutely. I still want to know, how did you know? Yes, United, aisle seat. That is my preference. That is creepy.
Debbie Reynolds:That is my preference.
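Debbie's advice to keep agents on low-risk, reversible tasks translates naturally into a tool allowlist, where anything touching money or credentials is denied unless a human approves. A minimal sketch; the tool names and gate are hypothetical, not how OpenAI's or Anthropic's agents actually work:

```python
# Minimal sketch of a least-privilege tool gate for an agent: low-risk,
# reversible actions run freely; sensitive ones are denied by default.
LOW_RISK_TOOLS = {"summarize_document", "draft_email", "sort_downloads"}
DENIED_BY_DEFAULT = {"book_flight", "charge_card", "unlock_car"}

def dispatch(tool: str, args: dict):
    if tool in DENIED_BY_DEFAULT:
        raise PermissionError(f"{tool} requires explicit human approval")
    if tool not in LOW_RISK_TOOLS:
        raise PermissionError(f"{tool} is not on the allowlist")
    print(f"running {tool} with {args}")  # reversible, auditable action

dispatch("draft_email", {"to": "team@example.com"})
# dispatch("charge_card", {"amount": 499})  -> PermissionError raised
```

The point of the default-deny list is exactly Debbie's: the blast radius of a hijacked agent is bounded by what it was ever allowed to do.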
Leonard Lee:Oh, okay. Okay. Well, hey, Debbie, thanks so much for jumping on and kicking off 2025 with this podcast and sharing your insights and perspectives on some of these topics that really should start to concern us going into 2025. And I want to thank everyone for tuning in. If you have any questions or would like to get in touch with Debbie Reynolds, who is the Data Diva, go to www.debbiereynoldsconsulting.com. She is doing wonderful work, and if you have any questions about what is happening in privacy and security, she is absolutely the person to be talking to. If you'd like to engage with neXt Curve's research, go to www.next-curve.com, where you can tap our research portal as well as our media center, where you can find links to all the past and present podcasts and other media that I am engaged with. And obviously, you can subscribe to the neXt Curve reThink Podcast here on YouTube as well as on Buzzsprout. So once again, Debbie, thank you so much, and we'll see everyone next time.