
The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego, founded by Leonard Lee, focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
Highlights from NVIDIA GTC 2025 (with Jim McGregor, Francis Sideco, Karl Freund)
Karl Freund of Cambrian-AI Research and Jim McGregor and Francis Sideco of TIRIAS Research joined me on the neXt Curve reThink Podcast to recap NVIDIA GTC 2025 from the Signia by Hilton hotel in San Jose on the third day of the event.
This episode covers:
➡️ Key impressions from NVIDIA GTC 2025
➡️ NVIDIA's evolving identity
➡️ NVIDIA's moves widening the lead in AI supercomputing
➡️ NVIDIA's broadening AI computing portfolio and the software stack
➡️ The NVIDIA philosophy of "more is more"
➡️ Synthetic data, digital twins, and robotic training
➡️ Jensen's jacket and more!
Hit Leonard, Jim, Francis and Karl up on LinkedIn and take part in their industry and tech insights.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com.
Check out Jim and Francis and their research at TIRIAS Research at www.tiriasresearch.com.
Please subscribe to our podcast which will be featured on the neXt Curve YouTube Channel. Check out the audio version on Buzzsprout or find us on your favorite Podcast platform.
Make sure to follow me here on LinkedIn and hit that 🔔 at the top of my profile for a constant diet of the tech and industry insights that matter.
⭐ Subscribe to the neXt Curve reThink Podcast on Buzzsprout here: https://bit.ly/43mr2Hm
⭐ Subscribe to the reThink YouTube channel here: www.youtube.com/@nextcurve
⭐ Follow neXt Curve at www.next-curve.com
⭐ Subscribe to the neXt Curve newsletter here: https://bit.ly/3LbXVgZ
Hey everybody, this is Leonard Lee, Executive Analyst at neXt Curve, and welcome to this reThink Podcast. It's a really special edition that we're filming here live at GTC 2025. It's been a long day. And this is our Silicon Futures program. And, of course, you all know we're doing this in collaboration with TIRIAS Research and Cambrian-hyphen-AI Research. You don't have to say hyphen. I love saying hyphen-AI. Well, you know, it means that I care. Oh, right. It means you can spell. Yeah, it's one extra token, man. It's like the brown M&Ms, okay? You just do not have brown M&Ms in a hotel when Van Halen is in town. So that is the hyphen. But I'm joined by Jim McGregor, as well as Francis Sideco, as well as the illustrious Karl Freund. Right. And I'm sure everyone out there in AI land is really glad to see us here together in San Jose. At the, actually, the Signia by Hilton. We've been getting bounced around, back and forth, all over the place. So, gentlemen, this is like the big NVIDIA event, the Super Bowl of AI, right? Yes. In all of the universe. And so, looking forward to having this chat, and it should be a good one, right? Mm-hmm. So, hey, let's talk about impressions. What'd you guys think? Number one, what'd you guys think about this year, and what did you think was different from last year?
Karl Freund: I think everybody was expecting really fast hardware and a roadmap of really, really fast hardware. So that was not a surprise; they met the bar. I think what surprised us, at least certainly me, was two things. One was the optical. Yes. The co-packaged silicon...
Jim McGregor:Photonics.
Karl Freund: Thank you. Silicon photonics. Yeah, I can't even say it. And the other is really Dynamo, which is the new operating system, as they call it. Effectively, think of it as the hypervisor for hundreds of thousands of GPUs. Yeah. It optimizes across all the GPUs depending on what you're trying to do. I'm still trying to wrap my mind around it.
Jim McGregor:Or let's just call it the operating system for the AI factory.
Karl Freund:Yeah, yeah. The operating system for the AI factory. But it's kind of like a hypervisor, right? Yeah.
Leonard Lee:Kubernetes, it's been described as sort of Kubernetes for AI inference, I think
Karl Freund: Across hundreds of thousands of GPUs.
Francis Sideco: I think that was, obviously, besides the networking bit, which Jim's probably gonna talk about. But for me, it was that Jensen definitely tried to make a case. I mean, he's been under pressure in terms of the industry going from, obviously we're still gonna do training workloads, but we're gonna be doing inference workloads a lot more, and the importance of that. And I think generally they've gotten hit in terms of not being potentially as optimized for inference. Mm-hmm. And I think Jensen, during this show, definitely made the case that when you start looking at the transition of inference, not just one-shot inference but reasoning inference, you actually do need scale-up in inference. Before that came out, the question had always been, do you actually need scale-up for inference? And he made the case that you do. Whether or not that case is gonna get bought, I think the jury's still out, but it passes the smell test, especially when you bring Dynamo into play.
Jim McGregor:Yeah.
Francis Sideco: And I think that was the key for me.
Jim McGregor: Agentic
Francis Sideco: AI, and agentic AI. That was the key for me: the homogeneous, fungible resources that Dynamo can help manage. So, right.
Jim McGregor: Okay. I don't know. For me, it's like drinking from a fire hose, and the fire hose gets bigger every year. But the stuff from Cosmos to do simulated, or synthetic, data, and scenarios, just thousands, millions of scenarios using synthetic data, right, to train robots and autonomous machines. I think that's huge. I think seeing some of the new platforms, they have a complete enterprise platform solution now, from PCIe cards to what used to be Project Digits, now DGX Spark, and the DGX Station. So you have desktop and deskside solutions for AI developers. You have the DGX SuperPOD. You have the next generation, Blackwell Ultra, coming out. So it's just one thing after another. And obviously, the networking. Networking stole the show, I think, for the second year in a row. Yeah. Last year it was the NVLink Switch, which allowed 72 GPUs to act as a single one, right, in a single rack. But this year, once again, silicon photonics. We've been developing silicon photonics since about 2000, actually earlier than that, but everyone keeps saying it's two to three years away, it's two to three years away, right? They finally got an ecosystem together to actually produce something that they're gonna be offering: a Quantum-X InfiniBand version of it and a Spectrum-X Ethernet version of it. So they're gonna have photonic network switches. That is significant in terms of reducing components and power consumption and increasing resiliency and reliability.
Karl Freund: And it all ships this year.
Jim McGregor: Oh yeah. Well, the Quantum-X ships this year; the Spectrum-X early next year. Yeah.
Francis Sideco: And it makes sense. I mean, they did scale-up last year with NVLink Switch and then scale-out this year, and
Jim McGregor: I think it points to what we're gonna see. I would expect future enhancements to NVLink, so that we see more than 72 supported, more than 144. Well, he already kind of... Yeah, we've already seen that.
Francis Sideco: He previewed that with where they're going with Vera Rubin. So,
Jim McGregor: Yeah, with Rubin and Rubin Ultra. I would expect, even though he said no, use copper in the rack, by the time we get to Rubin, I almost think we're gonna be looking at photonics in the rack.
Karl Freund: Well, he doubled down on it this afternoon, right? Yeah. I mean, this morning he said, no, we're gonna stay copper in the rack. So I'm not convinced of that one.
Francis Sideco: We'll see. Well, he said use copper as much as you can, and I think that's the key: as long as you can. And "as long as you can" might not be that long.
Leonard Lee: What about you? Well, you guys have left me with nothing but scraps once we go down the line. But I do have to say something. This is all unscripted, of course. Obviously, we don't have anything we're referencing, so I need to do the disclaimer: all the opinions of these guys are entirely their own and don't reflect neXt Curve's or mine, which is probably good and completely accurate, because they like to beat up on me. Right. I'm like the black sheep of the family. Yeah. Especially Karl. But I thought it was really overwhelming, a lot of details on the roadmap. I think what Jensen has been trying to emphasize over the course of GTC is that they're a different company. Mm-hmm. And we've been talking about how they're becoming more systems oriented. And, as Jim often says, services, right. And then the software aspect. So all this stuff is coming together in a really compelling way, and I think Jensen does a really great job of telling the story. Now, I do have to say there are some gaps in what I've heard, and I think these are areas where NVIDIA really needs to clarify. Like all the scaling stuff; that didn't work for me. It still doesn't work for me, even from last year. And how does Huang's Law from last year now map into this new scaling thesis? I'd like to see more clarity on that as we go forward. But they are also introducing a lot of stuff, and they're trying to figure out how to frame this whole scaling argument against the backdrop of a lot of what's happening in AI, right, with moving toward reasoning. Last year was about MoE, right? Mm-hmm. Yeah, in terms of the AI. Now we have test-time scaling, this long thinking that Jensen talks about, that the industry is talking about in general. And these are all new concepts, and they have implications on the requirements around compute.
And we hear Jensen talking about the hundred-x, right? With reasoning, we need a hundred times or more compute, which is different from tokens. That was one of the things that I took away: as we talk about tokens, we need to think about it not just in general terms, but very specific terms, in
Francis Sideco: Well, in terms...
Leonard Lee: of the inputs, and then what gets generated in terms of tokens, and how that translates into how much more compute you need. Mm-hmm. And so there are all these layers being placed on the conversation. Yeah. And that makes it really complex and, I think, difficult for folks to keep up with.
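A rough sketch of the token arithmetic behind that hundred-x point. The model size and token counts below are illustrative assumptions, not NVIDIA figures; the point is only that decode compute scales with generated tokens, so long reasoning traces multiply the compute bill.

```python
# Rough sketch: why reasoning inference multiplies compute demand.
# All numbers are illustrative assumptions, not vendor benchmarks.

def decode_flops(params_b: float, generated_tokens: int) -> float:
    """Approximate decode-phase FLOPs: ~2 * parameters per generated token."""
    return 2 * params_b * 1e9 * generated_tokens

PARAMS_B = 70              # assumed model size, billions of parameters
ONE_SHOT_TOKENS = 200      # a short, direct answer
REASONING_TOKENS = 20_000  # a long "thinking" trace before the answer

one_shot = decode_flops(PARAMS_B, ONE_SHOT_TOKENS)
reasoning = decode_flops(PARAMS_B, REASONING_TOKENS)

print(f"one-shot:  {one_shot:.2e} FLOPs")
print(f"reasoning: {reasoning:.2e} FLOPs")
print(f"ratio: {reasoning / one_shot:.0f}x")
```

With these assumed token counts the ratio works out to 100x, which is the shape of the argument Jensen is making: the input prompt barely changes, but the generated-token count, and therefore the compute, explodes.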
Francis Sideco: I think that's one thing that we do all need to keep in mind, right? Because obviously Jensen's gonna keep pushing the boundaries of what's needed. But that doesn't mean that's the only way enterprise AI, or AI in general, is going to create value for the economy at large, right? Yeah. There's a ton of bifurcation happening in terms of use, in terms of how you architect these data centers for what different enterprises need, and so forth. So not everything is going to be the top-of-the-line, uber-scale stuff that Jensen's talking about. There's going to be room for other architectures and other use cases that don't need that screaming high-end capability.
Jim McGregor: Well, and especially with the infrastructure requirements that come with 'em, in terms of the cooling and power and everything else.
Yeah.
Jim McGregor: But one thing that really amazed me. They made this change a couple years ago to combine their automotive stuff and their robotics and other stuff into autonomous machines. And hearing from some of the other groups, learning that even though AVs have kind of been pushed out and fallen by the wayside, mm-hmm, a lot of these other groups focused on robotic machines for healthcare and other types of applications have taken that learning that's already gone into automotive, even if it's not being deployed, and applied it to other forms of autonomous machines, particularly robotics.
Yeah. Yeah.
Leonard Lee: Oh, really quickly. Yeah. I wanted to bring this up because we've been talking about digital twins for a long time. Mm-hmm. And quite honestly, a lot of the conversation, especially like two years ago, was nonsensical. This year, actually, the stuff with Cosmos, right? Mm-hmm. Yeah. And synthetic data. What I think we all need to recognize is that we're not talking about synthetic data in terms of Sora-style generation. This is using some of the physics models that they've had, mm-hmm, and all the things they have in gaming. It has gaming roots that are used to model certain environments, and then they augment that with AI to synthetically generate scenarios, new data. Right. Scenarios. And I think that's where some folks might have been confused in the past about what all the synthetic data stuff is. Mm-hmm. But one of the things that I was really impressed with this year is how essential it will be in supporting a lot of the test acceleration, right? Mm-hmm. That's gonna provide the learning environments and learning experience for these models that are gonna be trained. And this has roots, like what you're saying, in automotive. There's all this heritage that's coming together, mm-hmm, in some of these concepts that I think are actually pretty powerful.
Karl Freund: I'll never forget the quote from the keynote, which is Jensen calling himself the Chief Revenue Destruction Officer. That kind of rings true, because when they announced they were going to an annual product cadence, a lot of us who've been in the computer industry a long time said, that's gonna be really hard to do. Customers don't like having what they just bought be obsoleted so quickly. But he really embraced it. And he said, yes, Blackwell has already shipped three times more this year, mm-hmm, than Hopper did all of last year. And then he said that revenue ramp is gonna continue to drive forward.
Jim McGregor: And with Dynamo, they're getting a 40x, 40 times, improvement over Hopper.
Karl Freund:Yeah.
Jim McGregor:That's just phenomenal.
Karl Freund: It's wild. Yeah. And that's an inference statement, right? Inference is really at the heart of everything they're doing this year at GTC. Yeah. Whether it's for robotics, or for the data center, or even for the little miniature DGX.
Jim McGregor:The Spark.
Karl Freund:Yeah.
Jim McGregor:Love that.
Karl Freund: They wanted to call it Digits. It's called Spark. I want one. I want one so bad. It's super cool. We all want one.
Leonard Lee: Yeah. So, Jensen, send us all a DGX Spark. And a workstation, a DGX Station, right? That would be great. And then sign up, please. Okay, I'm not going that far. Oh no. His jacket, by the way, everyone: yes, he was wearing his original jacket once again. So that jacket he wore at CES must have been a CES special. Yeah. Well, he wore two different jackets
Jim McGregor:today. Yesterday. Did he? Yeah.
Yeah. Oh,
Leonard Lee: He's got a lot of diversity in his jacket collection. He does. I wonder if
Karl Freund:he wore that jacket or a suit when he went to the White House. I don't know. I didn't see any videos of his arrival. Maybe it's a simulation.
Francis Sideco: Yeah. But let's not underestimate what you said, Leonard, about the simulation, like with Cosmos, being able to create synthetic, mm-hmm, training information. If you really put it in layman's terms, that's like being able to force-feed training, yeah, to a human being, the way they did it in The Matrix. Frankly, it's just shoving it all in. It's like driving a billion miles in a very short amount of time, and there's the sheer amount of experience that you're able to gather, and
Jim McGregor: apply it. One of the stats was 27 years of simulated, or what would be real-time, physical training. In, what was it?
Francis Sideco: In one day? One day on one GPU. And that's assuming that in the real world you actually encounter those situations. Yeah. Dropping Godzilla in the middle of the freeway doesn't happen; you might not do that. But the fact that you're able to do that kind of concentrated learning, you can immediately see how some of these automated systems are becoming more highly trained than a human, because you can't do that.
Karl Freund: Well, it's given me renewed confidence that we will solve the automated driving problem.
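The arithmetic behind that "27 years in one day" stat, as a back-of-the-envelope sketch. It assumes simulated experience accumulates continuously, 24 hours a day; the numbers are illustrative, not NVIDIA's methodology.

```python
# Back-of-the-envelope check on the "27 years in one day on one GPU" stat.
# Assumption: simulated experience is accumulated continuously (24h/day).

HOURS_PER_YEAR = 24 * 365  # treat simulated experience as around-the-clock
SIM_YEARS = 27
WALL_CLOCK_DAYS = 1

experience_hours = SIM_YEARS * HOURS_PER_YEAR
speedup = experience_hours / (WALL_CLOCK_DAYS * 24)

print(f"experience gathered: {experience_hours:,} hours")
print(f"effective speedup:   {speedup:,.0f}x real time")
```

Under those assumptions, one wall-clock day yields roughly 236,520 hours of driving experience, an effective speedup of nearly 10,000x over real time, which is the "force-feeding" Francis describes.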
Leonard Lee: And it's not new, 'cause they were talking about this stuff like four years ago. Yeah. You know, it was at sort of the height of the AV hype. Mm-hmm. But its
Francis Sideco: application wasn't as clear. Well, now you can see how they've diversified
Leonard Lee: its application. I think it was still clear and necessary back then, right? Yeah, in accelerating the training of AV systems. Now they're broadening it, especially with their focus on robotics, right? Yeah. With the Isaac group, right. Yeah.
Jim McGregor: Being able to take a human actually doing something, combining it with videos of humans doing something, yeah, and then using AI to create simulations and solutions. Just phenomenal. I know a lot of surgery being done today, like knee replacements and shoulder replacements and stuff like that, they're actually using robotic arms to do all the cutting, yeah, and everything else. It's amazing; that just blows me away.
Karl Freund: One of the things that I thought was really interesting was the discussion about the automotive industry, and NVIDIA's approach to the automotive industry, which I always thought was primarily in-vehicle. And now I'm starting to realize that's nice, if they have an NVIDIA chip in the vehicle,
Francis Sideco: but what they're really after is the
Karl Freund: cloud, for simulation, for the vehicle operation, for the training. Yeah. And the command and control. And an important point was that only 5% of the vehicles in the world today are Level 2 and above.
Jim McGregor:Yeah.
Karl Freund: So there's a huge opportunity there. Yeah. And the opportunity is not just in the vehicle; it's all the simulation that all those OEMs will require before they put those cars on the road.
Jim McGregor: And while they did back out of the infotainment solutions, mm-hmm, now they've got the whole partnership with MediaTek, using their technology, their IP, to develop it. I'm surprised we didn't
Karl Freund: hear an update about that.
Jim McGregor: Yeah, I was kinda surprised too, but I would expect that we will. Funny.
Francis Sideco: Well, on the automotive vehicle bit, right? I mean, put it in perspective. Right now, in Nevada, you train your 16-year-old on 50 hours of drive time and they give them a license.
Jim McGregor:Oh my god.
Francis Sideco: Okay. And, like, 30 hours of classroom work, 50 hours of drive time, and they give them a license. Mm-hmm. Okay. Compare that with training an automated vehicle with 27 years' worth of, basically, drive time. Yeah. It just puts it in perspective.
Jim McGregor: Well, as long as it's good driving. I was just gonna say, it could be really bad driving, actually. But hey, no. Well, that's the thing about
Francis Sideco: the synthetic data, because you can control the synthetic data. It's not garbage in, garbage out, because you can control the quality of the synthetic data.
Leonard Lee: Well, but I'll argue this: you need to have a crap driving algorithm to create stupid situations in your synthetic data, 'cause that's actually what a vehicle needs to respond to, bad drivers. Because there's only a finite number of ways to drive properly at any given time. There are infinite ways of driving really crappy. Yeah, yeah. Exactly.
Francis Sideco: And they can get 90% of that in Vegas, 'cause the drivers there, I was shocked.
Jim McGregor: It really needs to account for me on the road. Yeah. Oh yeah. Terrible. The car that's going three times faster, my god. Crazy. Geez, that's really horrible. I actually used to terrorize the Waymos in Phoenix just to see how they would react. Oh. So there was one, Jim sitting there poking the bear. There's one place on Mill Avenue where it goes down to one lane, and I would actually pull up next to the Waymos and test them, and see how they react. Hopefully
Francis Sideco:without a passenger in there. But no,
Leonard Lee: You know, I wanted to take things back to the new AI supercomputing lineup, right? And last year they introduced, what, the Transformer Engine, right? Is that what that was? No, no, no. That was...
Karl Freund: No, Transformer Engine was...
Leonard Lee: two years ago. Yeah, two years ago. Right, right. But so last year they introduced the second-generation Transformer Engine, right? Yeah. With the support for FP4, right, exactly. But now that's becoming even more important. There's a lot more talk about mixed precision and the role that it plays in driving the scaling of these systems, right? And one of the things I noticed: if you look at the roadmap, the individual GPUs themselves are not scaling as fast as everything else around them. We talked about networking already, but it is interesting to see how the memory, the HBM, right, the upgrades there that are happening over the course of the next two iterations of the system, as well as the networking, are playing an outsized role in scaling the systems. Yeah. And one of the things I thought was really interesting was Jensen trying to clarify what a GPU is. Mm-hmm. Right? Because everyone thinks it's just the chip, but in their parlance it is something much bigger. Right. And we see this confusion, especially amongst the media, about what a GPU is. Just turn on a TV. I don't even want to call 'em GPUs. Yeah, it's not really a GPU, right? I just call it an
Jim McGregor: accelerator. Or I call it a GPU accelerator, 'cause it's really an accelerator. It literally is a freaking...
Francis Sideco: You look at the GPUs that are in, like, the DGX Sparks and all that stuff; they don't do graphics.
Leonard Lee: Let's ask this question. Okay, let's go down the line here: what the hell is a GPU now?
Karl Freund: An NVIDIA accelerator. Well.
Jim McGregor: It's a graphics... that's the best you can do? It's a graphics processing unit, but these things aren't GPUs.
Francis Sideco:Yeah,
Jim McGregor:that's the problem.
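Side note on the mixed-precision point from a few turns back: a rough weight-memory calculation shows why formats like FP4 matter for scaling these "logical GPUs". The model size and the clean byte-per-parameter figures below are illustrative assumptions; real deployments mix precisions per layer and also carry KV-cache and activations.

```python
# Why lower precision matters for scaling: weight memory for an assumed
# 70B-parameter model at different numeric formats. Illustrative only.

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}
PARAMS = 70e9  # assumed parameter count

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30  # weight footprint in GiB
    print(f"{fmt}: {gib:,.0f} GiB of weights")
```

Under these assumptions, going from FP16 to FP4 cuts the weight footprint by 4x, from roughly 130 GiB to about 33 GiB, which is why mixed precision features so heavily in the scaling story alongside HBM and networking upgrades.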
Francis Sideco: Yeah. KFC is no longer just Kentucky Fried Chicken, right? It's just KFC. And it's the same thing here; it's just another name now, where it's not an acronym. You're bringing
Leonard Lee: out Kentucky Fried Chicken as a source of analogy. It's been a long day. He's been going a long day. Oh my gosh. The food analogy king, bringing up Pizza Hut, talking about splitting up pizza pies. Yeah. Well, no, I mean, I think the important thing to understand is, like you alluded to, or pointed out before, it's a logical GPU. Massive, right? Mm-hmm. And I don't know if that's quite registered in everyone's mind, but it's an important point for everyone who's observing the AI industry and AI supercomputing to clearly understand. Mm-hmm. Right.
Karl Freund: I think one of the interesting conversations with Jensen was about, what do you want NVIDIA to be known as, what do you want NVIDIA to be known for, in three to five years? Mm-hmm. And the answer was not the data center, which is what many of us were expecting. What he said was that NVIDIA is foundational, mm-hmm, to the entire world for artificial intelligence. Yeah. And I thought back to the slides he flashed up really fast, too fast for any of us to read, during his keynote, which were of all of his major partners
Jim McGregor:Yeah.
Karl Freund: and how they're adopting AI. And if you look at each one of those, they had these little green squares on them. Those green squares were all NIMs, okay? NVIDIA Inference Microservices. And that to me is foundational. That is more of a competitive moat than CUDA.
Jim McGregor: Well, it's an entire stack.
Karl Freund: It's an entire stack, and you have the entire industry adopting that entire stack, which is gonna make it really hard for AMD or Intel.
Jim McGregor: Well, and I don't think it's just that. When you look at Dynamo, mm-hmm, being an operating system for an AI factory, or even a data center; when you look at Omniverse, which, depending on how you look at it, you could actually classify as an operating system for the cloud. Yeah. Not to mention the almost endless foundation models they're creating for each individual vertical segment.
Francis Sideco: Yeah. So, just going back to my earlier statement, though: I agree completely that NVIDIA is continuing to push the envelope. They're gonna get a lot of value out of this and a lot of return on their investment. But that does not mean there's no place and no role for other competitors, because not everybody needs the uber-scale, mm-hmm, training or inference.
Jim McGregor: Frankly, there's still a lot of traditional AI solutions out there, and AI at the edge.
Francis Sideco: There's a lot of AI at the edge. And even when you look at the more direct competitors to NVIDIA, I'm sure they're not standing still, right? Yeah. And there is an appetite for more than one option; we don't want to end up with another kind of Wintel situation from 20, 30 years ago. And I think there is an appetite in the industry for other options, especially at the edge. Especially at the edge, there's opportunity.
Jim McGregor: And for specific application use cases. That's why we still see hyperscalers doing their own silicon. Yeah. Because they have use cases that they know they can optimize for.
Leonard Lee: Yes. And so one of the things that I think still has not been clarified, or is not well understood, is that there is a difference between a hyperscaler and a cloud service provider. Oh yeah. Right? Oh yeah. And for the hyperscalers like Meta, who have singular workloads that are massive, it might be a recommender at massive scale to optimize. Yeah. That's gonna be a production application that requires that level of massive scale, where, yeah, you're going to go custom. You're going to tune the entire infrastructure singularly to run that set of workloads optimally, right? It's an operational system. And for the cloud service providers and the model builders, that's where I think NVIDIA has its sweet spot, because it's programmable; it's more, quote unquote, general purpose, right? It has the flexibility, through the software, where the hardware can be used for a wide range of purposes, right, and projects. And that's a different value proposition, almost. But I do feel that NVIDIA is trying to break that notion, right? That no, actually, we're good for everything, right? Mm-hmm. It's an argument, obviously, that they'll want to make, but it's important to understand that dynamic.
Karl Freund:Yeah.
Jim McGregor: Yeah. And we should also note that this is just mayhem. They have outgrown San Jose, period. They shut down most of the roads. Yeah. They have to do it somewhere other than downtown San Jose. This is ridiculous. Yeah. They took over
Karl Freund: the entire park. Yeah. For food. Well,
Francis Sideco: hopefully they'll be out in Vegas at some point, but
Leonard Lee: Yeah. Yeah. So, let's talk about enterprise generative AI. Did you? Well, I talked about the systems. No, we talked about the systems, but enterprise, because that's one of the things that they're really trying to push this year. You're seeing things like, I don't know if you noticed, there's a whole AI data and storage movement going on, right? People want to get a handle on their enterprise data and figure that out, right? Which is different from what we're seeing with what the model builders are producing or pursuing, which is, hey, how do we get as much publicly available, or maybe even copyrighted, data as possible to train these foundation models. But for the enterprise, it's different, right? Yes. This is really, how do we make data available for compound AI applications that are gonna enable us to take advantage of this new generative AI capability. I wanna make that distinction, 'cause everyone talks generically about AI; the thing that's really caused the stir in the last two years is generative AI. Right. The big question. And now we have agentic
Jim McGregor: AI. And those are two different things.
Leonard Lee: Yeah, they are. I think they're closely bound together in a certain way. Well, in terms of function...
Jim McGregor: Or, yeah. Well, generative AI is thought; agentic AI is reasoning. So, yeah, you know what, I'm glad you brought that up. Okay, keep going. One is going to give you feedback on everything it's been trained on. The other one is actually gonna sit there, think about it, and use a lot of different information that's available to it to actually make a judgment call. And that may be significantly different than GenAI.
Francis Sideco: And you add onto that its agency, its ability to take that judgment call and then act upon it. Take action. I think that's key, 'cause a lot of people are using agentic AI to mean basically copilots or assistants, and that's completely different. Yeah.
Leonard Lee: And you know what, the reason why I'm glad you brought that up is that there is a semantic disconnect out there right now. Yes. The way that people talk about agentic AI: in certain circles, well, there's assistants, but then what you see a lot of ISVs talking about is agentic AI in the context of automation, process automation, right? Right. And so there is really this disconnect that's happening, and that's what they call semantic de-specification. Actually, it goes the other way around.
Francis Sideco: I can't even get my semantics in my head straight on that. But I mean, that is really important: if you're talking about agentic AI and the first thing out of your thought process is a process workflow,
Mm-hmm.
Francis Sideco: That's not agentic AI. That's an automated process workflow. That's different than actual agency.
Leonard Lee: Don't say that. You're gonna make some ISVs very unhappy by saying what you just said.
Francis Sideco: Well, here's the thing. From a marketing standpoint, okay, fine, use that. But if you're really trying to understand the implications of the technology, what you can or can't do with it, and what kind of infrastructure you need to support it, you have to be clear about those nuances.
Leonard Lee: Well, yeah. And then we also have to reconcile that against what we were hearing in the AI PC world and the on-device AI world, where they talk about agentic AI as personalization, as doing things for you. Yes. These are automation concepts.
Francis Sideco: There's context. There's the context element to it, there's the judgment, and then there's the action. That's exactly agentic AI.
Leonard Lee:Okay.
Karl Freund: Yeah. Without action, it's not agentic.
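To make the distinction the panel is drawing concrete, here is a toy sketch in Python. Everything in it is hypothetical and illustrative, not any vendor's API: a generative call just maps a prompt to a completion, while an agentic step observes context, makes a judgment, and then acts on it.

```python
# Toy contrast between a generative call and an agentic step.
# All names here are made up for illustration, not any real product's API.

def generate(prompt: str) -> str:
    """Generative AI: returns a completion based on what it was 'trained' on."""
    canned = {"status?": "All systems nominal."}
    return canned.get(prompt, "I don't know.")

def agent_step(observation: dict) -> str:
    """Agentic AI: observe context, make a judgment, then return an ACTION."""
    # Judgment: combine multiple pieces of available information.
    if observation["cpu_load"] > 0.9 and observation["queue_depth"] > 100:
        return "scale_out"
    return "no_op"

# The generative path just answers; the agentic path decides and acts.
print(generate("status?"))                                 # a completion
print(agent_step({"cpu_load": 0.95, "queue_depth": 250}))  # an action
```

The point of the sketch is the return type: the generative function hands back text, while the agentic function hands back an action to be executed, which is the "agency" Francis and Karl are insisting on.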
Jim McGregor: And enterprises are really trying to figure out how they use all the information in the free and open-source models that are out there, combine it with their own information, and then what resources they need on-prem and in the cloud. Everyone's looking at hybrid solutions, but it's a colossal challenge right now for them to figure out. I think over the next year we're gonna see a little more clarity and ability to actually do that.
Francis Sideco: And I think where enterprises need to start, and this is where they kind of get lost a little bit, is that there's so much possibility out there they don't even know where to start. If you're going to be serious about enterprise AI, and this is from every single conversation I've had, start with the specific problems you're trying to solve for your business, and let that drive what model you're using, what architecture, hybrid or on-prem or whatever, what data you need to be bringing in, governance, all of that. You have to start with specific use cases and problems that you're solving. Absolutely.
Leonard Lee: So I did want to give a shout-out to IBM. I know you guys like doing that, right? I haven't done it, you guys have, but I'm gonna give a shout-out to IBM because they're the only company I've encountered in two years that's actually addressing enterprise AI security, security for AI. They have a thing called context-aware storage. Folks, keep an eye out for that. And here's a name you need to know: Vincent Hsu, H-S-U. He's an IBM Fellow, and he's working on this stuff. They were collaborating with NVIDIA to integrate that context-aware storage into the NIM framework, right? I think this is really important work. Yeah.
Jim McGregor:And
Karl Freund: I think we'll hear a lot more about that next week in New York. Yeah.
Jim McGregor: That, combined with their Granite models, which are really the only truly enterprise-ready AI solutions.
Francis Sideco: From a security and guardrail standpoint, yes. But also couple that with InstructLab, which makes it really easy to bring your enterprise data and context in, to retrain for a specialized
Jim McGregor: model. Yeah.
Francis Sideco: But again, even then, even in my conversations with IBM, they say it's very important for you to know, as an enterprise, exactly what problems you're trying to solve, because that's when the tools really sink in.
Leonard Lee: Yeah. And the reason I'm bringing this up is that I've identified this gap in RAG architectures. This is gonna resolve, well, it might be premature to say it will resolve, but it starts to address the issue of being able to apply fine-grained security controls on top of a RAG, whether it's a graph database you're dealing with or a vector database. A lot of folks don't know about applying fine-grained controls on top of a vector database; they were never designed to do this kind of stuff. So they're doing some creative abstractions of controls on top, and this is something I need to find out more about. But I'm very excited that at least IBM recognizes this problem, because this whole time people have just been focused on hardware-level security, network-level security, but not really looking at the application and the data.
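A minimal sketch of the kind of control Leonard is describing: an access-control check layered over vector retrieval, so restricted chunks are filtered out before similarity ranking and never reach the model. This is purely illustrative; the document IDs, ACL groups, and function names are invented, and real implementations are far more involved.

```python
# Illustrative only: fine-grained access control on top of a toy vector store
# for RAG. Each document carries an ACL; retrieval enforces it BEFORE ranking.
from math import sqrt

DOCS = [
    {"id": "d1", "vec": [0.9, 0.1], "acl": {"finance"},  "text": "Q3 forecast"},
    {"id": "d2", "vec": [0.8, 0.2], "acl": {"everyone"}, "text": "Office hours"},
    {"id": "d3", "vec": [0.1, 0.9], "acl": {"hr"},       "text": "Salary bands"},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def secure_retrieve(query_vec, user_groups, top_k=2):
    # Filter by entitlement first, so restricted chunks never reach the LLM.
    allowed = user_groups | {"everyone"}
    visible = [d for d in DOCS if d["acl"] & allowed]
    ranked = sorted(visible, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:top_k]]

# A user in "finance" can see d1; a user with no groups only sees public docs.
print(secure_retrieve([0.9, 0.1], {"finance"}))  # ['d1', 'd2']
print(secure_retrieve([0.9, 0.1], set()))        # ['d2']
```

The design point is the ordering: the ACL filter runs before similarity search, which is the "abstraction of controls on top" that native vector databases were not built to do.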
Francis Sideco: And you can see the convergence there. Aside from that collaboration, they also announced a collaboration with NVIDIA around their IBM Consulting engagements, because right now it's requiring a lot of consultation for enterprises to figure out, okay, what is their use case? What are they trying to solve?
Leonard Lee:Yeah.
Francis Sideco: And then go from there.
Karl Freund: And what's the state of the data?
Francis Sideco:Yeah. And what's the state of the data. Absolutely. That's a key one.
Leonard Lee: Yeah. So, cool. What do you guys expect next year? I guess you already know what to expect.
Jim McGregor: A bigger fire hose.
Leonard Lee:Yeah. Bigger, bigger fire hose.
Jim McGregor: Come on, let's do this. We gotta do this. Well, we already know. Next year it's,
Francis Sideco: Do we? It's Rubin, right? Yes, next year will be Rubin. It's Vera Rubin. Yeah.
Karl Freund: Next year it'll be Rubin.
Francis Sideco: And then on the networking side, they went scale-up last year and scale-out this year. So we'll see what's next up for that.
Jim McGregor: They seem to be strategically targeting each bottleneck in the data center, especially anything around AI. So I definitely think we'll see enhancements in memory and everything else, and networking. I'm hoping they start addressing the bottleneck that's emerging out of the data center infrastructure itself, because right now all the cooling and power solutions are really customized to each implementation. And that's gotta change. When you're spending hundreds of millions of dollars on a data center, you don't want to redo it each time you're putting new GPUs in. So it's gonna be interesting. Do you realize what you're saying? Yes. Oh my gosh. It has to be plug and play. I worked for the Motorola computer group, matter of fact. I'm surprised they're not looking at alternative structures like blade servers.
Leonard Lee: Ah, here we go. He's back on the soapbox. I'm serious. Blades are good. Blades are good. All right, on that note, there's nothing wrong with that. I think it's a great comment. So seriously, you guys, no expectations? You know everything for next year?
Karl Freund:no.
Leonard Lee:Oh, absolutely. Just
Karl Freund:like we didn't expect optics.
Leonard Lee:Okay, so he got one. What about you?
Francis Sideco: Yeah, I think I said mine already.
Leonard Lee: Don't say that, you're just gonna repeat it.
Francis Sideco: No, no, no. I think I said it earlier when you first asked: the memory subsystem. It already is extremely critical, it's going to become even more critical, and I think we'll probably see some innovations around that.
Karl Freund: Yeah, you've stumped the band. I really don't know what to expect next year. I thought I knew what to expect this year. I was wrong. I was not expecting storage; we got storage. I was not expecting optical.
We got optical networking.
Karl Freund: I don't know, maybe they'll do optical in the rack. Maybe that'll be the surprise. I don't know. But it's gonna be a fun space to watch.
Leonard Lee: Yeah. For me, I mentioned security before. Hopefully we'll see traction; I don't know if we will. It might be a pretty tough nut to crack. And this whole KV cache thing with Dynamo. Yes, it's so interesting. That's the memory. Because here's the thing I wanna know: this year it's all about a hundred x. How about the other way around? How about compression, and how about reducing the amount of compute required? I understand why NVIDIA wants
Francis Sideco:more.
Leonard Lee: More. Yeah. But what about less? Because someone's gonna take that opportunity.
Jim McGregor: And I think we're seeing that with their Jetson platform for autonomous machines, whether it's robots or cars or whatever, even though it wasn't really highlighted this year. They're going there even though that wasn't the focus of this GTC. I still think you're gonna see more.
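Leonard's point about "less" is easy to see with some back-of-the-envelope KV cache arithmetic. The model shape below is a generic, assumed configuration for illustration (a large GQA transformer), not any specific product's published numbers.

```python
# Rough KV cache sizing, to show why compression and reduction matter.
# The model configuration below is assumed for illustration only.

def kv_cache_gib(layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """Bytes for keys AND values across all layers, returned in GiB."""
    total = 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem
    return total / 2**30

# Hypothetical large model: 80 layers, 8 KV heads (GQA), head_dim 128.
full = kv_cache_gib(80, 8, 128, seq_len=32_768, batch=8)                    # FP16
quant = kv_cache_gib(80, 8, 128, seq_len=32_768, batch=8, bytes_per_elem=1)  # 8-bit
print(f"FP16: {full:.1f} GiB, 8-bit: {quant:.1f} GiB")  # FP16: 80.0 GiB, 8-bit: 40.0 GiB
```

Even this crude sketch shows the leverage: at long context and modest batch sizes the KV cache alone can swallow an accelerator's memory, so halving bytes per element, or offloading and reusing cache the way Dynamo proposes, buys as much as a bigger GPU does.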
Leonard Lee: And so that's what we might be able to expect, maybe, yes, at GTC 2026. And with that, I guess we wrap it up.
Francis Sideco:wrap it up, yeah. Sounds good. Well,
Leonard Lee: Gentlemen, that was fun. It was great hanging out with all of you here at GTC. It's been a fun week at GTC 2025. And to our audience, thanks for listening in. Remember to like, share, and comment on this episode, and remember to subscribe to the neXt Curve reThink Podcast as well as our research portal, www.next-curve.com. And also, these guys are published
Jim McGregor: everywhere. Find us on Forbes, find us on EE Times, on YouTube, and a number of other publications and outlets. And definitely look for us at the other trade shows that are coming
Leonard Lee: up, at www.tiriasresearch.com. Come on, man, you gotta front your company properly. And of course, Cambrian
Karl Freund: hyphen AI dot com: www.cambrian-ai.com.
Leonard Lee: Yes. And I do that because I don't want people to be without the website. Thank you very much. It's been fun. And that's Karl Freund of Cambrian AI Research.
Karl Freund:Well said.
Leonard Lee: Yeah. And so, until next time, remember to always tune into the reThink Podcast and our Silicon Futures program here for the tech and industry insights from the world of AI and, like, GPU stuff. We'll see you next time. Bye-bye. Thanks. Bye-bye.