The neXt Curve reThink Podcast

Silicon Futures for December 2025 - NVIDIA + Groq, Trainium 4, Marvell, and 2026 predictions!

• Leonard Lee, Karl Freund • Season 7 • Episode 47

Send us a text

Silicon Futures is a neXt Curve reThink Podcast series focused on AI and semiconductor tech and the industry topics that matter.

In this mega-catchup recap episode, Leonard and Karl talk about some of the top headlines of December 2025. 

Topics that mattered in the AI and semiconductor universe:

🔥 Kicking off the agenda for Silicon Futures December 2025 (0:32)
🔥 NVIDIA acquihires and licenses IP from Groq for $20 billion! (2:21)
🔥 No quantum assets for NVIDIA in 2025.... why? (6:02)
🔥 NVIDIA seems to be diversifying their IP & architecture portfolio in 2025 (7:16)
🔥 NVIDIA's diffusion plays in 2025 to tap the custom AI infra market (9:29)
🔥 AWS launches Trainium3 instances & tees up Trainium4 (12:55)
🔥 Shifting competitive & technological landscapes impact "AI leadership" (16:11)
🔥 Marvell acquires Celestial AI for optical scale-up IP & advanced interconnect (21:30)
🔥 Karl's 10 AI and semiconductor industry predictions for 2026!  (23:31)
🔥 Leonard's AI and semiconductor industry predictions for 2026! (37:32)
🔥 Wrap up (41:16)

Hit Leonard, Karl, and Jim up on LinkedIn and take part in their industry and tech insights.

Check out Jim and his research at Tirias Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com. Check out Karl's Substack at: https://substack.com/@karlfreund429026

Please subscribe to our podcast which will be featured on the neXt Curve YouTube Channel. Check out the audio version on BuzzSprout or find us on your favorite Podcast platform.

Also, subscribe to the neXt Curve research portal at www.next-curve.com and our Substack (https://substack.com/@nextcurve) for the tech and industry insights that matter.

NOTE: The transcript is AI-generated and will contain errors.

DISCLAIMER: This podcast is for informational purposes only.

neXt Curve Intro:

Next curve. Hey

Leonard Lee:

everyone. Welcome to this year-end neXt Curve reThink Podcast episode, where we break down the latest tech and industry events and happenings in the semiconductor and AI industries into the insights that matter. I'm Leonard Lee, Executive Analyst at neXt Curve. In this Silicon Futures episode, we will be talking about a number of things, namely AWS Trainium coming out of re:Invent 2025, and what the news is there. We'll talk about Marvell Industry Analyst Day 2025, which took place at their HQ in Santa Clara. And of course we will touch on the biggest news that dropped just yesterday, which is NVIDIA's acquihire and IP acquisition, weird thing, with Groq. And I'm joined by the fantabulous Karl Freund of Cambrian-AI Research. Unfortunately, in this episode we won't have our good buddy Jim McGregor of Tirias Research, but he is here with us in spirit. So hey, before we get started, please remember to like, share, and comment on this episode. Also subscribe here on YouTube and BuzzSprout, or listen to us on your favorite podcast platform. Opinions and statements by my guests are their own and are for informational purposes only, and don't reflect mine or those of neXt Curve. We're doing this to provide an open forum for discussion and debate on all things semiconductor industry and AI. How are you doing?

Karl Freund:

Pretty good, doing pretty good. I had a good holiday. Ate too much, of course, but I could stand to put on a few pounds, so that's not bad.

Leonard Lee:

Wonderful, wonderful. Yeah, thankfully I'm losing weight. I have to watch myself now that I am no longer deemed young.

Karl Freund:

Tell me about it.

Leonard Lee:

Really, really, really, really depressing, but, uh, yeah, yeah. But yeah. Yeah.

Karl Freund:

Let's jump right in. I was very surprised by NVIDIA's move to acquihire Groq. They didn't acquire the company, but they still spent $20 billion. So think about that. Two and a half times what they spent on Mellanox,

neXt Curve Intro:

Yeah.

Karl Freund:

which had real product and real customers and distribution and everything else. So prices have gone way up. $20 billion just for the IP, which they will license, and of course bringing Jonathan Ross, CEO and founder of Groq, on board.

neXt Curve Intro:

Yeah.

Karl Freund:

I can't figure out why they did this. They already have great, in my estimation fantastic, fantabulous as you put it,

neXt Curve Intro:

Yeah.

Karl Freund:

AI inference processing capabilities with Blackwell, going on to Rubin and Rubin CPX and everything else. Why? What they saw in Groq is beyond me. Groq does have an interesting memory architecture, a very large SRAM-based memory.
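A rough way to see why a large SRAM-based design matters for inference: LLM token generation is often memory-bandwidth-bound, so a simple ceiling on decode throughput is memory bandwidth divided by model size. Here is a back-of-the-envelope sketch; the function and every number in it are illustrative assumptions, not figures from the episode or from any vendor:

```python
# Back-of-the-envelope decode throughput for memory-bandwidth-bound inference.
# Simplifying assumption: each generated token streams every weight through
# the compute units once, so tokens/sec <= bandwidth / model_bytes.

def tokens_per_second(params: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
    """Upper bound on tokens/sec when weight streaming dominates."""
    model_bytes = params * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A 70B-parameter model with 8-bit (1-byte) weights:
hbm_bound = tokens_per_second(70e9, 1.0, 8_000)    # ~8 TB/s HBM (illustrative)
sram_bound = tokens_per_second(70e9, 1.0, 80_000)  # ~80 TB/s aggregate SRAM (illustrative)
print(f"HBM-bound:  ~{hbm_bound:.0f} tok/s")
print(f"SRAM-bound: ~{sram_bound:.0f} tok/s")
```

The 10x bandwidth gap here is purely for illustration; the takeaway is that under this simple model, single-stream decode throughput scales linearly with whatever bandwidth the memory hierarchy can actually deliver, which is why SRAM-heavy inference architectures are interesting.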

Leonard Lee:

It's

Karl Freund:

not a tiered thing like SambaNova, which, by the way, was acquired by Intel.

Leonard Lee:

Yeah.

Karl Freund:

Like fourth time's a charm, I guess. I don't know what's going on. The employees of the company, as of today,

neXt Curve Intro:

uh,

Karl Freund:

the 26th of December, apparently don't get anything; they now own stock options in GroqCloud. So all the Groq hardware and software technology has been licensed to, not acquired by, but licensed to NVIDIA for $20 billion.

neXt Curve Intro:

Yeah.

Karl Freund:

I don't know what Jensen's thinking, but I do know him well enough to know that he's got good reasons, and we'll all find out about it at GTC, if not sooner.

Leonard Lee:

Yeah, it is really odd. But the way I look at it, it's also kind of an admission that the world of inference is different. If anything, this is a move that suggests NVIDIA doesn't have the moat that everyone thinks it has, because this is really weird. This is very much custom silicon, right? It's a custom architecture, and this is very much a departure from what we see in the NVIDIA camp or even the AMD camp. This is a different animal. And it lends credence to the arguments we made, actually more than a year ago now, that the whole custom and ASIC movement, and the systems based on them, would be a different market. And I think when you look at what NVIDIA's done over the last year, we've seen changes in their approaches to networking and to memory, for what was, quote unquote, traditionally the custom systems market, which we predominantly associate with Broadcom and Marvell and the hyperscalers with their custom stuff. Right?

Karl Freund:

And don't forget, they also acquired Enfabrica. Enfabrica makes an ASIC that provides networking protocol translation and very fast access to large banks of memory. You couple that with what they're doing with Blackwell and Rubin CPX, and now with Groq, which is a much smaller but very memory-intensive architecture that scales pretty well. I think we're starting to see breadcrumbs that indicate a shift in strategy, or at least a shift in the implementation of their strategy,

neXt Curve Intro:

Right,

Karl Freund:

from NVIDIA. What they haven't done yet, which is a bit of a surprise to me, is acquire any quantum assets. I just wrote on Forbes earlier this week about quantum computing, laying out the five modalities of quantum being developed now, and pointed to IBM as the leader in quantum systems; they will be demonstrating quantum advantage, and then scalable fault-tolerant computing in 2029. So that's coming, and it's gonna be a massive shift in the computer industry. I don't know what NVIDIA's game is right now. Right now they're acting as a GPU adjacent to quantum computing, providing communication between the GPU and the quantum nodes. Their recent announcements are not enough to make them a significant player in what will become a multi-hundred-billion-dollar market in the next five years.

Leonard Lee:

Yeah, the quantum stuff still seems pretty nascent. I mean, I don't know what you're seeing that's changed your perception about how different things are from, you know, Quantum Day,

neXt Curve Intro:

Mm-hmm.

Leonard Lee:

at GTC 2025, but I don't see anything other than press releases. So, going back to Groq, I think there are two things you can maybe surmise from the move. Number one: diversification, because the landscape apparently, and clearly, is changing. You mentioned Google; in fact, you have a report on that on Forbes, about Google's TPU and Ironwood and how it shifted the narrative and thinking around custom, let's say application-specific, or rather stack-specific, AI supercomputing systems. It's not application-specific, it's stack-specific. It's impacting the narrative, and narrative is such a big deal in this hype cycle right now. I think you're seeing NVIDIA actually having to react to things now, versus just dominating the storyline. And that, I think, is probably gonna carry over into next year. Really interesting.

Karl Freund:

Yeah. What you're seeing are things that a year ago would've been unthinkable.

neXt Curve Intro:

Yeah.

Karl Freund:

NVIDIA is already a leader in networking, and they just bought the next generation of networking with Enfabrica. And now you're seeing them buy what may become the next generation of inference processing, which is not based on GPUs. And yet they're still investing heavily in GPU-based technology. So I think you're right; there's an element of diversification in here. It's expensive diversification, but I suspect, Leonard, that NVIDIA was hearing things from their customers.

neXt Curve Intro:

Hmm.

Karl Freund:

They're very customer-centric. Jensen and his whole staff spend a ton of time with customers. They must have been hearing something that told them they needed to take out an option on Groq or some other kind of ASIC technology.

Leonard Lee:

Yeah. And then there's also the broader diffusion play that NVIDIA has been executing on for the last year. NVLink Fusion is a great example, especially in trying to make inroads into the whole custom and ASIC side of the AI infrastructure story, which you and I talked about last year. It's a different game, a different market, and one that NVIDIA is probably not as core in at the moment. For a lot of the model builders and AI firms that are trying to go to market quickly, the GPU makes perfect sense. The whole CUDA stack, the entire AI factory concept, makes a lot of sense for those types of buyers, or buyer personas if you will. But when you look at hyperscalers that already have large scale-out opportunities, the application-specific, the stack-specific, there's a different value proposition for building your own. And I know there were questions last year about whether it's economically scalable, but apparently it is. At a system level, if you believe some of the reports coming out, they can compete with NVIDIA's systems at a lower price point. And that's a huge value proposition for these large, well, application-specific buyers. I hate using that term because it's too specific, but you know what I'm talking about: these custom system requirements, or custom business requirements, right?

Karl Freund:

Yeah. And you take the Groq acquihire, you look at what's happening at Google, you look at what Amazon's doing with Trainium3, you look at what's happening in China; the industry seems to be taking a shift towards application-specific processors, and more importantly, toward the systems-level architecture that these ASICs are really designed to excel in. So I think we're gonna see a very different 2026 than many people are expecting.

Leonard Lee:

Which we'll get into. Yeah. With your predictions. Right. And then some of the additive things that I'll sprinkle on top.

Karl Freund:

Yeah, sure.

Leonard Lee:

Yeah, which we'll get into with your predictions, and then some of the additive things I'll sprinkle on top. I think scaling is where the economics and the technology walls are going to experience the first, let's say, bumping of the forehead. Now there's obviously a pivot toward scale-up; that's a key area of optimization, and then there's scale-out. I'm still not quite sure about the scale-across stuff, even though that's really in vogue at the moment, these cross-data-center interconnects and so on. There are so many nuances to how you need to engineer your networking technologies to be optimal for that application, and I have real questions about how relevant that's gonna be in the future, especially as model training gets more efficient, because the focus is shifting to better, non-hardware-based approaches to building these models, at the software and model architecture levels. And speaking of Trainium3: I was at AWS re:Invent at the beginning of December; it was my second-to-last event of the year. Yes, they introduced the new Trainium3 instances. They also announced Trainium4, which we can expect at the end of 2026 or, maybe more reasonably, the beginning of 2027. One of the interesting comments Matt Garman made was that they are considering NVLink Fusion for Trainium4. But I wanted to get your take on Trainium and the fact that they didn't even mention Inferentia.

Karl Freund:

I think there's a dichotomy in the industry right now. You have architectures that claim to be designed for inference, like Groq, like Cerebras, but I would point out that both of those architectures were actually designed for training. And CPX is coming up.

Leonard Lee:

Right.

Karl Freund:

CPX, that's different though. That's an extension of a GPU, with more and much cheaper memory, for compute-intensive workloads that don't require the memory bandwidth and latencies of HBM, which training does. So I think the lack of Inferentia discussion is an indication both of the importance of inference and a statement that says they're gonna use their training chip for inference. And that could significantly reduce their CapEx and development costs if they just focus on one architecture, which is singing NVIDIA's song, isn't it? Although now NVIDIA is gonna go off and buy an inference ASIC. But I would point to how Anthropic is going to install a million Google TPUs.

neXt Curve Intro:

Yeah. Yeah. That

Karl Freund:

says a lot to me. That says a lot about what Trainium3 is. They know all about Trainium3; they had it in house before anybody else, their big daddy investor invented it, right? AWS. And yet they're gonna go buy a million Google TPUs.

neXt Curve Intro:

Yeah.

Karl Freund:

Not 10, not 20, not 50, not a hundred, not a thousand: a million. And they made it very clear that's just their first tranche. I still suspect that in five years there will be no Trainium.

Leonard Lee:

Well, you know, they have a million. They did bring Project Rainier online with half a million, and I don't know where they are now; the target is a million accelerators by the end of the year. And that's with Anthropic, right? Anthropic is securing 2 million. Obviously not all at once, but 2 million accelerators, that's a lot.

Karl Freund:

That's a lot. And that's on top of what they already have, which is a lot of GPUs from NVIDIA, as well as Trainium2, which they weren't very happy with, from everything I'm hearing. So I don't know. I wouldn't rule 'em out, but I wouldn't bet the house on it either. I think the leaders next year will be, of course, NVIDIA; I suspect they'll still have in excess of 80% share by the end of the year, which is a significant step down, because Google is gonna take a big chunk of it. AMD's gonna take a chunk, not as big, but the ordering of the industry just changed. It used to be NVIDIA, then AMD, and then everybody else. Now I think it's NVIDIA, Google, and everybody else. So AMD, I think, gets demoted to one of the everybody-else players instead of,

Leonard Lee:

Yeah, I mean, that really does put AMD in a tough spot, because they're still a minor player, a very minor player, and they're getting crowded out of what you might consider the growth segment, if you know what I'm saying.

Karl Freund:

I agree.

Leonard Lee:

Unless they look at enterprise, and no one's got a lock on enterprise.

Karl Freund:

Yeah, I don't know; that could be a lot to think about. But the difference is that Google didn't just build a chip and then a system. Technically they did, but they thought through the system design as something they had to control, they had to invent, and they had to drive massive innovation across the entire stack.

neXt Curve Intro:

Yeah.

Karl Freund:

Whereas AMD is still playing the old game of: here's my GPU, what am I gonna connect it to?

neXt Curve Intro:

Yeah.

Karl Freund:

And they'll get there, but I believe that thinking was too conservative. It's consistent with Lisa Su's strategy, which played very well for them in the CPU world, where they're gonna end up at 50% share.

Leonard Lee:

they're like leaders in, you know,

Karl Freund:

not GPUs, because the GPU is only a small component of an entire data-center-scale system. And this is what Jensen has been telling us for years: it's not the GPUs. It's kind of the old Sun Microsystems thing, the network is the computer. It's the networking.

Leonard Lee:

The problem is, everyone's still fixated on the chip. Everyone talks about the chip, chip, chip. From a trade policy or tech policy perspective, the lens is still focused on the chip, and we graduated from that a long time ago, but the industry still thinks in terms of the chip. And if you look at what's happening with scale-in, like my comment earlier, that's not as much of a factor at a system level in terms of scaling inference. It's the interconnect, the scale-up, still. And so we have co-packaged optics, but still: at Marvell Industry Analyst Day 2025, we had NVIDIA talking about making a transition to optical, while Marvell is still aiming to lend longevity to copper as much as possible, even though they're teeing up co-packaged optics, no doubt. This is where I think you're going to see a lot more of the focus next year; it's what we saw trending this year, and it will probably ramp into next year. And then there's the scale-out for these mega clusters, and then the scale-across, which is still kind of a "let's see."

neXt Curve Intro:

Yeah, let's see.

Leonard Lee:

Yeah, Spectrum-XGS, yeah.

Karl Freund:

Who's driving that?

Leonard Lee:

But yeah, I wish Jim was on, because then we could get his take on NVLink Fusion and Trainium4. The fact that AWS is looking at NVLink Fusion for Trainium is interesting. I actually asked that question during the CEO Q&A, and Matt's response was, well, we're gonna see which is better between UALink and NVLink. So obviously they have some questions themselves about how to approach the next generation of interconnect for their systems.

Karl Freund:

Well, they'd better hurry up and decide, because if they're gonna do NVLink a year from now and they don't know what their interconnect is, that's not a credible statement. You have to know 18 months in advance, before you start laying out gates. The controllers are on the chip; it's not something you attach over PCIe.

Leonard Lee:

Maybe there's stuff that the AI team knows that he doesn't know. It's not always the case that a CEO knows everything.

Karl Freund:

Or he doesn't wanna Osborne his current technology. Okay, if he's going to go that route, how much do you wanna spend on his current technology?

Leonard Lee:

Now, good point.

Karl Freund:

I would argue that these are not things that get Osborned; they're things that get extended.

neXt Curve Intro:

Yeah.

Karl Freund:

You're not gonna rip out tens of billions of dollars of hardware to put in a new interconnect. Your next interconnect will sit next to your current interconnect; that can be a different technology. So I think he just doesn't want to give NVIDIA too much credit. They know what they're gonna do.

Leonard Lee:

Okay. And one thing I want to point out really quick: what made headlines going into Marvell Industry Analyst Day 2025 was the announcement that Marvell would be acquiring Celestial AI, and part of that acquisition would be acquiring that company's Photonic Fabric for scale-up networks. Marvell's always been good at scale-out, scale-in, and now scale-across for their networking products, and now they're looking at optical for scale-up.

Karl Freund:

Yeah, I think there were a lot of startups probably vying for that acquisition, and Celestial obviously won it. I've always been interested in their technology; they've been presenting at the AI Hardware Summit and places like that. It looks like very powerful technology, and it confirms Marvell's leadership position.

Leonard Lee:

Yeah. And one of the things they were touting was end-to-end networking and interconnect. So you scale in with the advanced packaging,

Karl Freund:

Very much what Broadcom has been touting as well, right?

Leonard Lee:

Yeah, yeah. Custom systems, custom ASICs, right? So if you wanted to approach anybody to build a custom infrastructure, literally right on top of your custom stack, Marvell has that portfolio. But my question is, how scalable is that in terms of scaling down? This value proposition works really well for the hyperscalers, who can afford to invest in and build out these custom infrastructures. But how does it then scale down so you can address markets that aren't that massive? You know what I'm saying? Because up there with the hyperscalers you have this concentration risk that everyone, I think, is a little bit concerned about. So that was great stuff; it's always a great event. So hey, let's get into your predictions for 2026. Now, are you ready? Incredibly, your Mac doesn't work, right? My

Karl Freund:

Mac just died, literally this morning, so I can't even pull up my predictions. I'm on an iPad, which is working pretty damn well, by the way; it looks great doing multiple windows and stuff. So you'll have to remind me what I predicted, and you can argue with me.

Leonard Lee:

We're gonna go row by row here, item by item. They're all really good, actually. So, number one: agentic AI moves from demos to staffing digital teams.

Karl Freund:

Yes.

Leonard Lee:

Yeah, yeah,

Karl Freund:

Yeah, that's probably already starting to happen, so that wasn't a hard one to call. Not everything, but most productive work is being channeled into agents, and there are a lot of issues there. Are you legally liable for what your agents do? Because you're gonna be combining different components together to create a solution to a specific problem. So where does responsibility sit for that integration? What's the regulatory environment gonna look like, if there is one? It doesn't seem like the current administration wants to regulate anything, so, well, we won't go there. But clearly, enterprises see agents as the best way to monetize the work they have been doing, and will be doing, in AI. And that's what's gonna drive this.

Leonard Lee:

I know that you and Jim are pretty bullish about this. Based on what I'm seeing, I'm not as optimistic; I think there's gonna be a learning that happens in 2026 in this regard. But these are your predictions, and we could each be right or wrong. One thing that's gonna be cool is that we'll be debating this fully over the course of next year, and it will be fun, so you'll want to tune in and keep track of us all year. Number two: inference dominates AI CapEx and reshapes the hardware mix. Oh my God, I love this one. That

Karl Freund:

is. Again, I took the easy route on some of these. That's obvious, right? It's already happening.

Leonard Lee:

Don't be so sure. The industry is still talking about chips instead of systems and infrastructure, so don't assume.

Karl Freund:

True, true, but they're also talking about services. Inference as a service is how most enterprises will deploy it. That will tear down barriers and make it much easier for companies to monetize the mountains of data they're sitting on, and that will turn into inference, which will drive profit. So another way to look at it is that 2026 is the year that profit becomes king, not technology. Because the reason you're gonna do inference is to make money: inference is a profit center; training is a cost center.

Leonard Lee:

Yeah.

Karl Freund:

Okay.

Leonard Lee:

That's a great way of characterizing it. Absolutely. Profit center, actually, it's more of a revenue center; profit's different. But anyways, we had that discussion last year, so if you're interested, go check it out; we'll put the recommendation in the notes. Number three: domain-specific and smaller models beat general LLMs in the enterprise.

Karl Freund:

Yeah, and this really came from the time I spend with IBM, listening to what their clients are doing. IBM, as a services company now, of course spends a lot of time helping clients understand what AI is and how it can be profitably implemented. That's where I get that thought: inference is, of course, the profit center, and it will drive a lot of enterprise adoption.

Leonard Lee:

I think that was a good one, a really good one. Number four: CSP and hyperscaler ASICs start to bite into NVIDIA, especially Google. Ooh.

Karl Freund:

Yeah, sorry, NVIDIA. But if NVIDIA goes from 97% share, which is where I think they are today, to 80%, that's 80% of a massively growing pile of money.

neXt Curve Intro:

Yeah. So

Karl Freund:

they're gonna be just fine. They will set the standard; they are the standard, and they will continue to set it. However, I am very bullish on where Ironwood sits and what the next generation of TPUs will look like. I think Google has finally cracked the nut of producing an AI system with over 9,216 TPUs attached over fiber-optic cables and switches, which are provided by Lumentum, a stock that has been just crazy this year. I think Google is ready to say: okay, we're gonna let this beast out of the walled garden that is Google Cloud and make it available more broadly, at least through other cloud service providers, and maybe in some cases actually set up TPU clouds within other companies, like Meta. The market didn't like that when they realized: who's gonna set these up? Who's gonna bend the metal? That's a low-profit business.

neXt Curve Intro:

It's

Karl Freund:

not Google; it's Broadcom. And that's why Broadcom predicted a decline in operating margins in the future: because they're getting into that system integration business,

neXt Curve Intro:

Yeah. Yeah. Just not

Karl Freund:

not just chip development. But the amount of money involved is staggering. Less than a year ago, I was fairly negative on the CSPs' ASIC ambitions. I still am on some of them; I'm, like,

neXt Curve Intro:

not

Karl Freund:

impressed with what Amazon's accomplished. But I look at what Google's accomplished, and I'm like, okay, either this is the exception, or it's the exception that proves the rule.

neXt Curve Intro:

Yeah,

Karl Freund:

And it's probably the latter. It's probably the exception that proves the rule, which implies that the other CSP developers and users will become the rule. And now we see NVIDIA acquiring Groq, and maybe that dots the I and crosses the T. There's no I in "exceptions," though.

Leonard Lee:

No.

Karl Freund:

But anyway,

Leonard Lee:

it's getting funky.

Karl Freund:

it's an interesting concept.

Leonard Lee:

It's gonna be a weird year next year, I can say that. Okay, number five: data, especially synthetic data, becomes as important as model architecture in AI.

Karl Freund:

Today we're running out of data; the internet has been pretty much depleted as a source of training data. So there are only two other ways you can go. You can go to proprietary data, which is what enterprises will do, or you can synthesize the data, which is a lot of what's happening right now in autonomous driving: they're synthesizing data to make the vehicles smarter and safer.

Leonard Lee:

I'm thinking, for a large language model, it's data quality more than synthetic data; I'll just make that note, because there's been a lot of focus on data quality. I think that's the primary reason why Meta brought Alexandr Wang on from Scale AI, right? It's all about data, better data, not necessarily synthetic. Okay, number six: we will see the first AI "world models" appear. I guess it depends on what we mean by world models, but you put quotes

Karl Freund:

there. World models are trained inherently on real-world data.

neXt Curve Intro:

Yeah.

Karl Freund:

Okay, it could be video, it could be audio, but they're trained on the physical world around us, not on the synthetic world, which is language. There are a lot of folks staking their futures on this, and a lot of VC money being applied to world models. They're not gonna take over the world; we've invested roughly a trillion dollars already, in total, on language models, and that's not gonna go away. But there are new problems that language models are not really good at solving, and that will affect the entire stack. That's what these startups are all about.

neXt Curve Intro:

Mm-hmm.

Leonard Lee:

Yeah. Number seven: identity theft and AI deepfakes will scare the bejesus out of a lot of us. That's not exactly what you put here; you put a bunch of special characters, but yeah.

Karl Freund:

Yeah, I think the launching point for that is the midterm elections, and I think we will see some pretty scary stuff being generated to influence people's votes. That will scare the heck out of us. It's the old joke that isn't a joke, where your boss calls and asks you to do something, and it's not your boss, it's an AI, and determining the legitimacy of that voice is difficult. So now what do you do? Are you gonna follow the boss's directions? Are you gonna be insubordinate? The implications here are pretty astounding. I presented at an IEEE conference on computing technology last month, and I talked about agentic AI being applied in the background as proactive AI, not reactive. Everything we do today with AI is reactive: you give it a prompt, it gives you the answer. But if you have AI just running in the background, absorbing data from all kinds of sensors that are becoming available, then you get into all kinds of questions about, well, what is the governance around those? Who is responsible?

neXt Curve Intro:

Yeah.

Karl Freund:

It is kind of like the German government's concerns about autonomous driving. Take that to the power of 10, and that's what you're gonna have. And I think we'll start to see it around the midterm elections.

Leonard Lee:

Yeah, I think that's pretty reasonable to expect. And cybersecurity overall, I think, is going to be a hot field because of threats, and the attacks are going to be tremendous. We're already hearing a lot about AI, or gen AI-augmented, attacks out there. So, number nine: AI hardware roadmaps double down on memory and interconnect, not just FLOPS. Yeah, we've been talking about this through the course of the year, but I agree with you. I think this is a good one.

Karl Freund:

We're not compute-bound, but that's just 'cause the compute can't get access to enough data. So it's really the network and the memory that are becoming the issue. And we're starting to see some interesting things. There's a startup that just kind of changed directions, and they're focused on logarithmic compute, so you don't do any more multiplies, you just do adds. Well, that's gonna have a massive impact. NVIDIA has also talked about logarithmic math. I don't think that's something that's gonna materialize next year, but the problem will steer us more towards these very large memory systems, like, uh, CPX from NVIDIA. My Forbes article about this goes into various technologies that will materialize in 2026 that enable huge amounts of memory. SambaNova is one of them. SambaNova has a tiered approach where they have all three tiers: they use SRAM, they use DRAM, and they use fast storage. Yeah, I think HBM is in there too. Anyway, the point is

Leonard Lee:

everything,

Karl Freund:

everything, you're gonna throw the kitchen sink at this problem,

Leonard Lee:

you know,

Karl Freund:

break the memory wall.
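[Editor's note] The log-domain trick Karl alludes to is simple to sketch: if you store a value as its sign plus the log of its magnitude, a multiply becomes an add of exponents. A minimal, purely illustrative Python sketch of that idea, not any vendor's actual implementation:

```python
import math

def to_log(x):
    """Represent a nonzero value as (sign, log2 of its magnitude)."""
    return math.copysign(1.0, x), math.log2(abs(x))

def log_mul(a, b):
    """Multiply two values using only addition in the log domain."""
    (sa, la), (sb, lb) = to_log(a), to_log(b)
    # Multiplying magnitudes is just adding their logarithms,
    # then converting back out of the log domain.
    return sa * sb * 2.0 ** (la + lb)

print(log_mul(3.0, 4.0))   # ~12.0 (up to float rounding)
print(log_mul(-2.0, 8.0))  # -16.0
```

The appeal for hardware is that adders are far cheaper in area and power than multipliers; the cost is that addition of two log-domain numbers becomes the hard operation, which real logarithmic number systems approximate.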

Leonard Lee:

I think next year, and we're already kind of seeing it, is going to be the year of heterogeneous memory.

Karl Freund:

Yes.

Leonard Lee:

Exactly. Yeah. Because you look at what Marvell introduced last year with custom HBM, but this year they're focusing more on SRAM, so they introduced custom SRAM. And then we saw SOCAMM really kind of make its debut at GTC. Was it GTC? Yeah, I think it was 2025. So memory, like you said, memory. And a lot of this is driven by, in my observation, the pursuit of more efficient model

Karl Freund:

infrastructure, support of long context

Leonard Lee:

model efficiency, right? Yeah. So

Karl Freund:

You take a look at what D-Link has done? That's really interesting. They built an accelerator, and they found they couldn't feed it. It was really fast; they couldn't feed it fast enough, even with HBM. They're not a memory company; they're an AI company. Everybody should look up D-Link. It's interesting. They've got a 3D memory stack that is, they say, four times faster than HBM4. That's crazy. Four times faster than four, so it's easy for me to remember. It's fascinating technology, and I think it's gonna have a huge impact. In fact, one wonders who will acquire them, because they're really smart scientists and engineers.

Leonard Lee:

And finally, number 10, AI productivity gains show up unevenly widening the AI gap between firms.

Karl Freund:

So the people that are just simply trying to use ChatGPT to make money, that's not gonna work.

neXt Curve Intro:

Yeah, you're gonna

Karl Freund:

have to. It's the ones that are bold enough to apply agentic AI to automate that workflow, where you still have a human in the loop right now, 'cause you're just asking questions and getting inundated with the answers. Yeah. And only AI can handle the answers that are coming back from AI. And so that tells me, okay, the companies that drive agentic AI into their workflows, they're gonna be the ones that realize profit. The ones who stay in the reactive world of a ChatGPT are gonna struggle to generate profit from their investments.

Leonard Lee:

Interesting. Yeah, and I think that's a very good final prediction. So, hey everyone, he's published this on Forbes and on his Substack. I'm reading off of his Substack, so make sure to check it out and subscribe. And also make sure to contact him if you're interested in inquiring further about his predictions. So, my predictions, really quickly, which kind of overlap with yours: inference will be tipping the balance of the AI supercomputing landscape and narrative, so I think we agree on that. Agentic AI will have its moment of reckoning, so we kind of maybe differ in our thinking here. But monetization will still be a challenge for AI generally, spurring the move toward advertising, so you're going to see a lot of that happening next year. Edge and enterprise will benefit from good and improved economics for a broader range of AI, not just gen AI, all the ML stuff, right? The economics will improve, and we'll start to see AI move closer to the sensor. So, a good thing, I think, especially for the guys that are doing edge AI stuff. From scale-in to scale-out, thermals and power will be a top topic, and a key to it is quality versus capacity. In the last three years there's been predominantly a conversation around capacity, how much and how fast, and not as much of a conversation on quality. And as you start to look at these data center-scale discussions, you're finding out that quality is uptime. You know, the nines, the six nines, the six sigmas, all the quality metrics, the performance and execution metrics for a full life cycle, they're gonna be really, really important. They're not so well known today; we don't talk about 'em, but they're talking about them at OCP, Hot Chips, et cetera, and I think it's going to come to the fore. We will see a diversity of application-specific systems for inference and training based on TPUs and other proprietary silicon start to sprout.
AI supercomputing scaling will hit economic walls as well as tech walls. I think the industry is compressing too much research-level stuff into commercialization, and there's a cost to that. We will see circular investment cycles get more creative and expansive, raising antitrust concerns abroad.

Karl Freund:

That's a good list. I wouldn't argue with any of those. Really? Not really. Oh, is that what we want? Come on. I think what we should probably do is have an argument, a debate, about quantum, 'cause neither of us mentioned quantum, and that's because we're really talking about predictions that matter, and right now quantum's still research. It's absolutely research. But I've spent enough time with IBM that I've become a believer. My latest articles about IBM Quantum are really about them becoming the leader. There are others that are doing quite well, like Quantinuum, and, uh...

Leonard Lee:

IonQ.

Karl Freund:

IonQ, they're doing really good stuff with a completely different technology. So I'd love to have a debate with you about that and see where you think it's gonna be landing, and more importantly for quantum, when it's gonna be landing. A friend of mine asked Gemini 3 what the quantum world in 2033 is gonna look like for a 70-year-old man.

neXt Curve Intro:

He

Karl Freund:

can guess the age of my friend. He wants to know what it's gonna feel like when he turns 70 in 2033. And the response was mind-boggling. I asked him this morning if I could get his permission to post it on my Substack, because it's really thought-provoking. The question I have is: okay, all this is possible, but how could it be afforded? I don't think it's an affordable vision of the future, but if it were to materialize, it's life-changing. So anyway, we'll save that for next month. Okay.

Leonard Lee:

Yeah, definitely. We have to have Jim there, otherwise he'll be very upset with us.

Karl Freund:

He's enjoying some well-deserved time off in Arizona. Oh my God, that guy works way too hard.

Leonard Lee:

way too hard. So I hate

Karl Freund:

to keep up with that youngster.

Leonard Lee:

Anyway. Hey, this was great, Karl. We meant to keep it short, but it went long. I think that's okay; this was just really great. Alright, we're spot on, or we'll be spot on, no doubt. Right?

Karl Freund:

We'll find out. We'll find, yeah.

Leonard Lee:

Yeah. And make sure to reach out to Karl Freund at Cambrian AI Research at www.cambrian-ai.com. He's also on Substack, as we mentioned before, and Forbes. And also reach out to our good friend Jim McGregor and Tirias Research, who are here in spirit, at www.tiriasresearch.com. And please subscribe to our podcast, which will be featured on the neXt Curve YouTube channel. Check out the audio version on BuzzSprout or find us on your favorite podcast platform. Also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. And we wish you all the best in the new year, and see you in 2026, right?

Karl Freund:

Here comes 2026, whether you're ready or not.

Leonard Lee:

Yep.
