The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego founded by Leonard Lee and focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
Silicon Futures for November 2025 (with Jim McGregor and Karl Freund)
Silicon Futures is a neXt Curve reThink Podcast series focused on AI and semiconductor tech and the industry topics that matter.
In this mega catch-up recap episode, Leonard, Karl and Jim talk about some of the top headlines from September through November of 2025.
00:31 Industry Updates and Conferences
03:14 Quantum Computing Insights
07:08 AI and Data Center Evolution
10:14 Google's TPU Strategy
19:39 Intel's Manufacturing Advances
22:13 High NA Integration in Manufacturing
22:57 Intel's Strategic Decisions and 6G
24:05 US Manufacturing Capacity and Challenges
26:58 Nvidia's AI Roadmap and Market Position
29:26 Nvidia's Investment in Synopsys
31:16 Nokia and Nvidia's 6G Partnership
35:51 Google's AI Advancements with TPU
42:51 Qualcomm's AI Infrastructure
43:46 Micron's Memory Innovations
Hit Leonard, Karl, and Jim up on LinkedIn and take part in their industry and tech insights.
Check out Jim and his research at Tirias Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com. Check out Karl's Substack at: https://substack.com/@karlfreund429026
Please subscribe to our podcast, which is featured on the neXt Curve YouTube channel. Check out the audio version on BuzzSprout or find us on your favorite podcast platform.
Also, subscribe to the neXt Curve research portal at www.next-curve.com and our Substack (https://substack.com/@nextcurve) for the tech and industry insights that matter.
NOTE: The transcript is AI-generated and will contain errors.
neXt Curve.
Leonard Lee:Welcome everyone to this neXt Curve reThink Podcast episode, where we break down the latest tech and industry events and happenings in the world of semiconductors and AI into the insights that matter. I'm Leonard Lee, Executive Analyst at neXt Curve. And in this Silicon Futures episode, we are going to be breaking down a number of months, because we've been so busy this quarter, just attending all kinds of conferences, engaging with the industry to bring the insights that we're going to be channeling into this episode. I'm joined by the illustrious Karl Freund of Cambrian AI Research, and also Jim McGregor of the infamous and the famous Tirias Research. Hey guys, how's it going? It's been a long time. Good. How you doing? Oh, ready for a vacation.
Jim McGregor:I'm still freaking out that we're right around the corner from 2026. I'm like, oh my God.
Karl Freund:Oh, I already published my predictions the other day on Forbes. Oh, I'm almost done. I've got two more papers to crank out this year, and...
Leonard Lee:Wow. Wow. I only have one more event, which is Marvell next week. How about you, Jim? You're not going, but are you gonna be there, Karl, or are you gonna skip out? No. Oh, okay. Damien McGregor from Tirias Research will be there, okay. All right. Wonderful. But before we get started, please remember to like, share, react, and comment on this episode. And also subscribe here on YouTube and on BuzzSprout to listen to us on your favorite podcast platform. And remember, opinions and statements made by these two gentlemen right there are correct and are their own. They don't reflect... You sound like Pete,
Karl Freund:except on South Park. Remember to lie.
Jim McGregor:I'm sorry, I had to finally get that in there.
Leonard Lee:Okay. Just had to put that out there. I do invite you guys on, and Lord knows what you're gonna say, but it's usually really good stuff, and I think the audience is gonna enjoy this update. So hey, why don't we get started? So much has happened in the last three months. Yeah. And I think the last time we were all together... yeah, the last time all three of us were together was at the Snapdragon Summit in Maui, right?
Karl Freund:Yeah.
Leonard Lee:That was September, wasn't it?
Karl Freund:September.
Leonard Lee:Oh my God.
Karl Freund:Ah, I miss the beach. I miss the beach.
Leonard Lee:Yeah. Yeah. So, so much has happened. I think one of the biggest news items is that Mario Morales, our good buddy Mario Morales, who was the GM of the semiconductor group at IDC, is now at AMD. He's the Vice President of Strategy and Partnerships at AMD. Yeah, I think that's the biggest news, don't you think?
Jim McGregor:That, that is big news, but I wouldn't say that's the biggest news. Let's face it, there is big news coming out, especially around AI and the data center, every single day. It is ridiculous. You think how much
Karl Freund:the competitive landscape has changed just in the last three months, four months. Just amazing.
Jim McGregor:Oh, it is amazing. There are new partnerships, new acquisitions, new silicon investments, new silicon, every time you turn around. I was amazed just being at Supercomputing 25, and the fact that everyone, from the minute I walked in, was talking about quantum. I'm like, guys, yeah, you're not gonna even be able to implement this for five years. But they were all talking
Karl Freund:about Quantum. Yeah.
Karl Freund:I'm gonna change my company name to Cambrian Quantum AI.
Leonard Lee:Quantum. Yeah, I think you have to.
Jim McGregor:Oh, geez. Yeah. Everything's quantum. No, you gotta be neuromorphic. You gotta be one step ahead. Oh yeah. I don't
Karl Freund:know if I want to be that far out. Neuromorphic is not gonna happen in my career time. Maybe in my lifetime. I don't know.
Leonard Lee:Hey, hey, hey. Don't talk like that. Yeah, yeah. It's interesting in that I think there's only one computing technology that's even more challenging to understand than AI, and that's quantum, right? You literally have to have a physics PhD to even begin to crack the surface on understanding what that technology is. But I think there are already a ton of misconceptions. I hear all kinds of weird stuff from the media. That's good for us, because that's plenty of opportunities for us. We
Karl Freund:can help 'em sort it out. Yeah,
Leonard Lee:exactly. Yeah. And the most common misconception, I think, is that quantum computing is gonna eliminate all previous generations of computing, and that's absolutely incorrect.
Jim McGregor:No, it's an accelerator technology, and I keep telling people, think of what the math coprocessor did initially for traditional computing. This is that on steroids. Yeah. What I
Karl Freund:tell... that's good, that's good. What I tell people is a little different. I tell 'em quantum is designed to solve only problems we cannot currently solve. Right? If you could solve it today, you would. And if you can't, hopefully quantum will. Yeah. That kind of positions it as an augmentation technology, not a replacement technology.
Jim McGregor:Yeah, but it is interesting though. Quantum is still a ways off. Everyone agrees on that. Getting to a point where quantum is really useful in terms of error correction, error mitigation, as well as just stable qubits and everything else, right? We're looking at being able to scale up to something that's reasonable, which everyone seems to think a million qubits is kind of the benchmark. That's kind
Karl Freund:of the target. Yeah.
Jim McGregor:We're looking at 2030 to 2032, however. What's amazing about it, especially at SC25, is everyone's planning for it. This is the first time, I think, in the history of our industry that companies are planning ahead. Yes, everyone was talking there about liquid cooling. They were talking about 800-volt DC power and 1,500-volt DC power and optical networking, especially co-packaged optics. But they're even thinking beyond that, saying, listen, it's not just this. We have to continue densifying compute, we need to continue to improve AI and everything else, but we also have to know that we're gonna have this other technology right down the road that has to be part of that data center. Yeah, so the whole data center community is really thinking ahead for the first time, rather than thinking, how do we implement these older technologies?
Leonard Lee:Yeah, that's interesting. And I think that's what a lot of these conferences have become. I just got back from AWS re:Invent, and obviously a lot of generative AI as well as agentic AI. But one of the concerns that I have is that the technology investments are happening so rapidly and so massively. Can organizations digest the velocity of technologies and capabilities that would need to be adopted, right, in order to justify a lot of the investments that are going into data center technologies? And I think that perspective is becoming clearly lost. Well,
Jim McGregor:and that's a key point that the IBM CEO made this week. His argument was, and it's not just his, several people have made this argument, especially in the investment community, that this level of investment just isn't going to have the payback in the time, because you gotta realize these systems have a very short lifespan, and it keeps getting shorter with new technology coming out every year. Mm-hmm. So are we going to be able to get that investment back in that short timeframe? We've got just tons of capacity going in, in terms of these mega data centers. I don't blame them. It's a good question, because our industry has a history of overshooting. Yeah.
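A quick back-of-the-envelope sketch of the payback concern raised here: if a cluster has to recover its cost within its service life, the revenue it must generate each year scales inversely with that life. All figures below are hypothetical, purely to illustrate the arithmetic, not numbers from IBM, AWS, or any vendor:

```python
# Hypothetical payback math for an AI build-out. The capex and margin
# are made-up values; only the scaling relationship is the point.
def required_annual_revenue(capex, service_life_years, margin=0.5):
    """Annual revenue needed to recover `capex` within the service life,
    assuming `margin` of that revenue is available to repay it."""
    return capex / (service_life_years * margin)

capex = 1_000_000_000  # a hypothetical $1B build-out
for life_years in (6, 3, 2):
    needed = required_annual_revenue(capex, life_years)
    print(f"{life_years}-year life -> ${needed / 1e6:,.0f}M/year")
```

Halving the assumed service life doubles the revenue the same hardware has to earn per year, which is the crux of the skeptics' argument.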
Karl Freund:It's a good question. But I think Arvind may be missing a point here, which is that AI is going to become pervasive. Mm-hmm. Which means every application you use will have AI in it. So there's a GPU or an ASIC somewhere along the line that helped create that model and is helping to do inference on that model in order to provide that end-user value. So it's not clear to me that we're dramatically overshooting that kind of capacity. If we were just talking about chatbots, Arvind would be right. But I don't think he's talking just about chatbots. Yeah, and I think that the pervasiveness of AI will shock many people. Excuse my little eye problem here. I had eye surgery this week. Really? I can't recommend it strongly enough. If you're my age and don't think you have cataracts, go see your ophthalmologist. You'll find out.
Jim McGregor:did you get the bionic version? Can you see? For two miles now
Karl Freund:I can see for two miles. I can do digital readouts on this lens. Yeah, I carry a little battery here.
Leonard Lee:Yeah, a little... oh, and a puck for the compute. A puck, yeah. Meta should call you.
neXt Curve:Yeah.
Leonard Lee:Yeah, yeah. But the conversation has changed pretty dramatically on a number of fronts. Obviously there's now this whole AI circular investment concern that surfaced. That's become pretty... yeah. I mean, think about it, three months ago, or it's almost four months ago, when we last jumped on the podcast together, that was not the temperature in the room, right? We weren't really talking about circular investments, or at least that wasn't in the foreground. Now it's the elephant in the room. Right. Well,
Karl Freund:an interesting side effect has materialized just in the last few weeks, as Google has changed their go-to-market strategy for TPUs, and there's no question they have. TPU v7 will be in all the clouds, all the neoclouds that have not taken money from Nvidia, right? So don't expect it in CoreWeave, but I do expect it in others. In fact...
Jim McGregor:are going to be selling the TPU to them.
Karl Freund:To them, right, but they'll also be setting it up in clouds. Jim and I both saw the Cirrascale booth at Supercomputing two weeks ago. It was funny. You walked up and it says, Cirrascale, Google Cloud.
neXt Curve:Mm-hmm. So
Karl Freund:I asked the CEO, what's up with Google Cloud? Do you know what his response was? I can't comment on that. Hmm. You really can't comment? It's all over your booth. No, I can't comment on that. So, my conclusion... he said nothing; he was totally upright. I like the guy a lot. He said, I can't, I really can't talk about that. That means there's something there, and it's on the booth, which means that probably next week, maybe at the AI Summit in New York, there'll be many announcements. Probably, I'm almost certain, there'll be one announcement, and that is that Google's going to be reselling TPUs to other cloud service providers. And that's a big shift. It is a huge shift in the global addressable market for Google and for Broadcom and Lumentum, and there are other suppliers. But it's a big shift in the competitive landscape. My phone's ringing off the hook now, and the investors are all asking me, how bad is this gonna hurt Nvidia? Mm-hmm. My response is: it's not gonna hurt their top line. It may hurt their bottom line, because we've already seen rumors that they've had to offer some steep discounts, because the alternative is customers will say, look, I can save 30%.
neXt Curve:Mm-hmm. Mm-hmm. With,
Karl Freund:With... my TCO will drop by 30 to 40% compared to GB200 and GB300, respectively, if you don't gimme a good discount. And it's gonna impact their margins, I suspect. But I don't see it... lemme put it this way: Nvidia will stay sold out all next year.
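The discount argument here can be sketched numerically: if a rival part lowers *total* TCO by some percentage, and the accelerator is only a fraction of TCO (power, cooling, and networking make up the rest), the price cut needed on the accelerator alone to match is proportionally larger. The 60% hardware share below is a made-up assumption, not a real cost breakdown:

```python
# Illustrative only: how big a hardware discount matches a given
# total-TCO saving, when hardware is a given share of total TCO.
def discount_to_match(tco_saving, hw_share):
    """Fractional hardware price cut that lowers total TCO by
    `tco_saving` when hardware is `hw_share` of total TCO."""
    return tco_saving / hw_share

# A 30% total-TCO gap with hardware at 60% of TCO implies a 50% cut.
print(discount_to_match(0.30, 0.60))  # -> 0.5
```

This is why a 30-40% TCO gap can translate into pressure for even steeper discounts on the chip itself.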
neXt Curve:Mm-hmm. Mm-hmm.
Karl Freund:Okay. It's not gonna change their total revenue. Yeah.
Jim McGregor:Well, let's make it clear. Google hasn't announced that they're selling TPUs yet, but no, we believe that they will. And quite honestly, I had that question earlier this week, and I said I wouldn't put it past them. They have sold TPU IP, especially for embedded applications. NXP's been actually using that for years. Matter of fact, they just made an NPU that you can license, that Synaptics includes in their SL2600. I wouldn't be surprised if we see TPU technology in a broader perspective. And it kind of goes along those lines. And obviously NVIDIA's not standing still. We're gonna see an inference, or decode-type, processor with Rubin CPX, trying to go after that same segment that the TPU really addresses.
Leonard Lee:Yeah. But if we look at it also from a cloud, neocloud perspective, what we are seeing is a lot of the hyperscalers offloading, because you don't wanna hold the CapEx. As you mentioned earlier, these assets depreciate very quickly. Even the data centers. This whole argument that the data center has more longevity, I think that's a brittle argument, because the data centers themselves are becoming highly optimized. The technologies that are going in there for power and cooling are changing constantly.
neXt Curve:Mm-hmm.
Leonard Lee:The number of times you have to... the frequency of retrofit, generation over generation, I think is going to be persistent, because one of my takeaways out of OCP is that there is no real standard yet. So if
Jim McGregor:you look at, well, especially the hyperscalers, the hyperscalers all have their own rack designs, or they all do their own level of customization, even if they're using a basic OCP solution. And let's face it, the hyperscalers underinvested in the general-purpose processors out there, because they all thought that their own silicon was going to be the silicon of choice.
Leonard Lee:Right? And so that makes it really interesting for the neoclouds that have a diversifying
Jim McGregor:portfolio, right? And quick time to first token. They're putting capacity in place a lot quicker. Instead of building these mega data centers, they're adopting this modular approach that companies like Flex and Vertiv and Schneider and all these other guys offer, from the power to the IT pods, to be able to put capacity in place very, very quickly, and not having to have a customized or specialized data center for it. Yeah.
Karl Freund:It's interesting you mention Vertiv. I don't know about you, Jim, but I was blown away by Vertiv's booth at Supercomputing.
Jim McGregor:Yeah,
Karl Freund:In fact, I don't recall them ever being at Supercomputing before, although I did miss last year. You walk through this hallway, and it's all of these tubes carrying hot and cold water back and forth, and they have a beautiful 3D plexiglass model. Oh, I love that. A 3D model, wasn't it? I captured it on video. Haven't figured out what to do with it yet, but Vertiv is definitely on fire. The other thing is, everybody from Schneider to Vertiv to Mitsubishi, wherever you go, they're showing big honking mechanical parts.
neXt Curve:Mm-hmm.
Karl Freund:and pumps that you wouldn't expect to see at Supercomputing. But that's where a lot of the action's gone. I would put a little footnote on your comments, Leonard, about the longevity. It's interesting that Nvidia saw that coming, and they're gonna stay within the same rack, at roughly the same power and cooling requirements, for three generations, right? Mm-hmm. Blackwell, Blackwell Ultra, and Rubin. It's not until Rubin Ultra that they're gonna have to do a change. So while they are changing rapidly, it's about one third the pace of the silicon.
Leonard Lee:Right, but it's also not standardized. It is, but silicon, yeah, is definitely moving a lot faster. Maybe at a system level, slightly slower, but it's still an accelerated pace, right? It's not like traditional data center networking. But that's where I think that whole argument that somehow these systems have more longevity falls flat on its face, 'cause useful life is not just about how long you decide to have it in service. It's about profitable service. Right, exactly. And that is the thing that I think right now is a point of confusion. But I do want to note one of the things that Matt Garman mentioned in his keynote, actually it was during the Q&A, because the topic of depreciation and useful life came up: AWS has proactively reduced the useful life for the AI infrastructure, right? So whether it's the chips or the systems or networking, they brought it in, not knowing, as he qualified, not knowing what the economics are gonna be. But when you look at the evidence, it's going to be much more accelerated than that. You're probably looking at two or three years, not six years, or, as some are even suggesting, 10 years. I think that's just nonsensical.
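The useful-life point has a simple accounting consequence: under straight-line depreciation, shortening the assumed life from six years to two triples the annual expense hitting the income statement. The cost below is a hypothetical figure, not AWS's actual accounting:

```python
# Straight-line depreciation sketch; the server cost is hypothetical
# and chosen only to make the per-year numbers easy to read.
def annual_depreciation(cost, useful_life_years):
    """Annual straight-line depreciation expense."""
    return cost / useful_life_years

cost = 120_000  # a hypothetical server cost
for years in (6, 3, 2):
    print(f"{years}-year life -> ${annual_depreciation(cost, years):,.0f}/year")
```

The hardware and its revenue are unchanged; only the assumed life moves, which is why the useful-life debate matters so much to reported cloud economics.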
Jim McGregor:Well, and I'm not even sure the three generations is going to be feasible, quite honestly, because it's not just the accelerator and the processor you have to worry about. From a power perspective, you're adding more memory, you're adding higher-speed memory, you're adding higher-speed networking and everything else. Yeah, if you plan for the 800 volts and you already have the cooling, but it's still gonna be a challenge. Matter of fact, Nvidia says they're gonna stick with copper as long as they possibly can within the rack, but they've already had to move from top-of-rack switches to the middle of the rack, because you can't span a whole rack anymore with copper at the speeds that they're running.
neXt Curve:Mm.
Jim McGregor:And so I'm kind of with the networking guys that we're gonna see that shift from copper to co-packaged optics a lot quicker than I think some of the naysayers are saying out there.
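The copper-reach point can be illustrated with a toy insertion-loss model: cable attenuation grows roughly with the square root of frequency (skin effect) plus a term linear in frequency (dielectric loss), so raising the per-lane signaling rate sharply shortens the distance a passive cable can cover for a fixed loss budget. The coefficients and loss budget below are illustrative placeholders, not taken from any real cable specification:

```python
import math

# Toy model of passive copper reach vs. signaling rate. The skin-effect
# and dielectric coefficients are illustrative, not real cable specs.
def reach_meters(nyquist_ghz, budget_db=30.0, skin=2.0, dielectric=0.5):
    """Reach for a fixed loss budget, with attenuation per meter
    modeled as skin*sqrt(f) + dielectric*f (dB/m at f GHz)."""
    loss_per_m = skin * math.sqrt(nyquist_ghz) + dielectric * nyquist_ghz
    return budget_db / loss_per_m

for ghz in (13.3, 26.6, 53.1):  # roughly 50G/100G/200G PAM4 lanes
    print(f"{ghz} GHz Nyquist -> ~{reach_meters(ghz):.1f} m")
```

With these made-up constants the reach falls from a couple of meters toward well under one, which is the shape of the problem behind moving switches to mid-rack and, eventually, toward co-packaged optics.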
Karl Freund:Absolutely right, Jim. Google's TPU only uses copper in connecting the four-by-four-by-four cube.
neXt Curve:Mm-hmm.
Karl Freund:Once you go outside of that 64-way construct, it's all optics. And so you have companies like Lumentum benefiting from that.
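For reference, the "cube" described here is a 3D torus of side four, which is why the copper domain is 64 chips: each chip links to six neighbors over short electrical connections, and traffic leaving the cube goes optical. A small sketch of that topology arithmetic (the six-neighbor torus layout follows public TPU v4 descriptions; treat the numbers as illustrative):

```python
# Chips and intra-cube links in an n x n x n 3D torus, like the TPU
# "cube" (n = 4). Illustrative topology math, not vendor data.
def torus_stats(n):
    chips = n ** 3
    links = chips * 6 // 2  # 6 neighbors per chip; each link counted once
    return chips, links

print(torus_stats(4))  # -> (64, 192): copper inside the cube, optics beyond
```

So a single cube already carries 192 short electrical links; everything between cubes is where the optical component vendors come in.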
Jim McGregor:Coherent, yeah. Companies like Coherent, yeah.
Karl Freund:I guess you saw that... oh, I forgot the name of it. Celestial AI, the optical platforms... whoever just acquired them. And I don't know if they were shipping product yet. I guess
Leonard Lee:Marvell, right? Tomorrow... what? Yeah, Marvell bought them. Next week we're gonna hear a boatload about that. Are you sure you don't wanna show up? I have a funny feeling Jim's gonna pop in.
Jim McGregor:No, I'm not traveling between the holidays. I'm home for two months. Are you...
Karl Freund:is it in Scottsdale again?
Jim McGregor:No, it's in the Bay Area. Bay Area. Oh,
Leonard Lee:okay. Oh, yeah, yeah. So, hey, you know what? We're talking about AI infrastructure quite a bit, but I want to flip our discussion toward manufacturing really quickly. And hold on, I have to change my
Jim McGregor:hats.
Leonard Lee:No, just kidding. Go ahead. Your hair. Let's talk about Intel really quickly. So there's some news that came out of Intel: 18A. We have Panther Lake, that's, mm-hmm, looks like we're gonna start to see some action at CES 2025... or 26, which is just a couple of weeks, or three weeks, away.
Jim McGregor:I think he's suffering from dementia. Should we check him out for that? He doesn't know what year it is. No, just kidding. Yeah, I don't...
neXt Curve:I don't
Jim McGregor:Any thoughts? No, but no, you're right. We're gonna see... obviously, we've already had the Intel Tech Tour, which was Intel's deep dive into Panther Lake and 18A and the future of 14A. It's gonna be interesting, obviously, and I've gotten the fab tour of the new Fab 52, which is gonna be doing that manufacturing down in Chandler. Fab 62 is already built, but it's just an empty shell at this point in time, so they do have the capacity to expand. Matter of fact, they still have capacity to expand in Fab 52. You know, reaching that point, both with their server products and with their PC products, for 18A manufacturing is huge. Now they have the most advanced manufacturing process in the world, once again. Mm-hmm. Unfortunately, they didn't capture, or haven't captured that we know of at this point in time, any external customers for 18A. No, but you have to remember that Intel Foundry is a different beast now. Yeah. So instead of going from process node to process node, these process nodes are gonna have a 10-year or longer lifecycle. Mm. So they missed that design point for a lot of the key products out there that wanted 18A. They missed that by not getting their PDK locked down, stuff like that. Yeah, yeah. But as some of those other products are moving down their line, as everyone else on the bleeding edge is going to 14A, some of those older products are gonna need that 18A. So they'll get some of that, and they've got significant interest in 14A. So we'll see how that pans out. Yeah. And 14A is gonna be on the same product lines and process lines that they have for 18A.
Leonard Lee:So did they make any comment about High NA? Are they gonna make that transition, or...
Jim McGregor:Right now, High NA at this point is still optional. They have indicated that just using the High NA equipment, they've seen significant benefits, uh-huh. So there may be High NA kind of integrated, but it's not what I would call a hardcore switch that they've put in that says, we're going from EUV to High NA starting with 14A. Okay. I think they're gonna start just integrating it into the process slowly. But they already have several systems, yeah, the first systems from ASML, and they have been using them, and they've seen significant benefits, cost benefits actually, from using High NA. Yeah.
Leonard Lee:Oh, okay. Cool. Yeah. Karl, any reactions? No, I
Karl Freund:just let Jim handle all the manufacturing.
Leonard Lee:Just happy that 18A looks like it's coming along. Yeah, I think that's pretty exciting. The other thing I thought was pretty cool: the NEX group. There was talk that they were gonna spin out the NEX group, and it looks like they're gonna keep 'em, right? Mm-hmm. And so that was a thing that kind of bounced around the news cycle for a little bit. And I think yesterday there was a bit of media confirmation that NEX is gonna stick around, which I think is smart, because I thought it was a little weird that they would get rid of that group, especially as we're looking forward toward 6G. Intel's been a pioneer in bringing about virtual RAN as well as Cloud RAN. Companies like, mm-hmm, Ericsson in particular have banked heavily on Intel. So it's good to see that NEX will still be anchored there, and that the non-China ecosystem for mobile wireless is going to be intact, which I think is important, although Nvidia obviously is making a heavy push of their version of AI-RAN for 6G. But in the near term, I think, yeah.
Jim McGregor:Yeah. And just real quick, let's not get off of manufacturing too quickly, because, okay, there are other announcements you have to keep in mind. TSMC is bringing up two fabs in Arizona. Amkor is building new packaging facilities, and they're gonna be supporting both Intel and TSMC. And they're actually working with Intel on their advanced packaging. So there is a lot of capacity going in place all at one time, all in one location, or what we call the Silicon Desert. TSMC seems to be on track, Amkor seems to be on track, Intel's on track. It's an impressive, impressive time down there.
Karl Freund:Also, maybe as a little side note: IBM is moving their quantum chip wafer production to 300 millimeter, up in northern New York, and that's capable of two-nanometer fab lines. So we're starting to see the beginnings of a US-based manufacturing capacity. Question for you though, Jim: if the world were to come to an end, which means Taiwan's blockaded or something along those
Jim McGregor:lines... do you know something I don't know, Karl?
Karl Freund:No, no, I'm just speculating here. If that were to happen, how long would it take companies like Nvidia and AMD to shift to Intel manufacturing?
Jim McGregor:Well, you'd have the manufacturing itself, plus you'd have the initial runs and everything else. You're talking probably 18 months. You may be able to shorten that down to a year. But that's, once again, actually getting the process moved, actually running the test wafers, and getting production-quality products out the door. It's not something that's quick. The bigger challenge, though, would be the rest of the ecosystem, the rest of the value chain. Yeah. We don't have a lot of back-end assembly and test in the US. Matter of fact, Intel's got their facility out by Fab 11X in Rio Rancho. Other than that, the vast majority of it's offshore, in places like Malaysia and Thailand and stuff like that. Mm-hmm. So, you know, having these facilities, as well as TSMC wanting to build a facility for back-end assembly and test: it doesn't take as long as building a fab, but that takes a couple of years. And we don't have a lot of the board-level assembly in the US. So, I mean, the harder part isn't the fabs, where everyone's been focused; we have to build up the whole ecosystem, and that's gonna be tough. You have to remember that US manufacturing is only going to be good if it's capital intensive, and some of these solutions still aren't completely automated at this point in time.
neXt Curve:Yeah.
Jim McGregor:So getting that assembly and test, getting the board level, and getting the system level is gonna be increasingly challenging.
Leonard Lee:Yeah, that's interesting. And by the way, I just wanted to give Intel a little bit of love, because, you know, they're making these accomplishments and they're overshadowed by some of the headlines out there. A lot of the NVIDIA investments that are happening, OpenAI... I mean, did the AMD-OpenAI thing happen?
Jim McGregor:The AMD thing is happening, where AMD's gonna give up shares for some capacity. Yeah, 10%. But there's the fact that we've had several major announcements. I think there's a total of nine facilities for the Department of Energy that are gonna be using Nvidia and AMD. Intel kind of got shut out of that one this round. Yeah.
Karl Freund:I would say that... go ahead, I'm sorry. I would say that if you look at Intel's latest version of their AI roadmap, they seem to have caught the wave. They're trying to catch the wave for complex inference processing.
neXt Curve:Mm-hmm.
Karl Freund:using inexpensive memory, not HBM. It kind of sounds like the direction that NVIDIA's going with the CPX. So they've deselected training completely. I'm not sure that's smart, because the modern version of inference requires a little continual training, uh-huh. As the models learn, you don't wanna throw that away. So you don't wanna have to shift it all over to a separate set of hardware and shift it back. That's why the Google TPU can do training. They say it's inference first, but it's not inference only. So that may be a mistake, but at least Intel does have a clear target market and has deselected markets for their AI roadmap.
Jim McGregor:Well, and Intel has also announced that they're going to be coming out with one of those decode processors too. I think it's called Crescent Island or something like that. Crescent Island, yeah. So they're gonna have that. The only major vendor we haven't seen that kind of decode-specific processor from so far is AMD.
Leonard Lee:Yeah, yeah. And yeah, one of the things that I think is also interesting is some of these investments that have been happening, with Nvidia investing in Nokia, and then you have Synopsys. That one, I think, is really... I'd love to get the reactions from both of you gentlemen on that one, because there was a lot of stuff in the news cycle about it, but I thought there was some goofy stuff being opined about it. So let's ask the experts.
Jim McGregor:Who wants to... Nvidia's $2 billion investment in Synopsys is to support advanced AI for future chip designs. Yeah, that's gonna be interesting. The big three, which are Siemens, Cadence, and Synopsys, have all invested in AI and adding AI into their tools. But so far, a lot of the semiconductor companies, especially for the really detailed processes like place and route, haven't adopted those AI capabilities, 'cause they don't really trust them. Even Nvidia said that, I guess it was last year, 2024, at Hot Chips. It'll be interesting. Obviously, Synopsys is gearing up significantly. They just acquired Ansys, and they're continuing to invest in their own tools and AI capabilities. Why would Nvidia make the investment in one of those three, when those tools span the whole process from chip design to system design? Mm-hmm. It's an interesting one. I
Karl Freund: I can imagine that Cadence called up Jensen as soon as it was announced to say, hey, what about us? Let's talk about what you could invest in us for. What do you not get from Synopsys that you could get from Cadence? I don't think we've seen the last chapter of that book.
Jim McGregor: I would agree. I think Nvidia, more than anything, is trying to, and they've done this with every new product they've come out with and every investment: co-packaged optics, the DPU, on and on. They continue to look at where the bottlenecks are in building future platforms. The only place they really haven't gone is memory, and I guess you can say they kind of have with the KV cache. They continue to look for those bottlenecks throughout the industry, and I think that's a key part of it. Co-packaged optics is a good example of that.
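Jim's point about Nvidia "kind of" going into memory via the KV cache is easier to see with a quick sizing sketch. The model dimensions below are illustrative (roughly a 70B-class transformer with grouped-query attention), not any specific Nvidia product:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Per-request KV-cache footprint: a K and a V tensor for every layer."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical 70B-class model with grouped-query attention:
# 80 layers, 8 KV heads, head_dim 128, FP16 (2-byte) cache entries
gb = kv_cache_bytes(80, 8, 128, seq_len=128_000, batch=1) / 1e9
print(f"{gb:.1f} GB of KV cache for one 128k-token request")  # 41.9 GB
```

Tens of gigabytes per long-context request is why the KV cache has become a memory problem in its own right, and why vendors keep attacking it at the system level.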
neXt Curve:Mm-hmm. Hmm,
Leonard Lee: Yeah. Okay, any thoughts on the Nokia thing? Or do you guys not care?
Karl Freund: It's all about 6G, right? They're getting a go-to-market partner in Nokia for 6G base stations. Makes total sense.
Jim McGregor: No, I think the 6G effort really kicked off right after Mobile World Congress this year, with the first workshops focusing on defining what the 6G standards are going to be. I think everyone's gonna be gearing up, and we're gonna hear a lot more about it: a lot more partnerships, a lot more investments. It's still three, four years away at best, but we're gonna see a lot more coming out about 6G, especially over the next two years.
Leonard Lee: Yeah, I guess for me the real question is, what do they do with ReefShark? Do they go all in with Nvidia? One of the things Ronnie emphasized quite a bit, and this is something he mentioned to me when we were at GTC earlier this year, is homogeneous AI-RAN infrastructure. So what does that mean? Is that all Nvidia GPUs and Nvidia systems, or can ReefShark be part of that? I guess those are the big questions. What is Nokia's gameplay here? Are they just gonna become an Nvidia OEM, like a lot of the AI infrastructure OEMs right now that are predominantly rolling out Nvidia racks?
Jim McGregor: I think there's a bigger question there. We already know that you can benefit from using AI in just about any application. You can use it in the base station for handling the traffic, you can use it in the antenna array to handle beamforming, and everything else; there are thousands of applications within the network where you can use AI. The real question is, what is the end play for the carriers in AI? There's been a big push, especially by Nvidia, to try to convince the carriers that, listen, you can become that AI provider, you can be that data center provider. That hasn't always worked out for the carriers in the past; it kind of goes against their business model. And let's face it, Open RAN hasn't really changed the market significantly. It hasn't exactly beaten down the doors in terms of driving change in the market. So the real question is, how does the business model change as we go to 6G? Does that change who the key service providers are? I don't know that it does. We'll have to wait and see.
neXt Curve:Yeah.
Jim McGregor: Yeah. And you guys, any thoughts?
Leonard Lee: Yeah, I think there's a conflation of the silicon debate with the neo-cloud versus, or plus, mobile network business debate. And I think commingling those two and confusing the viability discussion is what's really bringing that whole thing down. I don't think a lot of operators, even though they sign up for the AI-RAN Alliance, are really bought into a lot of the aspirational vision. I think they're really looking at AI for the RAN: how do you introduce those efficiencies? Look at where the application of AI balances out against performance improvement, with the cost of intelligence in mind, right? Because AI doesn't come for free. That intelligence isn't realized without power, without additional costs in terms of chip design and systems design. These all have to be factored in as you look at the ROI for placing intelligence across the RAN or across the network. Those are the big question marks as the industry tries to figure out, okay, where does AI fit? How much of it do we have to use? But one thing I'm pretty sure of: a lot of these aspirational visions are not gonna play out, much in the same way that the aspirational visions of multi-vendor Open RAN didn't play out. So you have to examine things a couple of levels deeper than just the press releases and the marketing spiel.
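The trade-off Leonard describes, efficiency gains versus the power and capex cost of the intelligence itself, can be made concrete with a toy model. Every figure here is hypothetical, invented purely for illustration, not operator data:

```python
def ran_ai_site_net_benefit(opex_saved_per_yr, added_power_cost_per_yr,
                            added_capex, years=5):
    """Toy net-benefit model for adding AI to one RAN site.
    All inputs are hypothetical dollar figures; real operator
    economics are far messier (spectrum, traffic mix, depreciation)."""
    return opex_saved_per_yr * years - (added_capex + added_power_cost_per_yr * years)

# Hypothetical site: $12k/yr energy and opex savings, $3k/yr extra power
# draw for the AI hardware, $30k up-front silicon and integration cost
print(ran_ai_site_net_benefit(12_000, 3_000, 30_000))  # 15000
```

The point of the sketch is only that the sign of the result flips easily: nudge the added power cost or capex up a little and the "intelligent" site is a net loss, which is exactly why operators are scrutinizing where in the network AI actually pays.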
Jim McGregor: It really comes down to business models. You have to understand the business model and where it makes sense and where it doesn't.
Leonard Lee: Yeah. Hey, Karl, Google Ironwood. Tell us a little bit about that, because there was a huge stink about it over the last couple of weeks, right?
neXt Curve:we talked about
Leonard Lee:earlier Google is ahead in AI and open AI is like being left in the dust.
Karl Freund: If you think about it, Google has more AI PhDs than pretty much the rest of the industry combined. These guys are really smart, and they've taken a very rational approach: let's make sure this thing works. They've got six generations behind them and lots of variations on those, not only at the silicon level but at the system level.
neXt Curve:Yeah.
Karl Freund: And especially in memory sharing: you can share high-bandwidth memory across 9,216 TPUs within a pod. That's a phenomenal amount of memory. Very low latency, massive bandwidth. When they first came out with some more data on Ironwood, I wrote an article about it on Forbes and said Google's turning AI up to 11. And I told some of my friends in the industry, you need to look at these guys, you need to be aware of what they're doing. Some of these guys don't have competitive analysis teams, because they think the only thing they compete with is the speed of light, right? In fact, they're now gonna compete with a pretty powerful force if Google rolls out a wider distribution network, which will let the hyperscalers have more fungible assets and not be beholden to Nvidia because of the investments Nvidia wisely made to keep them beholden. You're gonna see a big shift. As I mentioned earlier, I don't think this materially hurts Nvidia. I do think it might materially hurt AMD. AMD has kind of gone from second place to, I won't say distant, but a third-place player, because they haven't focused enough on systems. Now, they bought ZT Systems, and that should help them get the expertise they need for the design work. They're not gonna sell boxes, but they'll design the boxes, the racks, the multi-rack systems. But the Pensando acquisition has not produced an NVLink-class competitor. That's one of the reasons you saw Amazon AWS announce this week that they will use NVLink.
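Karl's pod-scale memory point can be sanity-checked with simple arithmetic. The per-chip HBM figure below is taken from Google's public Ironwood announcement; treat both numbers as assumptions rather than verified specs:

```python
# Published Ironwood figures (assumptions, per Google's announcement):
# 9,216 chips in a full pod, 192 GB of HBM3e per chip.
chips_per_pod = 9_216
hbm_per_chip_gb = 192

pod_hbm_gb = chips_per_pod * hbm_per_chip_gb
print(f"{pod_hbm_gb / 1e6:.2f} PB of shareable HBM per pod")  # 1.77 PB
```

On the order of a petabyte and a half of directly addressable high-bandwidth memory in a single pod is what makes the "phenomenal amount of memory" claim concrete; no GPU rack today exposes a shared HBM domain anywhere near that size.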
neXt Curve:Mm-hmm.
Karl Freund:not exclusively. They'll also use a UA link, but, as kind of helps reinforce. Their old image as, as the, this, the Switzerland of, of, of semiconductor technology. I think that was a wise move on their part. but back to the, the really emphasis of the question is what's this Google, TPU, you're gonna see a lot of innovation come out of Google. It'll be widely available, not just within the Google Cloud, and more importantly, they will learn a lot. From their new clients who did not want to use TPU now, do wanna use TPU because they don't have to go to Google Cloud From a business strategy standpoint
neXt Curve:mm-hmm.
Karl Freund:This also allow, allows Google to have a, a direct hardware sales channel that, they're gonna leverage, Broadcom to build. At the same time reinforce their cloud. So what they're saying to people is Here, not here. Here's a truckload full of hardware that's saying, here's two truckloads full of hardware and I'll have three truckloads reserved for you back in the cloud if you need more capacity. And of course, everybody needs more capacity. So in a supply constraint environment, I think Google's strategy is just brilliant. And a lot of us have been pounding the table for a long time. Come on Google. Ship these things outside your walled garden. Yeah. And now I think they definitely are.
Leonard Lee: Yeah. I'm still on the fence on whether they really wanna start a hardware business for themselves. I guess my view is that they're going to probably stick with delivering that AI capacity via an as-a-service kind of modality.
Karl Freund: You think Anthropic is just a one-off?
Jim McGregor: But they could...
Leonard Lee: It could also be a one-off arrangement because of the scale of the buy, right? They split it too, right? It's 600...
Karl Freund: 600 and 400.
Leonard Lee: Yeah, 600 and 400. But then there's also risk. You want to, going back to the neo clouds, offload that whole dynamic that we're seeing, where most famously Microsoft, instead of building out the shells they had for their own neo cloud, offloaded them to CoreWeave. And that's a risk play. Satya Nadella has stated on a number of podcasts that he doesn't wanna hold these things, that he doesn't want to build out an entire data center footprint, a hundred gigawatts, in one generation, right? So there's this offload strategy that not just Microsoft but others have been playing. So for Google it might be a prudent approach, given the scale of what it looks like they have in terms of an arrangement with Anthropic. That's my view anyways. I could be wrong.
Karl Freund:Okay. Do you have any
Jim McGregor:perspective on
Karl Freund:this?
Jim McGregor: I will offer a different viewpoint there. I think they could still license it, just like they've done with the TPU for embedded applications, or the NPU. They could license it through Broadcom, and actually have Broadcom sell the chips and just license the technology from Google. I think we will see the TPU in different forms; the question really is, does it come from Google or does it end up coming from somebody else? I'm still kind of surprised that we're going to see Google actually selling or providing TPUs to Anthropic. But we'll see what happens. I don't think Google is really good at the hardware business, at least they haven't been on the consumer side, so we'll see how it plays out.
Leonard Lee: Yeah. No, this has been a good piece of our conversation here, and Karl, thanks for sharing your insights. Now, when we talk about the business aspect of it, I think we might have divergent views here. We do have divergent views, and I think that's cool. This is the stuff we wanted to happen, right, Karl, when we started this thing? We wanted that debate. So yeah. Hey, the other thing that happened: Qualcomm now has their own AI infrastructure, their own racks, with the introduction of...
Karl Freund: The AI200 and the AI250. The real shift is gonna happen with the AI250. The AI200 is the entry into the data center, but the AI250 makes them hard to beat. It is a radical new near-memory architecture for AI.
Jim McGregor: And supposedly a new memory architecture, yeah.
Karl Freund: Yeah. I think we're gonna see more innovations, not just more innovations but more significant, TCO-impacting innovations, in the memory architectures, especially for inference, than we'll see in compute, in flops.
Jim McGregor: Yeah, memory and storage. That whole data plane is becoming critical. Matter of fact, we didn't even talk about it, but obviously Samsung, SK Hynix, and Micron are really focused on it. They've all dropped their DDR4 product lines, leaving anybody using DDR4 technology in a lurch. Prices have just skyrocketed on DDR4 this year, and Micron just announced this week that they're dropping their Crucial product brand, which is their consumer product brand, starting in February, to focus on that high-margin, high-demand HBM memory for the data center.
Karl Freund: Once you get those high margins, it's hard to let go. You wanna build more business that way, especially if they're supply constrained, and they are.
Jim McGregor:yeah. Well, it also plays
Karl Freund:focus higher. It also plays to
Jim McGregor:their strength in the fact that all of their storage and memory product and memory products have 20% or more. advantages and power efficiency over their competition. So, microns in a very good position. I mean, the only limitation is they don't have the same capacity that, Samsung and SK Height, and they're working on that upstate New York as well as, other facilities in Japan, bringing more capacity online. But, they, they. Competitively, they've got their own, leading edge process technology. They've got their, their architecture advantage. They're actually in a very good spot. If they had the capacity, they'd be capturing a lot more of the market, I think. Yeah,
Karl Freund: Yeah, I think so. I've been a big fan of their technology for several years. It's good to see them finally getting their due.
Leonard Lee: Yes, wonderful. So we could probably go on and on, but we're not, 'cause we gotta wrap it up here. Hey, gentlemen, it's great catching up. I'm glad we were able to do this; it's great to recap basically three months, September, October, November. Hopefully we'll have a chance to recap the whole year if our holiday plans aren't way too busy, which I'm sure they will be, but hopefully we'll be able to convene again before the end of the year, or at least early next year, to recap. So, Karl, I gotta read your article with your predictions. It's on Forbes, with...
Karl Freund: My 10 AI predictions. I was gonna call them Cambrian AI's predictions, but Forbes said no, you gotta call them yours.
Jim McGregor: Ah, fine. You beat me to the punch, Karl, 'cause I still have to get mine out there.
Leonard Lee: Cool. Yeah, I'm looking forward to it. So hey, on that note, thanks, gentlemen, and happy holidays. I'm sorry I'm not gonna be there next week at Marvell, but...
Jim McGregor: My younger twin will be there, so you can join him.
Leonard Lee: Yeah, you're essentially gonna be there; at least some of your DNA will be there, right? Anyway, everyone, please make sure to reach out and follow Karl Freund and Cambrian AI Research at www.cambrian-ai.com; it's got a hyphen in it. He's on Substack as well as Forbes, as he just mentioned. Also reach out to Jim McGregor and Tirias Research at www.tiriasresearch.com and follow him. He's quite a funny guy and very informative, so that's a nice balance of fun and education on all things semiconductor. And of course, Karl is just madly in love with all this AI stuff: gigantic chips and racks and data centers. Please remember to subscribe to our podcast, which is featured on the neXt Curve YouTube channel. We're on Buzzsprout, and if you check us out there, you can pick us up on your favorite podcast platforms. Also subscribe to the neXt Curve research portal at www.next-curve.com, as well as on Substack, for the tech and industry insights that matter. We'll see you, hopefully in a month, to recap December and make our predictions for 2026. So, gentlemen, thank you.
Karl Freund: Happy holidays. Cheers, everybody. Thanks.
Leonard Lee:Take care guys.