The neXt Curve reThink Podcast

Silicon Futures for May 2025 - IBM THINK, CadenceLIVE and more!

Leonard Lee, Jim McGregor, Karl Freund Season 7 Episode 22


Jim McGregor of TIRIAS Research and Karl Freund of Cambrian-AI Research joined neXt Curve to cover the first half of another action-packed month in the world of semiconductors and accelerated and non-accelerated computing on the neXt Curve reThink Podcast series, Silicon Futures. The trio also shared their thoughts on IBM THINK 2025, CadenceLIVE 2025, and other events of note in the first half of May.

This episode covers the following topics:

➡️ Highlights and key takes from IBM THINK 2025 according to Jim (3:23)
➡️ Jim's recap of Intel Direct Connect and TSMC Symposium 2025 (11:27)
➡️ Highlights and key takes from CadenceLIVE 2025 (17:30)

Hit Leonard, Jim, and Karl up on LinkedIn and take part in their industry and tech insights. 

Check out Jim and his research at TIRIAS Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com.

Please subscribe to our podcast, which is featured on the neXt Curve YouTube Channel. Check out the audio version on Buzzsprout or find us on your favorite podcast platform.

Also, subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter.

Leonard Lee:

Hey everyone. Welcome to this neXt Curve reThink Podcast episode, where we break down the latest tech and industry events and happenings into the insights that matter. I'm Leonard Lee, Executive Analyst at neXt Curve, and yes, we're in a rush here because we don't have that much time, but this is the Silicon Futures episode, where we will be talking about the happenings in a number of events that have occurred in the last couple of weeks, right? So this is a little bit of an ad hoc episode, but I am joined by the highly accelerated and scaled-up and scaled-out Karl Freund of Cambrian-AI Research, and the 4D-packaged Jim McGregor of TIRIAS Research. What do you guys think?

Jim McGregor:

You good? I think you're getting more grey in your hair, Leonard. At least you've got hair. Well, yeah, I know, but I think his mind's going with it, so, you know, I don't know.

Leonard Lee:

Hey, it's been a crazy year. I'm losing it. Yes, you guys are tuning into, like, a big challenge I have keeping myself together. But anyways, hey, before we get started, please remember to like, share, react, and comment on this episode, and also subscribe here on YouTube and on Buzzsprout to listen to us on your favorite podcast platform. Opinions and statements by my guests, that means these two guys, are their own. And, you know, I think they like it that way 'cause they like to be independent, and they don't reflect my opinion at all. Right, guys? Yours isn't...

Jim McGregor:

right. So, you know. Yeah.

Leonard Lee:

Yeah. So that makes it easy. So, yeah. And we do this all to provide an open forum for discussion and debate on all things related to the semiconductor industry, which is the foundation of almost everything that you can imagine, according to Jim. And probably Karl, too. But hey, a lot of stuff going on. This is complete insanity. Karl, you and I were at CadenceLIVE. Jim, you were at IBM Think, and so there's a lot. Yeah, it looks like it. So, let's start off with you, Jim, because you're the one who really bugged me about recording something, I think, or was it the other way around? I think it was me. I don't know. So we're gonna let you kick things off, because I think Karl and I are very curious about what you saw at Think and what your takes were there. And then, Karl, I think you and I should provide some insights and perspectives and share with Jim what we saw at CadenceLIVE. Right?

Jim McGregor:

Well, and also building on CadenceLIVE, you gotta remember there were two major events for semiconductors in the previous weeks, with the TSMC Symposium and Intel Direct Connect. So mine was more focused on systems and services; the rest of it was pretty much focused on semiconductors. Think is pretty much IBM's big conference to present itself and all of its works and its wares. And IBM, for the most part, is a software and services company with some systems, obviously with their mainframes, with Power and the Z Series and the LinuxONE series. This year is really building on everything that they keep doing. It's very similar to seeing Nvidia with AI; it's IBM and the enterprise, providing complete enterprise solutions, from secure, highly reliable mainframes for transaction or critical processing elements, to providing the different elements around them. So one of the coolest things they talked about this year was watsonx Orchestrate. Now, this is something they introduced several years ago, but it's morphed over time, and this is really an all-inclusive AI orchestrator for the enterprise. And one of the coolest things about it now is they've turned it into being able to build custom agents within five minutes, by selecting other AI models and stuff that are out there, so you can actually customize an agent. And they've already got some pre-customized ones for things like HR, purchasing, and stuff like that. But the fact that you can take a lot of that structured data, and now unstructured data, that's a critical element that they've added as well, plus these different pre-designed AI models, whether it's from IBM or a third party or in-house or whatever, and really create custom agents to be able to do just about anything within the enterprise. Very impressive. They also have a solution, a hybrid kind of orchestrator, basically to implement and automate a lot of the processes within the enterprise. So it's interesting to see how Nvidia on one side is really that enabler, and IBM is really that second half of the puzzle in a lot of cases, where they're enabling the enterprise to be able to do all these things, and they provide complete consulting services, obviously, to help walk companies through this process. So it's pretty amazing to see how they keep evolving AI for the enterprise. And one thing that we've said in the past is, they're AI enterprise-ready out of the box. And everyone else keeps struggling with developing their own models, using open source models, working with in-house data scientists or external consultants. IBM comes in and has this huge portfolio and says, pick and choose what you want, while still having mainframes that are very, very competitive in the market.

Leonard Lee:

So how does that differ from what Salesforce is doing with Agentforce? Because that's sort of a platform play. Oftentimes, IBM, from a software perspective, is at the platform level, right? They're not necessarily at the application level; that's typically what you see with SAP or Oracle with their applications. And all these guys, these ISVs, are also introducing their own agentic frameworks and stuff like that. So, I mean, you know, it was weird, because when you mentioned all this stuff in the chat that we had earlier, I was thinking, are these guys getting into ERP? Are they trying to replace ERP? Because if these are function-specific, out-of-the-box things and you're stitching 'em together with the orchestrator, which, for people who've been working with enterprise software for a long time, you know that workflow and business process management tooling is not a new thing, right? Where is IBM taking this stuff?

Jim McGregor:

And very much so, and IBM's coming from a different realm, especially with their mainframe solutions, and they've done a great job. The mainframe is still very applicable. It's that highly reliable, highly secure, high-end processing solution, but they've morphed it into a standard rack. In 2014 they went to a standard 19-inch rack, so it looks like any other rack you'd see in a data center or a hybrid data center. But they're already critical for financial transaction processing, healthcare processing, military and defense applications, so anything that has to be highly reliable, highly critical. And now, instead of bringing it up to the enterprise, they're bringing it down to the enterprise from, I would say, the most demanding type of workloads and applications that you'd have. And they even noted during the conference that they are seeing both existing and new customers. Yes, there are actually new customers for mainframes, believe it or not. They're moving from just using these for critical applications, using their processes and everything else, to actually running standard applications on the mainframes, and even now AI applications. And in one demo they had, they said, listen, in some cases we've seen a customer go from 286 x86 processing nodes down to 16 processing nodes on one of their LinuxONE systems. That provides huge savings, not to mention a smaller form factor and everything else. Yeah,

Karl Freund:

yeah.

Jim McGregor:

And that's gonna be different for different applications, obviously, and how you're using those data center resources. But it also provides a lot of cost savings and a lot of consolidation within the data center, which is important today.

Karl Freund:

I think they're doing a good job of bringing AI to where the data already resides, right? That too. Both from the Telum II processor, with its AI accelerator for simpler, and actually some generative, AI too. But generative AI really is more focused on the Spyre Accelerator card, right? That's true. That's the IBM-designed inference processor that you can rack and stack as many as you need and attach into your LinuxONE system in the future, and right now on IBM Z. And that way you can do some very sophisticated AI right on the mainframe. You don't have to migrate it off to an x86 server with an Nvidia accelerator. You can do it all right there. And I think that's a major step forward for IBM and their

Jim McGregor:

clients. And having an accelerator on the Telum II processor, as well as the Spyre Accelerator, gives them kind of a hybrid AI strategy in a single platform. Mm-hmm.

Leonard Lee:

Well, I wanna point out something that's really ironic here. For the longest time, IBM had subscribed to this notion that software was eating the world and that they needed to make this huge migration to becoming more of a software business, right? Mm-hmm. Yet we are talking about hardware, and how that hardware is such a necessary and essential part of the whole system discussion that everyone's having right now across the board, even at Cadence, right? It's all about the system. And, anyways, I wanted to point that out because I have a long legacy with IBM, because I worked there for a long time. Yeah, yeah. And Karl worked there too. I worked there for 10 years, and I did Power and Z. But you worked with IBM for...

Jim McGregor:

It is impressive to see, and you're right. Their software strategy is still, I would say, the crux of it. It's nice to see what they're doing with mainframes, but the mainframes wouldn't exist without the software. They're introducing new Granite models constantly, and those are open source models for enterprises to use. Not to mention, like I said, all these other tools and assets that they can use, like the lakehouse for actually managing diversified structured and unstructured data. So they continue to build out that portfolio to support the enterprise. That's probably the most impressive part of it.

Leonard Lee:

Okay, cool.

Karl Freund:

So, really quickly. Go ahead. They're one of the few companies that have added support for Cerebras wafer-scale engines within the IBM Cloud. So now you can access IBM Cloud and go all the way from embedded AI processors all the way up to a Cerebras rack for the fastest inference processing on the planet.

Leonard Lee:

Yeah, thanks for reminding us. Yeah, glad you're covering that stuff, Karl. So, really quickly, Jim, you mentioned TSMC and Intel Direct Connect. Neither Karl nor I attended either of those, so maybe you can share quick highlights and takes.

Jim McGregor:

These are both focused on semiconductor foundry services, but really highlighting the technology. TSMC introduced their upcoming A14 process node, while Intel was talking about the status of its 14A process node. Not to be confused: despite the similarities in the numbers, the processes are completely different. Intel's trying to use, though they haven't committed to it yet, High-NA lithography on their 14A. TSMC says, absolutely not, we're not gonna use that for A14. Different strategies there, but definitely showing that semiconductor process technology is still continuing to advance. On the process nodes, TSMC noted that they expect significant improvements in performance and efficiency with the A14, similar to what Intel's looking at. Intel's really big jump, I think, is with 18A, which goes into production this year, but 14A provides them a lot more flexibility. That's where they see some derivatives that they're introducing that will be more applicable, not just to server and AI and HPC environments, but also to mobile and other environments. But despite the processes, really the shining star of both events was packaging. Yeah, yeah. Just the advances in packaging that TSMC is talking about, which they refer to as System-on-Wafer, as well as Intel's EMIB and Foveros solutions. They're basically doing chips on chips. This is almost going to look like a chip city when you look at these solutions. The amount of integration at the packaging level is just so mind-numbingly intense and dense. It's incredible. And Intel obviously seems to have the lead with backside power, because it's not just a packaging technology to them, it's also a silicon technology, and they're architecting their silicon to be able to do that. So it's gonna be interesting. And even though we think of these two as being competitive, the real way to think of them is as being very complementary, because as we get to those system-on-wafer or these city-of-chips solutions,

yeah.

Jim McGregor:

you're gonna have to be using die from multiple foundries, probably. Intel's not gonna be competitive price-wise on some of the older nodes; they're trying to be competitive on the newer ones, and they may be a little bit more advanced on some of the newer ones, especially with backside power and stuff. I don't know of anybody, especially as they start getting to these really dense configurations, who's gonna be using chips from a single foundry; they're gonna have to use them from multiple ones. Matter of fact, one of the interesting announcements is Intel building out its ecosystem, which TSMC already has in place. They announced a new value chain alliance and a new chiplet alliance, and part of that was also partnering with Amkor. So not only are they offering their own advanced packaging solutions, but they're gonna be working with Amkor to be able to do it too. And Amkor is building a brand new facility in Chandler, Arizona, and that's where the two main Intel fabs are going up right now, 52 and 62. So a big bonus for Arizona, but also, I think, for the ecosystem to see how all these entities are working closer together. So even though we think of these guys as being competitive, they really are gonna be very complementary. And Intel's making strides. Synopsys got up there and said, listen, for the first time we have a full tool chain that's fully optimized for 18A. That's a huge statement. And Siemens was there, Cadence was there, so obviously there was support from the entire ecosystem. So we're seeing the foundry really start to come together. But once again, I want to hesitate when I say this: it's still gonna take Intel another five years before that foundry really all meshes together and becomes a competitive solution, 'cause once again, they're gonna have to work with the other foundries, they're gonna have to work with the rest of the ecosystem, and they're still learning how to be a foundry.

Leonard Lee:

Yeah. And it makes you wonder, is that too much to bite off and chew?

Jim McGregor:

Well, I think they're getting so much input from the industry, and obviously there's a lot of stakeholders here. There's the US government, and there are other Western governments, Japan and Europe, that are investing in a lot of the Intel Foundry services. So I think they're getting so much input that, while it took GlobalFoundries a decade before they could really learn how to be a foundry, I don't think it's gonna take Intel that long. Yeah. But obviously they're still putting the fabs in place, and they're already doing backend assembly as a foundry service for wafers that they haven't produced at all. Mm-hmm. So they're already doing very well on the packaging side.

Yeah.

Jim McGregor:

But they still need to put all the rest of the capacity in place, get everything working, and even staff up, 'cause they're gonna need dedicated engineering teams for pretty much every customer that they service. And that could be a few people to, as they indicated, a few hundred people for some of their major customers. So it's gonna take time. Cool. Hey,

Leonard Lee:

Karl, he's pretty good at this, isn't he? Yeah, he's pretty, pretty good. Chip guy. Put pressure on him and the guy just, like, frigging rips. Yeah. Wind him up and let him roll. Yeah. And now he's gonna tell me I'm talking too much again, Jim, I know. Hey, you were at these events. I was at RSAC; we're not gonna talk about that, but we could. I think we're really, really curious about what you saw at the semi-related conferences. No RSAC, we already talked about that, so that's okay. We have a podcast on that. I got you guys covered, don't worry. I don't have to brag about that stuff. But hey, Karl, we've sucked almost all of the oxygen out of the room, so let's talk about Cadence, 'cause you're probably curious about what you missed out on, right,

Jim McGregor:

Jim? Yes, absolutely. I could not go. They invited me and I was already committed, so... yeah. Or I should be committed, one of the two. I can't keep it straight. All right,

Karl Freund:

Let's see, CadenceLIVE was interesting. For me, the two main things that came out were at the system level, where they brought out their new Millennium server for the first time. Cadence has a full stack of EDA solutions, and not just EDA solutions. They have molecular and biological solutions and other optimized design software, all running on Nvidia GPUs, actually all running on Blackwell. So they introduced this new rack of NVL72. It's the Cadence Millennium. Interestingly enough, Jensen, on stage, ordered 10 systems. Now, these are a million and a half bucks a pop, so he placed a $15 million order on stage. Surprised the heck out of Anirudh, the CEO of Cadence, and probably surprised a lot of guys at Nvidia that, hey, you're gonna have all this hardware. I think that was big news, that they're really accelerating their entire stack. Well, not their entire stack, but what ostensibly is going to become their entire stack, on Nvidia GPUs. That's a very big deal, and it puts them in a unique position. The other thing that really caught my attention was the agentic AI. They had a slide that showed the full stack of design and verification solutions, and the word agent was added to each one. And I asked the head of agentic systems at Cadence: is there any engineering behind that, or is that just slideware? And he said, no, no, no. We've added software to each of these solutions that allows them to plug and play, and you can combine them into whatever solution you need for your problem. This is interesting.

Jim McGregor:

This is really interesting, 'cause this was one of the key topics at the day-zero Hot Chips seminar last year. First off, it was very clear that even though you had AI capabilities throughout all these tools, most of the semiconductor firms didn't trust 'em yet. They would only use 'em for verification and validation; they wouldn't use 'em for chip layout. But the other question that came up was, okay, do we put AI in the tools, or do we have an agent that more efficiently runs all the tools together? What was Cadence's view?

Karl Freund:

Well, I think they do both, right? Okay. So they have their Cerebrus, not to be confused with Cerebras. This is their solution for intelligent layout, saving people a lot of engineering time and also producing higher-value chips: better performance, lower power consumption, all using AI to lay out the components of the block in a way that a human probably wouldn't even think of, and that perhaps doesn't even make sense intuitively, but really comes through analysis. Think of it like playing a game of Go. You've got 10 to the 37th power possible ways to lay this chip out, and it will find, okay, if you wanna optimize for power, you do it this way; if you wanna optimize for performance, do it this way; if you wanna optimize for chip area, you do it that way. And you can pick and choose what you want to do. Some people say, well, is this gonna replace circuit designers? Answer: heck no. They've got five more circuits they need to work on that are in the wings. Yeah. So I think it does help somewhat address the skills gap that the industry's realizing today. And of course, Cadence and Synopsys provide the IP solutions that you can plug into whatever solution you're building. So between the agentic AI portion of the story and AI-driven chip design, which is a real thing, they've got over a thousand tapeouts now using their AI layout tools, Cerebrus, with a U. So I think it's a big deal. It's gonna help. I mean, take a look: Nvidia has been saying for a year now, hey, we're gonna go to an annual cadence, pun unintended, of chip releases. They didn't hire an entire new 10,000-engineer workforce to do this. They're working smarter; they're using AI themselves, mm-hmm, to accelerate the chip design process, and therefore they can get more work done with roughly the same number of engineers.

Jim McGregor:

So we're seeing AI designing the chips, AI designing the systems, AI writing the code. Not entirely.

Leonard Lee:

So Karl, I think, had a chat with Paul Cunningham, right? Yeah, I did. Yeah, him, yeah. And I ended up having a mini roundtable with him as well, and I asked him those questions too. There's definitely a lot of progress happening in terms of identifying where generative AI, which now takes in, like, LLMs, 'cause that was the big thing last year, and where agentic AI are able to accelerate and scale certain tasks within the design flow and process. They're still in discovery mode, right? They're definitely finding areas where both the more traditional stuff that they've been working on, in terms of the verification-related AI applications and the generative design capabilities that they've had before, can be augmented with this agent layer and the application of, let's say, reasoning models. On the reasoning model front, that part is still something that they're working on, right? So in terms of domain-specific, or building domain-specific models, that's like a next step. But yeah, they're definitely not at what a lot of folks in the industry are talking about, which is autonomous design. We're seeing a lot of these representations of autonomous design that borrow from the levels of automation framework, the SAE levels of vehicular automation. Mm-hmm. Yeah,

Jim McGregor:

I was thinking Skynet, but okay, let's go there. Well, we're, you know,

Leonard Lee:

So Chuck Alpert did get up on stage and say, hey, yeah, we're now starting to introduce level four. Which, okay, sure we will. But they did outline a roadmap, and I think, based on my conversation with Paul, they're much more aware of the challenges and issues and limitations of the technology, which I think is a good sign. How they get to level four, I still have a lot of questions about. I'm hoping that's a discussion that we collectively will continue to have with Cadence. I want to go back to one of the things that really stood out for me. We've been talking about systems for a while on our podcast, right? This whole notion that Cadence is moving from an EDA company to an SDA company, and this is the thing that popped, right, Karl? Jensen got up on stage and they were talking about systems. And so in many ways, Cadence is being pulled toward becoming more of a systems-based design automation software company, right? The really interesting thing about the Millennium M2000 was, as Paul would say, you're going from Boolean operations to numerical and matrix-math type stuff. That has huge implications, because now, from a simulation perspective, you're starting to take things to a broader level in terms of what you can simulate, and then take it into new frontiers. Oddly, these guys are talking about helping to design and model AI factories at, like, an infrastructure and facilities level. So that's going from chip all the way to data center at this massive

Karl Freund:

scale, right? And then you add in things like Nvidia Dynamo to orchestrate, yeah, the AI at a data center level.

Leonard Lee:

Yeah. So it's some pretty crazy stuff in terms of, sure, expansion, what I would say is frontier expansion, and then there's the verticalization aspect, 'cause there was a lot of talk about drug design, which in their parlance, and actually in reality, is different from drug discovery, right? So there's a lot of things to digest coming out of CadenceLIVE that I think reflect what's happening in the industry, but are also evolutions and transformations of what's happening with players like Cadence, as these technological trends as well as industry trends lead them in an entirely new direction, right? Mm-hmm. And we need to start thinking about them in a different way. And we heard that at GTC, right? That was one of the things Jensen really wanted everyone to understand: look, we don't freaking make GPUs anymore, and we're not, like, a chip company, right? They've graduated from that. But anyways, how did I do? Okay. Okay.

Karl Freund:

Yeah.

Leonard Lee:

Okay. Well, we have to wrap; I'm actually late for my next meeting. Gentlemen, thank you so much. This was really fun, right? Not bad for being under time pressure, right? We got stuff out really quickly. Yeah,

Karl Freund:

absolutely.

Leonard Lee:

Hey, really quickly, share with our audience how they can get in touch with you, if they don't already know, which they should.

Karl Freund:

Yeah, you can reach me at cambrian-ai.com. Go to the Cambrian-AI website and you can see a lot of my work there.

Leonard Lee:

Yes.

Karl Freund:

As well as on Forbes and EE Times, and

Jim McGregor:

Also, yeah, Jim McGregor, TIRIAS Research, tiriasresearch.com. Definitely look for us on Forbes and EE Times and other publications around the industry. We'll have more to say on that in the future. And look for our podcast as well. Yeah, and definitely, definitely, definitely like and subscribe.

Leonard Lee:

Yes, yes. And speaking of liking and subscribing, please subscribe to our podcast here, featured on the neXt Curve YouTube channel. Also check out the audio version on Buzzsprout, or find us on your favorite podcast platform. And also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. And we'll see you next time. Take care.

Jim McGregor:

And look for Leonard and myself at Computex.

Leonard Lee:

Yes. Yes. We're gonna go hiking. Yes, absolutely. And checking out some of the tech. Yeah. Oh yeah. Have a

Karl Freund:

great trip, guys.

Leonard Lee:

See

Jim McGregor:

you.
