
The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego. Founded by Leonard Lee, neXt Curve focuses on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
Silicon Futures for June 2025: AMD Advancing AI, Cisco Live!, Sensors Converge 40th and more!
Silicon Futures is a neXt Curve reThink Podcast series focused on AI and semiconductor tech and the industry topics that matter.
In the month of June, Leonard and Jim talk about their key takes and highlights from AMD Advancing AI, Cisco Live!, and the 40th anniversary Sensors Converge event. They cover the big news across AI computing, networks, edge AI, and the fast-evolving sensor edge, and provide their analysis of Qualcomm's announced acquisition of Alphawave Semi and much more!
Topics that mattered in the AI and semiconductor universe in June of 2025:
- Highlights and analysis of AMD Advancing AI 2025
- The uber growth in AI computing demand - Tirias Research forecast
- Highlights and analysis of Sensors Converge 2025 - 40th Year!
- Highlights and analysis of Cisco Live! 2025
- The power and cooling problem with AI data centers
- Intel cleans house with layoffs while Apple and others clean out their support
- Qualcomm makes a big AI data center move with Alphawave Semi acquisition
- Is the defense industry the new frontier of innovation and tech?
Hit Leonard and Jim up on LinkedIn and take part in their industry and tech insights.
Check out Jim and his research at Tirias Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com.
Please subscribe to our podcast, which is featured on the neXt Curve YouTube channel. Check out the audio version on Buzzsprout, or find us on your favorite podcast platform.
Also, subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter.
Leonard Lee: Hey, welcome everyone. This is the neXt Curve reThink Podcast. In this episode we're gonna be breaking down the latest tech and industry events and happenings into the insights that matter, and I think that's what we always do anyways. I'm Leonard Lee, Executive Analyst at neXt Curve, and in this Silicon Futures episode, which I do in collaboration with Tirias Research and Cambrian AI Research, we'll be talking about the happenings in the month of June 2025. And I'm here with my very good friend Jim McGregor of the optically co-packaged Tirias Research. How's it going, Jim?
Jim McGregor: Good. How you doing, man? I done good? You've done good. You always find a unique way to introduce me. No, I'm starting to run out of these things, though.
Leonard Lee: That may be a good thing. I don't know. But before we get started, please remember to like, share, react, and comment on this episode. Also subscribe here on YouTube and on Buzzsprout to listen to us on your favorite podcast platform. Opinions and statements by my guests, including Jim, are their own and don't reflect mine or those of neXt Curve. We're doing this to provide an open forum for discussion and debate on all things semiconductors and chips and hyper-converged X, Y, Z, and AI at uber scale. Welcome,
Jim McGregor: Neuromorphic. You know, pie in the sky, you name it. Oh my God.
Leonard Lee: Well, we're not that. We have some 6G. No, just kidding. No, no, no, no. A little early for that one. You're going overboard. You're going outta control. We need to apply guardrails to you. Yeah. Jim, it was weird because this month was actually quite semi-light for me. I know that no month is semi-light for you, but, mm, I'm looking forward to comparing notes and comparing what we thought were the big highlights of the month. So why don't you go ahead and start. You have something that you wanna talk about?
Jim McGregor: Well, one of the events that transpired over the past month was AMD's AI event in San Jose. It was their big opportunity, and this has kind of morphed over the years. It started out as their data center event and blah, blah, blah. So they used this mostly to introduce a couple of key things. One was their MI350, which is their latest generation accelerator, built on the MI300 generation, and to start talking about the MI400, which they'll have next year. So they're on that same annual cadence, as well as the rest of their platform that's gonna be coming out. So it's kind of a deep dive into that. Obviously more performance, more power efficiency, everything else going across the spectrum for this product. But also, it's gonna be part of their system solution, which was probably the biggest announcement there, which was Helios. Much like their competition Nvidia is doing, now they're focusing on actually developing a full rack, or in this case it's a half rack. They're just specking it. So, unlike Nvidia, they don't plan on building any that are gonna be branded AMD, but they plan on at least providing this rack specification to their OEM and ODM partners, to be able to build it and take it to market, and even modify it for the customers that want customization. They also talked about ROCm, and I have to say that, while I still don't put it on the same level as CUDA, it's getting much, much better. You can now use it on Windows. They are making significant advancements, and they're starting to do a lot more developer outreach. They even had a developer track as part of this event in San Jose. So that was pretty significant. AMD is investing heavily. They're still going down this road. And quite honestly, with Intel falling by the wayside with their roadmap, or lagging with their roadmap, they're the only other game in town. They're the only other solution.
So it is good to see that AMD is still investing heavily and still doing well. And they also took the opportunity to talk about the other products that are gonna be coming out, you know, especially next year with the next generation EPYC and everything else. Yeah, but they also had a separate day for some of us to talk about their new Threadripper. Now, I love Threadripper. All my demo systems are Threadripper. I have to say it is the ultimate workstation processor, especially since you can stick either a high-end Radeon or GeForce GPU, or GPUs I should say, in it. You've got just incredible power, and I really think that the workstation's gonna be an incredibly valuable market going forward. Yeah. We saw this at Computex, where AMD, Nvidia, and Intel all focused on the workstation. The fact is, it's getting expensive to use a lot of those cloud resources, and we're starting to run into resource limitations. Even OpenAI is having to start throttling some of their customers because they don't have enough GPUs to service everyone. I think that the workstation market's going to really be probably one of the most important spaces going forward, especially for developers, AI developers, whether you're running AI workloads in inference mode, or whether you're developing new models or optimizing models or whatever. I think the workstation is set to go through this resurgence, the same thing we saw when people started developing videos and images and movies and everything else. I think we're gonna see the same thing. I think we're seeing a new wave with AI that's gonna drive the workstation market forward.
Leonard Lee: Yeah, and in regards to the AI PC narrative that's been sort of the prevailing one for the PC as it relates to AI, right, I think the workstation has that potential, as you're saying, to take that beachhead for enterprise. Because one of the things that I've been saying for the longest time is that security and confidentiality are huge challenges with large models. We have to recall the early thesis around large language models, that you're going to have this one uber model that is going to be trained on all of your corporate data. That is not turning out to be the case at all. And so I totally agree with you, and we are seeing that setup where the workstation is going to find its role and will probably be that catalyst to drive enterprise AI, or serve as that beachhead, right? Because right now, enterprise AI across the board is struggling. We're still talking about POCs, right? We're not talking about scale-out deployments yet in the enterprise, and there's a lot of reasons for that, right? And I think you're making a really great point, and we're also seeing some of that interest or that potential being reflected back with some of our chipmaker friends. And one of the things I wanted to, you know, you were there, I wasn't, I was at Cisco Live, but one of the things I noticed, Jim, that I wanted to get your reaction to, was there was a lot of talk about actually the CPU. And to me, it seemed like the keynote presentation in particular had a leaning toward inference, right? A lot of the value propositions and differentiation that AMD wanted to talk about, or seemed to want to talk about, was how inference in their portfolio was actually better tuned to address that opportunity than, let's say, their green competitor, right? And so, like you mentioned, they brought up EPYC, but in the context of agentic AI. When you look at the workloads of a quote-unquote agentic AI application or framework or stack, a lot of that compute is not accelerated. It's the old stuff, right? It's more the traditional CPU-bound workloads that aren't going to be shifting over to a GPU. Did you get that sense, or did you disagree?
Jim McGregor: No, I would agree. And matter of fact, we've seen this messaging from Intel, Nvidia, and AMD over the past year, where they've all made the case that the CPU is very important to AI. It's important for that pre-processing, it's important for running certain types of workloads. It's going to be a key part of that. And the fact is that if you increase the performance of the CPU in an AI platform, you're going to get more performance out of that platform. It's not just about the accelerator, it's about having a CPU that can help offload and do a lot of that heavy pre-processing and even some of those AI workloads. So, no, it's not surprising. And obviously, comparing Grace to EPYC is completely different. You've got a very efficient low-power core in Grace, based on the Arm architecture. You've got x86, which is a much more complex and performance-centered type of architecture. So I would agree that, like everything else, certain workloads are gonna run better on this architecture, and certain workloads are gonna run better on that architecture. Yeah. But I would definitely agree that anybody that discounts the CPU over a GPU is foolish. Not just because it can process these workloads, but because it can process these workloads, in a lot of cases, at a lot lower cost. Yeah. All these systems have CPUs in them. It's like, you need to use that resource, because the GPUs are expensive, they're power hungry, and, just like we were talking about with the workstation, they require an infrastructure to support them. Yeah. Especially when you get to these higher power levels that we're looking at with these latest GPUs that are running 600 watts and up. So it's not going to be easy, and the CPU becomes a critical part of it. Matter of fact, if you want to go a step further, start talking about what IBM's doing: they're integrating AI accelerators into their data center processors, both with Telum and Power.
So you've got that capability in the processor for certain types of AI workloads, as well as their Spyre accelerator outside the processor.
Leonard Lee: Yeah, and one of the things I thought was really interesting was their announcement of support for distributed inference, which I thought was really interesting, and I had to do a double take on that, because it's basically what we're seeing with Nvidia's Dynamo, right? Mm-hmm. And I think that's a big deal that also places AMD in a flexible situation, sort of how Dynamo is theoretically, and probably practically, positioning Nvidia to further build on its repurposing talk track, right? For all these really large, uber AI supercomputing deployments that a lot of their customers are doing. I thought that was, number one, surprising. They're very fast following Nvidia in some really important areas. Mm-hmm. And you know what, credit to Intel, a lot of the points that you were making earlier, they were talking about for a couple of years, and you and I, as well as your colleagues at Tirias, as well as Karl, have been talking about this for a long time. This is not new stuff for us. It's just that we're observing the eventuality that we had been advising the industry on since the advent of ChatGPT. So it is interesting to see AMD now position itself in this continuum of generative AI evolution, right, into this agentic world. Yeah.
Jim McGregor: Well, and quite honestly, you know, Jensen said it best: the new unit of compute is the data center. Because, whether that's a single data center, multiple data centers, whatever, you have to think of these solutions as a single entity, a single system, a single server, single rack, single everything, all working together to be effective. Just to give some of our viewers a feeling of what that really means, it was funny 'cause Tirias Research did our first AI forecast in 2023, and we projected a 19 times increase in demand by the end of 2024. We were wrong, and we were one of the most aggressive forecasts out there. It was more like 500 and some odd percent. It was ridiculous. Things are changing so rapidly. We were new with LLMs at that time; we didn't have agentic AI. And just to give you a feeling, with agentic AI you're talking about having agents that can reason, so they're using multiple models, and maybe going through the models multiple times, and even calling other agents, right? So you're building out an AI ecosystem with agentic AI where we're probably gonna have more users, when you consider all the agents, than we have people on the planet. That's a huge thing that changes what the demand is. Not to mention the fact that they're generating more and more tokens or images or videos every time they run through something. Yeah. We're still working on our new forecast. We've done the token part of it; we're still working on the images and video forecast. But we're talking going from 667 trillion tokens generated in 2024 to almost 80 quadrillion tokens by 2030. I mean, we're talking about a compound annual growth rate of 127% on average, which basically means we're more than doubling the number of tokens every single year. It's insane. And it was funny 'cause I hear people talk about, are we gonna reach an ebbing of AI and it's gonna drop off?
It's like, if we're gonna reach it, I have no idea when, because we are running so fast, so hard. And all these guys, Intel, AMD, Nvidia, I give them credit. They're looking at this and saying, listen, we can't afford 10 to 20% increases in performance. We need to think of everything holistically, 'cause we have to deliver a hundred or a thousand times more performance, not only to meet the demand, but to bring the cost of AI down. And that's a huge challenge.
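A quick back-of-the-envelope check on the token forecast quoted above: going from 667 trillion tokens in 2024 to roughly 80 quadrillion by 2030 implies an endpoint compound annual growth rate in the low 120s percent, consistent with the "more than doubling every year" framing (the 127% average figure presumably reflects Tirias's year-by-year model rather than the endpoints alone). A minimal sketch, assuming only the two quoted endpoint figures:

```python
# Back-of-envelope CAGR check on the Tirias token forecast quoted above.
# The only inputs are the two endpoint figures mentioned in the conversation.

tokens_2024 = 667e12   # 667 trillion tokens generated in 2024
tokens_2030 = 80e15    # ~80 quadrillion tokens projected for 2030
years = 2030 - 2024    # six annual growth steps

growth_ratio = tokens_2030 / tokens_2024      # overall multiple, ~120x
cagr = growth_ratio ** (1 / years) - 1        # endpoint-implied annual growth

print(f"Overall growth: {growth_ratio:.0f}x")
print(f"Endpoint-implied CAGR: {cagr:.1%}")   # > 100%, i.e. more than doubling yearly
```

Running this gives roughly a 120x overall multiple and an endpoint CAGR of about 122%, which is why every single-year step more than doubles the token count.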
Leonard Lee: Yeah. And it is such a complicated matter, because there are so many factors in scaling and tweaking and progressing the economics of all of this. It's pretty mind-boggling, you know? And we'll touch on a couple of things that I managed to encounter at Sensors Converge in that regard. So, yeah. So let's move on. One of the events, yeah. Tell me about Sensors Converge. I wanna know all about this. Come on. Sensors Converge, hey, it was all about edge AI. So you can't escape any of this AI stuff. It's infectious, it's infecting, it's like a bacteria. Anyway, you know what, it's all about scale-down, right? A lot of what we've been enamored with in the AI world has been scale-up and scale-out, but what you see in the sensor world is more about scaling things down. And so there's been a lot of progress, especially this year, with algorithms making their way toward microcontrollers. I mean, we're talking about models that are compressed down to a few kilobytes. You know, just really, really small, highly functional, but application-specific models that are progressing this notion of smart sensors, right? And a lot of the focus seems to be on machine vision, right? Because when you look at the sensor world, in terms of data volumes and data processing requirements, it's still video, right? LIDAR seemed to take a bit of a backseat. Last year it was also pretty tepid, but it seems to have hit a bottom, maybe, I'm not quite sure. But you don't see, it's still expensive. It's still
Jim McGregor:expensive.
Leonard Lee: Yeah. And so I think the industry is leaning toward machine vision, simply because cameras are readily available and the edge AI capabilities are becoming much more capable. And there was a lot of mention about the silicon becoming much cheaper as well, and that's always a really important factor when it comes to embedded and edge AI.
Jim McGregor: That's also because there's a lot of things you can do with other radio waves, whether it's radar, whether it's Wi-Fi, whether it's whatever. There's a lot of things you can do, like UWB, to do very accurate positioning. Yeah. To do sensing. So there are alternatives to LIDAR. So I think that's a reason. Yeah. And it varies by segment. Some segments are gonna use this, some segments are gonna use that. But I think another reason is that we're still really learning there's so much we can do with, and let's face it, they're all waves, whether radio waves or light waves, right? But there's a lot we can do in the radio frequency realm that we haven't tapped yet.
Leonard Lee: Yeah, and you're making a great point here, because there's increasing talk about more capable sensor fusion, right? And so I coined this thing, and by the way, large language models and even SLMs have not really made their way down too close to the sensor. Those are still pretty experimental, nascent-type concepts. But a little bit further away from the sensor, there are some, let's say, emerging conversations around what I call long-thinking sensor data fusion, right? Where you might have an SLM that basically interprets aggregated data off of an array of sensors. Mm-hmm. And then applies a certain degree of probably pretty low-level contextualization of that data. I think next year you're gonna see more talk about that, because in terms of the technologies and some of the economics, it's there, and we're seeing some of these types of patterns play out in the automobile, right? Although mostly in the infotainment system. But there is this thinking that, hey, we can introduce some of this longer-reasoning-type functionality within some of these operational systems or even mission-critical systems, right? But it's a layer of reasoning. It's not gonna replace a lot of the real-time, critical functions that sensors and sensor perception systems play today. So that's one of the interesting things. Hey, here's the other thing: a lot of talk of IMUs, or inertial measurement units. Mm-hmm. And I think people are searching for the next big thing beyond the smartphone, right? You can tell, because when you go to the different booths, you do hear a lot of the vendors talk about how they were approached by X, Y, Z company about how they can use their technology for intelligent or contextual computing-type applications. So, our buddies at Innatera were there, right?
So the Edge AI Foundation had their pavilion, and one of the things they were showcasing with the T1 spiking neural processor was a new demo. I don't know if you've seen this one, Jim, but it is like a gesture control application on a smartwatch. There were a lot of those types of demos at the show this year. And then the other thing that I thought was really cool, and I just bumped into these guys, I didn't even know that they were there, is a company called Omicron. Are you familiar with them? No, I'm not. Okay. So what they do is they focus on MEMS mirrors for optical switching. Oh, okay. Yes, yes, yes. Okay. Yeah. Are you familiar with them? Yes, I am. Oh, okay. Yeah. So, what was his name, the CTO, Trent Huang, I think. Yeah, because he has the same last name as Jensen Huang. Yeah, he was talking about the optical MEMS solution that they have and their technology being used in some of these more advanced LIDAR applications. Importantly, these guys I think are former Google networking guys. They're really honing in on the AI data center opportunity and looking at optical circuit switching and bringing about the next generation of optical switches for AI supercomputing. So I thought that was weird, but right now the bottleneck is, apparently, according to Trent Huang and Jensen Huang, this OCS stuff, which they're looking to solve with their MEMS mirror technology. It was a great event. I think next year is gonna be even bigger. And full disclosure, I'm an advisory board member for Sensors Converge, so everyone come next year.
Jim McGregor: It's gonna be awesome. You know, I used to speak at Sensors Expo, but I haven't been invited to speak at Sensors Converge. We'll have to change that. Okay. Yeah. Would love to. I think it really fits in with some of the same stuff we're seeing at Embedded World. Great event. I'm expecting to see very similar stuff at IFA later this year in Germany. I agree with you. The edge is lighting up significantly with opportunities, with innovation and everything else. Not just how to use AI, but how to use these advanced sensor technologies, advanced semiconductor technologies. It's gonna be very interesting. By 2030, everything's gonna have some form of AI in it. It might be something as simple as doing battery management or maintenance detection or whatever, but everything's gonna have AI and everything's gonna be processing AI. Yeah. So the challenge, I think, is designing for AI first and then figuring out how everything else fits around it. That's a good thought. I feel like so far we've tried to bolt stuff on, but I think companies are finally getting that you have to design for AI first and then figure out how other things fit in there, because AI's gonna take over a lot of the stuff you did previously, and do it much more efficiently.
Leonard Lee: Okay, we'll have to have a further discussion on that over some drinks at some point. I'm not disagreeing with you. I think there's nuances to that, because, you know, innovation is complicated, right? Yes. But we'll have to record that session. Hopefully we will have some nice drinks that we are imbibing, or have imbibed several of, before we
Jim McGregor: Actually, innovation's very simple. You just have to follow the two rules of engineering. First rule is make it simple. Second rule is ignore the first rule.
Leonard Lee: I was about to say. Oh, that's awesome. I love it. I love it. Okay. So, I was at Cisco Live 2025. That was my first time, believe it or not, which is really cool. And I thought you might find this really interesting. So, you know, Cisco's there with AI supercomputing, right? They announced the partnership that they have with Nvidia, with ConnectX, right? So they're using that, as well as creating networking fabrics for all variations of AI supercomputing, doing a lot of work especially with the hyperscalers, as well as the new neoclouds, right? So that was one of the things that they really wanted to emphasize. And it is a growing reality, right? 'Cause they're involved with Humain, G42, all these Middle Eastern uber mega AI initiatives that were announced. Mm-hmm. Was it last month or the month before? It was just last month. Geez. Like, so much stuff happening in such a short time. Yeah. That was right
Jim McGregor: before Computex. When was Computex?
Leonard Lee: May. It was just, ah, see, it's getting crazy. But hey, I did bump into Martin Lund. Do you know him? Yes, he's the EVP of Common Hardware at Cisco. And it was really cool, because I met him at a mixer, right? A social with the executives. I initiated the chat, because what he did was he showed me this G200 NPU, which is not a neural processing unit, it's a network processing unit, right? And the frigging thing is like the size of a Blackwell. Mm-hmm. I mean, it's massive, right? And so I said, oh, so did you use CoWoS-L for that? And after that, we were like best buddies. He goes, you know what CoWoS-L is? Yeah, I do. I was like, oh, that's awesome. That means we can talk X, Y, Z. And next thing you know, he opened his kimono and we had a great time talking about the future of AI supercomputing and the important role of networking, right? And so we have to remember the networking part, or the scale-out element, has been something that wasn't necessarily top of mind or prominent in this whole AI supercomputing narrative. It's been about, I would say, a year old, right? Once Nvidia started to introduce the concept of the Blackwell systems. And it's not just networking now; there's cooling too. The three infrastructure things for the data center, yeah, that people forget about, that are probably the most important parts of the data center connecting all these servers and compute units, are the networking, the power, and the cooling. Power's going through a huge change. We've already upgraded a lot of the hyperscale data centers from 48-volt DC to 400-volt DC throughout the data center. But now, especially with Blackwell and the NVL72, that's up to 1.2 megawatts per rack. So now they're having to upgrade to 800-volt DC throughout the data center to support these new racks and new systems and solutions. But we also found out, even though they said that those systems could be air cooled, it didn't work too well.
So basically we're having to go to liquid cooling on all these solutions. Yeah. So it's really a significant change for the infrastructure, and the infrastructure's become so critical. Matter of fact, one of the biggest problems right now is the fact that you can get these racks in possibly as quick as six months after you order 'em. However, it still takes you two to three years to build a data center if you're lucky. Yeah. And a lot of that has to do not just with the physical building, but with those three components, 'cause a lot of times they get customized. So companies have to start thinking about not customizing those, and going to more modular solutions from the likes of Flex and Vertiv and Schneider and Delta and blah, blah, blah. Yeah. Because the only way you're gonna actually get a data center up and running in 12 months is to start thinking about everything holistically, including those three factors. So I had a really interesting conversation with Denise Lee, who is the VP of Data Center, mm-hmm, at Cisco. And they're actually working on a lot of really interesting power solutions, including this insane PoE technology that they're starting to bring out of R&D. But one of the things she mentioned was how there is growing interest in immersion, and I know how much you hate immersion, but we did see some of this stuff at Computex, right? Okay. Hear me out. Hear me out. No, no, no.
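The DC bus voltage upgrades described above come down to simple arithmetic: for a fixed rack power draw, bus current scales inversely with distribution voltage (I = P / V), and lower current means smaller conductors and lower resistive losses. A minimal sketch using the per-rack power figure quoted in the conversation, treated purely as an illustrative number:

```python
# Why the industry keeps raising DC distribution voltage: for the same power,
# current drops in proportion as the bus voltage rises (I = P / V).
# The 1.2 MW per-rack figure is the one quoted in the conversation above.

rack_power_w = 1.2e6  # watts per rack, as quoted; illustrative only

for bus_voltage in (48, 400, 800):          # the three DC bus levels mentioned
    current_a = rack_power_w / bus_voltage  # Ohm's-law power relation, I = P / V
    print(f"{bus_voltage:>3} V DC bus -> {current_a:>8,.0f} A per rack")
```

At 48 V that rack would draw 25,000 A, at 400 V it drops to 3,000 A, and at 800 V to 1,500 A, which is the practical motivation for the 800-volt DC transition Leonard describes.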
Jim McGregor: I think immersion is going to be a lot of different things. I don't think it's gonna be just dropping a rack in liquid. I think we're gonna see a lot where we're just using immersion on part of the board or the module, or on certain parts of the system. Yeah. And there's everything all the way up to, you know, what Microsoft did, sinking a fricking data center in the middle of the ocean. Yeah. We're gonna see a lot of different versions of immersion. So I think we're gonna have to break out this category pretty soon.
Leonard Lee: Yeah. Yeah. And it'll be interesting, though, to hear what folks have to say at Hot Chips this year. But here's the thing. She made an interesting point. The value proposition of going immersion is that you can deploy that kind of cooling architecture in a conventional or traditional data center. You don't have to do that heavy refitting or re-architecting that's associated with water cooling, right? Mm-hmm. Or liquid cooling. And so that was a cool takeaway from Cisco Live. And so with that,
Jim McGregor: It's not that easy. There are some components that don't play well with it. You're using a liquid dielectric material. Yeah. And there's no single one; everyone's experimenting. You can even use a gel. We did that in some of the military applications we did with Motorola. You never know how it's gonna interact with other components, other materials, other things. So, you know, when people say, oh, it's easy, it's not easy.
Leonard Lee: I think you've made the point that it's difficult, and the industry recognizes that. I'm just pointing out a value proposition that's often talked about. And then there is an associated cost with refitting older data centers. This is one of the reasons why we're seeing more air-cooled offerings, right? Whether it's from AMD, Nvidia, or even Cisco, there's this recognition that some of these older data centers are not going away, but they need to be on the modernization path for accelerated computing. They're not being ignorant of the fact that there is a brownfield reality that this revolution is going to have to deal with, right? Like every other revolution. So, anyway, before we hit the record button, there were a couple things you wanted to talk about. I know one was related to the end-of-lifing of a lot of stuff.
Jim McGregor: Lately, yeah, we've been seeing a lot of companies start to say that they don't want to keep supporting some of these products. Nest did this with their first two generations of products. Yes, their thermostats will still work, but they won't have the learning and some of the smart features associated with 'em. I guess Apple is reclassifying the, is it the iPhone 6, as, I don't know what term they use, vintage or whatever. They're not gonna be supporting it. And there's been a couple of other announcements I think we've seen recently.
Leonard Lee:iPhone six,
Jim McGregor: I can't remember what generation it was, quite honestly, but it was one of the generations that they said, you know. But we're seeing that, I think. Yeah, yeah, yeah. I think, you know, maybe it's just summer cleaning. Oh, no, no. Companies are looking up and saying, okay, we need to stop supporting some of these older products.
Leonard Lee:Well, for the big news was their end of life and their support for. any Mac running on Intel? Oh, that was one of'em,
Jim McGregor:yeah, that was one of
Leonard Lee:the, yeah, that was the, well
Jim McGregor:big, that was a big enough. Yeah. So no, I think we're seeing a lot of that spring or summer cleaning, but I also think we're seeing a lot of, cleaning house across companies. Not just in terms of their products, but also in terms of, how they're organized, how they're structured. Obviously the biggest one right now is Intel. they have announced that they're gonna be doing, or at least I've seen that they've announced or there's been reports that they're gonna be laying off 10,000 more people. I know that they are going to be, getting out of certain markets, and we saw this before when they were in trouble and they were falling behind with a MD and Paul Linney took over. He got'em out of communications and a lot of other segments. Well, so far, one of the ones that's hit the floor is automotive. we don't know what else yet. But it will be interesting to see. I actually sat on a plane, next to a lady that used to work for, I think it was Ocom or something like that, but a company that does, educational software. And she said basically AI. Eliminated their product lines, or some of their product lines because, didn't need anymore. So she became a teacher, And she says she uses chat GPT every day. She just gives it a brief outline and it creates a lesson plan for in seconds. So I think that, it's interesting'cause even me, I'm finding new ways, almost every week to use some of these tools. And in a lot of cases I'm finding out that I want more. They're not doing enough. it's not good enough yet. So it is interesting. I am, fine. I think that, We're getting a lot of people, every day, every week, every month over that, initial hump of what AI is and how to use ai, especially for generative and ai. Yeah. And it's gonna be a fun time, but it's also transforming the enterprise.
Leonard Lee:Well, you know, I'm sort of on the cautious side of the coin on this, just because I see what kind of risks it poses, especially from the standpoint of trust, cybersecurity, and the quality of information and content that we have on the web. I see a lot of trash, quite honestly, being produced. If you go onto YouTube these days, or even Facebook, you see a lot of junk. So I think the challenge is going to be how to manage some of that, to avoid some of the detrimental impacts all of that is inevitably going to have, while ensuring that folks like you and other domain experts can leverage the technology in a way that has benefit, right? That it can yield productivity and value creation. I don't think you realize that through a lack of discipline; there are best practices, there are a lot of disciplines and guardrails that we as users also need to apply, and I don't see that happening as much as it should. That being said, some of the vendors, and I would include Cisco in this as well as Apple, are in a material way looking at some of these gaps and problems. For instance, Cisco has Hypershield as well as their AI Defense. They're looking at ensuring trustability from the model all the way to end-to-end transactions. Now, this stuff is still early, right? But there's the recognition of how big the problem is, and we're also seeing it with other cybersecurity companies that are more realistic about generative AI and much more cautious, especially with agentic AI; they're freaked out. I get what you're saying, Jim, but there's also tremendous misuse happening. It's not even a question of will it happen; it is happening. And that's creating tremendous concern, especially around cyber trust and cybersecurity. That's where I'm coming from with this stuff, right?
And that's the balance, that debate between you and me. You're just a downer; I'm an upper. I know, it's great. Otherwise our show would be really freaking boring if we agreed on everything. And the reason why I love you guys so much is because we disagree on
Jim McGregor:things.
Leonard Lee:But I'm right.
Jim McGregor:You know? No, just kidding.
Leonard Lee:You are. You're in your own little way. You are
Jim McGregor:right, in my own mind? Is that what you're trying to tell me? Yes, yes. I'm just trying to be nice. Right.
Leonard Lee:You know, I'm a cordial guy. But we've talked about a lot of things here. Yeah. Okay, so let's also talk about this other big piece of news, because Qualcomm is acquiring Alphawave Semi, and I know that you have talked about Alphawave Semi forever and ever. I think this is a big deal for Qualcomm. I mean, it's the beginning of what looks like is going to be a pretty long journey in what is becoming an increasingly crowded market. But what's your take?
Jim McGregor:I'm sure you know they're not saying much about it at this point in time. Yeah. And I understand the reason for being conservative, but obviously this is an IP play. They want to be able to pull in Alphawave's technology and be able to utilize it. And it's not the first one; they've been acquiring several companies that are going to enhance their capabilities throughout AI segments going forward. So they are definitely getting more aggressive. The thing about Cristiano Amon at Qualcomm is that he's looking at Qualcomm as being a much broader company, targeting not just mobile and automotive, obviously, but a host of different IoT segments, and now the data center as well. They're getting aggressive on the data center, where they pre-announced a new CPU. They're definitely transitioning the company from its traditional markets into kind of a new growth phase, with the opportunity to grow in a lot of segments, some of which are going to be slow growth. It's kind of hard; a lot of the IoT markets don't grow very quickly and they're very fragmented. But it positions Qualcomm as once again being that engineering leader helping advance some of these technologies in some of these markets. And quite honestly, for a lot of them, especially AI, connectivity is so critical. So if nothing else, they already have a foot in the door for so many different types of applications through their connectivity offerings.
Leonard Lee:Well, yeah, but not only that, they're not really a newbie to this space, because Cerebras announced that they'd selected Qualcomm's AI 100, right? Mm-hmm. Yeah, the Cloud AI 100, for inference. And I know it's not the hyperscale stuff, but they also announced an AI appliance for edge infrastructure. Mm-hmm. That's far from being all this scale-out, scale-up AI supercomputing, but they are there. And like you said, they're probably going to be coming to market. We'll find out at Snapdragon Summit, hopefully.
Mm-hmm.
Leonard Lee:for the data center, which was actually Nuvia's game before Qualcomm acquired them, right? At the time. So, yeah. And, you know,
Jim McGregor:I mean, if you think about it, going all the way back to Atheros, CSR, Cellwize, Arriver, Nuvia, Edge Impulse, Alphawave, the good thing is they're making very strategic acquisitions for the markets that they want to serve.
Leonard Lee:Yeah. And then they penned that MOU with HUMAIN. Yes. Right. Wow. Them and everyone else under the sun. I know, but theirs was particularly interesting. Some of them were, but this one seemed more like a co-development play, right? Yes. So this is where that IP, I think, can come into the equation and be something valuable for Tareq and his team as they look to build what might be a bespoke, HUMAIN-specific infrastructure. Mm-hmm. We'll see. But yeah, I thought it was interesting, because the Alphawave portfolio, like you mentioned, brings the networking aspect to the data center portfolio, and also some interesting things in terms of chip-to-chip IP, right?
Jim McGregor:Yeah. And we've seen this with every company that's wanted to get into the data center, obviously Nvidia, Intel, AMD, Marvell, companies acquiring other pieces, because you can't just be a single solution provider or single silicon provider for the data center if you're going to be a player there. And we have to remember, data centers are more than just AI. They are communications. There are embedded data centers for military and government. There are a lot of different types of data centers, enterprise data centers too, but you can't be a single silicon provider. You have to provide a platform, and that's where I see Alphawave fitting in, as part of that platform for connectivity.
Leonard Lee:Yeah. And you know what, I think one of the things that really plays in Qualcomm's favor, in terms of the silicon, especially for the CPU core, is that they have a very, very clean technological platform that can scale down as well as scale up. It will be interesting to see how Gerard and team leverage that for the AI supercomputing opportunity they have in front of them.
Jim McGregor:No, I agree. And Qualcomm's always had some of the best engineering talent. They pride themselves on being able to solve very difficult problems, and we wouldn't be nearly where we are with wireless technology or anything else without Qualcomm today. I look forward to seeing what they can do in a lot of these segments. I expect we're going to see a lot of innovation, and the data center especially is ripe for innovation at this point in time. Pat Gelsinger talked about this; at Intel Direct Connect he talked about having to put everything closer together, or what I call the densification of compute, for increased performance and efficiency. That means the CPUs, the accelerators, the networking, the memory, everything closer together. Last year I was talking to people and they were saying it's going to be three to five kilowatts. I'm talking to people this year and they're saying, Jim, that's going to be 20 to 30 kilowatts per module. I mean, this is insane. One of the areas where I think there's going to be a lot of innovation is server design. I think the pizza boxes, or trays, are going to outlive their usefulness, because when we start thinking about thermals, in terms of cooling and power and everything else, we need to re-architect the rack. I think going to bricks or blades is going to be much more efficient. And yeah, I'm talking about my history at Motorola with VME and CompactPCI; I'm not going to let it go. I think we've done everything we can at the silicon level and the data center level. The rack's got to change, the server's got to change. Trust me, I'm going to be right.
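[Editor's note: a back-of-the-envelope sketch of the power jump Jim describes. The per-module kilowatt figures come from the conversation; the modules-per-rack count and the PUE overhead factor are illustrative assumptions, not vendor specs.]

```python
# Sketch of the "densification of compute" arithmetic: per-module power
# times modules per rack gives rack IT power; a PUE factor approximates
# the extra facility draw for cooling. All inputs are assumptions.

def rack_power_kw(modules_per_rack: int, kw_per_module: float) -> float:
    """Total IT power for one rack, in kilowatts."""
    return modules_per_rack * kw_per_module

def facility_power_kw(it_kw: float, pue: float = 1.3) -> float:
    """Approximate facility draw including cooling, using an assumed PUE."""
    return it_kw * pue

# "Last year": ~4 kW per module; "this year": ~25 kW per module,
# with an assumed 8 modules per rack.
old_rack = rack_power_kw(modules_per_rack=8, kw_per_module=4.0)   # 32 kW
new_rack = rack_power_kw(modules_per_rack=8, kw_per_module=25.0)  # 200 kW

print(f"old rack: {old_rack} kW, new rack: {new_rack} kW, "
      f"facility: {facility_power_kw(new_rack):.0f} kW")
```

Even with modest assumptions, the same rack footprint jumps from tens of kilowatts to hundreds, which is why the rack and server form factor itself comes under pressure.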
Leonard Lee:No, I'm trying to figure out ways to disagree with you, and it's really hard, especially given how annoying it is that you keep bringing this thing up. It's going to happen. Yeah. As Forrest Gump once said, that's all I got to say about that. So,
Jim McGregor:in 10 years, the rack looks completely different. In 10 years, we're not built around a rectangular device. In 10 years, everything changes. Yeah. Well,
Leonard Lee:things do change, right? And they are changing really quickly.
Jim McGregor:Well, just look at what we're seeing in robotics. Robotics is nothing new; we've had it for a long time, and over the past couple of years we've really improved the sensor capability and the motor function and everything else. But adding agentic AI to it is that final step that really enables physical AI, those autonomous machines, the robots, to do stuff that we've always dreamed they could do but never really could get them to do. I think 2025 is a turning point for robotics with agentic AI, and I look forward to seeing what we see over the next couple of years. Just having a robot walk up to me, introduce itself, and shake my hand, I was blown away.
Leonard Lee:Oh, no pun intended. Right. No, I agree with you, simply because the defense industry, which is heating up big time, is going to drive a lot of that. I don't think industrial is going to drive it as much as we think. And we do see that momentum, whether it's the newest generation of fighters and this whole idea of using NGAD, which is Next Generation Advanced, I don't know what the D stands for, I can't remember, and then drone orchestration. And in order for that to happen, you have to recognize that these drones are going to be largely autonomous. We should do another podcast
Jim McGregor:just on the cool stuff in defense. It's going to
Leonard Lee:be awesome. Yeah.
Jim McGregor:Um,
Leonard Lee:Yeah. And it's been scary. Yes. Awesome.
Jim McGregor:Totally.
Leonard Lee:Well, yeah, it's scary. But this is where a lot of the interest is going, and I think a lot of the leading edge in terms of applications, because we're seeing it in Ukraine, right? Mm-hmm. Who would have thought it wouldn't be drone delivery that made drones a big thing? It was what they're doing on the battlefield right now. Yeah. And you know how we try to keep it real, right? I mean, geez, these are the realities. We just have to track the market and the industry in the way it's actually expressing itself, not in the way that people might be led to believe. But it's exciting times. AI is just going nuts. The crazy thing is, this was supposed to be a silicon- or semiconductor-light month, and this is probably our longest podcast, but hopefully everyone enjoyed the banter. Jim, thank you so much. Hey everyone, make sure to connect with Tirias Research at www.tiriasresearch.com. Jim and team, they're fantastic. They are, dare I say, legendary in the industry. And I know a lot of folks are very happy that we're doing this collaboration along with Karl Freund of Cambrian-AI Research. And I think you guys have your own podcast,
Jim McGregor:Right. We do stuff with EE Times, we do stuff on our own YouTube channel, we do stuff with all kinds of people. So, happy to work with the industry. Yeah, all kinds of people
Leonard Lee:like me. Absolutely. And so, hey, remember to subscribe to our podcast here, Silicon Futures, in collaboration with Tirias Research and Cambrian-AI Research, which is featured on the neXt Curve YouTube channel. Check out the audio version on Buzzsprout, or find us on your favorite podcast platform. Also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. And we'll see you next time, next month. Next month, Silicon Futures. Alright, we'll do it again. Yeah.
Jim McGregor:Cheers mate. Bye.