The neXt Curve reThink Podcast

Silicon Futures for Midpoint of August 2025: More Intel Drama, Trump's AI Export Tax, MIT's AI Report

Leonard Lee, Karl Freund, Jim McGregor Season 7 Episode 34

Send us a text

Silicon Futures is a neXt Curve reThink Podcast series focused on AI and semiconductor tech and the industry topics that matter.

In this special August midpoint episode, Leonard, Karl and Jim talk about some of the top headlines of the first couple of weeks of August, recorded while they were together at the KeyBanc Capital Markets Technology Leadership Summit in Deer Valley, UT.

Topics that mattered in the AI and semiconductor universe at the midpoint of August of 2025:

  • Karl's latest Forbes article: "Skipping Nvidia Left Amazon, Apple And Tesla Behind In AI"
  • Intel's cash malaise - SoftBank and the U.S. government to the rescue?
  • Trump's AI export tax and its implications on NVIDIA and AMD and U.S. AI leadership
  • Jim recaps his takes from FMS (The Future of Memory & Storage) Conference 2025 and the evolving role of memory in AI supercomputing
  • The MIT NANDA Initiative's controversial report on AI, "The GenAI Divide: State of AI in Business 2025"

Hit Leonard, Karl, and Jim up on LinkedIn and take part in their industry and tech insights.

Check out Jim and his research at Tirias Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com.

Please subscribe to our podcast which will be featured on the neXt Curve YouTube Channel. Check out the audio version on BuzzSprout or find us on your favorite Podcast platform.

Also, subscribe to the neXt Curve research portal at www.next-curve.com and our Substack (https://substack.com/@nextcurve) for the tech and industry insights that matter.

NOTE: The transcript is AI-generated and will contain errors.

neXt Curve:

Next curve.

Leonard Lee:

Welcome to this neXt Curve reThink Podcast episode, where we break down the latest tech industry events and happenings into the insights that matter from the world of semiconductors and AI. This is the Silicon Futures series that we do. I'm Leonard Lee, Executive Analyst at neXt Curve, and I'm joined by the Perplexity-augmented Karl Freund of Cambrian AI Research, and we have the supercharged, custom-built Jim McGregor of famed Tirias Research. How are you guys doing?

Karl Freund:

I'm great. Doing great. Looking forward to the weekend.

Leonard Lee:

Awesome.

Karl Freund:

yeah,

Leonard Lee:

Yeah. And so, you know, people don't know and probably don't care that this is a redux episode. We had a technical issue with the original recording, so all the insights that we share here are actually a week old. We just want you to know how current we are. I think we should fire the host. Yeah. Good luck. Good luck, Jack. Oh my gosh. Okay. You guys are ridiculous. Okay. But before we get started, please remember to like, share, react, and comment on this episode. Also subscribe here on YouTube and BuzzSprout. Opinions and statements of my guests are their own and don't reflect mine or those of neXt Curve. We are doing this to provide an open forum for discussion and debate on all things AI and semiconductor industry stuff. So, this installment is a little bit offbeat. This is a talk episode for the month, just because so much stuff has been happening. Don't shake your head, man. What the hell's your problem? But there's a lot of stuff, like so much stuff. Most prominent is Karl's Forbes piece, which we're gonna give Karl the opportunity to talk about, and call out everybody who didn't go with Nvidia for their AI. Oh, thanks. We're gonna let you have an opportunity to just go to town with that. Okay. So, hey, Karl, let's start off with your piece. I had a chance to read it, by the way. I loved it, even though it rubbed me the wrong way in certain areas. That's common between you and I, right? But that's right. I thought it was.

Jim McGregor:

Is that, is that too much information?

Leonard Lee:

way too much information.

Karl Freund:

Way too much information. Yeah. The basic premise is pretty simple: companies are spending billions of dollars to not buy Nvidia products that work. They're trying to compete with Nvidia, and I understand why. They don't wanna pay the high price, they don't wanna be locked into a single vendor. Got all that. But there's a price to be paid. The examples I gave were things like Apple, which has an ABN strategy: Anybody But Nvidia. Had they simply used Nvidia, maybe Siri wouldn't be two years late. Now, they have some technical issues as well. I don't mean to minimize those, and they're not easy to solve. They have an installed base. Generative AI Siri will be two years late, according to Apple.

neXt Curve:

Okay.

Karl Freund:

Yeah. So had they just used Nvidia, would they be on schedule? Who knows? AWS is spending a large fortune on Trainium and Inferentia. I get why, but it's interesting how much trouble they have with their own engineers and other engineers, that they kind of shove Trainium down their throats. Trainium 2's not competitive with Blackwell. Maybe Trainium 3 will be, but then Nvidia will be on Blackwell Ultra, right? But more important is the software stack. It's really about CUDA. I was talking to a developer who left Nvidia, who will remain nameless. He's with another cloud provider now. And he said when management told them they had to use Trainium, they said, okay, we will, but we will be late to market, it will cost more to develop the software, and you want us to do this? Why? Now they're building massive clusters of Trainiums, and we'll see how well that goes. So the article's basically just kind of a wake-up call to people, to say, don't assume that your engineers know how to build a better AI chip than Nvidia. In fact, I would claim they do not. And more importantly, they don't know how to build rack-scale infrastructure. There's only one rack-scale infrastructure available today, and that's Nvidia. Unless you wanna buy Cerebras, which isn't really rack scale, but it's that big and it's that expensive, right? Yeah, they can scale, but it's not cheap. It's very good, but it's not cheap. It's like buying

Jim McGregor:

a supercomputer.

Karl Freund:

It's like buying the smallest supercomputer ever built. Exactly, Jim. So that was really the article, and I just wanted to kind of shake everybody up and say, you know what, there's no free lunch here. I am sure they'll solve their problems. I'm sure Apple will eventually have an Apple Intelligence, which is also called AI. And I'm sure that Trainium eventually will be a good chip, but it's gonna take years and it's gonna take many, many tens of billions of dollars, just so that you can be late to the party.

Jim McGregor:

And that's only one side of the equation. The other side of the equation is the developers. If the developers aren't accustomed to that platform, aren't developing on that platform, it's not gonna be effective, it's not gonna be cost effective to use the platform. Matter of fact, we're seeing that. Just the number of NVIDIA GPUs deployed is making a huge difference in terms of who's winning and losing in terms of those instances for AI out there in the marketplace today.

Karl Freund:

And you call me Perplexity-augmented. Guess what Perplexity uses. Only Nvidia. Why? 'Cause it's the best. It's the best hardware, it's the best software, it's the best rack-scale infrastructure. And they've got more stuff coming out at Hot Chips next week. Yeah, keep an eye out for what they're gonna announce. And we're all going,

Leonard Lee:

right? I'm gonna be there. I'll be there. Yeah.

Karl Freund:

I'm gonna be on the phone. I'm just not interested in traveling out there, really. Yeah, I can do it remotely. You get better screenshots.

Jim McGregor:

I won't argue that one.

Karl Freund:

Take a look at it. You can see it on cambrian-ai.com as of Friday, I believe. And if you have a Forbes subscription, you can look at it on Forbes right now.

Leonard Lee:

It was well done. I mean, it does beg the question: what would be the alternative play? That was the thought-provoking aspect of the piece, Karl. So, yeah.

Karl Freund:

Yeah. I think the better play is: get your AI product to market as quickly as you can, and building your own chip is not the way to do it. Now, there are some chips that I wouldn't put into the same kettle of fish, right? Such as what Meta has built. That's to solve a very specific problem they couldn't really solve the way they wanted to with GPUs, and theirs is for recommendation engines. Yeah, if you have a use case and designing your own chip makes business sense, good luck. I think it's great, and competition's good. I think Trainium 3 is gonna be a good platform, but I just wanted to point out that there is no free lunch and these are very expensive. The other one I would make an exception for is TPU, because Google's got more money than most gods, right? So they can afford to do this, and they have enough workload to make those spreadsheets all work out in a good way. They're positive. So nothing against TPU, but all the Trainium stuff... the list is long.

Jim McGregor:

You know, the funny part about this is there's a historical trend here, in the fact that companies trying to do their own ASICs usually don't last very long, because it's really hard to keep pace with people who develop silicon every single day. You know, that's all they do. It's impossible to keep up, especially when there's an annual cadence; keeping up with that cadence and keeping pace with technology is hard. So historically we've always seen this, unless you're investing in having a full silicon solutions group that is just focused on developing a new chip constantly, and usually you have to have multiple generations going simultaneously. That's a huge challenge and a huge investment. So I agree with you, Karl.

Karl Freund:

The day I published it, Microsoft announced that their next-generation AI chip's gonna be late. And I haven't met too many developers who are using what they have today, 'cause it's just not competitive.

Leonard Lee:

There's also that scale element though, right? We're talking about systems more than we are the chips these days, right? And software. I've noticed, as people continue to talk about AI supercomputing, there's still a fixation on the accelerator itself, but it's been months, if not a year and a half, since we've graduated from that notion. Jim, you mentioned rack scale. In many of the conversations I've had outside of this group of folks hyperfocused on AI supercomputing, they haven't graduated their thinking to the rack yet, much less the data center. And so these different levels and scales of AI compute are still fixated on the smallest unit, which is the quote-unquote GPU, or the chip, or the accelerator. So, as we look at the landscape, though, it's still diverse. I mean, ASICs and custom systems on top of custom silicon are still a thing, although I think Nvidia is making some really interesting plays, especially on the networking side, right? A lot of audiences are still behind the kind of perspective that we present. Do you know what I'm saying? For us it may be totally obvious. I'm telling you, outside of this small group of folks who are breathing this sub-industry or industry day in and day out, it is so not obvious, right?

neXt Curve:

Mm-hmm.

Leonard Lee:

So, I think the piece that you put together, Karl, was really good, thought provoking. Thanks for sharing, and everyone, check it out. It's like freaking awesome. So, let's move on to our next topic, which is all this drama around Intel. We have the Trump administration, or Trump himself, telling Lip-Bu Tan to take a hike because of quote-unquote conflicts. And then Lip-Bu Tan makes a visit to the White House, and next thing you know, he's a success. And now we have talk about the US government taking a stake in Intel, sort of, I don't know if you wanna call it, in lieu of the CHIPS Act grant funding that had been earmarked for them. But then also SoftBank stepping in with, I think it is like $2 billion, right? What do you guys think? What's going on?

Karl Freund:

I think the Trump administration's trying to renegotiate the CHIPS Act. They're trying to say, oh, that money we promised you? There are strings attached. I think Intel is the first of many to suffer that. Samsung and others are probably gonna have their moment in the Oval Office, where they're gonna be forced to give something up to get what has already been promised to them.

Leonard Lee:

You know, that is a very interesting point of view right there that you just shared. Yeah. It's a renegotiation of the CHIPS act. Yeah. It's interesting. Jim, what do you think? What's your reaction? No.

Jim McGregor:

My reaction? I wasn't really that surprised. Intel is important to our country and our industry on many levels. Not just the products; they're the only option we really have for a U.S. leading-edge foundry, which is necessary, especially for defense and government applications. But also the fact that Intel's been critical to our industry, especially from an R&D standpoint. There are so many technologies we wouldn't have, even some that they contributed to but didn't come out with, like silicon photonics, co-packaged optics and stuff like that. When you look at 300-millimeter wafers, EUV, high-NA, FinFET, gate-all-around, all these technologies we probably wouldn't have without Intel. So they are a critical foundation of our industry, and I think that we do need to see them survive. So having the government step in and say, okay, we need to make sure that they're around... let's face it, this administration looks at how do I make money. Kind of like what happened with the GM bailout: get in, make money, get out. So I view it from that standpoint. I don't think it really impacts the market. It doesn't really impact Intel's product roadmap or anything else long term. I think it just gives them a cushion that they need to actually get over the hump.

Karl Freund:

Gives them a bridge to the future. Gives them the financial underpinning they need to do what the US industry really requires of them, which is to be a competitive foundry.

Leonard Lee:

Yeah. From a cash flow standpoint, the infusion is gonna be essential given what now is apparently a not-too-healthy situation. Coming into 2025, the perception was, yes, they have issues, but maybe the situation was salvageable, that their roadmaps were on track. Jim, you've mentioned how the PDK was late and how there were some slip-ups that have landed them in a less than optimal situation where they really needed to be optimal, especially with the ramp into 2025, the ramp-up of 18A, and then, very importantly, the successful launch and ramp-up of Panther Lake. So, yeah, it's been a bit of an eye-opener. It's just that I think we haven't had a very good track record with government intervention, or this industrial policy being instituted in the semiconductor industry. One of the red flags for me is how policymakers talk about the semiconductor industry. It's indicative of the level of understanding that they have. And the big problem here is that the semiconductor industry, for folks that are on the outside, is the industry that is prone to the most severe unintended consequences, right? This is the problem, and one of the reasons why my confidence level in a lot of this intervention is low. I also think that market forces at the moment are probably an important cleansing mechanism for a lot of the trouble that Intel's in. So anyways, that's my 2 cents.

Jim McGregor:

You know, I don't trust anybody that can't even spell a technology like AI, which, how do you spell that, basically Washington. Or they put a hyphen in semiconductor, or when they start talking about silicone. Well, yeah, that is explainable, but no. So I agree with you. I think the government has a role, definitely, in our industry. There's gotta be regulations, and I think they should encourage innovation, and there are better ways to encourage innovation than direct investment or control or anything like that. I just think we got caught, with Intel especially, in this situation where, unfortunately, they went through a couple of different leaders that didn't really have a vision for the company, and the company faltered in terms of some of its products, some of its roadmaps and everything else. Intel's still very important to us, and we don't have another option right now. So I totally agree with that. It's not that they're too big to fail, but we can't afford for them to fail.

Karl Freund:

Yeah, they're too important to allow to fail. And we really

Jim McGregor:

can't afford to break them up at this point in time. The foundry's not competitive. So, you know, until we see their chips, I

Karl Freund:

mean, they still don't have a competitive AI platform. It's 2025, and they're not gonna have a competitive AI platform. From what leadership has said, we should hear fairly soon what their new AI strategy's gonna be. I'm anxious to hear that, and I'll be all ears. So far, this'll be their fifth try at building an AI chip.

Leonard Lee:

Yeah. And for the audience, if you want some of our retrospective thinking about this, check out our episode which we taped at a bar in Las Vegas during Intel Vision 2025. It's a good one. We had Dave Altavilla on as well, who shared his perspective, and it was fun. And we had drinks afterwards, I think, right? If I'm not mistaken, I don't know. Okay. Anyway. Or were we drinking before? I don't know. Both, probably.

Karl Freund:

not during, not during.

Leonard Lee:

Uh, and oh, so speaking of the US government and the semiconductor industry, another hot item of the last week or two has been the Nvidia and AMD Trump... you wanna call it a tax? I don't know. It's a

Karl Freund:

tax, it's

Leonard Lee:

a tax. It's a

Karl Freund:

tax, an export tax. It's a tax on what gets

Leonard Lee:

sold to China,

Jim McGregor:

First, you can sell to China. No, you can't sell to China. Oh, we'll let you sell to China, but you have to give us 15%. Yeah, that's gonna backfire. Not only is that bad for those companies and a bad precedent to set for the industry, but China was not happy about it. So the Chinese government's telling companies to stay away from 'em.
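The 15% arrangement Jim is describing is simple arithmetic; here's a quick Python sketch (the China revenue figure below is hypothetical, purely for illustration, not a reported number):

```python
# Purely illustrative sketch of the 15% export levy being discussed.
# The revenue figure is hypothetical, not a reported number.

china_revenue = 10_000_000_000  # hypothetical $10B of China chip sales
levy_rate = 0.15                # 15% of China sales to the U.S. government

levy = china_revenue * levy_rate          # what gets paid to Washington
retained = china_revenue - levy           # what the chipmaker keeps
print(f"Levy paid: ${levy / 1e9:.1f}B, retained: ${retained / 1e9:.1f}B")
```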

Karl Freund:

Yeah, the opportunity may evaporate. There's a lot of speculation about whether Nvidia will even include China in their forecast when they announce earnings.

neXt Curve:

Mm-hmm.

Karl Freund:

They'll play it safe and say, you know, we don't know if they're gonna buy it, we don't know if we're gonna be allowed to sell it, so we're not gonna include that in the forecast, even though they're building a specific chip beyond the H20, a Blackwell-based solution, B20 or whatever they're gonna call it. Yeah, yeah,

Leonard Lee:

yeah, yeah, yeah. This has been a really strange one, in particular the confluence of different perspectives on both sides of the Pacific, right? The Chinese government is now surfacing a lot of security-related concerns. So, you know, the national security concern is a double-edged blade, right? It cuts both ways. And I think one of the things that we don't think about, generally, is the concerns of the other side, right? That things like national security are actually also an agenda for the Chinese government, right? And that sometimes falls out of the calculus of how people think about how the market plays out. They tend to think in terms of US national security and technological security. And so this is an interesting moment where we're seeing the Chinese government assert itself and its influence on the AI market for these chips. That may be counterintuitive for some folks, right? They might think, oh, well, you know, the demand is so high, of course they're gonna buy all these chips. It's something that I mentioned a couple of episodes ago, actually, maybe in like March or February: that the allowance, the permitting of these sales, may not turn out the way that folks think, that a dynamic like this would play out. And so this is what we're seeing happening right now. But I think what's more interesting now is how the H20 has aged. Its appeal has fundamentally diminished. I mean, geez, it's almost been like six months, right, since a lot of the drama around AI chip shipments to China, which really started with the AI Diffusion Rule. I like that you brought up, Karl, the new chip that Nvidia is now working on that's based on Blackwell. That's the next thing I think you have to really look at as you look forward to what the Nvidia China business is gonna look like, right? It may no longer be the H20, from what we can tell, right?

Karl Freund:

Mm-hmm. Especially if this drags on, yeah. I assume that the Blackwell chip, B20 or whatever they're gonna call it, is still progressing as planned.

Leonard Lee:

Yeah.

Karl Freund:

If they can't, then it was a big waste of money.

Leonard Lee:

Yeah. Well, I mean, Jim, how long do you think it would take for them to turn one around?

Jim McGregor:

Oh, not long. I would expect that it was already in the works. And given the fact that they've already got the chip design, the only limitation is how quickly they can get them through manufacturing, which is quite an extensive cycle. So you're still talking, you know, two or three quarters at least, just to get them into production.

Leonard Lee:

Yeah. This is a good conclusion to this particular interesting topic of the month, and we're almost to the end of the month. But Jim, you were at FMS 2025. Number one, what does that stand

Jim McGregor:

for? It used to stand for the Flash Memory Summit. Now it stands for the Future of Memory and Storage Conference. Okay. So it's kind of changed, and rightfully so. You know, we tend to think of memory and storage in two different sentences, or two different realms, and they really are the same. It's one spectrum. It's what I like to call the data pipeline, or you can call it the data layer, the data level, whatever you want to call it. But you have to think of it all holistically. Everything that goes from on-chip SRAM to HBM memory to GPU Direct Storage, main memory, pooled memory through CXL to network storage, all these things make up a single data layer that's critical, especially for AI. Without the data, you don't have AI. So it's interesting to see how the discussions around memory and storage are changing drastically. Yeah. I would say there were three common themes at FMS this year. One is power matters. It never really mattered for anything other than the processor, the accelerator. Now every ounce of power matters, and everyone's paying a lot of attention to that, especially the big three players. Micron, SK Hynix, and Samsung all really highlighted that. And Micron is in a great position in terms of having 20 to 30% power-efficiency enhancements over their competition. So it'll be interesting. The other key theme was definitely the fact that AI requires different types of memory and storage solutions for each stage. Whether you're talking about data acquisition, data processing or ingestion, training, or inference processing, each one of those has different requirements, whether it's density, bandwidth, or performance. All three were really highlighting those requirements, and I would say all of them, even Kioxia and others at the event. And the last one was the fact that things are going to change too.
Especially as we start looking forward, it'll be interesting to see how this plays out. Everyone was talking about customization with HBM4, especially with HBM4E, trying to add in-memory compute, with computing resources in the memory. I don't really see how that's gonna be feasible without standards, 'cause nobody wants a single-source supplier. And there's a lot of other innovation going on, from high-bandwidth flash, which, if you're not familiar with it, is like a hybrid solution that includes HBM DRAM with stacks of flash on top of it to increase density. There was talk about pooled memory with CXL. There was talk about GPU Direct Storage. So it is interesting that that data layer, that data pipeline, is becoming of critical importance to AI. And everything that's happening there for the data center is gonna filter down throughout the rest of the products, even down to cell phones and stuff like that eventually. Yeah,
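The data-pipeline idea Jim lays out, with every tier trading bandwidth for capacity, can be sketched roughly like this; the numbers are order-of-magnitude illustrations only, not vendor specifications:

```python
# Rough sketch of the memory/storage "data pipeline" described above.
# Bandwidth and capacity figures are ballpark, order-of-magnitude
# illustrations, not vendor specs.

tiers = [
    # (tier, approx. bandwidth in GB/s, typical capacity)
    ("on-chip SRAM",              10_000, "tens of MB"),
    ("HBM",                        3_000, "low hundreds of GB"),
    ("main memory (DDR/LPDDR)",      400, "hundreds of GB to TBs"),
    ("CXL pooled memory",            100, "TBs, shareable across hosts"),
    ("GPU Direct / NVMe storage",     30, "tens of TB"),
    ("network storage",               10, "PBs"),
]

# Print the pipeline from fastest/smallest to slowest/largest.
for tier, bandwidth, capacity in tiers:
    print(f"{tier:<28} ~{bandwidth:>6,} GB/s   {capacity}")
```

The point of the sketch is the tradeoff: each step down the list gains capacity but loses bandwidth, which is why each AI stage (ingestion, training, inference) lands on a different mix of tiers.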

Leonard Lee:

yeah, yeah. Well, the other way around too, right, Jim? Because, SOCAMM, right? Oh, yeah. We talked about

Jim McGregor:

that, and how they're trying to use LPDDR in server applications to reduce the memory power, and also change the configuration, rather than having the pluggable DIMMs, having the SOCAMM modules. Yeah.

Leonard Lee:

So maybe 2025, or at least this point in 2025, is going to be when everyone realizes memory is important. We had a little bit of that in the memory wall concerns that popped up toward the middle of last year. But I think you're making a really important point here, because the diversity of AI workloads, as well as the implications of novel model designs, have implications on how the system needs to look in terms of memory. And coming out of GTC, one of the big impressions, especially with Dynamo, was the accompanying idea that you're going to have the memory tiers that you outlined. And now we have to think even more; we have to treat this level or dimension of complexity on top of everything else. My view, and I don't know what you gentlemen think, is that this conversation is becoming even more complicated than we have been used to. It is certainly way past just the chip. And so when we go back to the policy discussions and how policymakers need to view the industry, especially AI, this stuff cannot be generalized. And it's also fast moving, right? Karl, your piece highlights how quickly everything is moving. It's mind boggling.

Jim McGregor:

Well, and Jensen described it best: the data center is the new unit of compute. There's gonna be a lot of innovation at the data center level. It's a single system, a single building or multiple buildings. It's a single system that you have to think about: how we design racks, how we cool these solutions, how we power these solutions, how we pump data through these solutions, how we feed the beast, essentially. All this is going to change rapidly, especially as we're trying to reach 100x performance improvements with each generation. It's crazy. But what we're gonna do with it is still mind blowing. I think agentic AI really is a key enabling step for physical AI and robotics. So, I mean, where we go from here I think is almost unlimited. And no, I don't think robots are gonna kill us, but that's just my viewpoint.

Leonard Lee:

No, it's because you and your family are gonna make sure that they come out with an anti-Terminator algorithm.

Jim McGregor:

No, no, no. We're gonna come out with the Terminator to make sure it doesn't. No, just kidding.

Leonard Lee:

Well, I don't know if you guys wanna touch upon this MIT AI report that came out, that is like the buzz. You wanna skip it? You wanna cut it off now, or you wanna venture into this particular...

Karl Freund:

now you got it

Leonard Lee:

We

Karl Freund:

can't

Leonard Lee:

just... I can't help it. You're the one that brought it up.

Karl Freund:

Well, I think it goes to show that if you think you know how to build your own AI, you're probably gonna fail. While the headline, for those who haven't seen it, said that 95% of AI projects, generative AI projects specifically, fail in the enterprise, it also said that something like 67% of them succeed if they use pretty much off-the-shelf tools, as opposed to building everything themselves. So that kind of lends credence to all the software companies that are building solutions that people just plug in with their corporate data. I'm oversimplifying it, to be sure, but it doesn't mean that AI's dead. It means that you probably don't wanna start by rubbing two sticks together. You should build on what's already been created and not try to replace it all.

Jim McGregor:

I love that description. And it's a lot of misconceptions, people not really understanding the bounds, the needs, the tools, everything. And that gets to a key point: when we're in these periods of hyper-innovation, it's really hard, because things are changing every day, every week, every month. It's changing so rapidly that you're gonna have a lot of that failure, but we're gonna have a lot of success too. So I really hope people don't take that out of context and say AI is bad, or AI is dead, or we can't use AI. No. You just have to realize that... some of the companies I've talked to recently said, in a lot of cases, we were doing everything ourselves, and then we realized if we just waited two or three months, Nvidia was gonna come out with something and we didn't have to do that part of it. Yeah. And that's a key part of it: leveraging the ecosystem.

Karl Freund:

Yeah. The same thing may happen with networking, right? Everybody's rushing to compete with NVLink, and Nvidia at GTC came out with NVLink Fusion. For those of you who don't know, it basically solves a problem that Nvidia struggled with from the beginning: most CPUs don't speak NVLink, right? So you have to go out through a slow PCIe adapter. That's not gonna work well. NVLink Fusion says, tell you what, Qualcomm, whoever: if you wanna build an AI server for a specific solution, you can use our GPU and plug it into your CPU, and here's the IP that'll allow you to build NVLink into your next-generation chip. And now you can

Jim McGregor:

even link it to your AI accelerator and everything else. So yeah, they're opening it up through licensing to make it available. So it's gonna be interesting. All the

Karl Freund:

rush, all the rush to compete with it may just evaporate when everybody says, why don't we just use it? We'll see what

Jim McGregor:

happens.

Karl Freund:

see what happens. It's an interesting play.
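A back-of-envelope Python sketch of why Karl's slow-PCIe point matters; the bandwidth figures are rough public ballpark numbers (PCIe Gen5 x16 around 63 GB/s, NVLink 4 around 900 GB/s aggregate per GPU), used only for illustration:

```python
# Back-of-envelope comparison of moving data to an accelerator over
# PCIe vs. NVLink. Bandwidth numbers are rough, illustrative ballparks.

PCIE_GEN5_X16_GBPS = 63   # approx. one-direction PCIe Gen5 x16 bandwidth
NVLINK4_GBPS = 900        # approx. aggregate NVLink 4 bandwidth per GPU

def transfer_seconds(gigabytes: float, gbps: float) -> float:
    """Seconds to move `gigabytes` of data at `gbps` GB/s."""
    return gigabytes / gbps

payload_gb = 140  # e.g. FP16 weights of a ~70B-parameter model

pcie = transfer_seconds(payload_gb, PCIE_GEN5_X16_GBPS)
nvlink = transfer_seconds(payload_gb, NVLINK4_GBPS)
print(f"PCIe:   {pcie:.2f} s")
print(f"NVLink: {nvlink:.2f} s (~{pcie / nvlink:.0f}x faster)")
```

The gap compounds at rack scale, where weights and activations shuttle between GPUs constantly rather than once, which is the crux of the NVLink Fusion licensing play described above.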

Leonard Lee:

Yeah. Well, I find it absolutely fascinating, Karl, how you took the topic of the MIT AI report and somehow... yeah, that was like a hard left, but it's the

Karl Freund:

example of: use what's already been built. Yeah, yeah, yeah. Those who do that with AI

Jim McGregor:

software. He's trying to bring it back around to his article, that you have to use Nvidia.

neXt Curve:

Yeah.

Leonard Lee:

There you go. I wrote a Forbes Tech Council piece that talks about exactly what you mentioned, Jim: you don't need to be first, just let other people make the mistakes. And misconception is absolutely, 100%, one of the contributors to failure. And then, I'll be flat-out honest here: you have to go and look at who your advisors are. Do that, because we're here. We are analysts, and part of our job is to help end users understand who to engage in applying these technologies well, and also to be conscious of the various readiness factors that you need to incorporate into your decision making. And that has been a massive failure. So it's not a surprise that this MIT study highlights this. And by the way, Karl, 65% of 5% is only about 3%. So it's still pretty bad.

Karl Freund:

No, no. It's 65% in total. In total. But no,

Leonard Lee:

But the thing is, the lesson learned here is that you need to be less misguided. And going back to our comments earlier about the velocity, how quickly things are changing, and Jim, what you just mentioned about innovation: by the way, not all of this stuff is innovation. Some of it is actually innovating in the wrong direction, which is just wasting money. The innovation that is moving the ball forward is moving so quickly. Yeah. You really do need to time, based on the value you're trying to extract through the application of these technologies, when and how to get in. But you certainly can't come up with a strategy and shape a program based on misconceptions and misguided notions. And that, I think, should be a warning for all those who are listening to us and watching this podcast. And I will have to assert that we are a good source of sensibility here. So start with us, right?

Jim McGregor:

For those of you watching and listening out there, that was Leonard's long way of saying: don't listen to anybody else. Listen to us.

Leonard Lee:

Well, hey, come on. You know, the thing to do is listen to our podcast. We've been calling it. And not just from the standpoint of the technology, but from a practitioner's perspective as well. I think that's important, because it's one thing to just talk about the technology; it's another to talk about how it gets implemented, and there are a lot of choices that have to be made. And the thing I appreciate about you guys and our collaboration is that we bring that holistic view for our listeners. So, anyway, any final thoughts on this controversial piece from the folks at MIT NANDA? No? Are we good? No, I think we're good. All right. Okay. This has dragged on much longer than Karl is comfortable with. I'm going to blame him, because he kind of complained earlier, which is fine. It's constructive complaining. So thank you for that, Karl. But hey everyone, thanks for listening in. If you want to tap into the genius of Karl Freund, contact him on LinkedIn, but most certainly at www.cambrian-ai.com, and there is a hyphen there. That is Cambrian AI Research. Okay, and he is the man. And then of course, Jim McGregor of Tirias Research. You can contact him at, you guessed it, www.tiriasresearch.com. Both of these gentlemen are on Forbes. They're all over the place, so you can find them all over the place, and they don't bite. So feel free to reach out to them, especially if you are looking for the leading-edge perspective and advice on all things data center and semiconductors.

Jim McGregor:

And make sure you also reach out to Leonard, take two, Lee for neXt Curve.

Leonard Lee:

Thanks. I really appreciate that help. That

Karl Freund:

was cute.

Leonard Lee:

That was, yeah, I know. That was very, very cute. Yeah. And just remember to like, share, and comment on this episode. Subscribe and support the podcast so that we can continue. We'll continue anyway, but it helps us continue to share our industry observations and, dare I say, expertise with all of you. And make sure to subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. And we'll see you next time, actually toward the end of the month. Hopefully we'll have Mario on, and we're gonna have another round of fun. Thank you, gentlemen. Have a great day.

Karl Freund:

Thank you. Have a good one.
