The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego, founded by Leonard Lee, focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
Marvell Industry Analyst Day 2024 Recap (with Prakash Sangam)
Prakash Sangam of Tantra Analyst joins Leonard Lee of neXt Curve to recap Marvell's Industry Analyst Day 2024 event, which took place in Santa Clara at Marvell HQ. Toward the end of each year, Marvell brings together many of the leading tech industry analysts from around the world for a reunion that turns out to be a great recap and temperature check of the semiconductor industry, given the diversity of Marvell's business. This year was the year of generative AI and accelerated infrastructure. It looks like next year will be as well, but custom and optimized.
We parse through the key announcements and insights that mattered at Marvell Industry Analyst Day 2024:
➡️ Overview of Marvell Industry Analyst Day 2024 (2:35)
➡️ Marvell's Accelerated Infrastructure opportunity (5:00)
➡️ Prakash's key impressions of Marvell Industry Analyst Day (7:21)
➡️ Marvell brings the system level perspective to custom AI supercomputing (11:00)
➡️ Sandeep "Mr. BLT" Bharathi and his custom accelerated infrastructure sandwich (13:17)
➡️ Custom accelerated infrastructure - the next big thing in AI supercomputing (14:40)
➡️ What can we expect at Marvell Industry Analyst Day 2025? (20:10)
➡️ The nuances of operational Gen AI supercomputing versus experimental (23:09)
➡️ What did Marvell say about telco, automotive, and enterprise? (25:50)
Hit up both Leonard and Prakash on LinkedIn and take part in their industry and tech insights.
Check out Prakash and his research at Tantra Analyst at www.tantraanalyst.com.
Please subscribe to our podcast, which is featured on the neXt Curve YouTube channel. Check out the audio version on Buzzsprout or find us on your favorite podcast platform.
Also, subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter.
Leonard Lee: Hey everyone. Welcome to this neXt Curve reThink Podcast episode, where we break down the latest tech and industry events and happenings into the insights that matter. I'm Leonard Lee, executive analyst at neXt Curve. In this episode, we will be recapping Marvell Industry Analyst Day 2024, which took place in Santa Clara on December 10th. And I'm joined by my very good friend and local neighbor, Prakash Sangam of Tantra Analyst. You're like a neighbor, right? Just a few miles away. So, how are you doing, Prakash? Exhausted with all the travel we did together this year, right?

Prakash Sangam: Yeah, well.

Leonard Lee: This was the last travel of the year for me. You're not quite finished, but we won't talk about that at the moment. It's been a crazy, crazy year. But before we get started, remember to like, share, and comment on this episode, and subscribe to the reThink Podcast here on YouTube and on Buzzsprout, so you can take us on the road and on your jog, and listen to us on your favorite podcast platform.

Prakash Sangam: Especially when you're driving this holiday season.

Leonard Lee: Oh, yeah. And you can gift it. It's free, so just give it to your friends and family and your neighbors and your whole company. Nice and easy. But yeah, Marvell Industry Analyst Day. What was this, like your fourth or fifth time going?
Prakash Sangam: Mine was, yeah, third or fourth, something like that. And they themselves said this was the sixth one.
Leonard Lee: Yeah, this is my second. Last year was really, really interesting. Well, it took place on the 10th, as I said. It was a one-day event, and it kicked off early in the morning with a pretty feverish agenda that took us all the way into the evening. But why don't we start off with your impressions? What did you think about the event this year compared to last year?
Prakash Sangam: Yeah, as you said, it was chock full of great information, a lot of information-packed sessions. The one big difference I saw compared to the previous ones is that their custom ASIC business used to be kind of the last session toward the end of the day, a lot of hush-hush: okay, we are doing good business there, but we can't share too much about our customers and what we are doing, and so on. But this year it was front and center. I think three years ago the message was that all clouds are not equal, there are specific needs of these different cloud players, and we are catering to those specific needs. So "custom cloud" was more like their main message then. But here, it's a bigger message: of course it's custom cloud, but custom is where a lot of the action is, and we are best positioned to exploit that opportunity. And no more "oh, we can't share who our customer is" and so on. They basically started off saying it: they made these announcements with AWS for the next five years, and then with Meta for interconnect, which I think is also a few years. So those were good proof points. And they started off the event with the good news from our friends on the Wall Street side: the stock has been doing well, it reached a hundred-billion-dollar market cap, and so on. So the focus was squarely on ASICs. Customization of cloud and data center was the theme, and I think they made a good pitch on why they are better positioned in terms of all the great IP they have, the expertise they have, the customer relationships they have, and so on, end to end. So yeah, I think it was very well planned and executed, with a lot of good information. I especially liked one of the sessions by Sandeep Bharathi, the CDO, chief development officer, using a sandwich as an illustration of how they make this custom cloud possible with custom ASICs and technologies. So I think we'll talk about that in more detail.
All in all, at a high level, I think it was pretty good, and the messaging was pretty clear: squarely focused on data center. They even called themselves a data center company at the beginning.
Leonard Lee: Yeah, it was a very different year. And I would say it's not just data center; it's really this "accelerated infrastructure" that they talk about. You hear that same kind of terminology used by NVIDIA and others. So yeah, the opportunity that they're looking at, and this big shift in their business, is largely due to generative AI supercomputing, and supercomputing actually came up a number of times. Last year, there was a bit of diversity. We heard the diversity story, or continued to hear it. This year, there was almost none of it. In fact, I did ask the executive panel: can you throw the telco and the automotive folks a bone? They didn't even bring that into the kitchen. The back-burner stuff was a little bit more of the interconnect and networking, although that was really, I think, more the core of the story. But yeah, to your point, the announcements around ASICs, custom ASICs, were the big thing, correct?
Prakash Sangam: It was, and also the XPUs, interconnect, and the memory, right? They made the announcement with Samsung, Micron, and SK Hynix on their custom HBM, high-bandwidth memory. Yeah, custom HBM.
Leonard Lee: Yeah, just like last year, a really well-run program. And I think they really wanted to focus on their growth market. We had Chris Koopmans get up there and literally show on a chart that data center is their largest opportunity. It's outsized now. I think, if I'm right, it is the opportunity
Prakash Sangam: For them, yes.
Leonard Lee: Right. And as far as future growth, its portion of their business is going to continue to be that outsized driver. So it's interesting, it's changed so much, because last year the tone was very, very different, right? I think there was an inkling that this generative AI infrastructure, or accelerated infrastructure, investment was going to start to take off, and now they're really in the thick of it. So in terms of impressions and takes, what were some of the key things that you took out of the event this year?
Prakash Sangam: Yeah, the key is basically that they are clearly focused on data center. The other businesses do exist, but last year they were saying, we are making the investments and we expect that to be a bigger market for us. This year they were very confident: it has happened, and they showed a lot of progress, a lot of proof points. That is one. And obviously gen AI. I was struck by all the different IP they bring to the table, which not many players do. I think that was pretty impressive, and they showed how they're bringing it all together. That's the second thing. And interconnect, where they've been leaders. As the data centers grow bigger and larger, you have to not only scale out but also scale up, and I think that's where the connectivity comes in. It's not copper or fiber; it's going to be both of them, based on the use case, based on whether you're scaling out or scaling up, combining them. And I think that was interesting. I mean, if you look at the bandwidth needed to connect these clusters and the regional data centers, it's mind-blowing, right? Think about 400-gig links going to one terabit, and then two terabits in the future, and so on. So yeah, that was pretty interesting. And then in the afternoon session, the discussion of how they're managing thermals, what the limiting factors are on your chip size, and its thermals based on the trays for server racks and so on, that was interesting. So I think they gave a lot of good detail on what they're doing, how they're doing it, and why it makes sense. And they did mention automotive a little bit toward the end, largely in response to a question. They said it's doing okay, and year over year you will not see that much difference, because it's a slow-moving sector, right? It looks like they're still doing okay. There was no new information shared on the other businesses.
I think they're chugging along, but nothing compared to the data center. So basically, if you had to summarize the whole day in one sentence: data center is growing, it's a major market for them, and they're winning with their custom silicon. And I agree with the view that the first phase of gen AI was all GPU-based, coming from NVIDIA. Going forward, as we move from training to inference, there are different kinds of processing needed, not just GPUs. And there'll be a lot of focus, especially from the hyperscalers, on customization, bringing their own chipsets, like AWS already has with Trainium 2 and so on. So I think that will be an interesting market to be in, and supporting these guys is a great place to be. If you look at the track record, these hyperscalers obviously will develop more custom silicon, and they'll probably rely on a partner like Marvell, based on the success so far, for most of the customization and the chipset part of it.
Leonard Lee: I thought this year they really brought the systems-level perspective. And like you were saying, they made the impression that they have all the bits and pieces for, like you were mentioning, the hyperscaler-type customers to mix and match and custom build a platform. That's another key word: optimized. And you're right, NVIDIA does general-purpose accelerated infrastructure, or actually it's broader than that, because the reason why they have NIMs and all these other artifacts and frameworks that take these models and deploy them in inference environments is that they want to extend all of the stuff that's happening in these massive data centers out to the edges. But this is custom, but optimized. And so it's interesting, because Marvell does have a small number of customers, and like you mentioned earlier, they have Meta and AWS. These are very, very large-scale customers that in all likelihood, even though they didn't specify, are using generative AI, or let's just call it broader AI, for things like recommender engines, for operational purposes. That's different from what we're seeing from NVIDIA and AMD, where a lot of these processors are being used largely for model development and training, and so you see these model makers or developers either buying these chips, or these cloud-GPU-as-a-service guys providing instances to these model developers. And I think the operational use cases are the real end market. Model training for model developers is not the end market; that's just R&D. We're also seeing the sovereign AI stuff; that's also largely traditional supercomputing, with maybe some of this sovereign LLM stuff going on. But the other thing that they impressed upon us was the innovation they're doing across the stack. So you mentioned the sandwich, which I loved. I actually dubbed Sandeep, his name is Sandeep, right? Yeah, Sandeep. Oh my God, he's gonna kill me. Don't kill me. Oh no, I love the dude.
But I call him Mr. BLT now. That's why I forgot his name, because I'm gonna call him Mr. BLT from now on. Hi, Sandeep. But I want to give a shout-out to Nigel, because he prodded me to make sure that I made this point about their custom HBM architecture. They announced that partnership with SK Hynix and Samsung Semiconductor. So yeah, these guys are getting down into the weeds, working with customers with bespoke needs, probably even model- or application-specific needs, that are largely scaled out, not general purpose. And I think that's one of the big impressions that I got. We're also seeing that play out with Broadcom, right? That's why these guys are sort of playing in the same lane in this horse race, where I think Intel, NVIDIA, and AMD are actually running a different type of race.
Prakash Sangam: Yeah. And the point you make about this custom silicon and the whole solutions they're bringing: it's not just generic infrastructure sitting there for a cloud user to use. If we take Meta or AWS, they know, for their own operations, the kind of workloads they run, right? The whole solution is optimized perfectly for those kinds of workloads. The workload that Meta runs is different from AWS's, different from Google's, for example. So when you're offering this as accelerated infrastructure for somebody else, you don't know, and you have to be a little bit more generic in terms of the processing you run. But here you're using it for your own operations, which are huge operations in themselves. Then you can customize the heck out of it, exactly what you need, rather than running it on general-purpose hardware. So I think it's a large market, and it's not very easy to get into. Once you get in, once you're very close to the hyperscalers, it's a good market to be in for the long run.
Leonard Lee: I personally think it's a better one. Actually, Karl Freund and I did a podcast earlier this year about exactly this dynamic playing out. And I think the folks who are selling through into the AI VC community and their minions are the ones that are probably going to be exposed to the higher risk in terms of how all this plays out, simply because once these models are developed to a level where they're good enough, you hit this ceiling. And I have some friends in the semiconductor industry who cover generative AI supercomputing who would probably push back on this assertion that at some point it's going to be good enough and you can't get better. It's getting too big, and it's getting too ridiculous. The folks who are operationalizing this stuff at scale will just take these models, and it'll just be one massive model. One of the challenges for the model makers and these guys on the other side of the equation, doing all the exploratory and development stuff, is that they're having trouble with monetization. Monetization is very difficult for them. On the operational end of things, this is where the technology is being applied. It may not be monetizing anything, but it's driving some form of operational benefit, and it's being scaled out with custom stuff. Because if we look at what Meta is doing, they're buying GPUs from NVIDIA for the model development, right? The Llama stuff that they're doing. But internally, for their, let's say, recommender engines and the new stuff that they're doing, for example.

Prakash Sangam: Yeah, for their operations, like WhatsApp operations.
Leonard Lee: Right, you're going to be using these things, because there isn't as much of a diversity of workloads on the operational side of the business. So yeah, it's not the aha, it's the affirmation that I got at Marvell Industry Analyst Day, 'cause like I said, I already opined about this earlier in the year. So that was a kind of cool thing for me.
Prakash Sangam: I think, maybe for people who are not following this very closely, they should have made that customization story clearer. Why customization? I think they talked about it in the previous ones: all the clouds are different, the cloud workloads for different hyperscalers are different. Maybe they should have clarified that a little bit. We are building for AI, but we are not building this for gen AI that will be offered as a service. It might be, to some extent, but the prime market for customization is running their own operations, which means there is an immediate need for these technologies, these solutions, and the infrastructure they're bringing in, with readily available workloads to run on them. It's not "we'll build the infrastructure and then they will come."

Leonard Lee: To tell you the truth, that's more the role of analysts. We should be able to connect the dots. I mean, that's the value we should be bringing to the industry: the more holistic view, being able to see the nuances of what's going on. And oftentimes, you and I know, a lot of tech companies don't see the forest for the trees, and that really is the purpose of industry analysts. So I don't blame them. I agree with you, the key point wasn't made as emphatically as it probably should have been, and that's a missed opportunity for Marvell. But I'll say here, this is one nugget for everyone: Matt Murphy is not going to be the CEO of Intel, so knock it off. He told us.
Prakash Sangam: Yeah, he made that particularly clear.

Leonard Lee: And that wasn't under NDA. He just sat there and said, I don't know where the hell this came from. So there you go, you heard it from us. Well, not really; he told everybody in the room. So anyways, what do you expect to see from Marvell Industry Analyst Day 2025? What's your guess? Let's just take a guess here.
Prakash Sangam: So, I would expect more progress on it. What they clearly presented is that when you say custom, it's custom processing with the XPU, custom interconnect, and now custom memory, which was not the case before. So now everything is customized. Next year it will be interesting to see, okay, for the operational use cases it's perfect, but how are the hyperscalers and cloud providers using this to offer a differentiated service, as a service? As a service provider, you have to provide general-purpose compute, right, for the models to run, so that anybody in the industry who wants to use AWS or Google Cloud or Microsoft gets a similar interface, infrastructure, and performance. That's more common, gives economies of scale, and makes it easy to switch between the players. But if you're offering very custom solutions on the back end, which enables the cloud provider to offer customized infrastructure to their customers, it is differentiated. Whether that differentiation is something of value to the users of this infrastructure will be a challenge, right? You also cannot scale it across the board, because you're offering some specific differentiated service. So it will be interesting to see how that turns out and how Marvell will play into that. You know what I mean? I don't know. We'll see.
Leonard Lee: I think that will play out.
Prakash Sangam: Like, for example, if I want to use infrastructure, I know what I get from an NVIDIA GPU cluster or an AMD GPU cluster. So I can go to hyperscaler one for that cluster, or I can go to hyperscaler two for the same cluster, and I don't need to change a lot of stuff on my side, because I expect both of them to run similarly, right? Instead, now you're offering customized infrastructure for a specific use case, and we'll see how the implementation goes. That may be very attractive for my workload, but at the same time, it may not be for some people, right? You know what I mean?
Leonard Lee: I don't think they're going to care. I think they're pursuing, as I mentioned before, a different opportunity, and I think it's something that's a little bit more sustainable, that addresses a much larger real challenge, which is cost-down of inference, and maybe even model training, and less about general purpose. This is optimizing for their particular business. And so we'll probably see Apple and others that are looking ahead to having to deal with really large workloads and demand, which they're not going to be able to monetize. It's going to be an operational function, and they want the lowest cost per token possible.
Prakash Sangam: And perfectly optimized and tuned for that specific workload. They know the workload, and you can basically work wonders with your hardware if you know the workload you're trying to run on it. A lot of the inefficiency in compute comes when you don't know: you basically optimize for everything, which means you don't optimize for anything, right? That's the problem with general-purpose compute. But if you know the workload, you can optimize the heck out of it.
Leonard Lee: Yeah, and that's why the monetization part of all this is really tough. You see it across the board; no one can really break this stuff out. It's very, very difficult to attribute the benefit of generative AI to anything. Even in the call center it's tough, because what's the difference, how much of an uplift are you getting from a generative AI agent versus just a non-generative agent? Isn't 90 percent of the benefit just having the agent do a basic conversational interaction? How much more benefit is there in, let's say, a slightly better conversational interaction? And a lot of that benefit is going to depend on the task, or the case types that the generative AI is handling. And then there's always the concern that, because these things will always hallucinate and are very difficult to tune for accuracy, reliability, and relevance, what is the risk to the business of scaling that, you know, anywhere from 5 to 30 percent error rate across your enterprise? That's going to be an issue. But when you look at the investments that are happening, there's still that big question mark. And I think next year, the question and challenge I posed to Chris, I was going to ask this of Matt, but he took off, so I didn't have a chance, is: what are you guys going to do in bringing some of all the goodness that you have in the interconnect, the networking, the custom silicon, and the advanced packaging, which is really killer stuff that they're doing, down to the edge? And then maybe hydrate or enhance automotive, or the networking side of things, because we did talk about that, jokingly, by the way.
But is there anything there that can practically happen, where they can bring these technologies and innovations down to these edge cases that they also have as part of the diversification strategy, to either differentiate, catalyze, or drive, for instance, automotive and this shift toward SDV? And I asked them, you know, whether they're going to be forced to do that.
Prakash Sangam: So I asked that question on automotive. I mean, you have all the IP blocks to make an SoC for automotive. He said they've looked at it, and they did not see a huge opportunity in terms of customization right now, but they'll keep exploring. I also asked about the edge. They haven't really looked very closely, but I think his overall reaction was that it's not at the stage where the economies of scale for optimized silicon are there. The edge is still pretty small, and it's very diverse, right? And we'll see whether the edge becomes big in the course of this year or not; I think one year is too short a period for that to happen. Right now it's very diverse applications and use cases, and each of these markets is pretty small, so I don't think customization makes a lot of sense there right now. But if they become big enough and the economies of scale for customization exist, maybe they'll look at that market. So we'll see.
Leonard Lee: Okay. All right. Hey, Prakash, thanks for joining this episode of the podcast. And everyone, thanks for tuning in. We really appreciate your viewership. You can find Prakash Sangam at www.tantraanalyst.com, that's one word, and you can also hit him up on LinkedIn and follow his research. He has great stuff. I appear on his podcast quite often, for some reason, I don't know. And also hit me up on LinkedIn; I'm there as well, and I drop a lot of stuff. In fact, I have some coverage of the event, and Prakash, you do as well, on Twitter as well as LinkedIn. Take part in our industry and tech insights that matter. And Prakash, do you want to take a moment to tell people how to get in touch with you, other than your website?
Prakash Sangam: Yeah, sure. So I have a podcast, as you mentioned, Tantra's Mantra, and a lot of insights in the blogs and articles on the website. And I send out a newsletter that goes to around 35,000 people; if you're interested, just sign up on the webpage. The central place for all of that content is the website, which is www.tantraanalyst.com. Tantra Analyst, all one word, dot com.
Leonard Lee: Wonderful. And please subscribe to our podcast, which is featured on the neXt Curve YouTube channel as well as Buzzsprout; that's where you can get the audio version. Alternatively, you can find us on your favorite podcast platform. And also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. And until next time, happy holidays.
Prakash Sangam: Exactly. Happy holidays to you and all of our audience.

Leonard Lee: All right. Thanks a lot, Prakash.