
The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego, founded by Leonard Lee and focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
The neXt Curve reThink Podcast
Making Enterprise GenAI Work (with Cal Al-Dhubaib and Joseph Enochs)
Why are enterprise GenAI and agentic AI solutions challenging to design, deploy, and maintain? What are organizations doing wrong, and what can they do to get ROI out of their GenAI investments against the feverish pace of innovation and technological advancements?
This episode of the neXt Curve reThink podcast addresses the challenges, opportunities, and the elusive nature of enterprise-grade generative AI and agentic AI. Leonard Lee, Executive Analyst at neXt Curve, is joined by Cal Al-Dhubaib, Head of AI and Data Science at Further, and Joseph Enochs, Chief AI Officer at Enterprise Vision Technologies, who share their perspectives and stories from the front lines of the struggle to make GenAI and agentic AI enterprise- and industrial-grade.
00:00 Introduction and Episode Overview
00:52 Meet the Guests: Cal Al-Dhubaib and Joseph Enochs
01:39 The Challenges of Enterprise Generative AI
03:54 Key Questions on Generative AI and Agentic AI
05:02 Understanding the Adoption Barriers
10:52 Valuable Gen AI Use Cases and Limitations
20:52 Ensuring Safe and Enterprise-Grade AI
26:44 Recommendations for Starting or Course-correcting AI Initiatives
32:14 Closing Remarks and Contact Information
neXt Curve.
Leonard Lee:Welcome everyone. This is the neXt Curve reThink podcast, and in this episode we break down, once again, the latest tech and industry events and happenings into the insights that matter. I'm Leonard Lee, Executive Analyst at neXt Curve. In this particular episode, we're gonna be talking about this really elusive thing called enterprise-grade generative AI, and we're gonna bring you tales from the front lines where practitioners are accumulating scar tissue making generative AI and agentic AI work for businesses. This should be a really exciting topic for many of you listeners and viewers out there. I am joined by two very special guests: Cal Al-Dhubaib, who is the Head of AI and Data Science at Further, and Joseph Enochs, who is Chief AI Officer at Enterprise Vision Technologies. Gentlemen, thank you both for coming on. I really appreciate it, and in fact, I'm quite honored, to tell you the truth. Thanks for having us. Thank you so much for having us, Leonard. Absolutely. Before we get started, please remember to like, share, react, and comment on this episode. Also subscribe here on YouTube and on Buzzsprout to listen to us on your favorite podcast platform. Opinions and statements by my guests are their own and don't reflect mine or those of neXt Curve. So, gentlemen, it's great again to have you here. It was also wonderful to meet you both in Deer Valley, where we were attending KeyBanc Capital Markets' Technology Leadership Forum. That was my first time. It wasn't yours, right, Joe? You've been there. No, right, you're a veteran. I know, because everyone loves you there.
Cal Al-dhubaib:It's our first time in Deer Valley, but we've been before; they had it in prior years in Vail.
Leonard Lee:Yeah. And you know what, the reason why I wanted you guys on really bad, quite honestly, is that, as I mentioned before we pressed the record button here, you guys are in a rare category of folks that I think really get it, right? It's simply because you guys are in the trenches, working with clients to make generative AI work for them, as well as agentic AI, and we can talk about agentic AI as well. You guys were up on stage on a panel hosted by Trevor Upton. Great job, Trevor, by the way, props to you. And you were joined by Leo, mayor Vic, and then Lee Penn, right?
Cal Al-dhubaib:Yeah, we were
Leonard Lee:also wonderful panelists, but you guys just blew me away. You know, I've been looking at generative AI for quite some time, simply because everybody in the semiconductor industry was talking about it long before ChatGPT came onto the scene. And when it did break, and people were talking about how it's gonna be transformative for the enterprise, I had to do a double take at those claims and basically warn people, wait a minute, right? This is quite the experimental technology at the moment, and we need to be thoughtful in terms of how we apply the technology. I saw a lot of the thoughtfulness that's requisite to getting to enterprise, industrial-grade applications through the insights that you guys shared. What I really want to try to do is capture some of that lightning in a bottle that came out of that discussion, because it was fantastic and you guys rock.
Joseph Enochs:Hopefully we don't disappoint, Leonard. Looking forward to the conversation.
Cal Al-dhubaib:We'll try to make act two just as good.
Leonard Lee:You guys are like pros. Don't even act like you don't know that. Okay, please. You guys are awesome. So, this is what we're gonna do. We're gonna talk about three things here, actually. Simple questions that I think a lot of the audience is asking themselves, especially if they're trying to delve into gen AI and agentic AI. Number one: why has enterprise generative AI and agentic AI been so difficult to realize? There's a lot of talk about it. We hear about POCs that struggle. I'm sure you guys have experienced a lot of those as well. But what really are the sticking points that get in the way of actually getting these things out into production, and at scale even? That sounds so cliche, but you know what I'm talking about. The other thing is, what are the keys to safe and enterprise-grade generative AI and agentic AI today? Because this ball is constantly moving, but there are things that enterprises need to tune into as they try to frame their thinking around what safe and enterprise-grade is. And then finally, what are your recommendations for organizations to get started, or maybe even course-correct their gen AI initiatives as they look forward? Because we're still in pretty early days of experimentation, at least that's my sense. So who wants to start with that first one?
Cal Al-dhubaib:Well, I'm happy to kick it off. This is top of mind for me, actually. I'm in the process of putting together a keynote where I'm really diving into exactly why it is we're seeing such a long time for enterprise gen AI in particular to get adopted. And it's not that different from machine learning making its premiere in the enterprise. I found this great report from Informatica published earlier this year, and in that report there are some pretty sobering numbers. You might've heard the MIT study where it's like 95% of AI projects fail. I think there are a lot of issues with that particular study, the sample size, how they defined success. But in this one, where they're really surveying thousands of executives, 97% of these leaders have actually faced difficulties demonstrating the business value of gen AI, and about two thirds of enterprises really get stuck in particular on transitioning from the pilot to production. And it's some of the usual suspects. The leading cause, no surprise, is that a lot of industry stakeholders have misunderstandings or misconceptions of what are the right problems to be solved with AI, right? And so it starts from there. And then you have issues with, oh, actually, if you're going to apply generative AI on top of any enterprise content data, well, it turns out that content data has to be mastered and governed, and there's a whole set of issues with permissions and privacy. And so the flywheel just starts from there. Once you get to that initial pilot where it's like, cool, we built something, then you realize there are all these systemic issues that need to be tackled. It's not surprising, then, that it takes a lot of time for these pilots to go to production. Or you go back to the drawing board saying, hmm, maybe we picked the wrong problem to start with.
Leonard Lee:You know what's really interesting is that everything you just outlined is completely opposite of the notion of how people think you arrive at a world model, which I think is a common understanding or conception of what it means to build generative AI, right? They project that onto the enterprise. But anyways, I thought that's just something that really clicked for me as you were going through your description there.
Leonard Lee:Joseph, what are your thoughts?
Joseph Enochs:I mean, resonating with Cal, I think you guys saw in Q1 that there was a CIO playbook study that came out, and there were four main primitives they talked about as the challenges, the first one being education: what is this new invention? I think people, when they were talking about AI before, had really seen narrow AI, where it was Google Maps or something giving me a playlist, and that was well understood. The ChatGPT moment, I think, was most people's first date with AI. But when that study was published in Q1, it said education, and I like to key on that, because when I say AI, I think a lot of practitioners seem to think, if I don't understand what it is, I'm just gonna call it AI, right? I'm just gonna fill that in. And the definition varies depending on who you talk to. A lot of people believe that AI is this rocket ship that does backflips and lands on a platform out on the ocean and drives itself home, when the reality of it is that it's really more like a fast car. It can get you there faster, but if we don't really have an education and understanding of what the limitations of this new invention are, people assume the sky's the limit. I think there's a lot of credence to that, because if we don't really have a good, clear understanding of the capabilities of the technology, and then we're trying to choose high-ROI use cases, we're not really grounded and level set on what the parameters and capabilities of this are. We're choosing use cases based on a limited understanding. So once we get to that point, to exactly what Cal's saying, did we choose the right use case? Well, maybe we thought it could do more than it could. So now we're in the prototyping phase, and we thought this thing could do backflips, and the reality of it is it can't. And so moving from prototype into MVP, we really have to have that ROI. For me, I think the first step is that education. With that education, there are some primitives associated with it, and it can't be just book reading; for us, we try to enable OJT, on-the-job training. We want training in a safe environment that's been somewhat approved for your use case or your enterprise. Then it's more of, how can I apply this to my subject matter expertise, right? I think in that scenario you're creating a much safer space for people to be able to identify what ROI means for them. Then bring it into MVP, then, to Cal's point, figure out the ins and outs of actual data and actual use cases. Only at that point can we determine full TCO, in my opinion; you can't determine TCO until you really get it on some live data. And then once you understand TCO, now we can talk about how we industrialize this particular invention or application.
Leonard Lee:Speaking of limitations, 'cause I think that's really important, it's something that I advise my clients on all the time. You're making some fantastic points there, Joseph: what do you mean by AI? You unpack that, you break that down, and then you work with the client to understand what their expectations are, what conceptions they have of their particular definition of AI. Then you start to work through what the possibilities are, but just as importantly, the limitations. I think in terms of possibilities, everybody has a pretty wild imagination, so you can imagine a lot of stuff. What I think is really more grounded in technology, and knowledge of the technology itself, are the limitations. And so what do you see as being some of those gating challenges, those hurdles that are common across the different scenarios that you're working with your clients on? And maybe, Cal, you can start and share your perspective there.
Cal Al-dhubaib:So I'll talk about what's working and where we're seeing a lot of adoption. AI-based coding assistants are hands down one of the fastest growing categories of use. Yeah. And a near extension of that, you've probably heard the term vibe coding; we're now seeing the adoption of the term vibe analytics or vibe analysis: equipping data analysts, or analysts generally, with the ability to talk to data sets and enterprise documents, to be able to ask questions. I've seen examples of this in the legal space, for example, being able to ask questions about specific contracts or terms. I've seen examples in trying to interpret user adoption of digital products, for example, right? And providing a layer for those business analysts to be able to ask questions. We're also seeing a lot of interest and maturity in creative content. So at Further, we work pretty closely with Adobe, and Adobe's Firefly Studio makes it really easy to take a creative asset and then retexture something. We're talking to a few retailers now about taking their thousand-plus inventory of catalogs and being able to animate and rotate and create dimension to what was a static image. And so those are some use cases that are relatively safe to fail in, because there's still a lot of human oversight involved in going from that asset to what ultimately gets published. And in the case of coding, you still have somebody who is interpreting the result or critiquing the code. As far as limitations go, where we start to see organizations struggle is when they haven't yet gotten to maturity with using the out-of-the-box tools. They haven't yet mastered Copilot and GitHub Copilot and the basics, and now they're like, all right, we're now going to build our own LLM that can be used to solve this particular document process automation. And then they realize very quickly, oh, they don't have the right data for that, or they have to be really careful in how they set up guardrails. And then very quickly they realize, oh, this is gonna be a much bigger investment than we initially thought. And so it's not that these projects aren't possible, it's that it's surprising the amount of investment it takes to get them to become performant. And then you get into this question of, well, is it worth the squeeze?
Leonard Lee:That's often the case, right? A lot of organizations might think that this is a fire-and-forget type of thing, but it really isn't. There's an entire lifecycle, actually. There's deployment, but then there's a whole maintenance phase that a lot of folks still don't talk about much, because, to your point, this motion toward production has been a lot slower than a lot of folks anticipated. And we have to keep in mind, we're three years into this post-ChatGPT era. That's a long time when you consider the speed of innovation around generative AI. Organizations have that whole full lifecycle to look forward to. Joseph, maybe you can share your perspective.
Joseph Enochs:Yeah, so I think there are a couple of paradigms. We talk about the zero-to-hero kind of model, and when you're doing AI product management, you want to look at a project or product and use the minimum amount of AI that you need to do something. Not every problem needs a large language model, right? Not every problem needs a small language model. Not every problem needs AI, right? If you've got a traditional workflow that's working well, that's probably not the best place to start with AI. But if your existing workflow, like your coding workflow, is cumbersome, it's time consuming, the research to understand some libraries would take a person without a coding assistant or a research assistant thousands of hours that could be condensed down into a short period of time, these are areas where we can create benefits with AI. I think, to your point about that grounding, exactly what Cal talked about: if you have people who train on this coding assistant, you gotta train them on the basics of prompt engineering first, have them go through the RTF framework or the CREATE framework and do some prompt engineering. In our workshops we usually do a six-to-eight-week sort of transition where we have them go through the basics of prompt engineering, and then we move them right to the IDE, whether that's Cursor or GitHub Copilot, and get them situational awareness of exactly where the copilots exist, inline versus the chat versus agentic mode, all those components, and get them really understanding that. And then we build an app together, right? We'll grab some data, show them how to illustrate that data, how to build JSON structure, how to put some middleware on that, how to put a front end on it, and how to do all that within the context of the RTF or CREATE framework. So we build upon those things, but then we give them license and agency to go out and build these things. And I think that grounding really helps them establish the limitations on what these things are and what they can do with data. Cal, you hit a great point about data. If I give an LLM 20 tables and expect it to traverse all those things, it's gonna struggle with that, right? We need to give it some sort of semantic layer in between to make it easy for the AI to interact with that. That's a limitation that I don't think people naturally see, right? That's one limitation, but I think there are many others that I don't think people really grasp, like tool calling, folks. Maybe just a brief point on that, right? When we talked just to an LLM, you guys remember, back in the day it would say, oh, my training only goes up to 2023, I can't tell you anything beyond that. Now if you ask ChatGPT something, it has tools in the background that it can use to go pull these things, but it has limitations, right? It can maybe do 25, 26 tools. You give it the 27th or 28th tool, and it's gonna get confused. And then if you look at the smaller models, when we're trying to do smaller models offline, maybe it can handle eight or nine tools, and by that 10th tool, right, it gets confused. So those are some limitations. I know there are things like MCPs and things of that nature that we can do to help them, but I think it's that stick time, right, and that grounding first, that really helps people understand those limitations.
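To make Joseph's tool-count point concrete, here is a minimal Python sketch, not drawn from the episode, of one common mitigation: scoping the tool list an LLM sees to the task at hand so a small model is never handed dozens of tools at once. The catalog, domain tags, and function names are all hypothetical.

```python
# Hypothetical sketch: keep the tool list an LLM sees small by scoping tools to the task.
# TOOL_CATALOG and select_tools_for_task are illustrative names, not from any real framework.

from typing import Dict, List

# A catalog of callable tools, each tagged with the domains it belongs to.
TOOL_CATALOG: Dict[str, dict] = {
    "get_invoice":       {"domains": {"billing"}, "fn": lambda invoice_id: f"invoice {invoice_id}"},
    "list_open_alerts":  {"domains": {"infra"},   "fn": lambda app: f"alerts for {app}"},
    "lookup_pto_policy": {"domains": {"hr"},      "fn": lambda year: f"PTO policy {year}"},
    "restart_service":   {"domains": {"infra"},   "fn": lambda svc: f"restarted {svc}"},
    # ...imagine dozens more entries here in a real enterprise...
}

def select_tools_for_task(task_domains: set, max_tools: int = 8) -> List[str]:
    """Return only the tools relevant to this task, capped well below the
    point where a small model starts to get confused."""
    relevant = [name for name, meta in TOOL_CATALOG.items()
                if meta["domains"] & task_domains]
    return relevant[:max_tools]

# The tool schema handed to the model now lists 2 tools, not 25+.
print(select_tools_for_task({"infra"}))  # ['list_open_alerts', 'restart_service']
```

The same idea applies to Joseph's semantic-layer point: rather than exposing 20 raw tables, you expose a small, curated interface the model can reason over.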
Cal Al-dhubaib:And frankly, LLMs don't have a really good sense of chronology either. So even in my own prompting, not using it for programmatic reasons, just looking up information, I often have to say, use information or sources only from 2025, or only from 2024 and beyond. And if you don't do that... we saw an example where a client was interested in setting up their own custom GPT on top of their HR policies. Well, unless they have somebody navigate to what is the most up-to-date version of the PTO policy and where to find it, it's gonna give conflicting information from whichever policy ends up being the most convenient to answer the question. And so there are a lot of these little gotchas where enterprises realize, oh wait, we have to actually master this stuff to be ready for AI.
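Cal's chronology gotcha can be handled before a prompt ever reaches the model. Below is a minimal, hypothetical sketch of pinning a policy corpus to its latest effective versions so an internal assistant grounds on current documents; the Policy structure and sample data are illustrative only.

```python
# Hypothetical sketch: before retrieval, keep only the latest effective version of each
# policy so the model can't answer from a stale document. Data is made up.

from dataclasses import dataclass
from datetime import date

@dataclass
class Policy:
    name: str
    effective: date
    text: str

corpus = [
    Policy("PTO", date(2022, 1, 1), "15 days PTO per year."),
    Policy("PTO", date(2025, 1, 1), "20 days PTO per year."),
    Policy("Travel", date(2024, 6, 1), "Economy only under 6 hours."),
]

def latest_versions(policies: list[Policy], as_of: date) -> dict[str, Policy]:
    """Keep only the most recent version of each policy in effect as of a given date."""
    current: dict[str, Policy] = {}
    for p in sorted(policies, key=lambda p: p.effective):
        if p.effective <= as_of:
            current[p.name] = p  # later effective dates overwrite earlier ones
    return current

grounding_docs = latest_versions(corpus, as_of=date.today())
print(grounding_docs["PTO"].text)  # "20 days PTO per year.", not the 2022 version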
Leonard Lee:Wow, that's some deep stuff. That's a lot of complexity. As I'm listening to both of you talk, it almost seems like, to make the application of generative AI enterprise-grade, you really do need to look at specific applications, and then maybe even assess how that generative AI needs to be applied in order to support all the requirements of that particular application. And that could differ from application to application. So we're not looking at a generic set of scenarios supported by a homogeneous or horizontal architecture. You really need to start working some of these stovepipes and working through the complexities and nuances of making these things purpose-oriented for a particular process, function, or what have you.
Joseph Enochs:Yeah, yeah. And where and when to apply generative AI, and when not to, right? When we look at, as an example, there's this universal disconnect, I think, in the enterprise, a lot of times, from the business and the business applications to the technology stack. And even in the technology stack, you have server, compute, cloud, you have all of these teams that are independent 'cause they're dealing with things at massive scale. But then you have the business applications that are driving the business. There's a lot of telemetry in the stack that exists on the infrastructure, but there's a gap oftentimes between those business applications and the infrastructure stack. And trying to rebuild all of the telemetry with generative AI, probably not a good thing. But maybe there's something on a core application where you can take indications from that core application and elevated alerts from your infrastructure platform and correlate those things, which would probably be very challenging for you to write some script to do. But the LLMs can really take those things in, and with the right context and the right prompting and maybe some fine-tuning, can correlate these things and find signal. In that scenario, our token counts are down, right? Because we're not giving it everything, we're only giving it a limited subset. It's really targeted at a particular high-value business application. And in that scenario, maybe it does make sense for us to have an LLM. Whereas if we're just expecting it to do something general purpose, the token economics may not work out for us to do that.
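As a rough illustration of Joseph's point about correlating a core application with infrastructure alerts while keeping token counts down, here is a hypothetical sketch that filters telemetry to one application's incident window before building the prompt. The data, field names, and window size are made up for the example.

```python
# Hypothetical sketch: narrow the context to one application's incident window
# instead of dumping all telemetry into the prompt.

from datetime import datetime, timedelta

incident = {"app": "order-service", "time": datetime(2025, 6, 3, 14, 5)}

infra_alerts = [
    {"time": datetime(2025, 6, 3, 14, 2), "host": "db-12",  "apps": ["order-service"],  "msg": "IOPS saturation"},
    {"time": datetime(2025, 6, 3, 9, 40), "host": "web-07", "apps": ["marketing-site"], "msg": "cert expiring"},
    {"time": datetime(2025, 6, 3, 14, 7), "host": "lb-01",  "apps": ["order-service"],  "msg": "5xx spike"},
    # ...thousands more in a real environment...
]

def correlated_alerts(alerts, incident, window_minutes=15):
    """Keep only alerts that touch the affected app within the incident window."""
    window = timedelta(minutes=window_minutes)
    return [a for a in alerts
            if incident["app"] in a["apps"] and abs(a["time"] - incident["time"]) <= window]

context = correlated_alerts(infra_alerts, incident)
prompt = ("Correlate these alerts with the incident and suggest a likely root cause:\n"
          + "\n".join(f"{a['time']} {a['host']}: {a['msg']}" for a in context))
print(prompt)  # two relevant alerts instead of the whole telemetry firehose, so far fewer tokens
```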
Leonard Lee:That's a great point, 'cause one of the things that I am starting to raise is the importance of, you guys are familiar with FinOps, right? Having that for generative AI, because the economics and the nuances of what you might consider FinOps for generative AI are very different. On the surface it looks like it's the same; it's not, right? Just because the underlying mechanisms, financial mechanisms, if you will, tend to be different. I wanted to shift the conversation to the keys to safe and enterprise-grade generative AI. If you can, maybe share with our audience what those keys are. And there is a particular word that I haven't heard yet, but I'm hoping I'm gonna hear it from one of you guys. You go first, okay? And then, Cal, you do cleanup.
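Since FinOps for generative AI keeps coming up, a back-of-the-envelope token cost model is one place to start. The sketch below is illustrative only; the prices and volumes are placeholders, not any vendor's actual rates.

```python
# Hypothetical sketch of a back-of-the-envelope gen AI cost model.
# All prices and volumes below are placeholders.

def monthly_token_cost(requests_per_day: int,
                       input_tokens: int,
                       output_tokens: int,
                       price_in_per_1k: float,
                       price_out_per_1k: float,
                       days: int = 30) -> float:
    """Estimate monthly spend for one use case from token volumes and unit prices."""
    per_request = (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * days

# Example: a support-summarization use case (all numbers illustrative).
cost = monthly_token_cost(requests_per_day=2_000,
                          input_tokens=3_000,
                          output_tokens=500,
                          price_in_per_1k=0.0025,
                          price_out_per_1k=0.01)
print(f"~${cost:,.0f}/month before infrastructure, evaluation, and human-review costs")
```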
Joseph Enochs:Okay, sounds good. So I think, from my perspective, we start actually with compliance with the enterprise, right? We try to look at the controls that apply to our enterprise customers. If we look at the two main frameworks, we have the NIST RMF or ISO 42001, depending on what organization you align to, and we usually use that as our baseline. ISO 42001 is a really well-written standard with a lot of controls in it. So we start with those sorts of elements, and it does include logging and cataloging and red teaming and some very significant components, guardrails and things of that nature. We call it the compliance-first sort of AI foundation, if you will; we build that compliance-first AI mindset so that we can then step back to that education we talked about and have a compliant sandbox that looks like what production would look like. Now when we're training people and onboarding them, they're training in a sandbox that eventually could be promoted from our development sandbox area into what we call the hourglass. I have to give some credit to some of my other colleagues who came up with that. But the hourglass is shaping it onto basically a platform that is, again, this compliance-first AI platform. And so I think that's how we look at it: we look at the end first and all of those controls, and we step that back for the platform. As we're educating and onboarding, we want that sort of fast track so that eventually you have intelligent infrastructure. So if I'm building an agent, I want to drop that agent into an environment that's got all the guardrails, all the memory management, tool management, semantic layers, all those things that I don't wanna have to build every time I build an agent or buy an agent. I wanna put those primitives in place, right? And so for us, I think that's just the basis of a foundation for a safe and secure type of agentic AI framework for an enterprise.
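One way to picture the platform primitives Joseph describes, guardrails and logging that every agent inherits rather than rebuilds, is a thin wrapper around the model call. The sketch below is a simplified, hypothetical illustration, not a real compliance control set; the blocklist stands in for a proper policy engine and the model call is stubbed.

```python
# Hypothetical sketch of platform-level primitives an agent inherits: every call is
# logged and passed through a simple guardrail check before and after the model runs.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-platform")

BLOCKED_TERMS = {"ssn", "credit card number"}  # stand-in for a real policy engine

def guarded_completion(prompt: str, model=lambda p: f"[model answer to: {p}]") -> str:
    """Wrap a model call with logging and a basic guardrail check."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        log.warning("Prompt blocked by guardrail")
        return "Request declined by policy."
    log.info("Prompt accepted: %d chars", len(prompt))
    answer = model(prompt)
    log.info("Response returned: %d chars", len(answer))
    return answer

print(guarded_completion("Summarize last quarter's incident reports."))
```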
Leonard Lee:Man, it's gonna be tough to follow that, right, Cal?
Cal Al-dhubaib:I mean, there's very little to add on the compliance side, but I'll raise a couple of things. One is that there's this very fascinating world of ethics when it comes to agentic AI that we're starting to confront now. Adobe and Google recently were part of a new standard release called AP2, or something like that, and it's basically a payments protocol for agentic solutions. Let's say you have an agent, which is nothing more than an LLM-powered piece of software that has the ability to act autonomously, and you wanna use it to book a flight. Hey, agent, I'm trying to go to New York. I wanna keep it under $400 round trip. I need to travel by Tuesday morning and be back by Thursday evening the week of the 28th. Great. So now there's this fascinating ethics question: this agent has to be able to access your payment method, it has to have your consent, it has to act on that and be able to act with fiduciary responsibility, and in fact purchase within the constraints that you set and report back. As enterprises start building more and more agentic solutions, the space of what is considered consent, how it can be managed, how it is withdrawn, how you securely share payment information, this is the next wave of challenges that enterprises are going to need to address. And it's non-trivial. And so I don't think this is gonna be a six-to-12-month road to agents everywhere. I think we're gonna see another three-to-five-year development cycle, very much like what it took to go from the ChatGPT moment to where we are today.
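Cal's flight example boils down to a constraint check the agent must pass before it is allowed to spend: consent, budget ceiling, and date window. The sketch below is a hypothetical illustration of that check; the Mandate and FlightOffer structures are made up and are not the AP2 specification.

```python
# Hypothetical sketch of the check an agent would run before spending on the user's behalf.

from dataclasses import dataclass
from datetime import date

@dataclass
class Mandate:                 # what the user actually consented to
    max_total_usd: float
    depart_by: date
    return_by: date
    consent_given: bool

@dataclass
class FlightOffer:
    price_usd: float
    depart: date
    ret: date

def may_purchase(offer: FlightOffer, mandate: Mandate) -> bool:
    """Authorize only if the offer fits every constraint the user consented to."""
    return (mandate.consent_given
            and offer.price_usd <= mandate.max_total_usd
            and offer.depart <= mandate.depart_by
            and offer.ret <= mandate.return_by)

mandate = Mandate(max_total_usd=400, depart_by=date(2025, 7, 29),
                  return_by=date(2025, 7, 31), consent_given=True)
offer = FlightOffer(price_usd=385, depart=date(2025, 7, 29), ret=date(2025, 7, 31))
print(may_purchase(offer, mandate))  # True: within budget and the approved window
```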
Leonard Lee:Those are great points, because things like trust frameworks for agentic transactions, a lot of people are talking about agent-to-agent and MCP and all this stuff, but the security layer is clearly not there. I was at RSA 2025, and that was one of the biggest concerns, and even more recently at Cisco Live, where that topic came up. We're still a very long way off, and it literally is, in and of itself, an infrastructure build-out, right? And for an enterprise, it's likely going to be a similar kind of build-out, because right now the conversation is more about, like, a public registry or a set of registries, or setting up MCP servers to serve things up like a web server. But it's not that easy. And we have to remember, the internet in its early days was not exactly the most secure thing in the world, right? There was a lot of innovation and engineering that had to go into making it a bit more secure, and I think it's tough these days to argue that it's mission accomplished; it seems to have its challenges on a day-by-day basis, right? But that's great sharing. So, for those organizations that are looking to kickstart their generative AI or agentic AI initiatives, or those that are trying to course correct because they're bumping into these walls that you guys have described, what are some of your salient recommendations to these folks in terms of first steps or pivots?
Cal Al-dhubaib:I think the biggest trap I've seen is the efficiency trap. It's this: we're enamored with, we can save 10x the time on doing X. Okay. And then a lot of enterprises are surprised when, oh wait, the cost of compute relative to what we're getting, or the token usage for a use case, just doesn't make sense, and you still need humans in the loop. And so I'd say, just from a fundamentals perspective, prioritize the use cases that lead to growth. A lot of our clients that are experiencing success with homegrown generative AI experiences are doing it to do things like helping customers navigate their cart and select products and shorten the time to making a purchase, for example, or being able to navigate an enterprise's resources more effectively, to be able to drive decision making faster. I would say, from where we've seen a lot of success, there's this fallacy that efficiency is the best use of gen AI, when I think growth is really the better use.
Leonard Lee:What about you, Joseph?
Joseph Enochs:I think, from my perspective, it really depends on what your standards are in the organization. So if you already are on Copilot, or you're already on Databricks, or you're already on AWS Bedrock, right, if those are your standards, let's work within those standards that you have and try to establish those as the baseline. Then from that, I wanna have some real education with your people. Get a group together, a group of your architects, some cross-functional groups, and get those folks enabled on this sort of platform that you have. The regulations are changing, and you won't know everything day one, but let's start with what your policies and standards are, and let's get some OJT. Then what I really want to do is get those cross-functional people together, now that they've been empowered with these tools, and I want them to come up with ideas. Now we can put some guardrails in place on how we define ROI. We mostly wanna align with our clients on their definition of ROI and their definition of value. But let's set that primitive up and get these cross-functional teams together, because what we've seen is two parts of the business, right now, a lot of times they're not talking to each other, they're in silos. But if you get these cross-functional teams together, the studies have shown it's significant, like an 80% improvement on ideation. And right now, this invention that we have with AI can make the distance from ideation to reality much shorter. It's not that you wave a magic wand and it's there, but getting these cross-functional teams that have now been enabled on this safe platform, that's where, for me, you're gonna start getting the signal. I mean, we just did one of these, an eight-week type of engagement with an organization, and one part of the business was offering a solution while another part of the business was actually doing the billing of that system. Those two knew that there was some benefit that could be garnered, but every time they tried to come together to get a prototype, they would reach out to their partners, and it was a significant cost, like hundreds of thousands of dollars, for them to figure out whether there was gonna be high ROI there, and that would always get deprioritized. But by putting these teams together and empowering them to prototype, they were able to find tens of millions of dollars in cost savings, just from those parts of the value stream that had never been allowed to talk together. Now they can talk together, now they're empowered, now they have a safe space to do these things. I would say start with that, see what comes out of it, and then we can move into, like we talked about, your compliance. Are you going to do NIST RMF, that sort of thing? Let's find those guardrails, let's fill the gaps in those things. That's gonna allow us to bring those into MVP. But let's start with getting people excited, getting people trained, and bringing these cross-functional teams together. They know where the gold is in the organization. We've just gotta give 'em the license to come together.
Leonard Lee:Man, you guys are too good. This is quite ridiculous. But hey, I wanted to circle back really quickly to the comment you made earlier, Cal, about growth, and I think it is an important one. You're right, there's this fixation on efficiency, and oftentimes that leads to this notion that you can get rid of people. But growth also means force multiplication of the folks that you have. If you're gonna grow your business, you wanna force-multiply your resources, including your people. That really resonated with me, Cal. So it is important to have that mindset as well, because when we look at how AI is being applied in the semiconductor industry, it's not that you need fewer engineers, you need to force-multiply engineers; otherwise, Moore's Law is dead, right? Exactly. These chip designs are becoming so complex, advanced packaging, all these things are just creating additional factors of complexity that a human engineer is going to have trouble dealing with, and generative AI has a role in force-multiplying things so that the technology can continue to advance, right? Wow. You guys just really blow me away. We could probably go on forever, right?
Joseph Enochs:We have to do this again for sure. We'll have to do it again then.
Leonard Lee:Once again, I deeply appreciate the ground truths that you both surface in the work that you do. I'll reiterate, I'm just supremely impressed by both of your participation on that panel in Deer Valley, and I definitely look forward to continuing our conversation, but also hanging out with you guys once again and just having a great exchange of ideas, experiences, and perspectives. So, both of you, thank you so much.
Cal Al-dhubaib:Thank you, Leonard. This is a pleasure. I love hanging out with the both of you. You guys are fantastic.
Leonard Lee:Love you guys. And I'm really excited to introduce both of you to the neXt Curve audience. Why don't you share with our audience how they can get in touch with you? Cal, why don't you start off?
Cal Al-dhubaib:I pretty much live and breathe on LinkedIn, so if any of this sounds interesting or exciting to you, hit me up there. But yeah, I'd love to hear any questions that this conversation sparked.
Joseph Enochs:Yeah, please hit me up on LinkedIn as well. I just recently co-authored a document with IT Revolution Press called The Revenge of QA, so please check that out and give it a like. And later this year I will have a book with O'Reilly coming out; it's going to be in pre-release, probably before the end of the year, on vision models and multimodal learning. So please, when that book comes out, check it out.
Leonard Lee:Everyone, these guys are the real deal. I'm serious. After three years of this hype, you guys are a breath of fresh air, and you bring a lot of realness. I highly recommend everyone get in touch with these two gentlemen and their organizations. It's very apparent that they're doing some very instrumental work in advancing enterprise AI, enterprise generative AI, and agentic AI as well. I'm on LinkedIn a lot too, but I wanna warn you guys, I come off as grouchy on AI. I'm not, okay? I am a pragmatist, okay? And that is the only reason why you guys are on the show, because you guys bring that practical angle to a rather hyped technology at the moment. And so again, thank you.
Cal Al-dhubaib:Very aligned. Thank you so much, guys.
Leonard Lee:Yeah. Okay. Before we leave, please subscribe to our podcast. It'll be featured on the neXt Curve YouTube channel. Check out the audio version on Buzzsprout, or find us on your favorite podcast platform. Also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter.