
The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego founded by Leonard Lee, focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
Sensors Converge 2025: Leaders' Roundtable - AI and Sensors: Exploring a Special Symbiosis
AI is transforming the world. From predictive maintenance to content creation, AI is set to push the world into a new era, enabling industries to maximize efficiency, productivity, and profitability. Sensors are a core technology for AI: responsible for capturing and processing the real-world data used to power AI models, they make AI possible in the first place. This roundtable explores the role of sensors in fueling AI capabilities and, in turn, examines how AI itself can be used in the design and optimization of advanced sensor technologies. Speakers explore the combined potential of AI and sensor technology in building intelligent and responsive systems across industry, with a focus on practical applications, recent breakthroughs, and future trends that will shape the next generation of smart sensor solutions.
Leonard Lee of neXt Curve moderated a Leaders' Roundtable at Sensors Converge 2025 with:
- Simon Ford, President at Blecon
- Azita Arvani, Founder of Arvani Group
- Manuel Cantone, Sr. Director, Product Marketing at STMicroelectronics
- Rob Watts, AI Architect at Intel
- Yuhan Long, Co-founder and CTO of Infermove
Rounding out this, our morning's, general session is a dynamic conversation at the intersection of two game-changing technologies, AI and sensors. In this Leaders' Roundtable, we will explore how sensors power AI, and how AI is now shaping the next generation of sensor innovation. So get ready for insights into real-world applications, industry breakthroughs, and what's ahead for this powerful tech symbiosis. Our moderator is Leonard Lee. He's executive analyst and founder of neXt Curve. He has over 25 years of experience advising Global 500 companies on technology and business strategy. Leonard is a recognized thought leader, former Gartner partner, and contributor to Forbes, Bloomberg, and more. And Leonard, I'll let you take it from here.
Leonard Lee: Hey, thank you. Good morning everyone. Before we get started here, I really wanted to congratulate the team at Questex as well as Sensors Converge for 40 years. I think that's quite incredible. So if you don't mind, why don't we give them a big hand. Now, I'm really excited to be moderating this panel. I think everybody in this room has been on the exhibit floor and has undoubtedly seen a lot of sensors, and probably heard a lot about AI over the past few years. At Sensors Converge, we've seen AI sort of make its way into the sensor discussion, especially as it pertains to smart sensors. And, you know, as many of you who have been in the industry know, this isn't anything particularly new, but it has become very exciting in the past two years, especially with the advent of ChatGPT, generative AI, and the widespread recognition of transformer-based models. So I have a distinguished panel here, and if everyone will take a moment, a very brief moment, because we have a number of panelists here who I'm sure will wanna spend their time talking about how the convergence of AI and sensors is gonna create new possibilities. If you don't mind introducing yourselves really quickly, then we'll get started with the conversation.
Manuel Cantone: Manuel Cantone, product marketing manager with STMicroelectronics. We provide components to the market, sensors. So, MEMS, for motion, for altitude with pressure, and temperature, and the like.
Yuhan Long: Alright, uh, Yuhan Long. I'm the co-founder and CTO of Infermove. We build robots that deliver groceries and packages to your door. Besides the ones that you may already see on the road, we also equip a robot arm on top of them so they can interact with the environment a lot.
Rob Watts: My name is Rob Watts. I'm the lead architect of a software framework called Intel SceneScape. I'm with Intel Corporation. Intel SceneScape is a platform for bringing AI and sensors together to create 4D digital twins of spaces.
Azita Arvani: I'm Azita Arvani. I'm former CEO of Rakuten Symphony Americas. I've also been working a lot on AI and networking, and I think AI, networking, and sensors provide a very fertile ground for the future of this area, and I'm really excited about the panel discussion.
Simon Ford: Hi there, Simon Ford from Blecon. We do a Bluetooth-based network for very low power things like sensors. Before that I spent 17 years at Arm, doing things like introducing the Cortex-M microcontroller range. So I'm very interested in microcontrollers, computing, software, services.
Leonard Lee: See, I told you. Amazing panel. So, thank you all, and why don't we get started here. I'd like to kick things off with a question of what's really gotten you excited coming into Sensors Converge as it pertains to or relates to AI and sensors? You know, every year we come to the conference there's always something new, and it seems like the velocity of innovation is just overwhelming. But what are the things that have you excited, that you think are going to influence where this convergence of AI and sensors will take us? So Simon, why don't you start?
Simon Ford: Yeah, yeah. Okay. So, obviously I've spent a lot of time seeing, you know, processing technology, radio technology, sensor technology improve. It's just like a constant chipping away, that sort of roadmap. But I think for me, the thing that's changed is actually the perception of what's possible. So a lot of this technology has been getting better and better, but sometimes you don't notice how much it's improved. Actually, the advancements in things like the AI services that people have become more familiar with, I think that's actually unlocked imagination in a way that I haven't seen for a while. Surprisingly, it's not directly related to the technology, more how people are approaching problems.
Azita Arvani: If you look at a sensor, the four main parts of a sensor are the sensing element, the processing unit, the networking, and the packaging, and the convergence of all that could make possible things that we haven't seen before. For me, since I work a lot on the networking side and the processing side, you could see this continuum of intelligence, right? You could put the processing on the sensor, you could put it on the network, you could put it in the cloud, depending on your requirements, and you could even later change it dynamically. Like, let's say you decide that for whatever reason you wanna do the AI in the cloud, so your sensor collects the information, does some processing, and then sends it to the cloud. But then there's a security breach and you don't wanna send everything to the cloud anymore, so you wanna do it on the edge. Being able to change things dynamically, almost like an intelligence supply chain, that to me is super exciting.
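To make that "intelligence supply chain" idea concrete, here is a minimal sketch of a placement policy that can be re-evaluated at runtime. The condition names, thresholds, and placement labels are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical placement policy: the same inference workload can run on the
# sensor, at the network edge, or in the cloud, and the choice can change
# whenever conditions (trust, latency budget, bandwidth) change.

@dataclass
class Conditions:
    cloud_trusted: bool       # e.g. False after a security incident
    latency_budget_ms: float  # end-to-end budget the application tolerates
    uplink_kbps: float        # available bandwidth from the sensor node

def choose_placement(c: Conditions) -> str:
    """Return where inference should run under the current conditions."""
    if not c.cloud_trusted:
        return "on-sensor" if c.uplink_kbps < 100 else "edge"
    if c.latency_budget_ms < 20:
        return "on-sensor"        # round trips would blow the budget
    if c.uplink_kbps < 100:
        return "edge"             # too little bandwidth to ship raw data
    return "cloud"                # plenty of headroom: centralize

# Re-evaluate periodically so the placement can shift dynamically.
print(choose_placement(Conditions(cloud_trusted=False,
                                  latency_budget_ms=50,
                                  uplink_kbps=250)))   # -> "edge"
```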
Rob Watts: Yeah, I think a little bit about how we're all born with a lot of sensors, right? We have our ears and eyes and touch and, you know, five senses apparently. But actually there are more than that. You know, we have an IMU built in. But when we're born, we don't understand how to use those senses, right? And so our brains make sense of that. That's the intelligence. It's the thing that takes those sensor inputs and actually grows based on those sensor inputs. And I really think that's how AI and sensors come together: the ability for the AI to learn from the sensing, to become embodied, to understand the world, to understand our bodies, and to be able to act within that world. The sensors have been very difficult to use and very specialized. Every single modality has a great deal of specialization around it, and it makes it difficult to start bringing those sensors together. And I think that's where AI really comes in, to be able to embody the collection of all those sensors together into something more than just the sum of the parts. It's much more than that. It's really giving our applications and software the ability to interact with the world in new ways based on the plethora and the collection of that sensor data.
Yuhan Long: With today's technology we can really fuse a lot of sensors together with the transformer architecture and basically build a very holistic view of the world. Besides that, another trend that I see is that we actually have active sensors now. Because when we talked about sensors before, we were thinking about cameras put on a building, always pointing in the same direction, like a CCTV camera. But when you're thinking about a sensor on a robot, it has a way to kind of move around and point in the direction it's interested in. So the capability of extracting data from the world has been increased a lot by the capability of AI.
Manuel Cantone: I'll connect to what you said and what Azita said. You talk about the robot, so we're talking about battery-operated things, things that are independent. So power consumption is key, and so is edge networking: distributing the computing across the cloud, the gateway or whatever the intermediate object is, and then the sensor node. I come from a semiconductor company, so for us the sensor node is built with multiple ingredients. There is the sensor itself, that's my product, and there might be a microcontroller or a means of connectivity, and you can distribute the computing even further down to the sensor itself, where the data is acquired. That's really our goal here: to make the sensor aware of the context. Contextual awareness is the leitmotif of our booth, you will see it in all our demos, so that we can basically stream only the significant data, or decide when to stream significant data, to avoid the constant stream of raw data that in the end consumes power and, back to the independent robot, consumes battery and reduces the runtime of the device. So it's really moving the intelligence all the way down into the sensor. Edge AI is the common term accepted by the industry; in-sensor AI is what we say in our company. And we can do classification of movement, we can do things like an FFT directly on the sensor, so that we stream not raw data but data already pre-processed on the sensor.
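The "stream significant data, not raw data" pattern can be sketched in a few lines. This is a hypothetical host-side illustration, not STMicroelectronics code; the sample rate, threshold, and event format are assumptions. A window of raw accelerometer samples is reduced on the node (here with an FFT), and only a compact event is transmitted when something crosses a threshold.

```python
import numpy as np

SAMPLE_RATE_HZ = 104          # typical MEMS accelerometer output data rate
ACTIVITY_THRESHOLD = 0.5      # spectral peak threshold, tuned per application

def process_window(accel_window: np.ndarray):
    """Return a compact event dict, or None if nothing is worth sending."""
    spectrum = np.abs(np.fft.rfft(accel_window - accel_window.mean()))
    freqs = np.fft.rfftfreq(len(accel_window), d=1.0 / SAMPLE_RATE_HZ)
    peak = spectrum[1:].max()                  # ignore the DC bin
    if peak < ACTIVITY_THRESHOLD:
        return None                            # keep the radio off
    dominant_hz = freqs[1:][spectrum[1:].argmax()]
    return {"event": "vibration", "peak": float(peak),
            "dominant_hz": float(dominant_hz)}

window = np.random.default_rng(0).normal(0, 0.01, 256)   # quiet signal
print(process_window(window))                             # typically suppressed
```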
Leonard Lee: Okay, great. So, Manuel, since you have set us up to talk about AI's influence on sensors or sensor solutions: what are some of these new developments in AI technologies and AI capabilities, and how have they shifted your thinking about how sensor solutions come together, whether it's from a device perspective or an overall sensor or perception-system architecture perspective? What are those things that you could share with the audience?
Manuel Cantone: So, our mind shift, when we talk about AI, is really providing our developers the tools to take advantage of all the computing that you have in all the various pieces of the architecture. So tools to take advantage of the intelligence in the sensor, and that might be a different tool than the one taking advantage of the processing capability of the microcontroller. Even the most common microcontrollers can run some AI with the right tools synthesizing the right library. So it's really a mindset, a more software- and programmability-oriented mind shift, rather than providing a sensor that is just configurable through registers and then just sends data away.
Yuhan Long: Yeah. So, for this question I was thinking about when I was doing self-driving around 2020. At that time we first built an eight-megapixel camera for automotive usage, but one feature that we made sure we had in that sensor was the capability to bin pixels together, so that the camera was used not at eight megapixels but at two megapixels, because at that time that was the maximum capability at which we could extract information with our AI model. But today that's different: the model capability and the semiconductor capability have increased a lot, so we can actually extract useful information from every single pixel in that camera. I want to really go back to the active sensor part. Again, back around 2020, we were talking to several lidar companies who had the capability of directing the laser beam to a specific area to form a denser point cloud for a specific object. For example, if you see a car driving towards you, the sensor can actually focus on that area to see much more detail on that car. But unfortunately, none of the lidar companies' customers could take advantage of this feature, because at that time what we had was only a rule-based or at best a simple model for changing this configuration of a lidar. Now, with the ability of AI, especially reinforcement learning with very favorable results, with this kind of algorithm you can actually point your sensor at the specific location that gives you the best result, if you formulate your goal very specifically. For example, if you ask a robot to pick up a bottle on the table, you don't need to build a rule-based system that says the robot head has to point to the table for manipulation. You can train all these joints altogether, including the neck, including the head position, to make sure the end result of the task has the maximum success rate. I think this is how AI changed the usage of sensors.
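The idea that sensor pointing becomes just another action optimized against task success can be sketched as a toy reinforcement-learning environment. Everything below is hypothetical (joint counts, placeholder dynamics, reward values); it only illustrates the structure: head pan/tilt sits in the same action vector as the arm joints, and the reward is the task outcome rather than a hand-written pointing rule.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ActiveSensingGrasp(gym.Env):
    """Toy env: the policy controls where the camera points and how the arm moves."""

    def __init__(self):
        # 2 head joints (pan, tilt) + 6 arm joints, all normalized to [-1, 1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(8,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(16,),
                                            dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        return np.zeros(16, dtype=np.float32), {}

    def step(self, action):
        self._t += 1
        head, arm = action[:2], action[2:]
        # Placeholder dynamics: observation quality depends on where the head
        # points, and grasp success depends on both observation and arm motion.
        obs_quality = 1.0 - float(np.linalg.norm(head - 0.3))
        success = obs_quality > 0.8 and float(np.linalg.norm(arm)) < 0.5
        reward = 1.0 if success else -0.01        # small step penalty
        terminated = success or self._t >= 100
        obs = np.random.default_rng(self._t).normal(size=16).astype(np.float32)
        return obs, reward, terminated, False, {}
```

Any standard policy-optimization algorithm trained against this reward will learn where to point the sensor as a by-product of maximizing task success.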
Rob Watts: Well, I think also about calibration, right? I got a new pair of glasses the other day, and when I put them on I started getting a headache because they were a little different than my previous pair. So it took a little while for my brain to recalibrate when I got those new glasses. And I thought about that because when we calibrate cameras for doing multi-camera tracking or whatever, it can often be a pretty onerous process for a person to go in there and get the pose and intrinsics and extrinsics of that camera. And we try to make that as easy as possible in our systems, right? But ultimately, the AI needs to learn how to integrate those sensors from the sensor input itself, and bringing, like you said, AI down into the sensor itself, in order to better understand how that works, reduces the friction of integrating new sensors into these platforms. And I think that being able to process that data and use the data with AI to get a better understanding of where the sensor is and what the sensor parameters are, without having an entire test bench for doing that, is a really critical direction to be able to bring the sensors together in a more seamless way using AI.
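For readers who haven't run the "onerous" manual calibration Rob mentions, this is roughly what it looks like with OpenCV's standard checkerboard workflow. The board size and the calib_images/ folder of staged captures are assumptions; the point is how much deliberate data collection is needed just to recover intrinsics and per-view extrinsics, which is the friction that AI-assisted or self-calibration aims to remove.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)                                   # inner corners per row/column
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):     # hypothetical capture set
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K and dist are the intrinsics; rvecs/tvecs are the per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
```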
Azita Arvani: A little bit following on what Rob said here, I think adaptive sensing can also be very useful. Not just being able to look at the data afterwards and calibrate to things, but even changing the way we sense. For example, if the temperature goes high, you might want to get the readings more frequently. So the sensing itself could change. I'm on the board of a company called Tennant Company, and we do industrial and commercial cleaning machines, the big sweepers and things that you see at airports and so forth. And we have a line of robotic machines now. The first generation, when it saw some barriers or some kids coming in front of it or something, it had to wait for an operator to come in and clear things. But now it's much easier for them: when the camera sees something, it takes more pictures; when there's a barrier, it can go around it; and it becomes much more autonomous and adaptive in taking care of error situations. So I think that's very exciting. But I also wanna talk about AI in the design of sensors, which I'm sure Manuel knows a lot about, because that's where generative design comes in, right? When a designer just does the design on their own, it might take much longer to come up with all kinds of permutations and configurations of past designs and to take advantage of them to meet the requirements they have. This is almost like creating new molecules and such, which for a person would be much harder. But if you have a super creative AI assistant, it can do that for you much faster and easier. And being able to look at many prior designs, not just from your company but from other companies, makes it much easier to design other sensors.
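Adaptive sensing in this sense can be as simple as letting the measurement drive the sampling schedule. A minimal sketch, with made-up thresholds and a fake driver call standing in for real hardware:

```python
import random
import time

def read_temperature() -> float:
    """Stand-in for a real sensor driver call."""
    return 20.0 + random.random() * 15.0       # fake reading, 20-35 C

def next_interval_s(temp_c: float) -> float:
    """Sample faster as the reading moves away from nominal."""
    if temp_c > 30.0:
        return 1.0        # hot: sample every second
    if temp_c > 25.0:
        return 10.0       # warming: sample every 10 s
    return 60.0           # nominal: once a minute

for _ in range(3):
    t = read_temperature()
    interval = next_interval_s(t)
    print(f"{t:.1f} C -> next sample in {interval:.0f} s")
    time.sleep(0.01)      # stand-in for time.sleep(interval) on a real device
```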
Simon Ford: So you were saying about changes in thinking. I think there are some things that are quite counterintuitive if you don't stop and think about them. So the processing on a microcontroller that might be in a sensor has, you know, been getting pretty powerful, but it's quite counterintuitive that you can apply that to reduce cost or reduce power. Most people would think, oh, if it's a more powerful processor, well, that's more expense and that's more power. But actually, you know, in physics, things like radio scale a lot worse than processing. So if you're on a latest-generation processor, you can actually do a huge amount of processing for the cost of sending a bit of information. When you start looking at it like that, applying advanced processing and AI in a sensor has the opportunity to reduce the cost of sensors and reduce the power consumption, which has a knock-on effect on cost, how long they can run on batteries, and things like that. So I don't think people track how advanced the processing available in what you would consider a low-cost microcontroller is now, and how that can change your design decisions. If you can turn information from a high-bandwidth sensor into low-bandwidth, higher-order information, that can really impact the cost structure of your product and your solution. Yeah.
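The "radio scales worse than processing" argument is easy to sanity-check on the back of an envelope. The per-bit and per-cycle energy numbers below are assumptions chosen only to show the shape of the trade-off, not measurements of any particular radio or microcontroller:

```python
# All figures are illustrative assumptions, in nanojoules.
NJ_PER_BIT_RADIO = 100.0      # energy to transmit one bit
NJ_PER_MCU_CYCLE = 0.02       # energy per microcontroller cycle

def nj_to_send(bits: float) -> float:
    return bits * NJ_PER_BIT_RADIO

def nj_to_compute(cycles: float) -> float:
    return cycles * NJ_PER_MCU_CYCLE

# Option A: stream one second of raw accelerometer data (1 kHz, 16-bit, 3 axes).
raw_bits = 1000 * 16 * 3
option_a = nj_to_send(raw_bits)

# Option B: spend 200k cycles classifying the window locally, send a 32-bit event.
option_b = nj_to_compute(200_000) + nj_to_send(32)

print(f"stream raw:        {option_a / 1e6:.2f} mJ")
print(f"classify + event:  {option_b / 1e6:.2f} mJ")
```

Even with generous allowances for local compute, shipping the raw stream dominates under these assumptions, which is the point about turning high-bandwidth data into low-bandwidth, higher-order information.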
Leonard Lee: It's interesting that you bring that up. But then also, what I'm hearing on the exhibit floor quite a bit is the topic of memory being important, especially for these AI applications. So, thanks for sharing, everyone. Now, one of the things that we've started to hear a lot about is how data has become gold again. I don't know when it became un-golden, but apparently it is gold again, and it is a very important input, let's call it fuel, for this new variety or this new generation of AI that we're looking forward to, right? And this is interesting because we're hearing recognition of the importance of sensors from folks that are doing all the AI supercomputing, right? It comes into those conversations, whether it's for the data center or for the application. So I was wondering if the panelists can share their thoughts on how sensors, and the way that they're evolving and advancing, are enabling and impacting what's possible with AI. And what are some of those things that new sensor capabilities are going to enable for the future of AI as you see it in the next year, considering how fast everything is moving? Do you wanna start off?
Simon Ford: Yeah, sure. So I think, certainly from our perspective, if you can bring the cost of sensing down, it can go much further. If you reduce the cost, the number of places it could be applied explodes, right? So for us, seeing that curve, allowing people to get insight into, you know, physical environments and physical products, often where it's not a first-order thing. Most people are thinking about sensors as the first-order thing, the thing that defines the product. Actually, what's quite exciting is when data sources aren't the thing that defines the product; they're what gives you insight into how, let's say, a product is being used, and therefore how you can improve the product. In the web world, it's been standard for many years to have analytics coming from, I dunno, your SaaS product: you know exactly how your customers are using it, you know exactly when things are going wrong. And there's no reason why that doesn't apply to companies working in physical environments. These cost curves allow that kind of technology to be applied in physical products and physical environments, where, if you want insights, now you have the data sources to be able to chuck the big AI at it. And that's quite an exciting combination, I think.
Azita Arvani: To build on that: the sort of human-machine interface, and the information that you get from your sensors to see if you are on the right track or not, could help shape the future of our products, and AI can help in analyzing that. In the digital world, we have all these sentiments, right? We look at social media and somebody saying, I like it, I don't like it, whatever, the reviews and so forth. So this could be another way for us to get information. It goes from sensor design for the requirements, to the interface, to the system, right? So I think that's super important. But I also would like to go to this notion of connectivity and sensors, 'cause you said data is gold. It was oil, now it's gold, maybe it'll be Bitcoin in the future. Data is obviously a big pillar of AI, and then there is the compute, and then, to me, the networking is the third pillar, right? You want the data from the sensors to meet the compute at wherever the best place to meet is, right? That best place could be on device, some people call that edge AI, but for me there are multiple edges; or it could be at the edge of the network, and the network has many edges; or it could be in the central cloud. But that decision of where the data and the compute meet, and how the networking shapes that, is hugely important. And that could sort of impact how we get information and use it for our future use of AI.
Rob Watts: Yeah, I think about what sensors are for, right? Sensors measure something in the world. They help us digitize the world, right? And, in the context of AI, that grounds the AI in reality. If you think about an AI as being entirely synthetic, you have hallucinations that start coming up, because it's just operating on its own without any grounding in reality. And I tend to think about dreaming, right? We go to sleep and our brains are just going crazy, right? They're just coming up with all sorts of weird stuff and it's not grounded in reality. You can fly, you can do whatever, right? And that's what the AI does without sensors, right? It just hallucinates like mad. The idea here is that we use our sensors to inform reality. And sometimes that's not comfortable, right? It's not what we want. But it's reality, right? So that's what sensors do. And the second thing is, as we use the sensing to create this world, this mirror world, this digital world that lives in the compute and in the AI, how well you can sense and how well you can compute informs the fidelity of that digital world. Right. And I work at Intel, so I think a little bit about Moore's Law, right? So I think about how the sensing informs the fidelity of that digital world as we go over time. That fidelity is going to improve more and more. Like robotics: you're gonna be able to map the world and track where things are. Maybe you can do it within a few centimeters today, but if you wanna take this into industrial and manufacturing, it needs to be down at the millimeter and even further, right? With better sensing, better compute, and better AI, we get this virtuous cycle of value that can improve over time, and it's all informed by the data that comes off of those sensors, right? And so that virtuous cycle is really where we're going. And I don't think it's gonna end, you know, it expands both to smaller and to grander scales, all the way to the universe if you think about it. So when they say, like I saw this morning, that there's this tapering off of the amount of data: no, this data is just going to keep growing and keep informing that fidelity. Sure, it may be true for the things that are created by humans, but the universe is vast, and sensing is what gives us access to that.
Yuhan Long: Yeah, I absolutely agree with what Rob said. As a robot builder, I do need a lot of data, I do need a lot of high-fidelity data, for sure. And also, back to what Rob talked about, the sensor is the grounding of physical AI. For better grounding, we need multiple modalities of sensor data coming all together to the robot. And today, that is actually possible. I think everybody here has already used a VLM, a visual language model, which is a kind of sensor fusion in its own right: it fuses the visual data and the text data together. With the same architecture, we can fuse visual, audio, even touch sensors together, and this is already being explored and verified by a lot of research today. By fusing all these sensor modalities together, we have the capability of increasing the success rate for specific tasks. And I would say not only high-fidelity data, but also more modalities of data, coming from more modalities of sensors, are gonna be used for building better AI.
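The fusion pattern described here, projecting each modality into a shared token space and letting a transformer attend across all of it, can be sketched in PyTorch. The modalities, feature sizes, and the pooled prediction head below are illustrative assumptions, not Infermove's architecture:

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Project each modality to a shared token space and attend across all tokens."""

    def __init__(self, d_model: int = 128):
        super().__init__()
        self.vision_proj = nn.Linear(512, d_model)   # e.g. vision patch features
        self.audio_proj = nn.Linear(64, d_model)     # e.g. mel-spectrogram frames
        self.touch_proj = nn.Linear(16, d_model)     # e.g. tactile array readings
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)            # e.g. grasp-success logit

    def forward(self, vision, audio, touch):
        tokens = torch.cat([self.vision_proj(vision),
                            self.audio_proj(audio),
                            self.touch_proj(touch)], dim=1)
        fused = self.encoder(tokens)                 # attention across modalities
        return self.head(fused.mean(dim=1))          # pool over all tokens

model = MultimodalFusion()
out = model(torch.randn(2, 49, 512),   # batch of 2, 49 vision tokens
            torch.randn(2, 20, 64),    # 20 audio frames
            torch.randn(2, 5, 16))     # 5 tactile readings
print(out.shape)                       # torch.Size([2, 1])
```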
Manuel Cantone: Following up on what Yuhan said, and also Rob, the fusion really resonates with me, because sometimes you need to confirm what the sensor is actually sensing with other types of sensors, and that's where the technology is advancing. So, for example, motor control: you can sense the motion, but you can also sense the current of the motor, and if something happens, these two things need to agree. Otherwise it's a hallucination, to your point. The data multiplying and increasing, and having multiple sources of data that can be fused together, I really see it as a market trend. That's where we're going: multiple modalities, the motion, the current, but also vision. And vision that is not only vision cameras, but also distance, or something like temperature in a spatial setting. These are all things that our sensors enable, and I think it will definitely increase the awareness of these objects that move in the real world.
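The cross-checking example here, agreement between motion sensing and motor current, reduces to a small consistency rule. The thresholds below are placeholders; the point is that disagreement is surfaced as its own state instead of being passed downstream as a trusted reading:

```python
def motion_implied_by_current(current_a: float) -> bool:
    return current_a > 0.2          # above no-load current: motor is working

def motion_seen_by_imu(accel_rms_g: float) -> bool:
    return accel_rms_g > 0.05       # vibration above the noise floor

def fused_state(current_a: float, accel_rms_g: float) -> str:
    """Combine two independent modalities; flag disagreement explicitly."""
    a = motion_implied_by_current(current_a)
    b = motion_seen_by_imu(accel_rms_g)
    if a and b:
        return "moving"
    if not a and not b:
        return "idle"
    return "inconsistent"           # sensors disagree: flag it, don't trust it

print(fused_state(1.4, 0.20))       # moving
print(fused_state(0.1, 0.18))       # inconsistent (vibration with no drive?)
```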
Leonard Lee: Okay, wonderful. So Rob, you said something really interesting, this whole notion of the virtuous cycle here. And I assume that you're talking about the virtuous cycle that is formed by AI for sensors and sensors for AI, right? Yeah. And so this final question that I'd like to ask you, panel, is, you know, for the audience who are probably wondering, okay, this is great, visionary stuff, but what are some of the takeaways that they can go back to their teams with in terms of opportunities near term? Near-term takeaways that you would recommend, based on this idea of the virtuous cycle that you outlined? Because I think it's really important, because someone mentioned smush, this is like those two themes smushing together, right? So what are some of those salient points that you would like to share with the audience or recommend to the audience? So maybe Simon, do you wanna talk about smushing?
Simon Ford: Yeah. So I think, from what I see when we are talking to customers, it's the underestimation of the amount of intelligence that can be put into what would be considered a sensor, like Manuel talks about. When you start like that, and you're thinking about how much intelligence can be put in what would often be considered a dumb peripheral or something like that, that might recalibrate how you're thinking about a problem. This will extend down to the way sensors are manufactured. So don't think of them as discrete things; think of them as embedded things, printed things, printed batteries and all that sort of stuff. So I think resetting how sensors are considered, rather than being peripherals, as being autonomous devices, even when they're very low-cost, very tiny things, they're autonomous systems. I think that recalibration is often very useful for a team.
Azita Arvani: Yeah. So for me, it's the mindset of going from traditional sensors that were kind of dumb and just sensed, to smart sensors that are connected sensors, you know, the IoT sort of world, and now we're on the cusp of AI sensors. So then the sensing, the analysis, maybe some of the decision making can be done on the sensors, and that's something definitely in the next 12 months. The other thing I wanna talk about is synthetic data. That's something that's also here and people can use. So instead of taking all of the data that's being sensed and sending it to the cloud or processing it, you could sort of freeze-dry it and then decide what you wanna do with it, and then later you add water and voilà. I'm talking to some companies that are doing this in various areas, and that's something that could be useful for folks. And that's in the short term.
Rob Watts: Yeah. Smush, right? When you say smush, I think fuse, right? Fuse.
Leonard Lee: Yeah, that's probably a better word.
Rob Watts: Yeah, that's fusion. You know, what do you need to do in robotics, like Yuhan was talking about? What do you need in order for those sensors to be able to work together? Of course, to Azita's point, you need to be able to connect to them, and you need to be able to take multiple sources of data and maybe feed them into a multimodal foundational model or something like that. To me, I have a background in physics, so I really want to bring everything into the same context. And I talked about this yesterday in my talk, but it's about every sensor being able to be localized in both space and time. Don't think about your sensors as living in their own domain, right? Everything lives in space and time, so how can you bring all of that together? Precision timestamping, things like PTP, Precision Time Protocol, being able to integrate that with the network and being able to understand exactly when the photon hit that sensor or when that sound wave hit the MEMS microphone, right? And being able to precisely timestamp that, as well as to localize where that camera or that sensor or that microphone is, in a common way. Now, if we can bring the space and time elements together onto the data, the AI has full context for what to do with it, and we can reconstruct the world in a higher-fidelity way based on the synthesis of all of the different modalities. Whereas if you don't have that synchronization in space and time, then it's noise, right? You don't know what to do with the data. So those fundamentals of space and time, I think, are really near and dear to me. And I think that if we could do a better job of getting our sensors all working together like that, then we can enable that virtuous cycle.
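Once sensors share a time base, as argued here with PTP, associating their samples is straightforward. A minimal sketch with synthetic timestamps (the clock synchronization itself is assumed to have already happened in the network):

```python
import bisect

# Synthetic streams: 30 fps camera frames and 100 Hz audio chunks, in seconds,
# both expressed against the same synchronized clock.
camera_ts = [0.000, 0.033, 0.066, 0.100]
mic_ts = [i * 0.010 for i in range(12)]

def nearest(ts_list, t):
    """Index of the timestamp in ts_list closest to t (list must be sorted)."""
    i = bisect.bisect_left(ts_list, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_list)]
    return min(candidates, key=lambda j: abs(ts_list[j] - t))

for f, t in enumerate(camera_ts):
    m = nearest(mic_ts, t)
    print(f"frame {f} @ {t:.3f}s  <->  audio chunk {m} @ {mic_ts[m]:.3f}s")
```

Without the shared clock, the same association step degrades into guesswork, which is the "it's just noise" failure mode described above.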
Yuhan Long: That's absolutely very important. Not only in self-driving, smart cars, or robotics: having all the sensors on a single timestamp is very important to have a better understanding of what is happening. If you are a sensor builder today, if your company is building sensors today, I think it's time to open up more controls to your customer about how to use the sensor. For example, if you are a camera builder, you can leave the interfaces for ISP tuning to your customer right now, because the AI capability is strong enough to do that on the fly, for better sensing. I think that is gonna be one thing that changes a lot in the coming few years.
Manuel Cantone: Back to the original question of pushing things together, right? It seems like a forced marriage between AI and the sensor. I think what really makes our customers understand the potential is the availability of tools and open examples. So, open-source examples that are available, that allow our customers to do their own development work with our sensors, and then to publicize that, socialize it, so that people can have starting points and really grasp the real potential. It's not a new technology for us, but we really see the adoption now that more things are available, that tools are easier to use, and it's really easy to access. So I think that open source and the community are really important in accepting the marriage that seems forced to begin with.
Leonard Lee: Okay, wonderful. Well, thanks for that. And you know what, I think we invented a new term: it's called sensor smushing. So, it's incredible. Panelists, thank you so much. It's been a wonderful discussion, and I appreciate the sharing. Everyone, you know, please reach out to them. They're obviously very knowledgeable, experienced, and have some great thought leadership that they can lend to you. If you don't mind, a round of applause for the panelists. And thank you so much for attending, and enjoy the rest of the conference. Congratulations to the folks at Sensors Converge as well as Questex for a wonderful 40th year of the event. Thank you.