EP075 - Ethical hacking to secure IoT systems - Ted Harrington, Executive Partner, Independent Security Evaluators
Tuesday, Nov 24, 2020

In this episode, we discuss ethical hacking of the IoT cybersecurity attack surface and best practices for securing IoT products, as well as steps system operators and end users can take to ensure system security as they progress through digital transformation.

 

Ted Harrington is an Executive Partner of Independent Security Evaluators. ISE is an ethical hacking firm that identifies and resolves cybersecurity vulnerabilities. ISE is dedicated to securing high value assets for global enterprises and performing groundbreaking security research. Using an adversary-centric perspective, ISE improves overall security posture, protects digital assets, hardens existing technologies, secures infrastructures, and works with development teams to ensure product security prior to deployment. ise.io/research

 

Contact Ted:

ted@ise.io

https://www.linkedin.com/in/securityted/

 

Ted’s new book: hackablebook.com 

 

EP074 - Industrial edge computing - Martin Thunman, CEO, Crosser
Friday, Nov 06, 2020

In this episode, we discuss trends in edge computing, the relationship between the cloud and the edge in on-premise environments, and low-code development environments that allow operations teams to participate directly in application development, reducing the lifetime cost of a solution.

Martin is the CEO and co-founder of Crosser. Crosser designs and develops real-time software solutions for Edge Computing and next-generation real-time Enterprise Integration. The Crosser Edge Computing solution offloads Cloud services and provides real-time data processing and decision-making capabilities close to IoT sensors and IoT devices. https://crosser.io/

 

 

EP073 - Evolution of cellular IoT chipsets empowering business model innovation - Dima Feldman, VP Product & Marketing, Sony Semiconductor Israel
Wednesday, Oct 28, 2020

In this episode, we discuss the evolution of cellular IoT chipsets from 1G to CAT-M1 and NB-IoT, the rollout of 5G, and the implications of low-power, low-cost, high-reliability connectivity for business model innovation.

 

Dima Feldman is the VP of Product Management and Marketing at Sony Semiconductor Israel. Altair Semiconductor (Sony Semiconductor Israel), a Sony Group Company, is a leading provider of Cellular IoT chipsets, playing a pivotal role in realizing the vision of the Internet of Things (IoT). Altair’s ultra-low-power and ultra-small chipset solutions are turning Cellular IoT into reality. Altair chipsets can be found in wearables, vehicle telematics, smart utility meters, personal & logistics trackers, home appliances, consumer electronics, and many other IoT devices.

Website: altair-semi.com

LinkedIn: https://www.linkedin.com/company/altair-semiconductor/

EP072 - Connectivity and bandwidth enablement as a service - Brian Watkins, VP Connectivity Solutions, Federated Wireless
Wednesday, Oct 21, 2020

In this episode, we discuss the value of connectivity as a service for improving the performance and cost structure of industrial use cases, and the rollout of 5G and its implications for edge computing use cases, from AGVs to large-scale device management.

Brian Watkins is the Vice President of Connectivity Solutions at Federated Wireless. Brian leads business development and sales for cloud-based private LTE and 5G connectivity solutions. He creates and leads GTM sales through cloud channel partners and builds GTM efforts with ecosystem partners including OEMs and ISVs. Federated Wireless leads the industry in development of shared spectrum CBRS capabilities.

 

EP071 - Building the 'Bloomberg' for IoT data - David Knight, CEO, Terbine
Monday, Oct 12, 2020

In this episode, we discuss Terbine’s approach to contextualising and indexing data feeds to increase usability, the challenges of monetising IoT data, and how colocating servers with cell towers can enable edge computing use cases. 

David Knight is the CEO of Terbine. Terbine is the digital marketplace for IoT/sensor data about the physical world – things like ocean salinity, railcar movements, carbon emissions, and crop density. terbine.com

 

 

EP 070 - Enabling smart manufacturing at the edge - John Younes, COO, Litmus Automation
Thursday, Oct 01, 2020

In this episode, we discuss the state of edge computing adoption in manufacturing. We also explore the most common edge computing use cases in OEE optimization, predictive maintenance, and asset condition monitoring.

 

John Younes is Co-Founder and Chief Operating Officer at Litmus Automation. He is in charge of operations and growth for the company and draws on considerable experience working with start-ups and early stage companies. Litmus enables out-of-the-box data collection, analytics, and management with an Intelligent Edge Computing Platform for IIoT. Litmus provides the solution to transform critical edge data into actionable intelligence that can power predictive maintenance, machine learning, and AI. litmus.io

EP 069 - Building business models instead of technology - Ron Rock, CEO, Microshare
Thursday, Sep 24, 2020

In this episode, we discuss how to monetize data for multiple user groups with different needs, and how to simplify IoT deployments with end-to-end productized solutions.

 

Ron is the CEO of Microshare. Microshare provides Data Strategy as a Service, enabling our clients to quickly capture previously hidden data insights that produce cost savings, sustainability metrics and business opportunities. Our solutions create a Digital Twin of your physical assets, providing a comprehensive picture of their performance, the risks they face going forward, and the steps required to produce maximum returns from these assets. https://www.microshare.io/

EP 068 - Device locations and communication technologies - Kipp Jones, Chief Technology Evangelist, Skyhook
Monday, Aug 24, 2020

In this episode, we discuss the key variables that determine locations of devices and guide technological architecture decisions. We explore the advantages and limitations of new IoT communication technologies like 5G and LoRa.

 

Kipp Jones, the Chief Technology Evangelist at Skyhook, works to guide and shape the company's innovative location intelligence technology. Skyhook, a Liberty Broadband company, is a pioneer in location technology and intelligence. Skyhook provides customers with real-time services and analytical insights via a combination of precise device location and actionable venues. Skyhook's products are built on the pillars of trust and respect for individual privacy. skyhook.com

EP 067 - Data streaming for mission critical field assets - Matt Harrison, CEO, WellAware
Tuesday, Aug 04, 2020

In this episode, we explain how streaming mission-critical data to remotely monitor and control physical assets works. We learn how to simplify the business model and connectivity for asset management, all against the backdrop of Covid-19's impact on industrial digitalization trends.

Matt is Co-Founder, Chief Executive Officer and a member of the Board of Directors of WellAware Holdings, Inc. At WellAware, Matt drives the overall business and product strategy while also leading the day-to-day execution of the company’s vision of connecting people to the things that matter.

WellAware empowers organizations to be efficient, safe and sustainable by streaming mission critical data to their employees, so they can remotely monitor and control physical assets.

_________

Automated Transcript

[Intro]

Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today is Matt Harrison, co-founder and CEO of WellAware. WellAware empowers organizations by streaming mission critical data to their employees and partners, so they can remotely monitor, control, and automate physical assets. In this talk, we discussed WellAware's approach to simplifying the business model and technology deployment process for edge connectivity and asset management. We also explored the impact of COVID-19 on industrial digitalization trends. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com. Thank you.

[Erik]

Matt. Thank you so much for joining us today,

[Matt]

Erik. Thanks for having us. We're excited to be here.

[Erik]

And so Matt, I'm really interested in getting into the business and the technology behind WellAware. You know, I talk to a lot of companies that are very horizontal, so it's very interesting to speak with a company that has more of a vertical focus. But before we get into the business and the technology, I want to understand a bit of your background, because you actually have, I think, a very interesting background, quite varied across healthcare and the energy sector. You're also an investor. Can you just give us a quick run-through of your background, and then what led you to found, or co-found, WellAware in 2012?

[Matt]

Yeah, you bet. I'm happy to share a little bit of myself. I hate talking about myself, but we'll at least provide a little bit of the breadcrumb history on how we got to where we are. It's probably tied back to my red hair more than anything.

[Matt]

I think that just kind of made me come out of the womb with a little bit of fiery DNA. I've always been a problem solver, naturally drawn to problems. I got an electrical engineering degree at Texas A&M, never necessarily practiced engineering; I always wanted to apply it from a business perspective, using technology to solve problems. And so that's really what my career has been made up of: understanding problems and trying to really understand them better than most. I think that's been a huge feather in our cap as we've built WellAware and come to understand what true industrial IoT convergence looks like, and then trying to apply some of the latest IT technologies to help our customers solve problems. So I've been very fortunate to work with some amazing people here at WellAware, and in my past roles I've had a lot of incredible mentors and investors and board members.

So I really count myself blessed to be kind of leveraged up by a lot of people along the way. But if I could characterize myself, I'd have to say we love to compete. My whiteboard has one quote on it, from a pretty decent baseball player named Babe Ruth, and it says it's really hard to beat people who never give up. I think that really characterizes WellAware and the people that we try to bring into the company, so that we can work really hard to help solve problems for our customers.

[Erik]

It's a good model for the time right now, right? This is a challenging time, but there are always opportunities in challenge, if you just keep moving forward.

[Matt]

There are. And, you know, one thing I would add, that I would encourage the listeners in IoT to really hold onto, that we've found to be true: a lot of times even our customers don't know exactly how big the problem they're trying to get addressed is, or how difficult it is to solve. And so we really have had to lean in and work a lot with our customers, to not give up on them when they've said, hey, look, this IoT stuff is too tough, it doesn't work like I thought it would, we had bumps out of the gate. We really have had a lot of success just locking arms with our customers and kind of dragging them to the outcome. And we've had a lot of customer outcomes now where they've appreciated that journey. So this is hard. This is not an easy space. It is the future; giving machines a voice is obviously one of the most exciting things we could be working on, but it's not easy. And I think the twenties are going to be the decade for IoT, for sure.

[Erik]

Yeah. And you could say that right now, because of the COVID-19 backdrop, it's on the one hand never been more important, and on the other hand maybe never been more challenging, just because getting people out there to actually deploy sensors, and getting budgets allocated at a time when companies are seeing revenue go down, is all more challenging. But on the other hand, the need to be able to remotely monitor, control, and understand your operations is increasingly important. I think it's very interesting for people to have a bit of a snapshot of what the industry looks like, what growth looks like, what deployments look like at this time. How is, let's say, Q2, and maybe the forward outlook for Q3, how are these looking for WellAware, especially if you're in an industry that's quite impacted?

[Matt]

Yeah, absolutely. Well, we're in a global economic situation that none of us have really faced before, with a pandemic that created a lot of uncertainty. And so, to your original point, we've got a tremendous number of jobs that have been lost, lots of top-line revenue at our customers that has been lost. And so immediately they've got to go in and cut expenses, or find ways to drive additional profitability into their businesses. Well, guess what, one of the things that didn't change is the number of machines out there. There's still the same number of machines, even though there are fewer people. And so there's just a huge need, and we've actually seen it come through in our commercial revenue growth and in our pipeline opportunities for the rest of 2020. This is the time when people are really going to start placing huge bets on moving full stack to industrial IoT.

[Matt]

Yes, you have to have a platform that's reliable. Yes, you have to show your customers outcomes. They're not interested in tools or widgets; they want you to help them understand what the data translates into in terms of outcomes for them. But this is the time. And for WellAware, we'll probably grow over a thousand X from last year to this year in terms of just our recurring revenue base. It's fascinating, because I think a lot of people need help. Part of that growth is coming from our own globalization, outside of the U.S., but also, you know, we really built the company in oil and gas, and so we've begun to naturally expand into other industrial markets and applications. That's really helped our company see new opportunities and grow. And lastly, we've got a really incredible partner channel that's starting to build. There are big telecommunication companies that need help with IoT solutions, big cloud companies that need help with IoT solutions. And so we've been really fortunate to be on the front end of a lot of those relationships, driving some growth.

[Erik]

Right. Well, let's get into the business then. So as you said, you started in oil and gas, but now you're expanding outside of that vertical into other areas where there are, I suppose, heavy assets and complex operational situations. What would you say is your value proposition? And maybe you can also talk to whether that's shifting as you're moving into new verticals, or whether the same value proposition applies.

[Matt]

Sure. Yeah. Well, our mission, which is really our value proposition as well, is we exist to connect people to the things that matter. And so things are clearly machines and sensors, but as we've talked about already, there are obviously outcomes as well. The way those outcomes usually translate for our customers is into three big buckets. One is operational efficiency, usually cost savings, and we get those metrics accomplished in a number of different ways. The other is improved safety. We're eliminating a lot of mileage and trucks, and we're telling workers before they go onsite what they're going to experience, and so it helps them do their jobs in a much safer way. And the last one is environmental and regulatory compliance. We're really helping our customers reduce their carbon footprint and deliver some ESG wins, which at the board level is always very interesting and something that people want to tout, and just help them with a better economic or a better environmental footprint.

[Matt]

And so those are kind of the main value propositions that we're delivering to our customers. Our mission statement is we're here to connect them to their critical infrastructure, their machines and their assets, and we make that easier. We work across pretty much any application. We work across lots of different manufacturers, I would dare to say most if not all manufacturers. So it really doesn't matter what kind of pump you have, or genset you have, or sensor you have; WellAware is a unifying platform that can allow you to collect data from very hard-to-reach places and get it into an actionable format so you can really move the needle on business outcomes. And that's been hard to build. It's a very full stack solution; it's taken us seven years to do it. We started in oil and gas, which I would say was really kind of the best and the worst decision for us.

[Matt]

It was the best decision because it presented the most difficult environment for us to forge the platform: very remote, very hazardous environments, where power is a luxury and communications is a luxury. We also had a highly varying user base, and a lot of our users had very little technical competency. So that presented a very difficult set of circumstances for us to build our technology and our platform and our user experience in a way that made it easy, easy to install. I think it was the best from that perspective. It was difficult because it's an industry that's very slow to change, and so there are a lot of headwinds on existing installs, existing infrastructure, existing operational patterns: hey, we've done it this way forever. And it's been fun as we begin to see the great shift change, some more technology-oriented folks taking on higher-level positions, where they say, hey, you know what? I should be able to operate my business the same way I operate my thermostat in my house. I can change the temperature right from bed in my house; why can't I optimize my remote genset or my pump in my business operation? And the answer is you can, and the old legacy operational technology approach that's been owned and built by some very large industrial players over the years is very ripe for disruption.

[Erik]

Yeah. And before we get into the technology, I want to dwell here a bit on this topic of the people that are involved in these technology adoption decisions. Who are you typically working with? Because I guess you have headquarters that might sit, you know, if we're talking about Total, in Paris maybe, but then they have operations around the world. So are you talking to headquarters, or are you talking to, let's say, the local general manager of some facility that has specific pain points that they understand? And then you also have the split between the OT guys and maybe the IT infrastructure, and before, those were maybe kind of separate technology domains that didn't interface too much, but you're right at the crux of that, right? You're kind of bringing those two ends together. And so I suppose you have decision makers, people that might allocate budget, but also people that might want to control the deployment or the operation of a system, sitting in both of those organizations. Can you talk to us a little bit about, on a local versus headquarters level, and then IT versus OT, who are you typically talking with? Who are the decision makers, who are the influencers? What does this landscape look like for WellAware?

[Matt]

That's a great question, Erik. So, the way it works, or has worked for us, is with a few exceptions, I'll say. We do have some large enterprise deals where the customers have decided that they are going to define and fund a very large IoT implementation that typically has C-suite and budget approval behind it. And so it is very much a different pursuit than when you're actually building the opportunities yourselves, when you're knocking on the door trying to create leads. So for the enterprise-level deals, a lot of times those are RFP-based. The stakeholders attached to those are usually your P&L managers. They may have some internal technology guidelines that they want you to adhere to. They may want you to leverage some of the CapEx and the infrastructure that's already been put in place.

We love that, because again, we work across anything that's out there, and so WellAware is really ideal to help customers leverage the existing investment they've already made, and we just make it so much better. A lot of times we'll also collaborate with existing SCADA systems and SCADA teams, IT teams that handle historians and handle data analytics, predictive analytics, et cetera. So the set of stakeholders, if you will, is really across the board; I could list all of them for you across our opportunities. When we're creating opportunities, when we're sending messages out and creating leads, we're looking for people who have P&L budget and responsibility, and who usually need to save money, or improve safety, or improve their environmental and regulatory compliance. When we find those opportunities and we show people, those are typically the stakeholders that have budget and can move fast. And a lot of times the business owners don't necessarily mandate a specific technology approach; they just mandate that you get it right. And they do like you to check the boxes with the CIO and the CSO and make sure that the field teams all agree that, yes, in fact, this is a great, secure, safe technology implementation. But our most successful opportunities are when we're working with P&L managers, business managers, that need to solve a very specific problem.

[Erik]

And then you mentioned that you're moving into new verticals. I see in your company introduction that healthcare is one of those. And although you can think of them both as industrial, you would also say, well, energy and healthcare are somewhat dramatically different in terms of the operating environments, right? As you mentioned, energy is remote; in healthcare, you're in a controlled facility. You have different constraints in both; you have safety and privacy concerns, but from very different perspectives. Have you had to evolve your business significantly in order to move to this new vertical? Or do you find that the basic challenges companies are addressing are fundamentally the same? What are the evolutions that are necessary for you to move into a new vertical?

[Matt]

The healthcare space, Erik, is a little bit of a misnomer. It makes it sound like we're getting involved in medicine; we're really not. We're really doing exactly what you just suggested, which is taking our proven platform that works across pumps, motors, compressors, and power equipment, and applying it in the facilities management or building management aspect of a big hospital. So what we're basically doing is ensuring that hospital systems' buildings are ready to operate, if you will, right? We're not doing anything that's taking care of patients inside. We're not in the OR suites or the ICU, or even the patient rooms. We're sitting below, usually below the actual ground level, in the chillers and the basement rooms, and we're controlling the HVAC systems, the pumps, the lights, the power equipment. And we're ensuring that all of those very expensive assets are optimized in how they're being run, and that they are extending their useful life, because healthcare as an industry right now is obviously in a very, very difficult position.

It's historically been an industry that runs on very, very tight margins, and the global pandemic has pushed it over the edge. And so they are very much looking for ways to save money. Their building management and facility management expenses are high, and we're helping our customers dramatically reduce costs associated with maintenance of a lot of these critical assets. They either don't have visibility into these assets, or they have very limited visibility from some very old, antiquated enterprise software solutions, like a building management system or a building automation system: legacy, expensive software platforms. And those are typically very OEM-specific. So they work great for Trane, or they work great for Emerson, but as you know, all of these installations and systems are made up of equipment from many different manufacturers and OEMs.

And so having a single platform that has visibility across everything is really being well received by our customers. That's how we're doing it in healthcare, and we're doing the same thing in public utilities, and the same thing in manufacturing, logistics, and shipping. We're starting to really get the opportunity to leverage the common platform that we've built and get it installed on lots of new machines. And every time we bring a new machine on, that's really a new SKU; it's part of the WellAware family at that point. We just keep adding to that base every day, every week. So it's a lot of fun.

[Erik]

Okay, great. So that makes sense. So in all instances, you're dealing with legacy asset bases, somewhat conservative industries, with a lot of legacy software that's not very well suited to modern needs. So I can see why this would not really be a radical change, moving into these other markets. Let's then go into your tech stack here. So, at least from what I've seen, you're covering, I don't know if you would frame it as a PaaS or a SaaS, but you're providing this kind of data management layer, you're doing the data ingestion, and I believe you have your own hardware, and then you also do managed services. It seems like a fairly full stack solution. Can you walk us through what the architecture would look like, and then which elements are central, and which elements are maybe optional or on a case-by-case basis?

[Matt]

You bet, happy to do that. Let me start with the business model, because I think that's the most important one, and it took us some time to finally iterate to the place where it seems to make the most sense for our customers, which is all that matters. We're in this with our customers, Erik, meaning we only charge a monthly fee for the data service; that's it. We do have our own hardware, we do have our own software, and that's all included as part of our monthly implementation for our customers. No CapEx required. We handle everything: hardware, software, warranty. We ensure that those data collection platforms work for the life of the contract. The easiest parallel or analogy I would draw is it's like subscribing to DirecTV. You know, they send you a hopper; you don't necessarily pay for the hopper.

You just pay for the monthly TV subscription, to DirecTV or to Dish Network, whichever one it is. And WellAware is doing the same. We're providing intelligent edge equipment that goes out and is installed on the machine, and that equipment we retain ownership of, and we manage it, we provision it. It's an extendable platform, so it's got intelligence on the edge. We handle all of what we would call the OT protocol interfaces: how our edge equipment actually talks to the machine, the ability for us to get X-ray vision into the machine. So we look at any I/O opportunity we have with that machine or that sensor. We also look at the OT legacy protocols; we support pretty much all of them: Modbus, CAN bus, J1939, and a number of others, BACnet and some other things as well.
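Unifying readings from different OT protocols into one edge-side data model, as Matt describes, could be sketched roughly as follows. This is a hypothetical illustration only: the adapter functions, the register map, and the field names are assumptions, not WellAware's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One protocol-agnostic data point, whatever bus it came from."""
    asset_id: str   # which machine produced the value
    signal: str     # e.g. "motor_current"
    value: float
    unit: str

def from_modbus(asset_id: str, register: int, raw: int) -> Reading:
    # A Modbus holding register is an unsigned 16-bit integer; its
    # meaning and scale factor come from the device's register map
    # (the map below is invented for this sketch).
    register_map = {40001: ("motor_current", 0.1, "A")}
    signal, scale, unit = register_map[register]
    return Reading(asset_id, signal, raw * scale, unit)

def from_bacnet(asset_id: str, object_name: str,
                present_value: float, unit: str) -> Reading:
    # BACnet analog objects already carry a present-value and
    # engineering units, so less translation is needed.
    return Reading(asset_id, object_name, present_value, unit)

# Two different protocols land in the same unified shape:
pump = from_modbus("pump-07", 40001, 123)
ahu = from_bacnet("ahu-02", "supply_temp", 55.0, "degF")
```

Once every source speaks this one shape, everything downstream (storage, alarming, analytics) can ignore which protocol a value arrived on.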

And so we're that Rosetta stone that connects to the legacy OT protocol infrastructure that machines might talk to; we unify it on the edge. We add intelligence on the edge, which is processing capability right there, where we can run local ML and digital twin technologies and things like that; we've already begun to do some of that. We have storage on the edge. And then, and this is probably the most cool, we put a very rich user experience on the edge. Our edge hardware wirelessly communicates to mobile devices, and we use our customers' mobile devices, whatever they may have, an iPhone or Android, and that becomes the user experience for our customers. That's a very rich user experience platform that we didn't have to build, but can leverage over time. So that's just the edge. The second layer is pulling the data back from the last mile, wherever that might be: out in the middle of nowhere, or in a basement, or on a large factory floor. We bring that data back via either cellular or satellite.

And in some cases, WiFi. Once the data is brought back, it's stored with both of our cloud partners, which are AWS and Azure, and then we have a normalized dataset. For those interested, what a normalized dataset means is that it's time-synchronized and it's data-aware, meaning we contextualize in the cloud what that asset is that we're monitoring. So it's not just, call it, a data tag with a voltage or a current; it's actually a compressor, or it's a Cummins genset, or it's a Baker Hughes ESP pump, or whatever the case may be. So it's contextualized as a thing, and the data is normalized, meaning it's time-synced and high resolution, and that's all landed in the cloud. And then once it's in the cloud, we have kind of a fork of how the data can move.
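What a "normalized, contextualized" record might look like once it lands in the cloud can be sketched like this: the raw tag is time-synchronized to UTC and enriched with what the asset actually is. The field names and registry shape are assumptions for illustration, not WellAware's schema.

```python
from datetime import datetime, timezone

def normalize(raw_tag: dict, asset_registry: dict) -> dict:
    """Attach asset context and a UTC ISO timestamp to a raw data tag."""
    # Contextualize: look up what the device actually is.
    asset = asset_registry[raw_tag["device_id"]]
    # Normalize: convert the device's epoch timestamp to ISO-8601 UTC.
    ts = datetime.fromtimestamp(raw_tag["epoch_s"], tz=timezone.utc)
    return {
        "timestamp": ts.isoformat(),
        "manufacturer": asset["manufacturer"],  # e.g. a Cummins genset
        "asset_type": asset["type"],
        "signal": raw_tag["name"],
        "value": raw_tag["value"],
    }

registry = {"dev-42": {"manufacturer": "Cummins", "type": "genset"}}
record = normalize(
    {"device_id": "dev-42", "name": "voltage", "value": 480.0,
     "epoch_s": 1596499200},
    registry,
)
```

A downstream historian or BI tool can then consume records like this without knowing anything about the source device or protocol.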

And so the data can obviously be consumed through our user experience. We talked about mobile before; the WellAware mobile app is extremely handy if you're in the field with devices, or if you're at home and you want to take a look at any critical infrastructure or receive alarms, your mobile platform is a nice way to do that. And then of course we have a web-based platform that's got charts and a much richer user experience for notifications, reports, dashboards, and things like that. The data, though, is also made available through APIs for our customers to consume in any additional user experiences. Some of our customers use Spotfire, some use Tableau, some want the data pushed into another historian like OSI PI; we do that for many of our customers today. So we've just really tried to make it simple.

If I think about what WellAware is excellent at, it's really the data collection, the provisioning, and the data orchestration from the edge. We really solved the last mile across any machine. We're a little weaker on the predictive analytics and a lot of the really advanced machine learning. What we do provide is excellent datasets for those engines. And, you know, it's like anything in life, Erik: if you've got a predictive analytics or an AI platform and you feed it bad data, it's going to give you a bad result. So WellAware is a great partner for really advanced AI/ML platforms; that particular area of the stack is not something we've chosen to invest in, and we really look to partner there more than anything. So that's a little bit of a walkthrough of the full stack. Obviously security is a huge part of every element of that, and we have, we believe, one of the best security platforms out there. We can make it easy to deploy and very simple for our customers. From a business model perspective, you can get going for as little as 50 bucks a month for a machine, and it's secure.

[Erik]

Okay, great. Thanks for the very comprehensive walkthrough. Let me follow up with a few questions on some of these elements of your business model and your tech stack. Going back to the business model, you just mentioned as low as 50 bucks per machine. So I suppose this is a per-machine, per-month fee, but then you would have different tiers, maybe based on the complexity, the number of connections for a machine, or the amount of data. Is that the case? Is there any aspect that's related to the volume of data usage, or system integration? You mentioned ML, so if there's a need to develop a predictive maintenance application for a particular machine, I guess somebody has to actually put some labor into that, whether that's WellAware or a system integrator or a customer. So maybe you can just walk through in a little more detail the business model: what would be the elements that determine what this monthly fee would be?

[Matt]

Yeah, you bet. Well, the 50 bucks a month is really based on an entry-level solution, and it's all-inclusive. So it includes all of our edge hardware. It might include some sensors, depending on what kind of application it is. And it's going to be a fit-for-purpose solution for the value that we're creating. So yes, there are scenarios where we have to rapidly develop some apps, if you will, or some customization or tailoring for our customers. Everything that WellAware does is platform-based, and so we don't really ever build anything for customers that the customer owns. We will tailor our solution, though, for customers. So depending on the application, the machine, and the value attached to that machine, that's really what determines the ultimate price point for our customers.

And so we have applications where we get 50 bucks a month for our solutions, and we have applications where we get hundreds of dollars a month for our solutions. Those include different layers of sensors and hardware. That's all included under a very simple, flat subscription model. So it really is just application-dependent. If it's tank level monitoring, it costs one thing. If it's pump monitoring and control, it costs another. If we're controlling a $350,000 Trane HVAC system with a lot more data, a lot more complexity, and a lot more value, then it might cost a little more. That's how it's set up; it's really value-based. We don't waste a lot of time haggling with customers. In every case, our customers' ROI is typically orders of magnitude more than what they're paying WellAware. And we're in a growth mode as a company, where we don't want to be stingy and focus on extracting every pound of flesh possible from our customers. That's not our goal. Our goal is to keep delivering value and case studies and keep getting the word out, because, look, there are 25 to 30 billion machines that need to be connected out there, and we're just beginning to scratch the surface.

[Erik]

Yeah, and I was going to ask also about connectivity, satellite costs; I assume those are all wrapped in. And I like this model, because as opposed to maybe the more traditional model of pushing either a fixed license or a kind of large one-time CapEx asset investment, where you can then walk away and the customer gets to deal with the operations, here you're really well aligned, right? You need to be delivering value consistently; otherwise the customer at some point will cancel the contract. And you've invested a lot of time in building the solution.

[Matt]

That's exactly it, 100%. I mean, we are being paid to deliver a service. If I want to watch ESPN, I subscribe to a TV service; WellAware's business model for IoT is the exact same. We are aligned with our customers on business outcomes. And what we find is that they really like to reward us: when we work with them and we prove and show them the value that's been delivered, they will stand on the mountaintop and proclaim the value that we've helped them capture, and use more of us. So that's a win-win scenario.

[Erik]

You mentioned the mobile application. This is an area where, at least in my experience, the requirements can shift. Whereas the, let's say, fixed elements, maybe you're using satellite, you have some sensors, you have a particular kind of connectivity environment around the edge, can be fairly fixed in the longer term, the requirements around how users use that data differ. So we see this kind of shift towards more of a low-code development environment, trying to allow users that are nontechnical to make some modifications. How do you approach this right now in terms of the end-user application?

[Matt]

Yeah, it's a great question. So we've standardized on some pretty baseline functionality that I think gets our customers, like I said, 80 to 90% of the way there. There's just a lot of core feature functionality that exists that is all configurable, but it's configurable in the platform by our customers. So they can set up assets, they can set up taxonomy, they can set up names, they can configure charts, and they can set up notifications and alarms. All those things are meant to be very user-managed and user-manipulated, and it took us a while to get that dialed in appropriately. Probably one of the things I'm most excited about is our true edge intelligence platform. We are building an open developer environment, a Linux-based environment on the edge, and it is going to allow our customers to write their own apps in low-code, Python scripting, whatever the case may be, and really make it very easy for them to write new apps, deploy new apps, and manage them on the WellAware edge platform.
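A hypothetical sketch of what a customer-written edge app might look like in such an environment. The `on_reading` hook, field names, and alarm format are illustrative assumptions, not WellAware's actual developer API:

```python
# Imagined contract: the edge platform calls `on_reading` for every
# sensor sample and publishes whatever non-None event the app returns.

class HighTempAlarm:
    """Raise an alarm event when a motor temperature exceeds a configured limit."""

    def __init__(self, limit_f=185.0):
        self.limit_f = limit_f

    def on_reading(self, reading):
        # `reading` is assumed to be a dict like {"name": ..., "value": ...}
        if reading["name"] == "motor_temp" and reading["value"] > self.limit_f:
            return {"event": "alarm",
                    "detail": f"motor_temp {reading['value']}F > {self.limit_f}F"}
        return None  # nothing to report for this sample

app = HighTempAlarm(limit_f=185.0)
print(app.on_reading({"name": "motor_temp", "value": 190.0}))
```

The appeal of this shape is that the customer writes only the decision logic; provisioning, transport, and the mobile user experience stay with the platform.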

And so the example that I'll very humbly use here, Erik, is that we really love what Apple did in building an end-to-end developer community and the App Store, controlling the edge platform, which was the iPhone or the iPad or the iMac, and really owning that end-to-end user experience. I think Apple has obviously been very successful in doing that. WellAware is doing the same thing, but for machines. We're taking that edge platform, and in the 2020s, over the next decade, it's our intent to continue to build that out and provide a full environment where customers can contribute to what they're doing on the edge. Third-party application developers and communities can be built around it. It's all an open platform. It's safe, it's secure, and it's proven and reliable. So yeah, we couldn't be more excited about what the future holds in terms of putting control algorithms or enabling new widgets on the edge. It's already happening today; we're just at the front end of it. But it's something that we're all very excited about here at WellAware.

[Erik]

Yeah, fascinating. I mean, this is one of our beliefs also: the companies that are going to be very successful in the long term are the companies that figure out how to create value by incentivizing other individuals or organizations to engage in development on their platform. In some way, right, as you said, this is kind of the Apple model. Going it alone and trying to do the full stack yourself is probably not a winning proposition for the complexity of the industrial environment that a company like WellAware is trying to serve. So we wish you the best in executing that next stage of building out your platform.

[Matt]

There's a lot of foundational infrastructure that's been built that we want to make available to the market and allow for much more rapid deployment of solutions. We have a limited set of developers, and our customers and the market don't have a limited set. So we are in the process of opening that up and making it a full, shared, open community platform, which is exciting.

[Erik]

Let's say Customer A builds something for a particular type of asset. Would you then be able to, say, assess somehow that algorithm or source code in order to deploy it for another customer? Is that the case?

[Matt]

We'd be able to certify it and make sure that it is functional. And then there's an opportunity to potentially leverage that code: if that customer contributes it to the open environment, then yes, there are other customers that could absolutely leverage it.

[Erik]

Okay, great. One final question here: is there also a monetization aspect for the developers? I guess if it's an oil and gas company, they're maybe not worried about monetizing this, but if it's an ML company that maybe wants to use WellAware to build an algorithm, do you have an aspect where somebody that develops code could then monetize through WellAware?

[Matt]

That is the ultimate intent. Yes, we are not doing that today, but I don't think we're very far off at all. That would be a 2021 milestone achievement for us.

[Erik]

Okay, cool. Let's go into one or two case studies here. It'd be great to have an end-to-end perspective: who did you first start talking to at the client? Did you do a pilot for them? And then kind of walk us through to operations.

[Matt]

A couple come to mind that are some of my favorite ones. One of them is with the largest steel manufacturer in the U.S., so I'll start with a non-oil-and-gas application. We're installed at U.S. Steel. They've got a very large, very historical plant in Gary, Indiana. It's 10 square miles, so it's a very large plant, and it's been around for over a hundred years; it's actually originally a Carnegie site. So again, it has a lot of historical reference and relevance to it. We were contacted by U.S. Steel through one of our partners, and they basically were really struggling, because the infrastructure of that plant had become pretty aged, pretty antiquated; a lot of the original sensors and infrastructure they had put out there had also become antiquated. And so they were having some pretty catastrophic negative outcomes associated with gas distribution and power substations across that 10-square-mile plant, of which there are a lot of both.

And so what they wanted was a much more high-resolution monitoring and control capability across all their gas distribution, and also across all their power substations. So we began talking to them, we went out there, and we installed at a pilot location for gas distribution. We were hooking into an existing gas sensor. And I'm going to get into a little bit of detail here, because it's fun, and it'll show you how the platform works, Erik. We were told, when we showed up on site, that the sensor we were going to connect to was a pressure sensor. Okay; that would be very straightforward for WellAware. You go through all the certifications to get on site; you go through the on-site orientation to actually physically step foot on an industrial location like U.S. Steel. Our team goes out to this first proof of concept, and we walk up to the sensor, and we realize: gosh, that's not a pressure sensor.

That's actually a gas flow sensor. Any of our competitors, any of the old legacy players that are out there today, would have packed their bags up at that point and headed home. They would have probably yelled at everybody and said, hey, you gave us bad information. Well, that's just what real life looks like out in the field, and we're used to that; we have a lot of scar tissue. So we made a mobile configuration right on the spot. We changed our edge device to be able to accept gas flow instead of pressure. We made the install, got the mechanical and the electrical connections, and all within a matter of five to seven minutes, we showed our customer, standing behind us (we had a little bit of an audience), the gas application. So we were monitoring real-time gas flow.

And unfortunately, one of the guys behind us said, oh, that's not what we wanted to see. We're not interested in real-time gas flow; we're interested in accumulated gas flow. You see, right there on the display on that old sensor, it actually accumulates the gas flow for them. Well, the problem is there's no electrical interface, no electrical output, from that legacy sensor that would give us an accumulated gas flow. So right there on the spot, we had our team in the cloud build a gas accumulator. Within a couple of minutes, we were taking the real-time gas flow that we had just tapped into, we had built a gas accumulator in the cloud, and that gas accumulator was then giving the customer the complete totalized gas flow. They loved it. Now, we had one problem in order to get that level of resolution.

We had to backhaul data every second, and that would be very expensive, obviously. So within two days, our team wrote an algorithm that could reside on our edge intelligence, pushed it over the air, and updated our unit, which was left behind, with a localized gas accumulator. That's pretty cool. So U.S. Steel loved it. They rolled it out across all their gas distribution. We did something very similar for their power substations, which they had very little visibility into. They were having leaks into their power substations, which was causing the substations to go down, and then they were having very expensive downtime on their factory floors. So that's one example that I love; it speaks a little bit to the versatility and the flexibility of the WellAware platform. And now we've got the opportunity to continue to expand that across many, many more sites, not just for that customer, but for very similar customers.
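The edge totalizer Matt describes, integrating instantaneous flow into an accumulated volume so only a running total needs backhauling instead of a reading every second, can be sketched as follows. This is a minimal illustration of the idea, not WellAware's actual algorithm; units and sample rates are assumptions:

```python
class GasTotalizer:
    """Integrate instantaneous flow readings into an accumulated volume.

    Flow is assumed in units per minute; samples arrive with epoch timestamps.
    Running this on the edge means only the running total needs to be
    backhauled periodically, instead of every one-second sample.
    """

    def __init__(self):
        self.total = 0.0
        self._last_t = None

    def update(self, t_seconds, flow_per_min):
        if self._last_t is not None:
            dt_min = (t_seconds - self._last_t) / 60.0
            self.total += flow_per_min * dt_min  # rectangle rule; trapezoid would be finer
        self._last_t = t_seconds
        return self.total

tot = GasTotalizer()
for t in range(0, 61):                  # one reading per second for a minute
    tot.update(t, flow_per_min=120.0)   # constant 120 units/min
print(round(tot.total, 6))              # 120.0 units accumulated over 60 seconds
```

The same structure generalizes to any rate-to-total conversion where the sensor only exposes an instantaneous value.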

Another application, another case study; this one actually just came in Saturday night, so I'll share it with you. I got a panic call from one of our customers, who said, hey, look, we're in the city of Houston, there's a water treatment facility here, and we believe we're potentially going to be dealing with chlorine gas emission. So we need real-time monitoring of the air filtration platforms associated with ensuring that we don't get chlorine gas releases. And he wanted WellAware to come help monitor the equipment, monitor temperatures, et cetera. So we literally took that phone call on the 4th of July, Saturday night, and it was successfully installed at multiple locations this morning, all within 72 hours. There just aren't people who can do that, Erik. That's the platform being built and forged and tested, being highly configurable and remotely provisionable; that just allows us to move very quickly with our customers.

Houston is the fourth-largest city here in the U.S., so we'll now have the opportunity to expand out across a lot of their public utility infrastructure. That's just another example outside of oil and gas that we recently did, over the weekend. One other one that I love to point to: we do work with the largest upstream, midstream, and downstream companies, helping them optimize their asset integrity programs. We work with pretty much every major upstream, midstream, and downstream operator in some capacity, in partnership with a variety of different service providers, and we're ensuring for those companies that they are getting the right amount of chemical treatment, so that they're not experiencing corrosion or scale buildup. We're controlling pumps, we're monitoring tanks, and we're running real-time control algorithms on the edge, in the field, on changing variables, because that's what's required to get to an ideal treatment solution.

And you just can't get that accomplished by sending a person out there at any frequency, which is the current state of the industry today. WellAware does that across thousands and thousands of sites here in the U.S., and increasingly now in other countries. So we're learning as we go. It's not perfect; we've always had issues, and I like to say we have a lot of scar tissue along the way. But we've been very fortunate to have earned the trust of some of the largest Fortune 100 and Fortune 500 companies in the U.S., and I think we've got the opportunity to expand that now, both stateside and internationally as well.

[Erik]

So it sounds like the deployments are quite quick; at least the timelines in these two examples were very, very quick. But I assume there's also a fair amount of customization needed. For the two examples that you just gave, or maybe more generally, what would be a typical timeline from, let's say, a first site visit to having an operational deployment across the facility?

[Matt]

There are absolutely exceptions, some faster, some slower. Doing multiple sites in less than 72 hours is fast, and that was something that was very fresh on my mind, something I was very proud of our team for doing. It was a safety issue, and one that I was really proud we responded to. But I would say typically, particularly for larger installations of hundreds of sites, it takes us between four to six weeks once we receive an order, once the customer has provided us with a general idea of what the infrastructure and the machines and the sensors look like that we're going to be installing on, which, by the way, is many times absent some detail. And so when we get out there and start installing for them, one of the value-adds for our customers is really getting the WellAware inventory of what they've got out there.

So it's an asset inventory solution in addition to everything else. But typically it takes four to eight weeks. We carry buffer inventory and stock. Arrow Electronics, which is a very large company, is our manufacturing partner, and so we can scale up very quickly with them. They build our edge equipment units for us, and we get them provisioned and out there. So that's a typical timeline. It really just depends on the machine, the application, the customer, and the location, but very seldom does it take longer than eight weeks.

[Erik]

Gotcha. And are you doing all of the hardware deployment yourselves, the integration work?

[Matt]

We have authorized technicians. We also like to train our customers, and so a lot of our customers become their own installers. We have a client success team that has incredible training and tools and videos on how to set everything up and how to configure everything. Our customers download our mobile app, which has setup wizards and provisioning wizards built into it. So we've really made it pretty straightforward and simple for customers, and that's how you get to those weeks of install time versus what the industry is used to seeing, which is months and sometimes years. So again, Erik, we're tired of watching this very painful, bloated value chain that exists of legacy automation equipment: customers feeling like they have to build their own telemetry networks and manage and maintain them, buy enterprise software for SCADA, another enterprise software for workflow and ticket management, another enterprise software for historians. We're just compressing that very legacy, bloated value chain, and our customers really appreciate it. Now, we also work with whatever existing installs they have, so they don't have to throw the baby out with the bathwater. We're going to come in and work with what they've got, and usually when we show them how easy it is and what it looks like, they like to give us a little bit more opportunity to expand.

[Erik]

Great. Matt, I really appreciate you taking the time to walk us through the business. I think this is a fascinating company that you're running. This trend away from kind of isolated, functional software towards more flexible software is a trend we're paying very close attention to, because you've just given us a good walkthrough of the challenges that companies have in managing the cost structure and the complexity of these isolated products. Is there anything else that you wanted to quickly share with the audience or discuss before we call it a day?

[Matt]

Look, I appreciate you guys having us on. And, you know, I am in this with every single person that's stuck with us this far to listen to the podcast. If you're working in the IoT space building solutions, I'm going to encourage you to keep doing that. I think the opportunity is substantial. It has been hard for WellAware, and we're starting to see really the rewards from a lot of the work that we've put in. So I just encourage you to hang with it. For the customers that are out there: partner with your vendors, partner with your suppliers, share the outcome information that you're trying to get to upfront, so that together you can work on projects that are successful. You know, I always walk around WellAware, which, by the way, is the third name of the company, Erik; it's the only one I didn't name, but I love the name.

It just means informed. It works extremely well for oil and gas, obviously, but it just means informed, and so it really fits our ethos, our strategy, and our mission. We're here to connect people to the things that matter. We are here to make it easy. And I'll tell you, it's funny: if I just think through the history and the last seven years of WellAware, I used to talk about our hardware and our software a lot, our widgets and our things. And it's taken me a while to realize our customers really don't care about that. They really just want the outcomes. And I've also learned that it's very complex to make things simple. It takes time. And so we've been working on this, and we've been learning through doing some things right and doing some things wrong.

And, you know, I think now we're getting to the point where it's just simple for customers. Even the business model, as you've heard, is getting much more highly simplified, and I think that's what we're going to need to really get industrial IoT market adoption to the place it needs to be. I hear people talking about IoT in a state of pilot purgatory. That's because people are selling tools and saying, hey, good luck, go implement, go figure it out. Our experience has been that when you partner with your customer, you're in it for the long haul, and you're incentivized along with them on the outcomes, it's a much more successful experience. I think we're all working in areas that are very exciting, and I just want to encourage everybody to keep pushing and keep developing. I really want to thank you again for having us join you. It's been a wonderful conversation; you've asked some great questions.

[Erik]

Great. Well, Matt, I really appreciate you taking the time, and I wish you and WellAware the best in the future.

[Matt]

Thank you, Erik. I appreciate it. Take care.

[Outro]

Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at @IoTONEHQ and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com.

Read More
EP066 - Event streaming architectures enabling IoT applications beyond messaging - Kai Waehner, Enterprise Architect, Confluent
Tuesday, Jul 14, 2020

In this episode, we discuss event streaming technologies, hybrid edge-cloud strategies, and real time machine learning infrastructure. We also apply these technologies to Audi, Bosch, and Eon. 

 

Kai Waehner is an Enterprise Architect and Global Field Engineer at Confluent. Kai’s main area of expertise lies within the fields of Big Data Analytics, Machine Learning, Hybrid Cloud Architectures, Event Stream Processing and Internet of Things. References: www.kai-waehner.de

Confluent, founded by the original creators of Apache Kafka®, pioneered the enterprise-ready event streaming platform. To learn more, please visit www.confluent.io. Download Confluent Platform and Confluent Cloud at www.confluent.io/download.

_________

Automated Transcript

[Intro]

Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today will be Kai Waehner, Enterprise Architect and Global Field Engineer with Confluent. Confluent is an enterprise event streaming platform built by the original creators of Apache Kafka for analyzing high data volumes in real time. In this talk, we discussed event streaming at the edge and in the cloud, and why hybrid deployments are typically the best solution. We also explored how to monitor machine learning infrastructure in real time, and we discussed case studies from Audi, Bosch, and E.ON. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com. Thank you.

[Erik]

Kai. Thank you so much for joining me today.

[Kai]

Thanks for having me, Erik. Great to be here.

[Erik]

So Kai, before we kick off, the discussion here is going to be a little bit more technical than usual, which I'm looking forward to. But before we get into the details, I want to learn a little bit more about where you're coming from. I think you've had some interesting roles. You're currently an Enterprise Architect and Global Field Engineer, so I'd actually like to learn what exactly that means. And previously you were a Technology Evangelist, both with your current company, Confluent, and also with TIBCO Software. So I also want to understand a bit more about what that actually means in terms of how you engage with companies. But can you just give a quick brief on what it is that you do with Confluent?

[Kai]

Yeah, sure. So I'm actually working in an overlay role. That means I speak to really 100 to 150 customers a year, and if there is no travel ban, I really travel all over the world, and IoT and industrial IoT is a big topic of that. I talk to these customers really to solve their problems. So while it's technology under the hood, we try to solve problems; otherwise there is no business value out of it. And I think that's what we will also discuss today. What I really do is analyze the scenarios where our customers have challenges and problems, and how we can help them with event streaming. That's what we are going to talk about today. My history and my background is that I've worked for different integration vendors in the past, and this is also very similar to what I do today with event streaming. The key challenge typically is to integrate with many different systems and technologies. This is machines and real-time sensors and so on on the one side, but also the traditional enterprise software systems on the other side, both for IoT, like an ERP system, but also customer relationship management or big data analytics. And that's really where I see the overview of these architectures and how event streaming fits in.

[Erik]

Okay, so you have kind of a technical-business interface role, where you're trying to understand the problem and then determine what architecture might be right to support that.

[Kai]

So I'm really exactly at this middle point; I talk to both. Even to the executive level, but then also to the engineers on the other side, who need to implement it.

[Erik]

During those initial conversations, how much do you get into the completely nontechnical topics, about how an end user might potentially put in bad data, or these almost HR topics, topics related to, let's say, the completely human aspect of how a solution might be used? Do you get into those early on, or is it more that once you get into implementation, you figure out what those other challenges would be and address them as you go?

[Kai]

No, it's really early stage. I mean, we talk to our customers on different levels, both on the business side and on the technical side. Before we really have something like a pilot project or proof of concept, we have already talked to many different people from every level, from very technical to management and so on, to understand the problem. So we plan this ahead of time. It's not just about the technology and how to integrate machines and software, but really how to process data, and what's the value out of that.

[Erik]

And then do you have a very specific vertical focus, or are you quite horizontal in terms of the industries that you cover?

[Kai]

We are not industry-specific. Event streaming to continuously process data is used in any industry. However, having said that, with the nature of machines in industrial IoT producing continuous sensor data all the time, and more and more big data with that, industrial IoT is of course one of the biggest industries. But it's really not limited to that. We're also working with banks, insurance companies, and telcos. In the end, they have very different use cases, but under the hood, from a technology perspective, it's often very similar.

[Erik]

Yeah. One of the issues that's both interesting, but I suppose also challenging, is that there's almost an infinite variety of things that you could analyze in the real world, right? But I suppose there's also some kind of 80/20 rule. Is it the case that there's a short list of five or ten use cases that constitute 80% of the work you do, or is it actually much more varied than that?

[Kai]

It really varies, and it depends on how you use it; that's what we will discuss later today. In some use cases really all the data is processed for analytics, for example the traditional use cases like predictive maintenance or quality assurance. But as more and more of these industrial solutions produce so much data, sometimes the use case is more technical: you deploy the solution at the edge, in the factory, to pre-filter, because it's so much data that you don't process all of it. So the event streaming gets the sensor data, pre-filters and preprocesses it, and then ingests maybe 10% of it into an analytics tool for more use cases. So it's really many different use cases, but in the end it's typically about getting some kind of value out of the data. I think that's really the key challenge today: most of these factories and plants produce more and more data, but people cannot use it today. And that's where we typically help to connect these different systems.
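The pre-filtering idea described here can be sketched in a few lines of plain Python, with no broker involved. All field names and the threshold below are invented for the example; they are not a Confluent or Kafka API. An edge process drops routine readings and forwards only the small fraction worth sending on to an analytics backend:

```python
# Illustrative edge pre-filter: forward only "interesting" sensor events.
# The threshold and field names are made up for this sketch.

def edge_filter(events, temp_limit=80.0):
    """Keep only readings that exceed a temperature limit."""
    return [e for e in events if e["temp_c"] > temp_limit]

readings = [
    {"machine": "press-1", "temp_c": 45.2},
    {"machine": "press-1", "temp_c": 91.7},  # anomaly
    {"machine": "press-2", "temp_c": 50.1},
    {"machine": "press-2", "temp_c": 84.3},  # anomaly
]

forwarded = edge_filter(readings)
print(len(forwarded), "of", len(readings), "events forwarded")
```

In a real deployment this filtering would run inside a stream processor at the edge, and only `forwarded` would be replicated to the cloud, which is where the "maybe 10%" figure comes from.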

[Erik]

And I know Confluent, is it right to say it's built on Apache Kafka, or that's the solution that you use? Can you just describe to everybody: what is Apache Kafka?

[Kai]

That's a good point, and that also explains how this is related. Apache Kafka was created at LinkedIn, the tech company in the U.S., around 10 years ago. They built this technology because there was nothing else on the market that could process big data sets in real time. We have had integration middleware for big data for 20 years on the one side, and we have had real-time messaging systems for 20 years, but we didn't have technologies that could combine both. That's what LinkedIn built 10 years ago, and after they had it in production, they open sourced it. This is exactly what Apache Kafka is: it can continuously process millions of data sets per second, at scale, reliably. When they open sourced it, in the first few years only the other tech companies used it, like Netflix or Uber or eBay.
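To make "continuously process events" concrete, here is a toy, in-memory analogue of Kafka's core abstraction: an append-only log that producers write to and consumers read from at an offset of their choosing. This is a sketch of the idea only, not the real Kafka protocol or client API:

```python
class Log:
    """Toy append-only event log, loosely analogous to a Kafka topic partition."""

    def __init__(self):
        self.records = []

    def produce(self, value):
        """Append a record; return its offset, as a Kafka producer ack would."""
        self.records.append(value)
        return len(self.records) - 1

    def consume(self, offset):
        """Return all records from `offset` onward; the log itself keeps everything."""
        return self.records[offset:]

log = Log()
for v in ["start", "temp=71", "temp=73"]:
    log.produce(v)

# A consumer that has already processed up to offset 1 picks up only the rest.
print(log.consume(1))  # ['temp=71', 'temp=73']
```

The key property this models is that the log retains events after delivery, so many consumers can read the same data independently, which is what separates Kafka from a classic message queue.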

However, because there was nothing else on the market and there was a need for this kind of data processing all over the world, in all industries, today most of the Fortune 2000 use Apache Kafka in different projects. With that in mind, five years ago Confluent was created by the founders, who were the creators of Apache Kafka. They got venture capital from LinkedIn and from some Silicon Valley investors and founded Confluent with the idea of making Kafka production-ready. The tech companies often can run things by themselves, but Confluent really helps to improve Kafka and build an ecosystem and tooling around it, and of course also adds the services and support so that, as I always say, the traditional company can also run mission-critical workloads with Kafka, because they need help from a software vendor.

[Erik]

Okay, very interesting. So this is a little bit the Red Hat business model, right, building enterprise solutions on top of open source software. It seems like that's kind of a trend, right? Because open source has a lot of benefits in terms of being able to debug and so forth, but at some point people don't want to figure it out for themselves. They need a service provider.

[Kai]

Yes, and that's exactly how it works. It's exactly like Red Hat. The idea is really that everybody can use Kafka, and many people use it even for mission-critical workloads without any vendor, because they have the expertise themselves. On the other side, these tech companies like LinkedIn also contribute to this framework, because it's an open framework; everybody contributes and can leverage it. And that's exactly what we are doing too: we do most of the contributions to Kafka, with many full-time committers just for this project. But in addition to that, in the real world, like in industrial IoT, you also get questions about, for example, compliance and security and 24/7 operations and guarantees, and there's a place for that. Traditional companies, like in industrial IoT, simply have different requirements than a tech company that runs everything in the cloud. This is exactly where Confluent comes in: to provide not just a framework and support, but also the tooling and expertise so you can deploy it according to your SLAs and your environment, which can be anywhere, in a factory, hybrid, or in the cloud.

[Erik]

Okay, very interesting. Well, let's get into the topic then a little bit here. Maybe a starting point is just the question: what is event streaming? We have a lot of different terminologies around analytics. I guess people use "real-time analytics" a lot, and I think you also use that terminology on your website to some extent. How would you compare real-time analytics to event streaming?

[Kai]

Comparing just those two terms is really very important, because there are so many terms that overlap, and often different vendors and projects use the same word for different things. This is really one of the key lessons learned from all my customer meetings: define the terms in the beginning. So as I explain it, event streaming is really to continuously process data; that's the short version. This means some data sources produce data, and this can be sensors for real-time data, but it can also be mobile, where you get a request from a click on a user button. It's an event which is created, and then you consume these events and continuously process them. That's the main idea. Other terms for this are real-time analytics, or stream processing, or streaming analytics. But the really important point is that it's not just messaging; that's why I sometimes get upset when people say Kafka is a messaging framework. That's really the key point here. Yes, you can send data from A to B with Kafka, and people use it for that a lot, but it's much more, because you can also process the data, and you can build stateless and stateful applications with Apache Kafka. That's really the key difference. So in summary, Kafka is built to continuously integrate different systems, real-time, batch, and other communication paradigms, and process the data in real time, at scale, highly reliably. That, in the end, is what I mean by event streaming.
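The distinction between mere messaging and stateful stream processing can be sketched without any broker: the processor below keeps running state (a count and mean per machine) while consuming events one at a time. In a real deployment this is the kind of job Kafka Streams or ksqlDB does; the code is a plain-Python stand-in, not their API:

```python
from collections import defaultdict

# Stateful stream processing sketch: running mean temperature per machine.
state = defaultdict(lambda: {"count": 0, "mean": 0.0})

def process(event):
    """Consume one event and update per-key state incrementally."""
    s = state[event["machine"]]
    s["count"] += 1
    # Incremental mean update, so no event history needs to be stored.
    s["mean"] += (event["temp_c"] - s["mean"]) / s["count"]

stream = [
    {"machine": "m1", "temp_c": 70.0},
    {"machine": "m1", "temp_c": 74.0},
    {"machine": "m2", "temp_c": 65.0},
]
for ev in stream:
    process(ev)

print(state["m1"]["mean"])  # 72.0
```

A pure messaging system only moves each event from A to B; the point of the sketch is that the consumer also accumulates state across events, which is what "stateful application" means here.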

[Erik]

Okay, great, that's very clear. And then there's another term, which is maybe not as common: event-driven architecture. Are you familiar with this? Would you say it's another thing that overlaps heavily with event streaming, or is it a particular flavor, or what would be the difference there?

[Kai]

Yeah, it totally overlaps. Event streaming is more a concept, and event-driven architecture, as the name says, is the architecture behind it. How it works in the end is that you really think in terms of events, and an event can be a technical thing, like a log event from a machine, or it can be a customer interaction from the user interface. All of these things are events, and then you process them event-based. This is really key to the foundation, and definitely important to understand, because no matter whether you come more from the software business or more from the industrial IoT and OT business, in the past 20 years you typically stored information in a database. In the beginning it was something like an Oracle database or a file system; today you talk more about big data analytics or cloud services, but the big point here is that you always store the data in a database, and then it is at rest, and you wait until someone consumes it with [inaudible] or with another client. For many use cases this is really more or less a too-late architecture. What event streaming and event-driven architectures do is allow you to consume the data while it's in motion, while it's hot. This is especially relevant for industrial IoT, where you want to continuously process, monitor, and act on sensor data and other interactions. And this is really the key foundation, and the difference from traditional architectures with databases and web services and all these other technologies you know from the past.

[Erik]

And that maybe brings us to, let's say, the first deep-dive topic of the conversation, which is event streaming at the edge versus hybrid versus cloud deployments. You just mentioned that there are certainly unique requirements around, for example, an autonomous vehicle, where a tenth of a second can be quite impactful. My assumption is that, obviously, you can deploy this across environments, but it was initially developed primarily for cloud deployment, so I assume the edge deployments are significantly more challenging, just given the architecture of limited compute capacity and so forth. How do you evaluate deployments across these edge, cloud, and hybrid options?

[Kai]

Yeah, that's a very important discussion. Actually, in the beginning, yes, Kafka was designed for the cloud, because LinkedIn built it, and that's the big advantage of all these tech companies: they build new services completely in the cloud, and most of them focus just on information, right? It's not a physical thing like in industrial IoT, so it's very different. But even at that time, cloud 10 years ago was very different from today. Even then you had to spin up your machines in the cloud; on AWS, for example, you spin up a Linux instance, and therefore it's not that different from an on-premise deployment. With that in mind, today of course you have all the options. At Confluent, on the one side, we have Confluent Cloud, which is a fully managed service in the cloud that you use in a serverless way, so you don't manage it.

You just use it. However, having said that, 90% or so of Kafka deployments today are self-managed, and not just in the cloud but really on premise, either in data centers or at the edge. This is especially true for industrial IoT, where you want and need to do edge processing directly in the factory. So with all that in mind, there are all these different deployment options. We have use cases with just edge analytics and processing in a factory, for use cases like quality assurance in real time. But then in industrial IoT we also see many hybrid use cases, where on the one side you do edge processing, as I mentioned before, either just for preprocessing and filtering or maybe even building business applications at the edge, but then you also replicate data to another data center or to the cloud for doing the analytics. This is all very complementary, and especially in industrial IoT it's really common that a use case will have a hybrid architecture, because you need edge processing for some things. This is not just for latency but also for cost: people often learn the hard way how expensive it is to ingest all the data into the cloud and process it there, especially if you really want to see all the sensor data before you delete it again. Therefore these hybrid use cases are the most common deployments we see in industrial IoT.

[Erik]

Yeah. Actually, let's see, was it last week or two weeks ago? I was on the line with the CTO of FogHorn. Are you familiar with the company FogHorn?

[Kai]

I even listened to it.

[Erik]

Oh, okay, great. So one of the things they were emphasizing was, let's say, the challenge of doing machine learning at the edge, just due to the compute power there. How do you view this? Let's say you're in a conversation with a client and the client is discussing their business requirements. How do you assess what is actually possible to do at the edge? And where at the edge are we talking about: at the sensor, which maybe has very limited compute, or at the gateway, or at the local server? How do you drive that conversation to understand what is possible from a technical perspective based on their business?

[Kai]

Yeah, that's a good question, and this discussion of course always has to be done. We really start from the business perspective: what's your problem and what do we want to solve? Then we can dive deep into what might be a possible solution, or maybe there are different options for you, not just one thing you have to do. If you do want to do predictions with machine learning and AI and all these buzzwords, typically there is a separation between model training, which means taking a look at the historical data to find insights and patterns, and then deploying this model somewhere for doing predictions. That is the most common scenario we see: these are separated from each other. So on the one side you typically ingest the sensor data into a bigger data lake or store, where you do the training to find insights.

And this can be in a bigger data center, right? You need more compute power, and often this is in the cloud. That's the one part, where you really need more infrastructure, so you cannot, and often shouldn't, do this directly at the edge device, which is smaller. But when you have done the training somewhere bigger, with more compute power, then the model scoring, the predictions, really depends on the use case, but this can be deployed much closer to the edge. Here we see different scenarios, depending on the use case: you can either do the predictions in the cloud or in the data center too, or you can embed the model into a lightweight application. So from a technology perspective, the model training is done, for example, in a big data lake like Hadoop or Spark, or with cloud machine learning services; there are many options there. Then for the model deployment, this can either be a Java application, for example, which is really scalable in a distributed system, or on the other side you can use, for example, C or C++ with a Kafka client from Confluent and deploy it really at the edge, like on a microcontroller, if it's very lightweight. This of course also depends on the machine learning technologies you use, but most modern frameworks have options here. To give you one example, we see a lot of demand for TensorFlow, one of these cutting-edge deep learning frameworks, which was released by Google. Here you also have different options: you can train a model and deploy it, and it's then too big and really has to be deployed in a data center, or on the other side you can use TensorFlow Lite and export it, then run the model in a mobile client with JavaScript, or really on an embedded device with C. So you have all these options, and it depends on the use case.
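The train-big, deploy-light split can be illustrated without TensorFlow. In the sketch below, "training" fits a trivial model, the exported artifact is just the fitted coefficients (loosely analogous to a TensorFlow Lite export), and "edge scoring" is a tiny function that needs only those numbers. All data and names are invented for the example:

```python
import json

# "Training" side (data center): fit y = a*x + b by least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # exactly y = 2x + 1
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# "Export": the artifact shipped to the edge is only the coefficients,
# far smaller than the training data or the training environment.
artifact = json.dumps({"a": a, "b": b})

# "Edge scoring": load the lightweight artifact and predict.
model = json.loads(artifact)

def predict(x):
    return model["a"] * x + model["b"]

print(predict(10.0))  # 21.0
```

The point of the analogy is the asymmetry: training needs the full data and heavy compute, while scoring needs only a small serialized artifact, which is why it can live at the edge.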

[Erik]

And I guess right now, from a fundamental technology perspective, we have trends moving in both directions that are making it easier to do compute at both levels of the architecture. You have improving hardware at the edge, so greater compute power there. You also potentially have 5G, and maybe people would disagree with this, making it more cost effective to move data to the cloud, or if not more cost effective, at least giving better latency and bandwidth, which would allow you to do more of those real-time solutions without computing at the edge. Do you see any trend, based on the underlying technology dynamics, that would drive us toward doing more work at the edge or more work in the cloud? Obviously it's still going to be hybrid, but do you see a direction one way or the other?

[Kai]

Actually, no, because it really depends on the use case. And it's also important to define these terms, like what "real time" even means, right, because there are different opinions on that. But in general I can give you one example of why it will always be in this mixed state. If you have different plants all over the world, on the one side you want to do real-time analytics, like predictive maintenance or quality assurance, and those things should happen at the edge. It doesn't make sense to replicate all this data to the cloud to do the processing, for latency and for cost reasons; the ingestion alone means it's always more expensive to first send the data somewhere else and then get it back. That's expensive from both a cost and a latency perspective, so you want to do this analytics at the edge, at the factories.

However, having said that, in this case model training, or doing other reports, or integrating with other systems, or correlating data between different plants to answer questions like: we have one plant in China and one in Europe, so why is the same plant in China much more problematic? Then you have to correlate information to find out what the different temperature spikes and the different environments are. This doesn't make sense at the edge, because you need to aggregate data from different sites, different regions, and here typically the cloud is the key trend, because there you can elastically scale up and down and integrate with new interfaces. For this you want to be in the cloud, to replicate data in from many different other systems. So I think the trend is that maybe two or three years ago, everyone talked about getting everything into the cloud, and of course even the cloud providers wanted that.

But now the trend is to do it in a hybrid way: cloud for some use cases, but also edge for others. The best proof of this is to take a look at the big cloud providers. Amazon, Microsoft, Google, and Alibaba all started with the story of putting everything into the cloud and doing all your IoT analytics there. But today all of these vendors also release more and more edge processing tools, because it simply makes sense to have some things at the edge.

[Erik]

Okay, great. That's actually a good transition to the next topic, which is event streaming for real-time integration at scale. What type of integration are we talking about? Are we talking about integrating data, or integrating systems?

[Kai]

That's a good question, and actually it can be both. First of all, to clarify: Kafka, or Confluent, doesn't do everything. What Kafka really is about is event streaming, and that includes integration and processing of data. But typically, especially in industrial IoT environments, Kafka also complements other solutions. If you're in a plant and want to integrate all these machines, or even connect directly to PLCs, you have different options: you can do direct integration to a PLC, something like a Siemens S7, or to Modbus, or you use a specific tool for that. To give you one specific example, in Germany of course people use a lot of Siemens, so they have Siemens S3, S5, or S7 PLCs, and therefore you could use an IoT solution like Siemens MindSphere, which was built exactly for integrating these kinds of machines.

On the other side, that is probably not the best solution for integrating with the rest of the world, meaning your customer relationship management system and other databases and data lakes or cloud services. Therefore, in most industrial IoT cases, Kafka really complements other IoT platforms. It's more about the data integration and not so much about the direct system integration. But having said that, you can do it: we have customers that directly integrate to PLCs and machines, and on the other side also directly integrate to MES and ERP systems like SAP, for example. This is always something you have to discuss in a deeper dive. There are all these options, and that's the great thing about Kafka and why people use it: it's open and flexible, and you can combine it with other systems. It's not a question of one or the other.

And one last side note that might be interesting for the listeners: many of the modern MES and ERP tools also run Kafka under the hood, because the software vendors, these enterprise vendors, have also understood the value of Kafka and build their systems on it, because their systems have the same needs. The legacy approach of storing everything in a database with web services, like REST or SOAP web services, does not work for these new data sets, which are more real-time and bigger. And that's the approach we now see everywhere.

[Erik]

I guess at the IT level, integration is typically quite feasible. At the OT level, at least my understanding is that we still have some challenges around data silos that companies put up in order to protect market share. Do you see any trend here toward opening up the OT level to make integration across vendors easier? Or let me ask: when you're looking at a deployment, how significant a challenge is this? Is it something where you can always find a solution and it's just a matter of putting in a bit of extra time, or is it a significant challenge?

[Kai]

It's definitely one of the biggest challenges, and that's why people want to solve it. As I said in the beginning, when we talk to customers, their biggest problem is not having access to that data, because it's proprietary, because it's not accessible. With newer infrastructures, the vendors in the end are forced to use standards like OPC UA or [inaudible]. They don't want to do that, but otherwise the customers would really get in trouble, so the software vendors have to go in this direction a little bit. On the other side, as I said, there are technologies to integrate directly with PLCs too, for example if you want a quick win: if you want to say, I have all these machines in my plant and I just want to get data out of them to monitor them and get reports, then you can also connect to the PLCs, something like a Siemens S7. Having said this, getting all this data out is definitely the biggest challenge. However, this is also often why people come to us, because they say: it's okay for me to do the last mile with a proprietary solution like Siemens MindSphere, but we are a global vendor, all over the world, with many different technologies; we cannot use every proprietary vendor everywhere, because that simply doesn't scale. What makes Kafka so strong is that on the one side you can integrate with all the systems, but you also decouple all of them from each other. This means on one side you might have some Siemens, on another side some GE or whatever, and elsewhere direct integrations. You can integrate with all of that, then correlate all these different information streams and also combine them with your MES or your ERP system, or with your data lake. This is what makes Kafka so strong: it's open and flexible in how you integrate it and what you use for the integration, either directly or with a complementary tool.
And this is why we see Kafka used in IoT, but also in general for these use cases, because you can integrate with everything while still staying open and flexible.
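One way to picture this decoupling: each proprietary source gets a small adapter that maps its payload into one common event shape before anything downstream sees it. The vendor payloads and field names below are invented for illustration, not real Siemens or GE formats:

```python
# Hypothetical raw payloads from two different vendors' systems.
siemens_raw = {"s7_tag": "DB1.TEMP", "val": 71.5, "ts": 1700000000}
ge_raw = {"sensor": "temp", "reading": 72.3, "time_ms": 1700000001000}

def from_siemens(p):
    """Adapter: vendor A payload -> common event schema."""
    return {"metric": "temperature", "value": p["val"], "ts": p["ts"]}

def from_ge(p):
    """Adapter: vendor B payload -> common event schema (ms -> s)."""
    return {"metric": "temperature", "value": p["reading"], "ts": p["time_ms"] // 1000}

# Downstream consumers (MES, ERP, data lake) all see one schema
# and never need to know which vendor produced the event.
events = [from_siemens(siemens_raw), from_ge(ge_raw)]
print(all(set(e) == {"metric", "value", "ts"} for e in events))
```

In a Kafka setup these normalized events would be published to a topic, so adding a new vendor means writing one more adapter, not touching any consumer.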

[Erik]

Yeah, I suppose that's the real value of open source here: you have a large community that's problem-solving and sharing the learnings, which you don't have otherwise. The next topic, and we've already touched on this a little bit, is the machine learning element. We've already discussed model training in a data lake that might be better hosted in the cloud and so forth. But maybe the interesting topic here is: when you're implementing machine learning and segmenting it between different areas of your architecture, how do you view, let's say, the future of machine learning for live data?

[Kai]

Yeah, that's a very good question, and it's often why we talk to people about this, because what we clearly see, and this is true for any industry, is that there is an impedance mismatch between the data science teams, which want to analyze data, build models, and do predictions, and the operations teams, which can either be in the cloud or in a factory where things are really deployed at scale. I have seen too many customers where they got all the data out of the machines into the cloud, so the data scientists could build great models, but then they could not deploy them into production anymore. Therefore you always have to think about this from the beginning: how do you give your data science people access to all the historical data, but also, before you even start, think about the SLAs for the later deployment. Does it have to be real time? What are the data sets, is this big data or small data? What are my SLAs? In production lines it's typically 24/7 mission critical, and then you configure Kafka differently than when you run it just in the cloud for analytics, where it's okay if it's down for a few hours. With this in mind, this is also why we see so much Kafka here, because there are huge advantages if you build this pipeline once with Kafka. Let's say you have Kafka at the edge to integrate with the machines, and then you also replicate the data to the cloud for analytics. This pipeline with Kafka is mission critical and runs 24/7. Kafka is built as a system that handles problems: even if a node is down, or there is a network problem, Kafka handles that.

That's how it's built by nature, as a distributed system. It's not an active-passive system, and there is no maintenance downtime; that doesn't exist in Kafka. And if you have this Kafka pipeline, you can use it for both. You can use it for the ingestion into the analytics cloud, where you use the data in historical mode, in batch, for training or for interactive analysis. But the same pipeline can then be used for production deployments, because it runs mission critical. Therefore you can easily use it also to do predictions and quality assurance, because these applications run all the time without downtime, even in case of failure. That's one of the key strengths: you can build one machine learning infrastructure for everything. Of course, some parts of the pipeline use different technologies, but that's exactly the key.

The data scientists will always use a Python client, right? They typically do rapid prototyping with tools like Jupyter and scikit-learn, and these are the frameworks data scientists love. On the other side, on the production line, you typically don't deploy Python, for different reasons: it doesn't scale well, and it's not as robust and performant. There you typically deploy something like a Java or C++ application. And with Kafka in the middle, which handles the back pressure and is also the decoupling system, you can use these different client technologies; the data scientists can use Python while the production engineers use Java, but you use the same stream of data for both.
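The Python-for-prototyping, Java-for-production split works because each consumer tracks its own position in the same stream. A minimal sketch of that idea in plain Python (this is not the Kafka consumer-group protocol, just the offset-per-consumer concept):

```python
# One shared stream, two independent consumers with their own offsets.
stream = ["e1", "e2", "e3", "e4"]

class Consumer:
    """Each consumer (think: data-science vs. production team) keeps its own offset."""

    def __init__(self, name):
        self.name = name
        self.offset = 0

    def poll(self, n):
        """Read up to n records from this consumer's current position."""
        batch = stream[self.offset:self.offset + n]
        self.offset += len(batch)
        return batch

analytics = Consumer("python-analytics")
production = Consumer("java-production")

analytics.poll(4)            # the analytics side reads everything at once
first = production.poll(2)   # production is still only at offset 2
print(first, analytics.offset, production.offset)
```

Because the stream is retained rather than consumed destructively, the two teams never interfere with each other, which is the decoupling described above.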

[Erik]

Do you get involved also in building the machine learning algorithms, or are you focused just on managing the flow of the data, where the client would have some system they're using to analyze it?

[Kai]

We are really building the real-time infrastructure, including data processing and integration, and then the data science teams, for example, choose their own technology. But this is also important to understand and point out: this is exactly the advantage, because all these teams are flexible and different. I actually had a customer call last week, and it is really normal that different teams use different technologies. In the past everybody tried to have one standard technology for this, but in the real world one data science team uses a framework like TensorFlow, and the other one says: no, I'm using Google's ML services with some other tools. And because Kafka in the middle is the decoupling system, you're also flexible regarding what technology you choose. Therefore the reality is that most of our customers don't have one pipeline where you send all data from A to B; you typically have many different consumers, and this can include analytics tools, where you're really spoiled for choice depending on your problem and use case.

[Erik]

Okay, interesting. I think the next topic we wanted to get into is use cases, and that's pretty important here, because it helps to understand how this is actually deployed. But before we go into some end-to-end use cases in detail, I have a bit of a tangent, a question that a number of companies have asked me recently, and I don't have a good answer, so I'm hoping you have a better one. Are there any use cases for 5G that really make sense in 2020, 2021? I've thought about this and talked to some people, and it seems like maybe augmented reality for industrial makes sense because of high bandwidth requirements and wireless solutions, and AGVs probably make sense once you make them more autonomous, because you have that same combination of latency, bandwidth, and wireless. But there don't seem to be so many yet.

And my hypothesis was that over time, as 5G gets deployed, maybe the OT architecture of factories will start to change. There will be fewer wires, you'll have the option to build greenfield somewhat more wirelessly, so that might change the architecture, and then people would develop solutions specifically for this new connectivity architecture. And then you might say, okay, now it's providing real value. But aside from AGVs and AR, I was a little bit at a loss to identify anything that is really highly practical in the near term. Is there anything you've come across where you said, yeah, 5G would really solve a real problem for one of your customers?

[Erik]

I think yes, because one of the biggest problems today is definitely network and data communication because today when I go to a customer for factory, which exists for 20 years and typically the integration mode, how we get the data from these machines is something like a windows server where you'll get connected and then you'll get a CSV file with the data from the last hour, because there is no better connectivity to integration. So I definitely think that in general, by their networks, I'm allowed to implement better architectures also for OT at the edge. But having said this, I also see these discussions about 5G with different opinions. So there was of course not just five cheaper, but also for factories. There is other standards and possibilities, how to do a network there. And also what I think if 5g gets into this industrial IOT, I'm I guess the bigger factories and so on, they will build a private five G networks for that.

[Kai]

So that's also possible, and I think that's great. What I don't expect to see — at least from my customer conversations — is what the cloud vendors want: that you directly integrate all these 5G interfaces from the edge with the cloud. That's probably not going to happen, because of security and compliance and all these kinds of things. But private 5G networks, I think, would be a huge step toward more modern architectures in OT. And that, of course, is then the building block for getting more value out of it, because today, again, the biggest problem in factories is that people don't get the data from the machines into other systems to analyze.

[Erik]

Okay, gotcha. And I guess in brownfield you'll still need some sort of hardware to deploy on the machine, but at least if you use 5G, you can then extract the data wirelessly. You could always just lay Ethernet, I suppose, right? But then that becomes a...

[Kai]

Yeah, exactly. I mean, those are just different options. You somehow need to get the data out of these machines and production lines into other systems, and it can be with Ethernet or it can be with 5G. What the best solution is depends on cost, scalability, and TCO.

[Erik]

Okay, great. Sorry for taking us down that tangent — let's go into some of these use cases. Let's see... actually, I won't mention any of these until you do; I don't want to throw out names. But there's a connected car infrastructure. Should we start there?

[Kai]

Yeah, that's a good first example, and it also relates well to the 5G question — let me explain. I think we can cover three or four use cases here, because what's important for me when I talk about event streaming is to really talk about different use cases, so that people see this is not just for one specific scenario. A connected car infrastructure is one great example we see at many customers, and Audi is one of them — the German automotive company. We started building a connected car infrastructure with them about four years ago. What they actually did is, they had the need to integrate with all the cars driving on the streets. They started with the A8, one specific, more luxury car, but they are now rolling it out to all the new cars. What happens is that all these cars connect, in the end, to a streaming Kafka cluster in the cloud, so that you can do data correlation in real time on all that data. From a use case perspective, there's demand for things like after-sales, right?

[Kai]

So they're always in communication with their customers, for different reasons. On the one side, they're sending an alert that the engine has some strange temperature spikes, and maybe the driver gets to the next repair shop. But also to keep the customer happy, to do cross-selling — or, if you know it from Tesla, you can even upgrade your car to get more horsepower. There are plenty of use cases. And then you can even integrate with partner systems: for example, a restaurant on the German Autobahn where you're driving, and you get a recommendation — if you stop at lunchtime at this restaurant, you get 20% off, these kinds of things. And you see, the added value here is really not just getting the data out of the car into other systems, but really correlating and using this data in real time, at scale, 24/7. That's exactly one of these use cases for Confluent — what we are doing. And from a technical perspective, these cars are of course using, in this case, 4G today. This is a great example where, if you have 5G, you can do many more things, because the data transfer from the cars is still the most limiting factor regarding cost and latency and all these things.
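The temperature-spike alert Kai mentions is essentially stateful stream processing. In production this would run as a streaming job over Kafka topics (e.g. with Kafka Streams or ksqlDB); the following is just a plain-Python sketch of the correlation logic, with made-up readings and thresholds, to show the idea.

```python
def detect_spikes(readings, window=3, threshold=15.0):
    """Flag readings that exceed the average of the previous `window`
    readings by more than `threshold` degrees."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] - baseline > threshold:
            alerts.append((i, readings[i]))
    return alerts

# Hypothetical engine temperatures streamed from one car.
engine_temps = [88.0, 89.0, 90.0, 89.5, 112.0, 90.0]
alerts = detect_spikes(engine_temps)
```

A streaming engine applies the same windowed comparison continuously, per car, across millions of vehicles, instead of over a finished list.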

[Erik]

Okay, very interesting. One of the topics — maybe we don't have to speak specifically about Audi here, because this may be a little bit more sensitive — but once you get into these situations where you have aftermarket services, for example, there's not just value for the OEM; there's potentially value for a lot of different companies that might also want to be selling services to this driver, this vehicle owner. And this becomes an issue of not just moving data, but also regulating who has access to data, in which way, to what extent it's anonymized or not, what metadata is available, and so forth. Do you get into these discussions — the legal and privacy discussion about what can we do to monetize this data that we have?

[Kai]

Yeah, so actually this is all part of the problem. Especially in Europe and Germany, privacy is really, really hard, right? It's much different than in the US, for example. Therefore, we get into these discussions all of the time. You have to be security compliant, which is part of the conversation, of course, and you need to be, for example, GDPR compliant in Germany and Europe. So this is part of the problem, and you really need to think about this from the architecture perspective: who has access to what data? And that's also the point where, for example, Confluent comes into play, because with open-source Kafka you would have to implement this by yourself; with Confluent you have things like role-based access control and audit logs and all these kinds of features, which help you here with multi-tenancy and all these questions.
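Conceptually, role-based access control on topics plus an audit log boils down to checking each operation against a policy and recording the decision. This toy sketch illustrates the shape of it; the principal names and topic names are invented, and a real deployment would use the platform's built-in ACL/RBAC features rather than application code like this.

```python
# Hypothetical policy: which principal may do which operation on which topic.
ACL = {
    "oem-analytics": {"car.telemetry": {"read"}},
    "tier1-supplier": {"car.telemetry.anonymized": {"read"}},
}
audit_log = []  # every decision is recorded, allowed or not

def authorize(principal, topic, operation):
    """Check the policy and append an audit record for the attempt."""
    allowed = operation in ACL.get(principal, {}).get(topic, set())
    audit_log.append((principal, topic, operation, allowed))
    return allowed

ok = authorize("tier1-supplier", "car.telemetry", "read")              # denied: raw data
ok2 = authorize("tier1-supplier", "car.telemetry.anonymized", "read")  # allowed
```

The multi-tenancy point is visible even in the toy: the supplier sees only the anonymized topic, while the denial itself still lands in the audit log.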

And with that in mind, this also brings up more problems and questions for all these vendors, because, as you said, it's not just Audi — or let's get away from Audi, in general an automotive company — that wants to get the added value, but also the tier-one and tier-two suppliers. This is really a big discussion, and this is where all of these vendors today have a lot of challenges, and nobody knows where it's going. But today everybody is also implementing their own connected car solution — if you Google for that, you will find many automotive companies, many suppliers, and also many third-party companies which implement this today, and nobody knows where it's going. Already today I have seen a few automotive companies where the car is not just sending out data to one interface of one vendor, but to two or three different interfaces, because everybody wants to get the data out.

So in the next years things will definitely consolidate, and new business models will emerge. My personal opinion is really that the only realistic future is that these different vendors partner more with each other. And that will happen, because it's not just about the automotive company but also the suppliers. If you take a look at these innovations, they are all working on software. If you go to some kind of conference, they are not talking about the hardware; they are talking about the software on top of that. This is really where the market is completely changing, because in this automotive example, in some years many people will not care whether it's an Audi or a Mercedes or a BMW, but how well it's integrated with your smartphone and with the rest of the technology. This is a complete shift in the market, and we see this at every automotive — at every IoT — company today.

[Erik]

Okay, very interesting. Yeah, this is a topic that comes up a lot with our customers, who are sometimes automotive tier-one or tier-two suppliers, right? And then they face the challenge of getting data out from an OEM: we produce the air filters or something, and the OEMs are never going to give us our data, right? But we have these business cases. So yeah, this is a very interesting discussion. Okay, then the next one we're covering is Bosch: track and trace for construction. I think track and trace is very interesting because it's applicable to basically anybody who is managing assets that are in motion. What was the problem here, and what did you do with Bosch?

[Kai]

So that's another great problem, and it also clarifies the different use cases. The first one was about getting all the data into the cloud for analytics and using the data — the normal hybrid one. The interesting part here is that this is not all real-time data or big data. In this use case, it's really about smaller data sets, and also about request-response communication, not just streaming data. The use case is that Bosch works on several different construction areas together with their partners, where they build new buildings, for example. On the one side, you have a lot of devices and machines on site, and the newer devices and machines have sensors which continuously send updates to the backend system.

But then they also had many different problems and use cases here. The workers in the construction area didn't know where a machine or device was, or when to do maintenance on a device and replace batteries or other things. So in this case it's really a track-and-trace system, where you monitor all the information from all the systems. And it's actually not just machines and devices, but also track-and-trace information about the work itself: whenever a worker has finished something, he uses his mobile app — and in this case it's not streaming data; he does a button click, and then an event is sent to the backend, and the data is stored there and correlated. This way, Bosch achieved a solution where they really have all the right information, in the right context, for each construction area.

This is important at the edge — which in the end is the construction area — but then in the backend, of course, it's also important for management and for monitoring all the different projects. All this data also goes to analytics tools, because the data science team takes a look at all the construction areas, what's going on, and how to improve the products or the services they offer, or the new products they build. This solution is also deployed to the cloud, so that they can integrate with all these different edge systems and store and correlate information. And it's also important in this use case that they don't just continuously process the data — they also store the data in Kafka, so that you can also consume old events. This is a part we didn't discuss yet, but it's important.

In Kafka, or in event streaming systems in general, everything is append-only. So it's event-based: guaranteed-order logs of events. That means you can also consume somewhat older data. The data scientist doesn't consume all data in real time like the other consumers; instead they say, give me all the data from this construction area from the last few months, and then they correlate it with the last three months from another construction area. Maybe they see this construction area had some specific problems, and then they can find out what the problems were. So this is another great use case, because this is hybrid, this is not big data, and this is not only real-time data — but Kafka still makes so much sense for the integration and processing of these events.
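The append-only replay Kai describes can be sketched in a few lines. This is a toy in-memory stand-in for a Kafka topic (real retention, offsets, and partitioning are handled by the broker), with invented site names, just to show why a log that is never updated in place lets late consumers re-read history.

```python
class EventLog:
    """Toy append-only log: events keep their insertion order, nothing is
    updated in place, and consumers can re-read from any point in time."""

    def __init__(self):
        self._events = []  # list of (timestamp, payload), only ever appended

    def append(self, timestamp, payload):
        self._events.append((timestamp, payload))

    def read_since(self, since):
        """Replay every event at or after `since`, in original order."""
        return [p for t, p in self._events if t >= since]

log = EventLog()
log.append(1, {"site": "A", "status": "delayed"})
log.append(2, {"site": "B", "status": "ok"})
log.append(3, {"site": "A", "status": "ok"})

recent = log.read_since(2)  # a live consumer replays only newer events
site_a = [e for e in log.read_since(0) if e["site"] == "A"]  # full history for one site
```

The data-science consumer is just `read_since` with an older timestamp: same log, different starting point, no special export needed.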

[Erik]

Yeah, that's a very interesting one from, let's say, an end-user perspective, right? Because even within the construction site you have a number of different end users with quite different requirements around the data: from the person looking for the tool, to the maintenance team, to management making decisions about how many assets we actually need, and so forth. And I suppose — again coming back to where you get involved — you're laying down the architecture, so you cover what architecture is necessary for this, but you're not going to be advising them on these individual use cases. Is that correct? Or do you ever get involved in advising which use cases might make sense?

[Kai]

We do, actually — because we have the experience from all the other customers, we also do some consulting and help with the engagement and the approach. But we are not doing the project itself. That's typically what a partner does, or what they do by themselves. We really help with the event-streaming part and the infrastructure, but only from the Kafka perspective, because we are not doing the whole project. And that's maybe also important: as I said before, event streaming is not competitive but really complementary — also to the total solution, like for the management team, which has some BI tools in the backend. That is not Kafka, right? That's where you connect your traditional BI tool — Tableau or Power BI or Qlik, all of these vendors — and connect the two parts of the data. So this is really complementary.

[Erik]

Okay, great. Yeah, we were working with a European construction company about a month ago on track and trace — well, we were surveying how, in China, they are able to ramp up operations at construction sites by tracking where people are: are people grouping together, are people wearing masks, et cetera. So it's kind of a track and trace for people, and it's been extremely effective in China. And then the question is, how do we translate this to the European market, where this would probably all be highly illegal? And then the last one we wanted to look into was energy: a distribution network for smart home and smart grid. So yeah, a completely different set of problems. What was the background of this case?

[Kai]

Yeah, so one example here is E.ON, which is an energy provider, and these kinds of companies also have a completely changing business model. That's often where Kafka comes into play: to really reinvent a company. The problem for them is that in the past they only produced their own energy, like nuclear energy. Obviously this is changing to more green energy and so on, but the business model also had to change, because they cannot just sell energy anymore. They also see more and more customers or end users who produce their own energy, for example solar energy on their houses — and often they produce more energy than they even use themselves, so they want to sell it. Therefore, for this example, E.ON has built a streaming IoT platform, which is also hybrid: some of the analytics in the cloud, but some other processing more at the edge.

What they are doing, in the end, is becoming more like a distribution platform. On the one side, they still integrate with their own energy systems to sell their energy and do the accounting and billing and monitoring and all of these things — and I have to mention, it's still in real time, and even for the bigger data sets these systems produce, they can handle it. But on the other side, they now also integrate directly with smart homes and smart grids and other infrastructures, so they can get into the system of the end user — the customer who has the smart home. With this, they are now providing many more services. In this case, for example, you could sell your solar energy to another person, and they provide the platform for that. And this is really just one of the examples — they have tens of them — because these energy companies have to completely change, in a way, their business models, and this is where Kafka helps.

It helps so much because, again, on the one side it's real-time data, so you can scale this and process data continuously; but on the other side, it also decouples the systems. The smart home system is completely decoupled from the E.ON side: sometimes it sends a new update, like sensor information, to the system, so that the system knows — hey, this house has produced a lot of energy, now we can sell it, so please distribute it somehow. And this is again where many different characteristics come into play. On the one side it's hybrid — they do analytics in the cloud, and then also edge integration. But on the other side, this is really a mission-critical system; it has to run 24/7, so it's distributed over different geo-locations. With this infrastructure, this is really the critical center of their system: integrating with their own infrastructure, but also with all the customers and end users, and then of course with partners. It's the same strategy as in automotive — in the future, these companies will not build everything by themselves, but complement it with partner systems which are very good in one specific niche, and they provide the distribution platform for that.
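The "this house produced a surplus, sell it" decision is a simple derived event over the meter stream. A minimal sketch, with made-up field names and numbers — in the real platform this kind of rule would run continuously as a stream processor, not over a finished list:

```python
def surplus_offers(meter_events, threshold_kwh=1.0):
    """From smart-meter events (house, produced, consumed), emit a sell
    offer for any house whose surplus exceeds the threshold."""
    offers = []
    for ev in meter_events:
        surplus = ev["produced_kwh"] - ev["consumed_kwh"]
        if surplus > threshold_kwh:
            offers.append({"house": ev["house"], "kwh": round(surplus, 2)})
    return offers

# Hypothetical meter readings arriving from two homes.
meter_events = [
    {"house": "h1", "produced_kwh": 5.0, "consumed_kwh": 2.5},
    {"house": "h2", "produced_kwh": 1.2, "consumed_kwh": 1.0},
]
offers = surplus_offers(meter_events)
```

The decoupling Kai describes shows up in the shape of the data: the home only publishes readings; the offer logic lives entirely on the platform side and can change without touching any device.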

[Erik]

Okay, yeah. And this is quite a contrast of systems, right? You have a mission-critical utility, and then you have your grandfather's home. I suppose you have a lot of different types, because we're not talking here only about enterprise scale with smart grids — we're also talking about home deployments, with probably quite a range of different technologies, different connectivity solutions, and so forth. Was this a challenge, or is it already fairly standardized, so that when they install a solar deployment on a home, the right connectivity infrastructure is already there for an easy integration? Or is that a challenge?

[Kai]

In this case, it's much easier than in plants and factories, because here you don't have the challenge that every vendor is very proprietary and doesn't really want to give the data out. On the one side, it's also not 30-year-old machines like in a production line, but really small devices that are maybe only a few years old. So these manufacturers also use somewhat more modern technologies, and the other difference is that they want you to integrate with other systems. So there is typically a standard interface or API, something like MQTT or HTTP. This is actually pretty straightforward to integrate, because here the business model and the integration idea are much different than in production lines and plants. The challenge is really more that, again, some of these interfaces are real-time and sensor-based, while others are more pull-based, where you just ask the system every hour. And this is exactly what Kafka was built for: it's not just a messaging system; it also has integration capabilities. So it's pretty straightforward with Kafka to integrate these different technologies and communication paradigms, and still correlate all these different data sets and protocols to get value out of them — and send an alert, or whatever the use case is on top of that.
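Mixing push-based (MQTT-style) and pull-based (hourly poll) sources mostly means normalizing both into one common event shape so they can be correlated downstream. A sketch of that normalization, with invented topic names and field names — with Kafka this job would typically be done by Kafka Connect or an MQTT bridge rather than hand-written code:

```python
def from_mqtt(msg):
    """Normalize a push-style (MQTT-like) sensor message into a common shape."""
    return {"source": msg["topic"], "ts": msg["ts"], "value": msg["payload"]}

def from_poll(device_id, ts, reading):
    """Normalize a value obtained by polling a device, e.g. once per hour."""
    return {"source": device_id, "ts": ts, "value": reading}

# One push message and one polled reading, merged into a single ordered stream.
stream = [
    from_mqtt({"topic": "home/solar/power", "ts": 100, "payload": 3.2}),
    from_poll("inverter-7", 3600, 2.9),
]
stream.sort(key=lambda e: e["ts"])  # consumers see one timeline, regardless of origin
```

Once every source emits the same `{source, ts, value}` shape, the correlation and alerting logic no longer cares whether the data arrived by push or pull.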

[Erik]

Okay, interesting. Yeah, I was reading an article maybe a month or so ago which said that — I think it was in Germany or the UK — the percentage of energy on the grid from renewables spiked up to something like 33%, which was a significant high, right? It was due to a few factors, I think: lower energy demand, lower air pollution because factories were shut down, and a few others. But I think that's something that, five years ago, people were projecting to be kind of an apocalypse, right? You couldn't handle that kind of swing in renewable energy. But I suppose Kafka is part of the reason that energy grids are now able to handle a lot more variance in load than they were designed for ten years ago.

[Kai]

Well, it's really changing — every year you see new innovation there. And Kafka really is at the heart of that in many different infrastructures. Often you don't see it, because it's under the hood, right? It's not just that these typical end-user projects are using Kafka; these software and technology vendors also use Kafka under the hood for building new products.

[Erik]

Okay, this has been really interesting. What have we missed here? What else is important for people to understand about event streaming?

[Kai]

The most important thing is really that today it's much more than just ingesting data into a data lake — that's what people have known it for over the last five years. But today, Kafka and event streaming are mostly used for mission-critical systems. That's what 95% of our customers do; that's why they come to us, because we have the expertise with Kafka and built many parts of it. And it doesn't matter if it's just at the edge or really a global deployment: we provide technologies so that you can deploy Kafka globally. We have many industrial customers which run plants all over the world, and you can still replicate and integrate in real time, with big data sets, all over the world. There are different components and different architectural options with different SLAs, of course, but this is really the key takeaway from this session for industrial IoT.

[Erik]

Kai, thank you so much for taking the time. Last question from my side: how should people reach out to you?

[Kai]

I would be glad if you connect with me on LinkedIn or Twitter — I'm really present there, with a lot of updates about use cases and architectures. And of course you can check out my blog, kai-waehner.de, or check the links. That's where I blog a lot about IoT, every week or two, with a lot of different use cases and architectures around event streaming.

[Erik]

Okay, perfect. Then we'll put those links in the show notes. Kai, thanks again.

[Kai]

Yeah, you're welcome. Great to be here.

[Outro]

Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at @IoTONEHQ and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com.
