EPISODE #7 | January 1, 2022

Edge AI and Vision-Enabled Products: The Reality Behind the Glamour

Duration: 00:33:26

[Full Transcript Available Below]
SiliconExpert Podcast Episode 7 with Jeff Bier and Phil Lapsley of the Edge AI and Vision Alliance / BDTI - Transcription

Host: Eric Singer

Producer/Director: George Karalias

[00:00:00] Jeff: Think about the doorbell cam. You've got your doorbell cam that works well at noon on a sunny day. Okay, but what about dawn? Dusk, fog, rain, snow? And that's where many of the challenges tend to emerge. If I'm training an algorithm to, say, recognize the presence of a person in front of a doorbell cam, and I only train it with scenes where everything's nicely lit, then guess what? It's only going to work in scenes where it's nicely lit.

[00:00:35] Eric: Welcome to the Intelligent Engine, a podcast that lives in the heart of the electronics industry, brought to you by SiliconExpert. SiliconExpert is all about data-driven decisions with a human-driven experience. We mitigate risk and manage compliance, from design through sustainment. The knowledge, experience, and thought leadership of the team, partners, and those we interact with every day expose unique aspects of the electronics industry and the product life cycles that live within it.

[00:01:04] These are the stories that fuel the Intelligent Engine.

[00:01:13] Today's spotlight is on BDTI, an engineering services firm specializing in efficient implementations of algorithms on embedded processors. For the last 10 years, BDTI has focused on helping companies incorporate embedded computer vision and visual AI into products. BDTI also operates the Edge AI and Vision Alliance, an industry consortium of more than 110 companies devoted to inspiring and empowering engineers to create products that perceive and understand.

[00:01:46] We're delighted to have the firm's co-founders with us today, Jeff Bier and Phil Lapsley. They'll be talking with us about embedded vision and visual AI, and what it takes to add vision to products. Phil, Jeff, thanks for joining us.

[00:02:01] Phil: Thanks for having us.

[00:02:02] Jeff: Really happy to be here.

[00:02:04] Eric: The first thing I want to ask is how you two came to know each other and ended up working together.

[00:02:10] Phil: So, Jeff and I were both graduate students at UC Berkeley in electrical engineering back in the late eighties, early nineties. And we both studied under Professor Edward Lee for digital signal processing.

[00:02:23] Eric: And then did you guys start working together immediately after that? Or how did we get here?

[00:02:29] Phil: Yeah, pretty shortly after that.

[00:02:31] Jeff and I had both had some other jobs, of course, but we knew that at UC Berkeley we had seen some really pretty innovative things in digital signal processing. And we felt like there was stuff that we could take from UC Berkeley out to the world, that we should really start a company and do some work in DSP.

[00:02:48] And so it was probably just a couple of years after our graduation that we started BDTI.

[00:02:54] Eric: Paint us a little bit of a picture on how that fits in with BDTI and the edge AI vision side of things. I know there's a lot of cross-pollination going on here.

[00:03:04] Jeff: Phil and I early on had a really strong interest in embedded processing and how you create efficient implementations of algorithms in embedded software, often on specialized processors, to enable new kinds of functionality in devices that are cost-, power-, or size-constrained: consumer electronics, like mobile phones, for example. We focused on embedded digital signal processing for about 20 years. And then around 2010, we started to get the first signals that some of our customers were interested in doing a similar thing, not with audio and wireless signals, but with images and video.

[00:03:51] In other words, finding ways to manipulate and interpret images and video algorithmically in devices. And that's what we call computer vision. These days, we often call it visual AI: taking images and video and extracting meaning from them, much as the human visual perception system does. This is a capability that was really out of reach 15, 20, 30 years ago.

[00:04:26] The fundamental algorithmic techniques were known, but it was just too complex, too costly to implement. But we had an inkling around 2010 that that might be changing.

[00:04:38] Eric: Are you guys partners at BDTI?

[00:04:40] Jeff: Yeah, we're co-founders along with Professor Edward Lee, our graduate advisor from UC Berkeley, who we did our original graduate work with.

[00:04:47] Eric: You kept the dream team together. I love it. What did you see back in 2010 that led you to believe that some of these things that had been more or less pipe dreams for so long were now going to be possible? Was it the increase in processing power that was available? Was it the idea of moving things to the enterprise edge, or a combination of those things?

[00:05:11] Jeff: It was a convergence of factors, but possibly the most significant was the increase in processing power in embedded processors. And once they become powerful enough, then the suppliers work on making them more efficient, making them less expensive, so that they can be used in more places. And we'd seen this movie before, multiple times. For example, think about digital audio: in the 1980s, digital audio was a mature, widely used technology, but it was extremely expensive.

[00:05:45] So you'd find digital audio, for example, in recording studios and broadcast studios, where there'd be million-dollar pieces of equipment processing digital audio. And the distribution was all analog, right? Vinyl records and cassette tapes and FM broadcasts and so on. But then the same thing I just described happened with audio:

[00:06:07] The processors became more powerful and more efficient gradually, year after year after year, and eventually so inexpensive, so powerful, and so efficient that now you can go to the corner store to buy a birthday card, and there's a digital audio chip in the birthday card. Right? One-time use: you buy your mom a birthday card.

[00:06:29] She opens it up and it plays a tune. Hopefully she's tickled by that, and that's it; it's never used again. So how do we go from million-dollar pieces of equipment being the typical embodiment of digital audio to a $10 birthday card that's going to be used once? We got there by steady improvements in the chips that implement those functions.

[00:06:55] And it's not just chips; it's also algorithms and development tools and converters that mediate between digital and analog and so on. Silicon fabrication process improvements might have netted us 30% a year or something like that 10 or 20 years ago. With these architectural improvements, we're able to sustain a doubling or quadrupling of performance and efficiency each year by specializing the architecture.

[00:07:21] And so if you compound that over a few years, things that were a hundred times too expensive a few years ago suddenly start to become feasible. For humans, visual perception is incredibly powerful. Most of the information that comes into the human mind comes in visually. We get so much information visually, and this has not been available to machines, for the most part, until recently. When you combine these two observations, that there's an incredibly rich array of information available visually, and that it's becoming increasingly feasible to give machines visual perception, that's very exciting. And that's what got us excited about shifting our focus into this visual AI realm.
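
To put rough numbers on the compounding Jeff describes (the doubling rate is his figure; the seven-year horizon is our own illustration): improving 2x per year compounds to 2^7 = 128x after seven years, which is how something that starts out "a hundred times too expensive" crosses into feasibility in well under a decade. At the older 30%-a-year pace, closing the same gap would take roughly 18 years, since 1.3^18 is approximately 113.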

[00:08:07] Eric: Shifting your focus. I see what you did there.

[00:08:11] Phil: We use the word focus a lot in this business.

[00:08:15] Eric: Let's talk a little bit about some of the applications that you all are working on for embedded vision technology.

[00:08:24] Phil: We're seeing lots of interesting applications across a pretty wide range. If I had to bucket them, I might say there are kind of smart-spaces applications. An example of that might be, you hear about cashierless checkout at stores, right?

[00:08:37] Amazon Go, for example. So the idea of being able to go into a store, pick up stuff, and walk out, and it automatically knows who to bill and what it is that you grabbed.

[00:08:47] Eric: Not to be confused with self-checkout. You're talking about just grabbing something off the shelf and then boom, you're paying for it.

[00:08:53] Phil: That's right, grab-and-go type things. We're all used to retail stores, but now suddenly, through computer vision and artificial intelligence, the store is much smarter. Another area is autonomous machines: autonomous cars, obviously, but also drones that are going to go and do inspection of equipment.

[00:09:11] But they're going to do it autonomously, as opposed to a human having to control them. Think about FedEx, which has this delivery bot called Roxo that can be autonomously delivering packages. And then a third area is health and fitness. Imagine doing yoga or practicing golf, but with a system that's able to watch you on camera and give you feedback about your pose and how you could do better.

[00:09:34] Eric: I'm not going to like any of the feedback that machine is providing.

[00:09:37] Phil: Humans rarely like feedback.

[00:09:41] Eric: I'm not going to like that brand of brutal honesty.

[00:09:44] Phil: Right.

[00:09:45] Don't worry. They can tune the AI in your particular case to sugarcoat everything.

[00:09:51] Eric: Wow. I love the idea of these three big categories: smart spaces, autonomous machines, and health and fitness. And when I think about smart spaces and the sort of resistance in society, especially when you get out among laypeople, there is often a deep-rooted fear, or at the very least suspicion, about AI-powered visual technologies.

[00:10:25] I wonder if you're working on any applications that are more in public spaces, and if you'd be able to talk about some of the challenges, particularly in the area of public perception, when it comes to those areas.

[00:10:40] Jeff: Yeah, we have. And it's interesting. I think it does vary a lot by generation and by nationality, or region in the U.S.

[00:10:52] I find people are pretty oblivious to the presence of cameras and don't seem troubled by them for the most part. Obviously there are certain places where that wouldn't be true, like your hotel room, but we're used to seeing them everywhere: in the gas station, in the hotel lobby, increasingly in every automobile.

[00:11:09] I was just parking at the San Francisco airport parking garage, and there's one camera for every four or six parking spaces. The place is just crawling with cameras, and that's because they have a system to help you find the nearest empty parking spot. There are a few ways to address this that people are already pursuing.

[00:11:28] Obviously you can just say we're not going to use the technology, but then we're not going to solve the problems that we're solving with the technology. So how can we use the technology and still preserve people's privacy? One part of solving that dilemma is processing at the edge, right? So imagine a retail analytics solution where there are lots of cameras inside the store.

[00:11:51] Those cameras could just stream video up to somebody's data center, in which case it's not very private. Or they could do all their processing of the video at the edge, and they could be designed so that they are literally unable to transmit video and images, only what we sometimes call the metadata.

[00:12:13] Oh, there's a person here, and we've assigned this person a temporary ID so that we can keep track of them while they're in the store, and this person just picked up this product from the shelf. That information, maybe, is what gets sent up to a data center to be consolidated, but no picture ever leaves the device.
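
To make the pattern concrete, here is a minimal sketch of that privacy-preserving, edge-first design. The detector function and the analytics endpoint are hypothetical placeholders, not anything from BDTI or the episode; the point is only that raw frames stay on the device and just a small JSON payload goes upstream.

```python
# A minimal sketch of the edge-first pattern described above: frames are
# analyzed on the device, and only anonymous event metadata is transmitted.
# detect_people() and the endpoint URL are hypothetical placeholders.
import json
import time
import urllib.request


def detect_people(frame):
    """Stand-in for the product's on-device person detector.

    Would return a list of events such as
    [{"track_id": 17, "event": "picked_up", "product": "sku-123"}].
    """
    raise NotImplementedError("replace with the actual on-device detector")


def process_frame(frame, endpoint="https://example.com/analytics"):
    # Run detection locally; the raw pixels never leave this device.
    events = detect_people(frame)

    # Send only the metadata upstream: temporary IDs and events, no imagery.
    payload = json.dumps({"timestamp": time.time(), "events": events}).encode()
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
    # The frame itself is simply discarded; by design there is no code path
    # that could transmit it.
```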

[00:12:32] Eric: It does feel like the idea of processing stuff on the edge and only transmitting this anonymous data would make this so much less intimidating to cultures that value privacy more than we might here in the US.

[00:12:47] I'd like to talk a little bit more about what it takes to add vision to a product, whether that be a new product that you're developing in conjunction with someone or adding vision to something that's already out there and just doesn't have those capabilities yet.

[00:13:08] Phil: It's a challenge. Go back to the 1990s, when we were starting the company: if you asked somebody back then, how do I add wireless to my slot machine or something, most engineers would have no idea how to do that, because wireless was just black magic. And I think we're at a similar place with vision, in terms of, okay, what do I need to do to add this to my product?

[00:13:27] So, big picture: obviously you have to have some way of getting the image into your device, so you need a sensor of some sort. Then you're going to need an algorithm of some sort to process it. And when we pivoted to vision in 2010, the timing was right, but there was a thing that we didn't know anything about, which was the idea of deep learning and artificial neural networks.

[00:13:48] And it became possible to actually start running deep learning algorithms and convolutional neural networks on an embedded processor. We run a conference called the Embedded Vision Summit. I remember going to the summit in, I think, 2012 and seeing a demo from Yann LeCun of convolutional neural networks identifying various objects.

[00:14:10] And it did something that was simply not possible with any degree of accuracy prior to that. And just looking around the conference floor, you'd see people with their jaws on the ground, just being blown away by this. Deep learning is not the only algorithm one can use for computer vision, but it's certainly a really popular one.

[00:14:31] Obviously you need a processor, and then you're going to need a lot of data, if you're going to be training a neural network in particular, since the way neural networks work is you show them lots of images of the thing that you're interested in detecting, and that's how they learn. And they need thousands, tens of thousands, maybe millions of such images.

[00:14:51] And then of course, the final thing you're going to need is the skills to put all that together.

[00:14:56] Jeff: And if I could add onto that regarding the processors: these algorithms are extremely demanding in terms of processing performance. So you may have an existing product; let's say we were talking about slot machines or vending machines.

[00:15:10] It's got a processor in there, but the chances are good that if you're going to add computer vision or visual AI, you're going to need a 10 or a hundred times more powerful processor to run those algorithms. The other thing is that skills are needed. Most companies don't have experience building products with computer vision, just like 20 or 30 years ago most companies didn't have experience building products with wireless communications.

[00:15:35] Now it's very commonplace, and getting those skills is often challenging. And there's also a hazard that people get tripped up by: oftentimes creating the initial demo that shows, hey, this is the capability we're going to build, is not too difficult. What is often really difficult is going from that initial proof of concept to a solution that's really robust. For example, think about the doorbell cam. You've got your doorbell cam that works well at noon on a sunny day. Okay, but what about dawn? Dusk, fog, rain, snow? Somebody's approaching the door and they're backlit at sundown? All of these scenarios have to be contemplated and addressed if the thing's going to work reliably across the full range of use cases and locations. And that's where many of the challenges tend to emerge, like data. If I'm training an algorithm to, say, recognize the presence of a person in front of a doorbell cam, and I only train it with scenes where everything's nicely lit, then guess what? It's only going to work in scenes where it's nicely lit. And then when it gets used at dawn or dusk or nighttime, or in bad weather or a backlit scene or whatever, it's not going to work.
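
One common way teams attack the lighting problem Jeff describes, alongside collecting real dawn, dusk, and bad-weather footage, is to augment the training set with synthetically darkened, brightened, and washed-out variants of each image. Here is a minimal sketch using Pillow; the transform factors and file names are illustrative guesses, not values from the episode.

```python
# A sketch of lighting augmentation: expand the training set with dimmed,
# brightened, and washed-out variants of each frame so the detector is not
# only ever shown well-lit scenes. The enhancement factors are illustrative.
from PIL import Image, ImageEnhance


def lighting_variants(image):
    """Yield the original image plus three simulated lighting conditions."""
    yield image                                         # noon on a sunny day
    yield ImageEnhance.Brightness(image).enhance(0.35)  # dawn / dusk
    yield ImageEnhance.Brightness(image).enhance(1.60)  # harsh backlight
    yield ImageEnhance.Contrast(image).enhance(0.50)    # fog-like washout


if __name__ == "__main__":
    # "doorbell_frame.jpg" is a hypothetical example file.
    img = Image.open("doorbell_frame.jpg")
    for i, variant in enumerate(lighting_variants(img)):
        variant.save(f"doorbell_frame_aug_{i}.jpg")
```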

[00:17:01] Eric: Something I was blown away by, looking at edge-ai-vision.com, was just the wealth of videos up there from people who are developing applications and are willing to share this.

[00:17:15] And I was really struck by the community spirit that seems to be woven throughout that site: people sharing experiences, sharing ideas, a very open, free flow of information. I'd love to hear more about how you fostered that environment.

[00:17:34] Jeff: Around 2010, as Phil mentioned, we realized that lack of know-how was a key bottleneck.

[00:17:41] We could see the technology itself getting to the point where it was feasible to create these kinds of systems. But most people, first of all, weren't even aware that it was becoming possible to do these kinds of things, and once they became aware, didn't have the know-how to do it. Computer vision was at that point still mostly an academic topic. We decided to form this industry group, which is now called the Edge AI and Vision Alliance. Its primary mission is raising awareness and providing practical know-how: helping product developers realize what the right kinds of problems are to tackle with this technology, and then how to effectively employ the technology and build systems that will be robust while also meeting the real-world constraints of cost and power consumption and size and so on. Access to this kind of practical know-how is really only feasible because lots of people are willing to share: here's what I did, here are the difficulties I had, here's what I learned, here's what worked, here's what didn't work for me in my application.

[00:18:53] If every design team has to figure everything out from scratch, not much is going to get built. You have to be able to build on what other people have done, learn from what other people have done.

[00:19:07] Eric: I was also really impressed by the breadth of people and organizations who were sharing things on the site.

[00:19:16] Everything from huge multinationals to people who are maybe a step above a hobbyist: just an electrical engineer developing their own app, using their mobile phone. Would you all be willing to walk through the development of a product?

[00:19:32] Jeff: In our consulting business, we mostly work in consumer products and industrial applications.

[00:19:38] So think big, heavy equipment that would be operated indoors and outdoors in industrial environments, and think consumer products like small appliances that you might bring into your home. There are so many of these products being developed now, and such a variety, and each one has its own particular requirements and constraints and challenges. But there's also a certain amount of commonality, so we can apply lessons learned from one to another to a significant degree. Usually, in some sense, the trickiest part is figuring out the right problem to solve with computer vision. There are many problems that can feasibly be solved with the practical state of today's technology, and there are many that can't.

[00:20:22] But just because something can be done with a certain technology doesn't mean it makes sense to do it; sometimes it's overkill. That's often where the conversation starts: here's half a dozen things we're thinking about for computer vision, which ones make sense? Of the six, maybe there are two that really aren't feasible with today's technology at the price point desired.

[00:20:40] And maybe there are two that can actually be solved perfectly well with less expensive, less complicated, more mature technology. It's the two that are left that are in the sweet spot. Those are the two that we're interested in.

[00:20:51] Eric: It seems like one of the challenges that companies face when they're incorporating vision into their products, or developing something new with you, is that a big piece of that has to be data: how to acquire it, how to store it, how to maintain it, how to transmit it, that sort of thing.

[00:21:08] Jeff: Data is the most common kind of bottleneck in these projects. And the reason for that is that the algorithmic techniques have transitioned from conventional techniques to deep neural networks. It's not an all-or-nothing thing, but over time deep neural networks are being used more and classical or conventional algorithms are gradually being used less, and the defining characteristic of these deep neural network algorithms is that they need large quantities of data in order to train them.

[00:21:39] They learn by example, much as a small child might, as opposed to a smart engineer sitting down and spelling out a step-by-step procedure. For example, imagine teaching a kid how to tie their shoelaces by writing down instructions or reading them instructions step by step. Hopeless, right? For certain tasks that works fine.

[00:21:58] But for many tasks it's hopeless. On the other hand, you show the kid over and over, and you help them initially, even holding their hands and leading them through the steps, and magically, it eventually clicks and they learn how to do it. Deep neural networks are a lot like that: they need a lot of examples. They need to be shown many times, in many different ways.

[00:22:17] This is a yes case; this is a no case. For example, we worked on an application a year or so ago to look at a street scene and just take statistics about what percentage of people were wearing face masks. So we needed lots of examples of people walking by wearing face masks and not wearing face masks.

[00:22:35] Also people walking with their backs to the camera, because that's a case where you can't tell whether the person is wearing a mask or not; you still need to understand that there was a person there. So you need enough data with the right diversity of cases. And diversity factors into other things as well, like child versus adult, or different races, hairstyles, genders, body types, and so on.

[00:23:01] Whatever it is you're trying to recognize (it could be vehicles, people, certain food items, whatever), you need lots of examples and variation in them so that the algorithm can learn. Okay, yeah, bananas, for example, they aren't always yellow. Sometimes they're green. It's still a banana.

[00:23:20] Okay. Right?

[00:23:21] Phil: Can I jump in with my favorite story on that?

[00:23:24] Okay. Jeff just said, so you want to add vision to a product and you want that product to be able to recognize a particular thing. He used a very polite example of bananas. Say that you're Roomba, the robot vacuum cleaner people. You may know one of the big problems Roomba has is when one of their robots runs over some dog poop in your house and proceeds to make a mess.

[00:23:45] Everybody is very sad at that point. So in September they introduced a new Roomba that has computer vision. Talk about solving the right problem: it can detect, oh, that's poop, I'd better not run into that. And there's a bunch of things that I just love about this example, right?

[00:24:03] Because first off, people think, oh, you're an AI engineer, that's so glamorous. Oh, let me tell you how glamorous it is. As Jeff said, you're going to need thousands, tens of thousands of images of, in this case, poop. And now you've got this logistical problem of where do I even get that? And this is the reality if you're going to build a neural network that's going to detect that.

[00:24:25] That's what I need to know.

[00:24:27] Jeff: Getting enough data with enough variety is key, and weeding out the errors. And then the data has to be labeled: this is a banana, this is not a banana.

[00:24:37] Phil: I see Jeff's gone back to bananas, okay.

[00:24:39] Jeff: And it has to be maintained over time, because situations change over time. For example, if you think about cashierless checkout at the retail store, the mix of products that the store sells changes over time. Or even if the mix isn't changing, packaging changes.

[00:24:57] So what a certain product looked like last week may look quite different this week. They may do a brand refresh; now it's a different color bottle or box or whatever. So figuring out what data you need, getting it labeled, keeping it clean matters, because if you teach the algorithm with incorrect examples, it will dutifully learn the wrong thing.

[00:25:19] This turns out to be a big obstacle, and almost every project we get involved in hits some difficulties there. Sometimes people just don't have the data and have to really think about, what are we going to do here? Do we have to hire camera crews and actors to enact scenes so we can get the data we need? Or maybe they have data, but it's not well organized and maintained.

[00:25:46] And so there are a lot of incorrect or inappropriate examples, or incorrectly labeled examples, and now it has to be reviewed and quality controlled. At one stage or another, there's typically a lot of work to be done getting enough of the right kind of data and getting it correctly labeled and organized so you can train and then verify the algorithms, especially the deep neural network algorithms.

[00:26:13] So that's a common kind of pain point. This is an important element of picking the right problem: to ask yourself, can I get the data? The problem could be perfect for these algorithms, for these techniques, in all other respects, but if you can't get enough data, then you're not going to be successful, at least not with deep neural network techniques.
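
The labeling and quality-control pain Jeff mentions is often reduced with simple automated audits run before training. Below is a minimal sketch of such a check; the labels.csv layout, the images directory, and the class list are all hypothetical, invented for illustration.

```python
# A sketch of a pre-training label audit. Assumes a hypothetical labels.csv
# with "filename,label" header rows and an images/ directory; it flags
# labels outside the known class list and rows whose image file is missing.
import csv
from pathlib import Path

KNOWN_LABELS = {"banana", "not_banana"}  # illustrative class list


def audit_labels(csv_path="labels.csv", image_dir="images"):
    problems = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["label"] not in KNOWN_LABELS:
                problems.append(
                    f"unknown label {row['label']!r} for {row['filename']}")
            if not (Path(image_dir) / row["filename"]).exists():
                problems.append(f"missing image file: {row['filename']}")
    return problems


if __name__ == "__main__":
    for issue in audit_labels():
        print(issue)
```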

[00:26:34] Eric: Are these neural networks shared? Do they have components that might be shared across companies, or even across industries, where that learning can be leveraged by other applications or use cases?

[00:26:49] Jeff: There are shared data sets or open source data sets that have mostly come out of the academic research community.

[00:26:55] And those are often very helpful for getting started down the road to a real-world solution; it's a kind of starter set. Let's say we're going to train a deep neural network to detect whether people are wearing face masks or not in a street scene. We could start training our deep neural network from absolutely nothing, a complete blank slate, but often it's actually a great simplification to take a deep neural network that's already been trained and made open source.

[00:27:26] You can often take a deep neural network for some other, much more generic set of objects, and instead of wiping it clean and starting it from zero on your dataset, you build on top of it. It's something called transfer learning. So you start from what it already knew, and then it learns the specialization of your particular application.

[00:27:49] And it turns out that often goes much easier and faster than if you started from scratch. And so that's where those academic data sets often play a role, because I'm likely to take a pre-trained deep neural network that's open source and train it from its pre-trained state, train it additionally for my specific application, rather than starting from zero.
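
As a concrete illustration of the transfer-learning workflow Jeff outlines, here is a minimal PyTorch/torchvision sketch: take an open-source network pre-trained on a generic dataset, freeze its backbone, and retrain only a new final layer for a two-class task such as mask versus no mask. The model choice and hyperparameters are illustrative, not anything BDTI used.

```python
# A sketch of transfer learning: start from an open-source network
# pre-trained on a generic dataset (ImageNet via torchvision here) and
# specialize it for a two-class task such as mask / no mask.
import torch
import torch.nn as nn
from torchvision import models

# Load a network with weights already trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so training builds on "what it already knew"
# instead of starting from zero.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for our two classes.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new layer's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fine-tuning loop over an application-specific DataLoader (not shown):
# for images, labels in dataloader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```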

[00:28:14] Eric: I love the enthusiasm that you guys have for all of this stuff. What are some of the developments that are happening now that are really floating your boat?

[00:28:26] Jeff: We tend to be very drawn to these rockstar applications, like the autonomous car, or the system that automatically determines blood flow in the heart by constructing a personalized model of your heart based on CAT scan images. And these are amazing, impressive accomplishments. But in some sense, what to me is more exciting is to see this technology used to solve everyday problems that are not going to grab headlines, but that show it's proliferating. And a great example of that: I was at the Santa Clara convention center last week for a conference and exhibition.

[00:29:04] And I went to get a snack at the little concession stand in the exhibit hall, and what they had was essentially self-checkout that works on non-packaged goods. Right? So I might have had an apple and a bowl of soup and a bottle of water, and you take your tray and put it down on this designated area, and this device, using cameras and computer vision, in a matter of seconds shows you on a tablet what you're taking and what the prices are, and then invites you to put in your credit card and pay. And this is not something that the convention center installed to be a showcase.

[00:29:46] Eric: It certainly sounds like a gag.

[00:29:49] Jeff: It's something they put in just to speed up people getting their lunch or their snack. And I thought, that's really cool that they saw this as a solution to a problem, to help them improve customer service and improve their operations. They don't even know that it's computer vision, right? The general manager of the catering company, the guy who runs the food service in this convention center.

[00:30:16] He doesn't know what computer vision is, I think. He doesn't need to know. It solves a problem. And to me, that's fantastic. The technology has become invisible, and it's proliferating into every industry, every kind of application where you find electronic systems, and even places where you don't normally find electronic systems.

[00:30:37] Eric: Let's talk about resources where people can learn more, whether they're an engineer, as many of our listeners are, or interested in this from a layman's perspective. Where can folks go to learn more about vision and computing?

[00:30:55] Phil: You mentioned one of them, which is our website. There are lots of websites about computer vision.

[00:31:01] We're fond of this one because it's ours, but this is the website of the Edge AI and Vision Alliance, and it's called edge-ai-vision.com. And what you'll find there is really just a whole host of material, ranging from videos from previous summits and how-tos, to new product announcements and various analyses that we've done.

[00:31:25] There's really something there for everybody who's interested in computer vision or edge AI.

[00:31:31] Jeff, do you want to talk a little bit, maybe about the summit?

[00:31:34] Jeff: Yeah. So coming up in May in Santa Clara, California, we have, each year, the Embedded Vision Summit. This coming year, 2022, it'll be May 17 through 19. That's a conference and expo that we've been running for about 10 years.

[00:31:49] It's entirely focused on what we've been talking about here: the practical aspects of how do I use this technology to solve real-world problems.

[00:32:02] Phil: The thing that you see at our conference is development boards with ribbon cables spilling out, and power supplies. Walking the trade show floor looks like walking through an EE lab someplace, and we wouldn't have it any other way, because that's where all the exciting stuff is happening.

[00:32:19] Eric: Phil, Jeff, I cannot thank you enough for the conversation today. This has been so much fun. We really appreciate your time and your willingness to share your expertise with us here today, and also all you do to move the whole community forward and keep innovation going.

[00:32:37] Phil: Thank you so much. It's really been a pleasure.

[00:32:40] Jeff: Yeah. Thanks for having us.

[00:32:42] Eric: I'd like to thank our audience for tuning in, and thank BDTI for sponsoring this episode of the SiliconExpert Intelligent Engine. Tune in to new episodes that will delve into more of the electronics industry. Upcoming topics will include the changing landscape of electronic product manufacturing and a peek at water coolers that fill their tanks from the air around them.

[00:33:03] Be sure to share our podcast with your colleagues and friends. You can also sign up to be on our email list to receive updates and the opportunity to provide your input on future topics. Go to SiliconExpert.com/podcast to sign up. Until next time, keep the data flowing.
