On Saturday, February 11, 2017, my daughter and her friend were driving home to Palo Alto from a basketball game in Chico. Unfortunately, after several days of heavier-than-normal rains, the Oroville Dam spillway failed and flooded many of the roads between Chico and Palo Alto. My daughter’s smartphone mapping application wasn’t aware of the sudden danger and proceeded to send her into the heart of the flooding (see Figure 1).
Fortunately, courtesy of some heads-up “smart” driving, she was able to navigate the shallow flooding and avoid the more dangerous, deeper water (it always helps to see cars stalled in the water before deciding to plow in).
This incident highlights two significant challenges in applying artificial intelligence to the Internet of Things (IoT), edge analytics, and the creation of “smart” devices:
- Artificial Intelligence Challenge #1: How do artificial intelligence algorithms handle the unexpected, such as flash flooding, terrorist attacks, earthquakes, tornadoes, police car chases, emergency vehicles, blown tires, a child chasing a ball into the street, a pet darting into traffic, the Cubs winning the World Series, etc.?
- Artificial Intelligence Challenge #2: The more complex the problem state, the more data storage (to retain known state history) and CPU processing power (to find the optimal or best solution) are required in the edge devices in order to create “smart” behavior.
The challenge for any autonomous device (car, truck, drone, washer, wind turbine, pacemaker) is how to manage Challenge #1 within the computational and storage limitations of Challenge #2. What’s going to happen to your autonomous car when it’s driving down the highway and comes across a semi-trailer with the following graphic?
And I’m not even sure where to begin with how an autonomous car might handle something like this (I hope your autonomous car hasn’t been watching any Transformers movies…).
We don’t need autonomous devices as much as we need “smart” devices: devices smart enough to do what my daughter did when faced with an unexpected situation requiring a real-time decision based on only a limited amount of historical data and experience.
Moore’s Law: NOT to the Rescue
The physics of microprocessors and Moore’s Law, which have helped us out of technology execution jams in the past, are not going to address these two AI challenges. Historically, rapidly declining storage costs and rapidly increasing CPU processing power have allowed technologists to simply wait for the technology to advance and solve the problem for them. Unfortunately, the growth in sensor data and the complexity of “smart” decisions at the edge are increasing faster than Moore’s Law can cover (see Figure 2).
IoT data is growing at a 61.5% compound annual growth rate (CAGR), which far outpaces the growth in CPU processing power. To quote:
“Until a few years ago, Intel was able to reduce the scale of its chip designs every two years. But that cycle has been lengthening. Between the introduction of 65 nm and 45 nm chips, about 23 months passed. To get from 45 nm to 32 nm took about 27 months, 28 months to go down from there to 22 nm and 30 months to shrink to the current 14 nm process. And that’s where Intel has been stuck since September 2014” (and now scheduled to ship by the end of 2017).
So Moore’s Law isn’t going to bail us out. Storage and CPU technology advances are not going to keep up with the growth in data and the complexity of computing at the edge, so we’re going to have to learn to work smarter. And that means diving into the world of artificial intelligence.
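To get a feel for the size of that gap, here’s a back-of-the-envelope sketch (a rough illustration, assuming the 61.5% CAGR quoted above for IoT data and, per the Intel quote, a process shrink roughly every 30 months, i.e., a compute doubling about every 2.5 years):

```python
# Back-of-the-envelope comparison: IoT data growth vs. chip scaling.
# Assumptions (from the figures quoted above): 61.5% CAGR for IoT data,
# and a transistor-density doubling every ~2.5 years (~30-month shrinks).

DATA_CAGR = 0.615        # 61.5% compound annual growth rate for IoT data
DOUBLING_YEARS = 2.5     # ~30 months per process shrink

for years in (1, 5, 10):
    data_growth = (1 + DATA_CAGR) ** years
    cpu_growth = 2 ** (years / DOUBLING_YEARS)
    print(f"{years:2d} yrs: data x{data_growth:6.1f} vs. CPU x{cpu_growth:4.1f}"
          f"  (gap: x{data_growth / cpu_growth:.1f})")
```

After a decade at those rates, the data has grown roughly 120x while raw compute has grown only about 16x. That widening gap is exactly what “working smarter” has to close.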
Role of AI in Transitioning from Autonomous to Smart
Artificial intelligence comprises different analytic algorithms that fall into four general categories: supervised, unsupervised, semi-supervised, and reinforcement learning:
- Supervised Learning trains models using examples where we have known outcomes (e.g., someone committed fraud, a customer attrited, a component failed, a patient contracted an illness, someone clicked to buy something, someone on the Titanic died). Supervised learning applies what has been learned from the historical data (quantified relationships to the known outcomes) to new data. Examples include facial recognition, text translation, license plate readers, and distinguishing photos of puppies from blueberry muffins.
- Unsupervised Learning is for situations where you have a data set but no known outcomes. Unsupervised learning takes the input data and tries to find patterns in it; it uncovers new or latent relationships and draws inferences from the data. Examples include organizing customers into groups based upon purchase and/or web browsing behaviors (clustering) or finding outliers in the performance of your edge devices (anomaly detection). A minimal code sketch contrasting the two approaches follows this list.
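To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and its toy data generators (illustrative stand-ins, not real customer or edge-device telemetry): the supervised model learns from labeled outcomes, while clustering and anomaly detection find structure in unlabeled data.

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# (Illustrative only: scikit-learn toy data stands in for real telemetry.)
from sklearn.datasets import make_classification, make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

# --- Supervised: known outcomes (labels) guide the training ---
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# --- Unsupervised: no labels; find the latent structure ---
X_unlabeled, _ = make_blobs(n_samples=500, centers=3, random_state=42)
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X_unlabeled)
print("cluster sizes:", [int((clusters == c).sum()) for c in range(3)])

# Anomaly detection: flag outliers (think: misbehaving edge devices)
outliers = IsolationForest(random_state=42).fit_predict(X_unlabeled)
print("outliers flagged:", int((outliers == -1).sum()))
```

The key difference is what the algorithm is given: the classifier needs every training example tagged with its outcome, while KMeans and IsolationForest work on raw, unlabeled observations.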
We have now covered supervised and unsupervised learning. Next, we need to expand the conversation into the world of reinforcement learning, which looks like an ideal option for the challenge of creating smart devices at the edge.
I’ll use the next blog to take a deep dive into how reinforcement learning might help address these two challenges, but I’ll leave this blog with the following teaser about reinforcement learning (think of it as a movie trailer).
Reinforcement Learning is for situations where you don’t have data sets with explicit known outcomes, but you do have a reward function telling you whether you are getting closer to your goal. Trial-and-error search and delayed reward are two key features of reinforcement learning. The “Hotter or Colder” game is a good illustration: rather than getting a specific right/wrong answer for each input, you get a delayed reaction and a hint about whether you’re heading in the right direction.
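And as a teaser of the teaser, here is a minimal “Hotter or Colder” sketch (a toy illustration of the trial-and-error-plus-reward idea, not a full reinforcement learning algorithm). The agent never sees the right answer; it only receives a reward telling it whether its last move brought it closer to a hidden goal.

```python
# "Hotter or Colder": a toy illustration of trial-and-error search with a
# reward signal. The agent never observes the goal, only hotter/colder.
import random

goal = random.randint(0, 100)        # hidden target the agent must find
position = random.randint(0, 100)    # the agent's starting guess
step = 5
last_distance = abs(goal - position)

for trial in range(1, 500):
    previous = position
    direction = random.choice((-1, 1))                # trial-and-error move
    position = max(0, min(100, position + direction * step))
    reward = 1 if abs(goal - position) < last_distance else -1  # hotter/colder
    if reward < 0:
        position = previous          # a "colder" move: back up and try again
    last_distance = abs(goal - position)
    if last_distance < step:         # close enough to have homed in
        print(f"homed in on goal {goal} after {trial} trials")
        break
```

The agent improves purely by keeping moves that earn a positive reward and abandoning moves that don’t, which is the essence of the trial-and-error search and delayed feedback described above.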
Watch this space!