Edge computing moves workloads from centralized locations to locations close to where data is generated, and it can provide faster responses from AI applications. Edge devices are increasingly deployed to monitor and control real-world processes such as people tracking, vehicle recognition, and pollution monitoring. The data collected at these devices is transported to centralized cloud servers over data pipelines and used to train machine learning models. Training models requires a lot of computational power, so the current strategy is to train centrally and deploy on edge devices for inference. Deep learning models are already being used at the edge for critical problems such as face recognition and surveillance, and thousands of AI applications on edge devices make use of inference from ML models. The models are deployed on edge devices like the Raspberry Pi, smartphones, and microcontrollers using machine learning frameworks such as TensorFlow Lite (a minimal inference sketch appears after the list below). The challenges in performing machine learning at the edge are:
- Training the models from large volumes of stored data
- Latency in preprocessing and data cleansing
- Dynamic online training from streaming data
- Latency in real-time decision making
- Increased power consumption during the training stage
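To make the deployment side concrete, here is a minimal sketch of running inference with a TensorFlow Lite model on an edge device. The model file name and the 224x224 input shape are illustrative assumptions, not a specific application's pipeline.

```python
# Minimal TensorFlow Lite inference sketch for an edge device.
# "plant_classifier.tflite" and the 224x224 input shape are illustrative placeholders.
import numpy as np
import tflite_runtime.interpreter as tflite  # on a full TF install: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="plant_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy tensor standing in for a preprocessed image (batch of 1).
input_data = np.random.rand(1, 224, 224, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", int(np.argmax(predictions)))
```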
Machine learning looks for patterns in data and drives decisions based on them. Intelligence at the edge, aka Edge AI, empowers edge devices with quick decision-making capabilities to enable real-time responses. As an example, consider a commonly used AI-enabled application for identifying plants. Pl@ntNet is an application that identifies plants from pictures of their leaves and flowers, which makes it useful for recognizing rare medicinal plants used in the preparation of holistic medicines in Asian countries. The application is available on the web and also as a mobile app, and the mobile app makes use of ML inference at the edge. At the edge, preprocessing of images takes considerable time, so identifying the name of a plant can be slow. Latency in transporting data to the cloud and the delay in responses from APIs are driving many AI developers to move from cloud to edge. We can use edge computing power for both training and inference in machine learning solutions like Pl@ntNet.
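To see where that preprocessing time goes, here is a small sketch of the kind of image preparation an on-device classifier typically needs before inference. The 224x224 target size and the simple [0, 1] scaling are assumptions for illustration, not the actual Pl@ntNet pipeline.

```python
# Typical on-device image preprocessing before inference (illustrative only).
import numpy as np
from PIL import Image

def preprocess(image_path, target_size=(224, 224)):
    """Load, resize and normalize an image into a model-ready tensor."""
    img = Image.open(image_path).convert("RGB").resize(target_size)
    arr = np.asarray(img, dtype=np.float32) / 255.0   # scale pixels to [0, 1]
    return np.expand_dims(arr, axis=0)                # add a batch dimension

# input_tensor = preprocess("leaf_photo.jpg")  # feed this to the TFLite interpreter above
```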
Cloud service providers offer APIs for vision, forecasting, clustering, classification, speech, and natural language processing. The APIs in the vision category expose pre-trained models for face detection, face verification, face grouping, person identification, and similarity assessment. The availability of these pre-trained models in the cloud attracted AI solution developers to use them for inference and created a trend of moving on-premise computing to the cloud. However, the computing power available on modern edge devices now rivals, and in some cases exceeds, that of high-end servers. Many startups and chip manufacturers are working on specialized accelerator chips to speed up and optimize the execution of ML workloads at the edge. For example, convolutional neural networks implemented on accelerator chips help with real-time image filtering in the pre-processing stage of ML-based face recognition systems. Examples of high-performance edge devices are the LattePanda Alpha, Udoo Bolt, Khadas Edge-V, Jetson Nano, and Intel Neural Compute Sticks. The Jetson Nano has a built-in GPU that enables it to perform real-time digit recognition from video images, and a Neural Compute Stick can be plugged into a Raspberry Pi over USB to augment its computing power. With the evolution of these devices, edge computing mitigates the latency and bandwidth constraints of today's Internet. Some predictions even suggest that edge computing will eclipse the cloud, with the cloud becoming a mere data store.
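As an illustration of how such an accelerator is used, here is a rough sketch of offloading inference to an Intel Neural Compute Stick attached to a Raspberry Pi, assuming the OpenVINO runtime with its MYRIAD device plugin is installed; the model file name and input shape are placeholders.

```python
# Sketch: offloading inference to an Intel Neural Compute Stick via OpenVINO.
# Assumes an OpenVINO IR model ("face_detector.xml"/".bin") and the MYRIAD plugin.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face_detector.xml")                  # placeholder model file
compiled = core.compile_model(model, device_name="MYRIAD")    # run on the compute stick

output_layer = compiled.output(0)
frame = np.random.rand(1, 3, 300, 300).astype(np.float32)     # stand-in for a camera frame
detections = compiled([frame])[output_layer]
print("Raw detections shape:", detections.shape)
```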
In the near future, AI applications are going to be ubiquitous on devices such as smartphones, automobiles, cameras, and household equipment. In addition to inferencing, we will be able to train ML models at the edge from streaming data by incorporating preprocessing and normalization steps in the data pipeline. The models at the edge will be trained on selected attributes that are of interest to the main problem being solved, for example, trends in pollution level, temperature, and traffic density at selected junctions in a city. Transporting the models, rather than the raw data, from the edge devices to the central servers saves a huge amount of bandwidth and the intermediate storage required to handle the raw data. Predictive models estimate the likelihood of target occurrences from independent variables. These models can be trained at the edge and transferred to a centralized server in the cloud on a daily or weekly basis. Models from multiple edge devices can then be consolidated at the centralized server to make globally valid predictions that take scenarios from multiple locations into account. For the purpose of consolidation, the models received from the edges can be used to reconstruct the target variables against a set of predefined independent variables, and a global model can be trained using the outputs of the edge models as target variables.
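A minimal sketch of the edge-training side of this idea, using scikit-learn's incremental SGDRegressor as a stand-in for a model trained on streaming sensor readings. The feature names and the pickled upload artifact are illustrative assumptions, not a prescribed protocol.

```python
# Sketch: incremental training of a predictive model at the edge from streaming data.
# Features (pollution, temperature, traffic density) and the export step are illustrative.
import pickle
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)

def on_new_batch(batch):
    """Called whenever a new mini-batch of sensor readings arrives at the device."""
    X = np.array([[r["pollution"], r["temperature"], r["traffic"]] for r in batch])
    y = np.array([r["target"] for r in batch])    # e.g. next-hour pollution level
    model.partial_fit(X, y)                       # online / streaming update

def export_model(path="edge_model.pkl"):
    """Serialize the locally trained model for periodic upload to the central server."""
    with open(path, "wb") as f:
        pickle.dump(model, f)
```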
Models at the edge will be developed using different ML frameworks, so to transport them we need a standard format. This is where the Predictive Model Markup Language (PMML) becomes useful. PMML is an XML-based language that enables the definition and sharing of predictive models between applications, allowing models developed in various modeling frameworks such as Spark ML, R, PyTorch, and TensorFlow to be exchanged.
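For instance, here is a rough sketch of exporting a scikit-learn model to PMML, assuming the third-party sklearn2pmml package (which needs a Java runtime) is installed; the pipeline contents are illustrative.

```python
# Sketch: exporting a trained model to PMML for framework-neutral transport.
# Assumes the sklearn2pmml package is installed; the pipeline is illustrative.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_diabetes(return_X_y=True)

pipeline = PMMLPipeline([("regressor", LinearRegression())])
pipeline.fit(X, y)

# Writes an XML document describing the model, readable by any PMML consumer.
sklearn2pmml(pipeline, "edge_model.pmml")
```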
To summarize, machine learning at the edge is going to be the trend in this era of distributed decision making. ML models trained and deployed on edge devices help decentralize the decision-making process by giving more autonomy to the edge devices, enabling them to react instantaneously in situations where quick responses are required. Get ready to co-exist with intelligent edge devices deployed to keep track of your movements and actions.
See you next time…….
Janardhanan PS
Machine Learning Evangelist