Modern safety-critical applications such as automobiles require failure prediction and reliability assessment. As vehicles become more complex, the reliability of a design is easily misjudged, so vehicle system reliability must be prioritized to prevent accidents.
The Automotive Safety Integrity Level (ASIL), defined in ISO 26262, sets safety objectives for vehicle electrical/electronic systems; the levels are tied to a permitted probability of failure per hour. Safety integrity can therefore be improved through better reliability assessment and failure prediction.
Failure prediction and reliability assessment have a long history. Classical techniques such as fault tree analysis, event tree analysis, and Markov models have been used to model system structure and events and to predict failures: fault tree analysis decomposes a top-level failure into logical combinations of lower-level faults, event tree analysis describes the sequence logic from an initiating incident to failure, and Markov models evaluate system behavior based on its current state. These methods work, but they demand human understanding and labor. AI makes automatic prediction and assessment possible without manual logic or system analysis.
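To make the classical baseline concrete, here is a minimal sketch of fault-tree evaluation in Python. The gate structure and the per-hour failure probabilities are invented purely for illustration, not drawn from this paper's data:

```python
# Minimal fault-tree evaluation with a hypothetical braking-system example.
# All probabilities below are illustrative assumptions, not real field data.

def and_gate(*probs):
    """Output event occurs only if all inputs fail (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Output event occurs if any input fails (independent events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical per-hour failure probabilities
primary_circuit = 1e-5
backup_circuit = 1e-4
sensor = 2e-6

# Top event: both hydraulic circuits fail, OR the wheel-speed sensor fails
hydraulics = and_gate(primary_circuit, backup_circuit)
top_event = or_gate(hydraulics, sensor)
print(f"{top_event:.3e}")  # prints 2.001e-06
```

The AND gate shows why redundancy helps: two modest circuit failure rates combine into a far smaller joint probability, and the sensor then dominates the top event.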
The need for predictive failure analytics in the automotive industry
The automotive industry demands reliability and safety as vehicle systems become more complex. Consumers expect flawless vehicle performance, while regulators enforce strict safety standards. In this context, predictive failure analytics is essential: it uses massive amounts of data from vehicle sensors and IoT devices to predict issues and recommend timely interventions.
Automotive engines have moving parts that wear out. Cracks in these parts may cause no symptoms until they reach a critical size, at which point the engine fails; this is especially dangerous at high speeds and loads. Early detection of engine component degradation is therefore crucial to preventing catastrophic events. Predictive failure analysis monitors the engine’s condition to detect incipient component failure, allowing scheduled maintenance and avoiding unplanned downtime and consequential damage.
Problem statement
Modern cars use complex control systems to meet safety, emissions, and reliability standards. Many reliability and safety functions are now implemented in software running on distributed hardware. In automotive applications such as steer-by-wire and brake-by-wire, these control systems replace mechanical or hydraulic linkages.
Replacing established control systems raises safety and reliability concerns. Software makes it possible to improve function without changing hardware, but assessing the reliability of the new implementation and comparing it with the system it replaces is crucial.
Software execution reliability must also be addressed. Software-implemented systems can achieve higher reliability than the systems they replace only if the software itself is robust, so software reliability in safety-critical systems must be measured.
Objectives
This project aims to predict system failures and identify design flaws, using improved predictive analytics to determine system robustness. A new hybrid particle-filtering method predicts system failure patterns and will be validated on a physical system using engine data. The method combines modified Gaussian processes (GPs) with an adaptive neuro-fuzzy inference system (ANFIS): the ANFIS refines the GP failure predictions to improve accuracy, models complex degradation patterns, and computes confidence limits for each prediction. Comprehensive fault detection and prediction are achieved through step-by-step development.
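Since the hybrid method is only specified at a high level here, the following is a minimal sketch of the underlying particle-filtering idea alone, not the GP/ANFIS hybrid itself: particles track a hidden degradation level, are reweighted by noisy sensor readings, and yield a crude remaining-useful-life estimate. The degradation model, noise levels, and failure threshold are all illustrative assumptions.

```python
import math
import random

random.seed(42)

# Hypothetical degradation-model parameters (illustrative assumptions)
N = 1000            # number of particles
GROWTH = 0.05       # assumed mean degradation per time step
PROC_STD = 0.01     # process noise
MEAS_STD = 0.05     # sensor noise
FAIL_AT = 1.0       # degradation level treated as failure

particles = [0.0] * N

def pf_step(particles, z):
    """One predict-update-resample cycle of a basic particle filter."""
    # Predict: propagate particles through the assumed degradation model
    particles = [p + GROWTH + random.gauss(0.0, PROC_STD) for p in particles]
    # Update: weight each particle by the likelihood of measurement z
    weights = [math.exp(-0.5 * ((z - p) / MEAS_STD) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample (stratified) so the weights stay balanced
    cumulative, s = [], 0.0
    for w in weights:
        s += w
        cumulative.append(s)
    out, j = [], 0
    for i in range(N):
        u = (i + random.random()) / N
        while j < N - 1 and cumulative[j] < u:
            j += 1
        out.append(particles[j])
    return out

# Feed ten simulated noisy measurements of a degrading component
for t in range(1, 11):
    z = GROWTH * t + random.gauss(0.0, MEAS_STD)
    particles = pf_step(particles, z)

estimate = sum(particles) / N
rul_steps = (FAIL_AT - estimate) / GROWTH  # crude remaining-useful-life
```

In the full method, the simple linear growth assumption above would be replaced by the GP degradation model, with ANFIS refining the resulting predictions.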
Literature review
Researchers are studying predictive failure analytics (PFA) because it can forecast failures before they occur. Monitoring degradation allows timely maintenance that prevents system failures, and automotive mission-critical and safety-critical systems benefit most. PFA must mature further, however, before it can be deployed in automotive systems: detecting faults early and monitoring widely varying behaviors remains difficult.
Automotive system behavior can also change due to the environment, but this does not necessarily indicate faults. PFA should distinguish normal from abnormal changes. With automotive technology changing, PFA must adapt to new systems.
Predictive failure analytics: An overview of the field
Predictive failure analytics is an emerging field of maintenance that combines condition-based maintenance with reliability prediction. It develops predictive models to assess equipment condition and prevent failure; these models are built using regression analysis and systems modeling [6, 10]. Effectiveness depends on predictive accuracy and on how well the degradation process is modeled. The payoff is optimized maintenance, reduced system downtime, and improved safety and reliability. Failure prevention is essential in safety-critical systems, and cognitive prognostics use diagnostic results to predict failure within a given timeframe.
Applications in the automotive industry
The automotive industry increasingly recognizes predictive maintenance as a cost-effective way to improve vehicle reliability. Despite high overall reliability, 1.3% of vehicles experience a failure each week, a significant figure in an industry with low profit margins. Failures in automotive safety-critical systems endanger lives and equipment; preventing them with predictive maintenance improves system reliability, reduces accidents, and meets rising demands on vehicle reliability.
Current challenges and limitations
Adopting predictive failure analysis raises material, modeling, and measurement issues. On the material side, future models must incorporate materials science and more accurate material data; assessing failure modes and failure times requires material-specific models. Models should also determine which component measurements are needed, and at what level, to keep costs down.
New in-situ or online measurements of component behavior near failure could improve maintenance prediction. This requires non-disruptive sensors, plus sensor-supported models that determine sensor type and placement. Progress in sensor technology will enable the data analysis needed to build a tool for estimating remaining useful life.
Method advancements will shift maintenance strategy from age-based to condition-based. Predictions of a component’s lifespan and of the most cost-effective maintenance age will guide maintenance. Zero maintenance becomes possible when cheap system-wide redundancy makes replacing an individual component unnecessary; the optimal strategy, and the economics of buying cheaper components, should be modeled and assessed.
Methodology
GPS, camera, radar, and ultrasonic sensors supply vehicle data. The vehicle is driven under different conditions to induce intentional failures while GPS data is recorded using Google Maps’ “record” feature. The saved GPS data is then converted from KML to CSV and extracted using an online tool. A flowchart of the methodology follows.
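The KML-to-CSV extraction mentioned above can also be done without an online tool. A sketch using only the Python standard library, assuming a Google-Maps-style KML export (the snippet below is a made-up example of such a file):

```python
import csv
import io
import xml.etree.ElementTree as ET

# Made-up KML fragment standing in for a Google Maps export; real exports
# contain more elements, but <coordinates> is standard KML.
KML = """<?xml version="1.0"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <LineString>
      <coordinates>
        -122.0822,37.4222,0 -122.0850,37.4230,0
      </coordinates>
    </LineString>
  </Placemark>
</kml>"""

root = ET.fromstring(KML)
rows = []
for coords in root.iter("{http://www.opengis.net/kml/2.2}coordinates"):
    for triple in coords.text.split():
        lon, lat, *_ = triple.split(",")  # KML stores longitude first
        rows.append((float(lat), float(lon)))

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["latitude", "longitude"])
writer.writerows(rows)
csv_text = buf.getvalue()
```

Note the longitude-before-latitude ordering inside `<coordinates>`, a common source of swapped coordinates when converting KML by hand.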
We tested by reading selected devices’ heartbeat data through an IoT consumption service under a free cloud subscription, then using Apache Kafka to separate and analyze the streams and send messages back to the IoT devices and mobile apps.
See high-level approach below.
The process began with simple Linear Regression.
Linear regression equation: Y = M0 + M1X1 + e, where X1 is the independent variable, Y is the dependent variable, M0 and M1 are the intercept and slope coefficients, and e is the random error term.
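Fitting this equation by ordinary least squares can be sketched in a few lines of plain Python; the data points below are invented for illustration:

```python
# Toy data: engine hours (X1) vs. a hypothetical wear indicator (Y).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares estimates of slope (M1) and intercept (M0)
b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
      / sum((x - mean_x) ** 2 for x in xs))
b0 = mean_y - b1 * mean_x

def predict(x):
    return b0 + b1 * x
```

For this toy data the fit comes out to a slope near 2 and an intercept near 0, matching the roughly doubling pattern in the samples.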
We record real-time driving video with a Bosch REC-8TL PC-based video capture system and a front-facing camera. Digital video is captured in the compact Microsoft AVI format, and interesting video events are processed with Windows video viewers. A JAI CV-A1 camera with a CV-A1GE frame grabber records analog video. VirtualHub creates a new AVI file and an event file, which are stored separately as frames and events. Both files are timestamped, and camera metadata is stored as XML.
GPS and radar data are stored separately: AVRS obstacle data and GPS data go into a SQL database, and ultrasound data is processed similarly. A PC-based ProC++ application logs RPM, throttle position, speed, fuel level, and coolant temperature from the vehicle ECU to a CSV file. The sensor CSV files, which differ in content, format, and timestamps, are consolidated into MATLAB structure format.
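Because the sensor CSV files carry different timestamps, the streams must be aligned before they can be consolidated. One simple approach, sketched here with made-up samples, is nearest-in-time matching:

```python
import bisect

# Hypothetical (timestamp_s, value) samples; real data comes from the CSVs.
ecu_rpm = [(0.0, 800), (1.0, 1500), (2.0, 2200), (3.0, 2100)]
gps_speed = [(0.3, 5.0), (1.4, 18.0), (2.5, 33.0)]

def align_nearest(base, other):
    """For each base sample, attach the other stream's nearest-in-time value."""
    times = [t for t, _ in other]
    merged = []
    for t, v in base:
        i = bisect.bisect_left(times, t)
        # consider the neighbours on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other)]
        j = min(candidates, key=lambda k: abs(times[k] - t))
        merged.append((t, v, other[j][1]))
    return merged

merged = align_nearest(ecu_rpm, gps_speed)
```

Interpolation or windowed averaging would be more faithful for fast-changing signals, but nearest-in-time matching is often sufficient for slowly varying quantities like coolant temperature.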
Data collection and preparation
Data collection and preparation for predictive modeling are crucial and time-consuming. Done incorrectly, this stage can doom the project to failure or yield an unsatisfactory model of uncertain reliability. It involves both gathering the data and preparing it for modeling.
Data collection: Data comes from many sources, and its quality must be assessed. If data is scarce, the modeling goal may have to change, and redefining collection protocols or improving data quality may require additional resources; in some cases data must be simulated or newly collected.
Extract the location and heartbeat of each app; see the logic below.
import re

# Assumes df is a pandas DataFrame already loaded from the sensor CSV.
# Convert 'Vehicle Location' column to string:
df['Vehicle Location'] = df['Vehicle Location'].astype(str)

# Extract latitude and longitude from the 'Vehicle Location' column:
def extract_coordinates(x_loc, index):
    coords = re.findall(r'-?\d+\.\d+', x_loc)
    if len(coords) >= 2:
        return float(coords[index])
    return None

df['latitude'] = df['Vehicle Location'].apply(lambda x: extract_coordinates(x, 0))
df['longitude'] = df['Vehicle Location'].apply(lambda x: extract_coordinates(x, 1))
df = df.dropna(subset=['latitude', 'longitude'])
After collection, building a predictive model requires data pre-processing: data cleaning, error detection and correction, replacement of missing or abnormal values, and, resources permitting, data reduction. Data consistency is verified and outliers are checked.
Many industry tools can collect real-time data, but we used the open-source Apache Kafka.
Feature engineering and selection
Human insight does not translate directly into machine learning or predictive algorithms; turning domain knowledge into an accurate prediction model takes trial and error.
Data preparation and algorithm performance depend on feature selection. Poorly chosen features can break algorithms, while a good feature set improves model efficiency and accuracy; finding a representative feature set reduces noise and improves predictions. Problem size determines the feature count, with N = 5p considered “large.”
Choosing features involves three steps. First, features are ranked by importance, and the important features that affect model accuracy are kept. Researchers then examine the features’ joint and conditional dependence on the target variable: for categorical data, the weight-of-evidence and information-value framework is employed; for mixed continuous and categorical data, graphical models or comparable algorithms are used.
Finally, relevance and dependence information determines which features to include in the simplified model. An iterative algorithm evaluates and modifies the model as it searches for feature subsets. This is the most computationally and decision-intensive part of feature selection.
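The first step, ranking features by importance, can be illustrated with a simple correlation-based ranking. The feature values and failure labels below are fabricated purely for illustration:

```python
import math

# Toy feature matrix: three candidate features vs. a failure indicator.
features = {
    "coolant_temp":  [90, 95, 102, 110, 118, 125],
    "fuel_level":    [60, 20, 80, 35, 55, 70],
    "vibration_rms": [0.2, 0.3, 0.5, 0.7, 0.9, 1.1],
}
target = [0, 0, 0, 1, 1, 1]  # 1 = failure observed soon after

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Rank features by absolute correlation with the failure indicator
ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], target)),
                reverse=True)
```

In this fabricated data the temperature and vibration features track the failure label while fuel level does not, so fuel level ranks last; real pipelines would follow this ranking with the dependence analysis described above.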
Predictive modeling methods
Machine learning has advanced rapidly in the past decade. GPUs once used for video game imagery are now powerful computational resources for large data sets, and this progress has spurred deep learning. Machine learning is best known for predictive modeling: finding complex patterns to predict outcomes. Predictive maintenance applies this to monitoring and inspection data to predict equipment failure and lifespan.
Predictive modeling can be parametric or nonparametric. Parametric modeling estimates the response variable through an equation whose parameters link the fitted line to the data; substituting input values into the fitted equation yields the best-fit estimate of the response variable. Nonparametric modeling instead searches for the best fit to the data without assuming a functional form.
Performance evaluation metrics
Performance evaluation metrics compare predicted values to original data to show model performance and areas for improvement. The ROC curve, precision, recall, and accuracy are examples. This paper focuses on the model’s net lift to determine if it improves the monitored condition rather than its success or failure.
A binary classification model labels each prediction, based on the event outcome, as a true positive, false positive, true negative, or false negative, and the results are summarized in a confusion matrix. This method works best when false-positive and false-negative costs differ. However, choosing the threshold that separates a positive from a negative result can be difficult.
Another option is to compute the binary classifier’s receiver operating characteristic (ROC) curve and the area under it (AUC), an effective measure that does not depend on a single discrimination threshold.
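Both evaluation options can be computed directly. A small sketch on hypothetical scores and labels, using the rank-based (Mann-Whitney) formulation of AUC:

```python
# Hypothetical model scores and true failure labels (illustrative only)
scores = [0.95, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    1,    0]

def confusion(scores, labels, threshold):
    """Confusion-matrix counts at a given discrimination threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp, fp, fn, tn

def auc(scores, labels):
    """Threshold-free AUC: fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

tp, fp, fn, tn = confusion(scores, labels, 0.5)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
area = auc(scores, labels)
```

The confusion matrix depends on the chosen threshold (here 0.5), whereas the AUC summarizes ranking quality across all thresholds, which is exactly why it suits cases where the threshold is hard to fix in advance.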
Results and discussion
We discuss automotive reliability and safety implications of predictive failure model results. These models are compared to reliability block diagrams and their pros and cons are discussed. Our goal is to improve vehicle safety-critical system reliability. AI methods like neural networks improve ABS failure prediction. Instead of using RBD component data, the neural network is trained and tested on simulation and field data to make more accurate predictions.
Statistical analysis of predictive failure models
Analogical and model-based reasoning are common in AI-based diagnostics; they predict failures using expert-system heuristics and selective evaluation. These methods require detailed system knowledge, which may not be available in practice. A data-driven classification approach can instead model failure and predict events; this worked when vibration data was compared to determine bearing failure. Although helpful, such diagnosis still leaves the timing of the failure event to be predicted. Evaluation with ROC curves improves prediction accuracy.
Comparison with conventional methods
Many studies have compared predictive analytics with traditional methods and found that predictive failure analytics models match them: both propose safety and reliability measures, but predictive models prevent problems while traditional models detect and fix them.
Both traditional and predictive methods use various tools to predict system failure. Here, reliability-centered maintenance (RCM) and system risk priority numbers are used. Both methods use planned maintenance to address high-risk items. Traditional and predictive maintenance methods agree.
Automotive high-cycle machinery and electrical equipment behave like other high-risk systems, where maintenance tasks preserve function and performance. Under a PFA model this may mean replacing an electronic control unit before it fails, which matches PFA-driven maintenance tasks. Even where it does not directly improve reliability and safety, preventing performance loss is easier than repairing a failure identified through FMEA.
Because predictive models are easier to implement, head-to-head comparisons with traditional methods are often unnecessary; it is more useful to show an RCM plan for similar maintenance tasks. If PFA can improve reliability and safety in cases where actions would have been taken anyway based on failure severity and risk priority, PFA methods are making progress.
Implications for reliability and safety enhancement
Data-driven vehicle maintenance scheduling is more accurate than time-based scheduling: because driving conditions affect part lifespan, fixed-interval maintenance may be unnecessary or mistimed. Repairs today are largely reactive, but algorithm-based diagnostics can detect degrading parts, and predictive maintenance is both safer and cheaper. Simulations show predictive maintenance costs less than diagnostics plus preventive maintenance. This research matters for vehicle reliability and for the safety of automated control systems built on automotive components, and a similar method can optimize maintenance scheduling and design changes.
Conclusion
This paper discusses industry progress in predictive failure analytics and AI’s impact on it, covering automotive safety standards and system reliability. It reviews past times-to-failure modeling, categorized by data type such as censored or uncensored, surveys AI techniques and how they could improve existing models, and discusses the risk priority number, which predicts design failures and guides system improvement. From past and present methods, techniques, and models, conclusions are drawn about the future scope of AI in predictive failure analytics, including new model types for the automotive industry and their effect on system reliability and safety. By consolidating past and present methodologies and models, this paper should be of substantial benefit to automotive predictive failure researchers.