Digital Twin Calibration with Big Data
Advancements in numerical algorithms and computational power have significantly elevated the role of digital twins in the design and analysis of various systems. The development of digital twins frequently hinges on physics-based first principles and requires the accurate calibration of numerous parameters. In many situations, however, these parameters cannot be determined precisely through physical laws alone, leading domain experts to rely on educated guesses. These assumptions, while well intended, can create substantial discrepancies between a digital twin's outputs and the actual system's performance. To ensure that digital twins closely replicate their real-world counterparts, this research leverages the power of Big Data, addressing the research challenges posed by the vast scale and complexity of such datasets. By establishing a robust and scalable parameter calibration process, this research integrates data science theories and advanced tools within a data-driven optimization framework, enhancing the practical application of digital twins and contributing valuable insights to the field. This research has been sponsored by an NSF grant (Award number: CMMI-2226348) and Ford Motor Company.
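The calibration idea can be sketched in miniature: treat an uncertain physics parameter as a decision variable and choose the value that minimizes the discrepancy between the twin's predictions and observed data. The exponential-decay model, the true parameter value, and the grid search below are all illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

# Hypothetical first-principles digital twin: an exponential decay
# response whose rate k cannot be pinned down by physics alone.
def twin_output(k, t):
    return np.exp(-k * t)

# Synthetic "field" measurements from the real system (true k = 0.5),
# standing in for the large observational datasets used in practice.
t_obs = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(0)
y_obs = twin_output(0.5, t_obs) + rng.normal(0.0, 0.01, t_obs.size)

# Data-driven calibration: minimize the sum of squared discrepancies
# between the twin's predictions and the observations over a grid of
# candidate k values (a stand-in for a scalable optimizer).
k_grid = np.linspace(0.1, 1.0, 901)
sse = [np.sum((twin_output(k, t_obs) - y_obs) ** 2) for k in k_grid]
k_hat = k_grid[int(np.argmin(sse))]
print(f"calibrated k = {k_hat:.3f}")  # recovers a value near 0.5
```

At scale, the grid search would be replaced by a gradient-based or surrogate-assisted optimizer, but the structure — simulate, compare to data, adjust parameters — is the same.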
Uncertainty Quantification with Stochastic Simulations
The purpose of this research area is to provide computationally efficient methods for evaluating reliability or risk using stochastic simulations. As simulation models become more realistic and their degrees of freedom increase, reliability evaluation remains challenging because each simulation replication is computationally expensive. A rich body of studies addresses how to run simulations efficiently to obtain estimates of interest, but these studies have been limited to cases where all of the random components in the simulation can be controlled (or sampled) or where the input variables are low-dimensional. However, controlling all of the components inside a simulator is difficult, if not impossible, when the simulator models complicated processes in high-dimensional spaces. New importance sampling and stratified sampling methods, which aim to minimize the estimator variance, have been devised and validated using aeroelastic simulators. This research has been sponsored by an NSF grant (Award number: IIS-1741166).
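The variance-reduction idea behind importance sampling can be shown on a textbook rare-event problem (this toy example is not the aeroelastic application): estimating a small tail probability of a standard normal by sampling from a proposal shifted into the failure region and reweighting by the likelihood ratio.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 100_000
threshold = 4.0  # "failure" event: X > 4 for X ~ N(0, 1)

# Crude Monte Carlo: almost no samples land in the rare failure region,
# so the estimator is zero or extremely noisy.
x = rng.normal(0.0, 1.0, n)
p_mc = np.mean(x > threshold)

# Importance sampling: draw from a proposal N(threshold, 1) centered on
# the failure region, then reweight each sample by the likelihood ratio
# target_density / proposal_density (normalizing constants cancel).
z = rng.normal(threshold, 1.0, n)
w = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - threshold) ** 2)
p_is = np.mean((z > threshold) * w)

# Exact tail probability for comparison: 1 - Phi(4) ~ 3.17e-5.
p_exact = 0.5 * (1.0 - erf(threshold / sqrt(2.0)))
print(p_mc, p_is, p_exact)
```

The same sample budget yields a far more accurate estimate under the shifted proposal; designing good proposals when not all random components can be controlled is exactly where this becomes hard.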
Online Learning, Monitoring and Fault Diagnosis
Advances in sensor technology enable the installation of in-situ sensors in modern engineering systems for predicting system performance and/or monitoring system condition. When the baseline input-output relationship changes, existing monitoring and control methodologies become ineffective. We develop a new regularized learning method that lays the foundation for adaptive fault detection with fewer false alarms, i.e., adaptively tracing the expected change in a system. Additionally, we develop a new fault diagnosis method that adapts anomaly detection decision boundaries to the underlying process change, reducing the numbers of false alarms and missed detections and achieving more effective monitoring and detection strategies for system operations and maintenance. This research, sponsored by an NSF grant (Award number: CMMI-1362513), is in collaboration with engineers at the Department of Energy's National Renewable Energy Laboratory (NREL).
Collaborative Learning, Prognostics and Health Management
The objective of this research is to develop a collaborative prognostics and health management (PHM) methodology for monitoring a massive number of units in manufacturing enterprises. Cost-effective enterprise-level PHM requires a full understanding of the degradation patterns. A common practice is to develop a general degradation model by assuming that all units are homogeneous throughout their operational life. Such approaches capture average characteristics but ignore the individual differences among the units and the distinct degradation paths they subsequently follow. The alternative, individualizing PHM operations for each unit, is equally intractable or costly, given the number of units involved at the enterprise level. We develop a new modeling approach to translate the heterogeneous degradation processes of individual units into enterprise-level information that can be used for cost-effective sensing and maintenance decision-making. Specifically, we model the heterogeneous degradation processes by investigating the differences and similarities among individual units; the population characteristics are represented by a manageable number of canonical models forming an enterprise knowledge base, and the individual degradation characteristics are captured by dynamic segmentation that models the resemblance between each unit's degradation pattern and the canonical models. Characterizing both population-level and individual-level degradation allows the model to organize large numbers of working units into a manageable structure and makes the solution of optimal sensing and maintenance problems tractable. This research was supported by an NSF grant (Award number: CMMI-1536924).
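The canonical-model idea can be illustrated with a toy clustering sketch (the two latent degradation modes, the tiny k-means routine, and the nearest-curve matching rule are illustrative assumptions, not the paper's estimation procedure): trajectories from many units are summarized by a few canonical curves, and each unit is linked to the canonical model it most resembles.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 20)

# Simulate 60 units drawn from two latent degradation modes
# (linear wear vs. accelerating quadratic wear) plus sensor noise.
units = np.array([
    (2.0 * t if i % 2 == 0 else 3.0 * t**2) + rng.normal(0.0, 0.05, t.size)
    for i in range(60)
])

# Tiny k-means (k = 2) over whole trajectories: the resulting cluster
# centers play the role of canonical models in the knowledge base.
centers = units[:2].copy()
for _ in range(20):
    d = np.linalg.norm(units[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([units[labels == k].mean(axis=0) for k in range(2)])

# Stand-in for dynamic segmentation: a newly observed unit is assigned
# to the canonical model its trajectory most resembles.
new_unit = 3.0 * t**2 + rng.normal(0.0, 0.05, t.size)
match = int(np.linalg.norm(new_unit - centers, axis=1).argmin())
print("matched canonical model:", match)
```

With 60 units compressed into two canonical curves, downstream sensing and maintenance optimization needs to reason about only a handful of representative degradation profiles rather than every unit individually.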
Assessing the Impact of Extreme Heat Scenarios on Urban Energy Consumption
Reliable and environmentally sustainable consumption of electricity is a major concern for cities experiencing climate extremes. Electricity management in densely populated urban areas during extreme heat and drought events poses unique challenges due to elevated electricity demand, primarily for cooling. Analysis of urban electricity demand under climate-extreme scenarios relies on modeling and data studies of those scenarios. This research analyzes the impact of long-term extreme climate scenarios on modern city-scale power systems and the inherent uncertainties using data of different fidelities, including historical heat data and regional climate models, to develop an ensemble of extreme-heat scenarios that inherently accounts for uncertainties. We then develop an urban electricity consumption model integrated with demand-side management (DSM) methods, with a focus on periods of extreme heat. The figure above illustrates the long-term prediction of daily peak load densities under different DSM adoption rates in a case study of the south-central region of Texas. The results suggest that peak demand is expected to increase because of population growth and escalating temperatures driven by climate change and urbanization, but that it can be reduced by 18% in 2040, relative to no DSM efforts, if 50% of potential households join DSM programs. This research has been supported by an NSF grant (Award number: CMMI-1662553).
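The scenario-ensemble-plus-DSM computation can be sketched as follows. Every number here is hypothetical (load shape, scenario spread, and the 30% shiftable share), chosen only to show the mechanics: sample heat scenarios, apply a DSM peak-shaving rule scaled by the adoption rate, and compare the resulting peak-load distributions.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = np.arange(24)

# Hypothetical city load profile (MW) with an evening cooling peak.
base = 800 + 400 * np.exp(-((hours - 17) ** 2) / 18.0)

# Ensemble of 50 extreme-heat scenarios: heat severity scales demand.
ensemble = base[None, :] * rng.uniform(0.95, 1.15, (50, 1))

def peak_with_dsm(load, adoption, shiftable=0.3):
    """Daily peak per scenario after DSM: participating households
    (fraction `adoption`) shift a share of flexible demand away from
    the peak hour. `shiftable` = 0.3 is an assumed flexibility level."""
    shaved = load.copy()
    peak_hr = shaved.argmax(axis=1)
    rows = np.arange(shaved.shape[0])
    shaved[rows, peak_hr] *= 1.0 - adoption * shiftable
    return shaved.max(axis=1)

p0 = peak_with_dsm(ensemble, 0.0).mean()    # no DSM
p50 = peak_with_dsm(ensemble, 0.5).mean()   # 50% adoption
print(f"mean peak reduction: {100 * (1 - p50 / p0):.1f}%")
```

Note that shaving only the single peak hour lets the peak migrate to an adjacent hour, so the realized reduction is smaller than the raw shaving rate — one reason a full consumption model, rather than a simple scaling rule, is needed.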
Operations and Maintenance (O&M) Optimization under Nonstationary Operating Conditions
Many systems operate under highly variable loading conditions, and maintenance can be constrained by stochastic operating conditions that disallow or disrupt repair activities. Using a partially observed Markov decision process, I devised condition-based maintenance models, including (1) a static model whose optimality structure can be analytically attained in closed form under the assumption that weather conditions remain stationary over time; (2) a dynamic model in which the O&M policy is dynamically adapted to season-dependent weather conditions; and (3) a tractable approximation of dynamic decision-making in a large-scale wind farm. The developed models have been integrated with discrete-event simulations for validation.
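The partially observed setting can be sketched with a two-state example (the transition and observation matrices and the 0.5 belief threshold below are illustrative, not taken from the developed models): the degradation state is hidden, noisy condition signals update a belief by Bayes' rule, and maintenance is triggered when the belief in the degraded state crosses a threshold.

```python
import numpy as np

# Illustrative two-state degradation chain: healthy (0) and degraded (1),
# with degradation absorbing until a repair resets the unit.
P = np.array([[0.9, 0.1],
              [0.0, 1.0]])
# Illustrative sensor model: P(signal | state), columns = {low, high}.
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def belief_update(b, obs):
    """One POMDP belief step: predict with the transition matrix,
    then correct with the observation likelihood (Bayes' rule)."""
    pred = b @ P
    post = pred * O[:, obs]
    return post / post.sum()

b = np.array([1.0, 0.0])          # start certain the unit is healthy
for obs in [0, 1, 1, 1]:          # a run of 'high' degradation signals
    b = belief_update(b, obs)

repair = bool(b[1] > 0.5)         # threshold policy on degraded belief
print(b, repair)
```

In the full models, the decision also depends on stochastic weather: a repair recommended by the belief threshold may be deferred when conditions disallow the maintenance action, which is what the season-dependent dynamic policy captures.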