Records with Subject: Numerical Methods and Statistics
Implementing a Novel Hybrid Maximum Power Point Tracking Technique in DSP via Simulink/MATLAB under Partially Shaded Conditions
Shahrooz Hajighorbani, Mohd Amran Mohd Radzi, Mohd Zainal Abidin Ab Kadir, Suhaidi Shafie, Muhammad Ammirrul Atiqi Mohd Zainuri
November 16, 2018 (v1)
Keywords: digital signal processing (DSP), global maximum power point (GMPP), partial shadow (PS), perturb and observe (P&O), photovoltaic (PV), Simulink/MATLAB
This paper presents a hybrid maximum power point tracking (MPPT) method to detect the global maximum power point (GMPP) under partially shaded conditions (PSCs), which have more complex characteristics with multiple peak power points. The hybrid method can track the GMPP when a partial shadow occurs either before or after acquiring the MPP under uniform conditions. When PS occurs after obtaining the MPP during uniform conditions, the new operating point should be specified by the modified linear function, which reduces the searching zone of the GMPP and has a significant effect on reducing the reaching time of the GMPP. Simultaneously, the possible MPPs are scanned and stored when shifting the operating point to a new reference voltage. Finally, after determining the possible location of the GMPP, the GMPP is obtained using the modified P&O. Conversely, when PS occurs before obtaining the MPP, the referenced MPP should be specified. Thus, after recognizing the possible location of... [more]
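The conventional perturb and observe (P&O) step that the hybrid method modifies can be sketched in a few lines. This is a minimal illustration under a made-up single-peak PV curve (`pv_power` is a toy placeholder, not the authors' partially shaded model), so it shows only the hill-climbing logic, not the GMPP search:

```python
def pv_power(v):
    # Toy single-peak PV curve (illustrative only); maximum near v = 16 V.
    return max(0.0, v * (8.0 - 0.25 * v))

def perturb_and_observe(v0, step=0.5, iterations=200):
    """Climb the P-V curve: keep perturbing in the same direction while power
    rises, reverse the direction when power falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(v0=10.0)
```

On a partially shaded curve with several local peaks, this plain hill-climber can lock onto a local maximum, which is exactly the failure mode the hybrid GMPP method is designed to avoid.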
A Novel Computational Approach for Harmonic Mitigation in PV Systems with Single-Phase Five-Level CHBMI
Rosario Miceli, Giuseppe Schettino, Fabio Viola
September 21, 2018 (v1)
Keywords: multilevel power converter, phase shifted, photovoltaic systems, selective harmonic mitigation, soft switching, voltage cancellation
In this paper, a novel approach to low-order harmonic mitigation in fundamental switching frequency modulation is proposed for high-power photovoltaic (PV) applications, without attempting to solve the cumbersome non-linear transcendental equations. The proposed method mitigates the first five odd harmonics (third, fifth, seventh, ninth, and eleventh), reduces the complexity of the required procedure, and occupies few computational resources in the Field Programmable Gate Array (FPGA)-based control board. The voltage waveform taken into account therefore differs from the traditional voltage waveform. The same concept, known as “voltage cancellation”, used for single-phase cascaded H-bridge inverters, is applied to a single-phase five-level cascaded H-bridge multilevel inverter (CHBMI). Through a very basic methodology, the polynomial equations that drive the control angles were identified for a single-phase five-level CHBMI. The acquired polynomial equations... [more]
Periodic Steady State Assessment of Microgrids with Photovoltaic Generation Using Limit Cycle Extrapolation and Cubic Splines
Marcolino Díaz-Araujo, Aurelio Medina, Rafael Cisneros-Magaña, Amner Ramírez
September 21, 2018 (v1)
Keywords: cubic splines, limit cycle, numerical differentiation method, periodic steady state, photovoltaic energy sources, time domain
This paper proposes a fast and accurate time domain (TD) methodology for the assessment of the dynamic and periodic steady state operation of microgrids with photovoltaic (PV) energy sources. The proposed methodology uses the trapezoidal rule (TR) technique to integrate the set of first-order differential algebraic equations (DAE) generated by the entire electrical system. The Numerical Differentiation (ND) method is used to significantly speed up the convergence of the state variables to the limit cycle with the fewest possible time steps per cycle. After that, the cubic spline interpolation (CSI) algorithm is used to reconstruct the steady state waveform obtained from the ND method and to increase the efficiency of the conventional TR method. This curve-fitting algorithm is used only once, at the end of the procedure. The ND-CSI can be used to assess stability, power quality, dynamic and periodic steady state operation, fault and transient conditions, among... [more]
BiPAD: Binomial Point Process Based Energy-Aware Data Dissemination in Opportunistic D2D Networks
Seho Han, Kisong Lee, Hyun-Ho Choi, Howon Lee
September 21, 2018 (v1)
Keywords: binomial point process (BPP), data dissemination, device-to-device (D2D) communication, k-th furthest distance, relay selection
In opportunistic device-to-device (D2D) networks, the epidemic routing protocol can be used to optimize the message delivery ratio. However, it has the disadvantage that it causes excessive coverage overlaps and wastes energy in message transmissions because devices are more likely to receive duplicates from neighbors. We therefore propose an efficient data dissemination algorithm that can reduce undesired transmission overlap with little performance degradation in the message delivery ratio. The proposed algorithm allows devices further away than the k-th furthest distance from the source device to forward a message to their neighbors. These relay devices are determined by analysis based on a binomial point process (BPP). Using a set of intensive simulations, we present the resulting network performances with respect to the total number of received messages, the forwarding efficiency and the actual number of relays. In particular, we find the optimal number of relays to achieve almost... [more]
Identification of the Heat Equation Parameters for Estimation of a Bare Overhead Conductor’s Temperature by the Differential Evolution Algorithm
Mirza Sarajlić, Jože Pihler, Nermin Sarajlić, Gorazd Štumberger
September 21, 2018 (v1)
Keywords: conductor temperature, measurement, Optimization, overhead transmission line, parameter identification, Simulation
This paper deals with a Differential Evolution (DE)-based method for identification of the heat equation parameters used to estimate a bare overhead conductor's temperature. The parameters are determined in an optimization process using a dynamic model of the conductor; the measured environmental temperature, solar radiation, and wind velocity; the current and temperature measured on the tested overhead conductor; and the DE, which is applied as the optimization tool. The main task of the DE is to minimise the difference between the measured and model-calculated conductor temperatures. The conductor model is relevant and suitable for predicting the conductor temperature, as the agreement between the measured and model-calculated temperatures is excellent: the deviations between the mean and maximum measured and model-calculated conductor temperatures are less than 0.03 °C.
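The DE loop itself is compact. The sketch below is a generic DE/rand/1/bin implementation fitting a first-order heating response to synthetic "measured" temperatures; the model, the bounds, and the control parameters (F = 0.8, CR = 0.9) are illustrative stand-ins for the paper's conductor heat equation and field measurements:

```python
import math
import random

random.seed(0)

def model(params, t):
    # First-order heating response: a simple stand-in for the conductor heat
    # equation (t_inf = steady-state temperature rise, tau = time constant).
    t_inf, tau = params
    return t_inf * (1.0 - math.exp(-t / tau))

# Synthetic "measured" temperatures generated with t_inf = 60, tau = 5.
times = [0.5 * i for i in range(40)]
measured = [model((60.0, 5.0), t) for t in times]

def sse(params):
    # Objective: squared deviation between measured and model-calculated temps.
    return sum((m - model(params, t)) ** 2 for m, t in zip(measured, times))

def differential_evolution(bounds, pop_size=20, gens=300, f=0.8, cr=0.9):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [sse(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < cr:     # crossover: take mutant component
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                else:                        # otherwise keep target component
                    v = pop[i][d]
                trial.append(min(max(v, lo), hi))
            cost_trial = sse(trial)
            if cost_trial < cost[i]:         # greedy one-to-one selection
                pop[i], cost[i] = trial, cost_trial
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best]

t_inf_est, tau_est = differential_evolution([(1.0, 100.0), (0.1, 20.0)])
```

With noise-free synthetic data the optimum of the objective is exact, so the recovered parameters converge to the generating values.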
Parameter Estimation of Electromechanical Oscillation Based on a Constrained EKF with C&I-PSO
Yonghui Sun, Yi Wang, Linquan Bai, Yinlong Hu, Denis Sidorov, Daniil Panasetsky
September 21, 2018 (v1)
Keywords: C&I particle swarm optimization (C&I-PSO), constrained parameter estimation, extended Kalman filter, power systems, ringdown detection
By combining the extended Kalman filter with a newly developed C&I particle swarm optimization algorithm (C&I-PSO), a novel method is proposed for parameter estimation of electromechanical oscillation, in which critical physical constraints on the parameters are taken into account. Based on the extended Kalman filtering algorithm, the constrained parameter estimation problem is formulated via the projection method. Then, by utilizing the penalty function method, the obtained constrained optimization problem can be converted into an equivalent unconstrained optimization problem, and the C&I-PSO algorithm is developed to address this unconstrained problem. Therefore, the parameters of electromechanical oscillation with physical constraints can be estimated successfully and with better performance. Finally, the effectiveness of the obtained results is illustrated on several test systems.
Choosing the Optimal Multi-Point Iterative Method for the Colebrook Flow Friction Equation
Pavel Praks, Dejan Brkić
August 28, 2018 (v1)
Keywords: Colebrook equation, Colebrook–White, explicit approximations, hydraulic resistances, iterative methods, pipes, three-point methods, turbulent flow
The Colebrook equation is given implicitly with respect to the unknown flow friction factor λ: λ = ζ(Re, ε*, λ). It cannot be expressed explicitly in an exact way without simplifications and the use of approximate calculus. A common approach is to solve it through the Newton–Raphson iterative procedure or through the fixed-point iterative procedure. Both require, in some cases, up to seven iterations. On the other hand, numerous more powerful iterative methods, such as three-point and two-point methods, are available. The purpose is to choose the optimal iterative method in order to solve the implicit Colebrook equation for flow friction accurately using the least possible number of iterations. The methods are thoroughly tested, and those which require the least possible number of iterations to reach the accurate solution are identified. The most powerful three-point methods require, in the worst case, only two iterations to reach the final solution. The recommended representativ... [more]
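For reference, the common fixed-point approach mentioned above iterates on the substituted variable x = 1/√λ, for which the Colebrook equation becomes explicit on the right-hand side. The sketch below uses illustrative values of the Reynolds number and relative roughness, not the paper's test cases:

```python
import math

def colebrook(re, eps_rel, tol=1e-12, max_iter=100):
    """Solve 1/sqrt(lam) = -2 log10(eps_rel/3.7 + 2.51/(re*sqrt(lam))) by
    fixed-point iteration on x = 1/sqrt(lam)."""
    x = 4.0                                  # rough initial guess for x
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(eps_rel / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return 1.0 / (x * x)                     # back-substitute to lambda

lam = colebrook(re=1e5, eps_rel=1e-4)        # illustrative Re and roughness
```

Multi-point methods accelerate exactly this inner loop; the plain fixed-point form above typically needs several iterations to reach full double precision.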
A Coupled Thermal-Hydraulic-Mechanical Nonlinear Model for Fault Water Inrush
Weitao Liu, Jiyuan Zhao, Ruiai Nie, Yuben Liu, Yanhui Du
August 28, 2018 (v1)
Keywords: coupled THM model, fault water inrush, nonlinear flow in fractured porous media, numerical model, warning levels of fault water inrush
A coupled thermal-nonlinear hydraulic-mechanical (THM) model for fault water inrush is developed in this paper to study the water-rock-temperature interactions and predict fault water inrush. First, the governing equations of the coupled THM model were established by coupling the particle transport equation, the nonlinear flow equation, the mechanical equation, and the heat transfer equation. Second, by setting different boundary conditions, the mechanical model, the nonlinear hydraulic-mechanical (HM) coupling model, and the thermal-nonlinear hydraulic-mechanical (THM) coupling model were established, respectively. Finally, numerical simulations of these models were performed using COMSOL Multiphysics. Results indicate that the nonlinear water flow equation can describe the nonlinear water flow process in the fractured zone of the fault. The mining stress and the water velocity had a great influence on the temperature of the fault zone. The temperature change of the fault zone can r... [more]
A High-Order Numerical Manifold Method for Darcy Flow in Heterogeneous Porous Media
Lingfeng Zhou, Yuan Wang, Di Feng
August 28, 2018 (v1)
Keywords: Darcy flow, heterogeneity, high-order, numerical manifold method, refraction law
One major challenge in modeling Darcy flow in heterogeneous porous media is simulating the material interfaces accurately. To overcome this difficulty, the refraction law is fully introduced into the numerical manifold method (NMM) as an a posteriori condition. To achieve better accuracy of the Darcy velocity and a continuous nodal velocity, a high-order weight function with a continuous nodal gradient is adopted. NMM is an advanced method with two independent cover systems, which can easily solve both continuous and discontinuous problems in a unified form. Moreover, a regular mathematical mesh, independent of the physical domain, is used in the NMM model; compared to the conforming meshes of other numerical methods, it is more efficient and flexible. A number of numerical examples were simulated with the new NMM model, and the results were compared with those of the original NMM model and the analytical solutions. These comparisons show that the proposed method is accurate, efficient, and robust for modeling... [more]
Underground Risk Index Assessment and Prediction Using a Simplified Hierarchical Fuzzy Logic Model and Kalman Filter
Muhammad Fayaz, Israr Ullah, Do-Hyeun Kim
August 28, 2018 (v1)
Keywords: fuzzy inference system, hierarchical fuzzy logic (HFL), membership functions (MFs), risk assessment, simplified hierarchical fuzzy logic (SHFL), underground risk
Most accidents that occur in underground facilities are not instantaneous; rather, hazards build up gradually behind the scenes and remain invisible due to the inherent structure of these facilities. An efficient inference system is highly desirable for monitoring these facilities to avoid such accidents beforehand. A fuzzy inference system is a significant risk assessment method, but three critical challenges are associated with fuzzy inference-based systems: rule determination, membership function (MF) distribution determination, and rule reduction to deal with the problem of dimensionality. In this paper, a simplified hierarchical fuzzy logic (SHFL) model is suggested to assess underground risk while addressing these challenges. For rule determination, two new rule-designing and determination methods are introduced, namely average rules-based (ARB) and max rules-based (MRB). To determine efficient membership functions (MFs), a module named th... [more]
A Blended Risk Index Modeling and Visualization Based on Hierarchical Fuzzy Logic for Water Supply Pipelines Assessment and Management
Muhammad Fayaz, Shabir Ahmad, Israr Ullah, DoHyeun Kim
July 31, 2018 (v1)
Keywords: blended model, hierarchical fuzzy logic, risk index, visualization, water supply pipelines
Critical infrastructure such as power and water delivery is growing rapidly in the developing world, while developed nations must maintain existing assets. One underground component that is difficult to inspect is the water supply pipeline. Most water line accidents that occur in buildings are due to pipeline damage. To minimize accidental loss, a risk assessment method is needed to continuously assess risk and report any abnormality for preventative maintenance. In this work, a blended hierarchical fuzzy logic model for water supply pipeline risk index assessment is proposed. Four important parameters are inputs to the proposed blended hierarchical fuzzy logic model. The blended hierarchical fuzzy logic model dramatically reduces the number of conditions in the rule base. Rule reduction is important because transparency and interpretability are compromised by an overly large rule set. Further, it is challenging to accurately design a large number of rules because rule... [more]
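The rule-reduction argument can be made concrete with a standard counting exercise: a flat fuzzy system over n inputs with m membership functions per input needs m^n rules, while a cascade of two-input fuzzy units needs only (n − 1)·m² rules. The numbers below assume m = 3 MFs and four inputs; they illustrate the general hierarchical-fuzzy result, not the paper's exact blended rule base:

```python
def flat_rules(n_inputs, n_mfs):
    # Flat FIS: one rule per combination of input MF labels.
    return n_mfs ** n_inputs

def hierarchical_rules(n_inputs, n_mfs):
    # Cascade of two-input fuzzy units: (n_inputs - 1) units, m**2 rules each.
    return (n_inputs - 1) * n_mfs ** 2

flat = flat_rules(4, 3)            # 3**4 = 81 rules
hier = hierarchical_rules(4, 3)    # 3 * 3**2 = 27 rules
```

The gap widens quickly: at six inputs the flat base needs 729 rules against 45 for the cascade, which is why hierarchy matters for transparency and design effort.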
The Impact of Global Sensitivities and Design Measures in Model-Based Optimal Experimental Design
René Schenkendorf, Xiangzhong Xie, Moritz Rehbein, Stephan Scholl, Ulrike Krewer
July 31, 2018 (v1)
Keywords: global parameter sensitivities, optimal design measures, optimal experimental design, point estimate method, robustification
In the field of chemical engineering, mathematical models have been proven to be an indispensable tool for process analysis, process design, and condition monitoring. To gain the most benefit from model-based approaches, the implemented mathematical models have to be based on sound principles, and they need to be calibrated to the process under study with suitable model parameter estimates. Often, however, the model parameters identified from experimental data carry severe uncertainties, leading to incorrect or biased inferences. This applies in particular in the field of pharmaceutical manufacturing, where usually the measurement data are limited in quantity and quality when analyzing novel active pharmaceutical ingredients. Optimally designed experiments, in turn, aim to increase the quality of the gathered data in the most efficient way. Any improvement in data quality results in more precise parameter estimates and more reliable model candidates. The applied methods for parameter sens... [more]
Predicting the Operating States of Grinding Circuits by Use of Recurrence Texture Analysis of Time Series Data
Jason P. Bardinas, Chris Aldrich, Lara F. A. Napier
July 31, 2018 (v1)
Keywords: AlexNet, comminution, grinding, multivariate image analysis, nonlinear time series analysis, textons, texture analysis, VGG16
Grinding circuits typically contribute disproportionately to the overall cost of ore beneficiation and their optimal operation is therefore of critical importance in the cost-effective operation of mineral processing plants. This can be challenging, as these circuits can also exhibit complex, nonlinear behavior that can be difficult to model. In this paper, it is shown that key time series variables of grinding circuits can be recast into sets of descriptor variables that can be used in advanced modelling and control of the mill. Two real-world case studies are considered. In the first, it is shown that the controller states of an autogenous mill can be identified from the load measurements of the mill by using a support vector machine and the abovementioned descriptor variables as predictors. In the second case study, it is shown that power and temperature measurements in a horizontally stirred mill can be used for online estimation of the particle size of the mill product.
RadViz Deluxe: An Attribute-Aware Display for Multivariate Data
Shenghui Cheng, Wei Xu, Klaus Mueller
July 31, 2018 (v1)
Keywords: generalized barycentric interpolation, multi-objective layout, multivariate data, RadViz
Modern data, such as those occurring in chemical engineering, typically entail large collections of samples with numerous dimensional components (or attributes). Visualizing the samples in relation to these components can bring valuable insight. For example, one may be able to see how a certain chemical property is expressed in the samples taken. This could reveal if there are clusters and outliers that have specific distinguishing properties. Current multivariate visualization methods lack the ability to reveal these types of information at a sufficient degree of fidelity since they are not optimized to simultaneously present the relations of the samples as well as the relations of the samples to their attributes. We propose a display that is designed to reveal these multiple relations. Our scheme is based on the concept of RadViz, but enhances the layout with three stages of iterative refinement. These refinements reduce the layout error in terms of three essential relationships—sample to... [more]
How to Generate Economic and Sustainability Reports from Big Data? Qualifications of Process Industry
Esa Hämäläinen, Tommi Inkinen
July 31, 2018 (v1)
Keywords: Big Data, economic efficiency, economic geography, process industry, sustainability
Big Data may introduce new opportunities, and for this reason it has become a mantra among most industries. This paper focuses on examining how to develop cost and sustainability reporting by utilizing Big Data that covers economic values, production volumes, and emission information. We strongly believe that this use supports cleaner production while at the same time offering more information for revenue and profitability development. We argue that Big Data brings company-wide business benefits if data queries and interfaces are built to be interactive, intuitive, and user-friendly. The amount of information related to operations, costs, emissions, and the supply chain would increase enormously if Big Data were used in various manufacturing industries. It is essential to expose the relevant correlations between different attributes and data fields. Proper algorithm design and programming are key to making the most of Big Data. This paper introduces ideas on how to refine raw data into valua... [more]
Numerical Aspects of Data Reconciliation in Industrial Applications
Maurício M. Câmara, Rafael M. Soares, Thiago Feital, Thiago K. Anzai, Fabio C. Diehl, Pedro H. Thompson, José Carlos Pinto
July 31, 2018 (v1)
Keywords: industrial data reconciliation, nonlinear programming, offshore oil production, process monitoring
Data reconciliation is a model-based technique that reduces measurement errors by making use of redundancies in process data. It is largely applied in modern process industries, being commercially available in software tools. Based on industrial applications reported in the literature, we have identified and tested different configuration settings, providing a numerical assessment of the performance of several important aspects involved in the solution of nonlinear steady-state data reconciliation that are generally overlooked. The discussed items comprise the problem formulation, regarding the presence of estimated parameters in the objective function; the solution approach when applying nonlinear programming solvers; methods for estimating objective function gradients; the initial guess; and the optimization algorithm. The study is based on simulations of a rigorous and validated model of a real offshore oil production system. The assessment includes evaluations of solution robustness, constr... [more]
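As background for readers unfamiliar with the technique, the linear special case of data reconciliation has a closed form: minimize the variance-weighted adjustment subject to the balance constraints. The sketch below reconciles three hypothetical flow measurements against a single mass balance f1 = f2 + f3; the nonlinear, parameter-containing formulations assessed in the paper generalize this:

```python
import numpy as np

m = np.array([10.3, 6.1, 3.8])        # raw flow measurements (f1, f2, f3)
var = np.array([0.04, 0.02, 0.02])    # measurement variances
a = np.array([[1.0, -1.0, -1.0]])     # balance constraint: f1 - f2 - f3 = 0

# Variance-weighted least-squares reconciliation, closed form:
#   f* = m - Sigma A^T (A Sigma A^T)^-1 (A m)
sigma = np.diag(var)
adjust = sigma @ a.T @ np.linalg.solve(a @ sigma @ a.T, a @ m)
reconciled = m - adjust.ravel()
```

The less precise measurement (f1, with the largest variance) absorbs the largest share of the adjustment, and the reconciled flows satisfy the balance exactly.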
Data Visualization and Visualization-Based Fault Detection for Chemical Processes
Ray C. Wang, Michael Baldea, Thomas F. Edgar
July 31, 2018 (v1)
Keywords: data visualization, multivariate fault detection, time series data
Over the years, there has been a consistent increase in the amount of data collected by systems and processes in many different industries and fields. Simultaneously, there is a growing push towards revealing and exploiting the information contained therein. The chemical processes industry is one such field, with high-volume and high-dimensional time series data. In this paper, we present a unified overview of the application of recently-developed data visualization concepts to fault detection in the chemical industry. We consider three common types of processes and compare visualization-based fault detection performance to methods used currently.
Big Data Analytics for Smart Manufacturing: Case Studies in Semiconductor Manufacturing
James Moyne, Jimmy Iskandar
July 31, 2018 (v1)
Keywords: anomaly detection, Big Data, predictive analytics, predictive maintenance, process control, semiconductor manufacturing, smart manufacturing
Smart manufacturing (SM) is a term generally applied to the improvement in manufacturing operations through integration of systems, linking of physical and cyber capabilities, and taking advantage of information, including leveraging the big data evolution. SM adoption has been occurring unevenly across industries; thus, there is an opportunity to look to other industries to determine solution and roadmap paths for industries such as biochemistry or biology. The big data evolution affords an opportunity for managing significantly larger amounts of information and acting on it with analytics for improved diagnostics and prognostics. The analytics approaches can be defined in terms of dimensions to understand their requirements and capabilities, and to determine technology gaps. The semiconductor manufacturing industry has been taking advantage of the big data and analytics evolution by improving existing capabilities such as fault detection, and supporting new capabilities such as predict... [more]
Principal Component Analysis of Process Datasets with Missing Values
Kristen A. Severson, Mark C. Molaro, Richard D. Braatz
July 31, 2018 (v1)
Keywords: chemometrics, Machine Learning, missing data, multivariable statistical process control, principal component analysis, process data analytics, process monitoring, Tennessee Eastman problem
Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing-data handling can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based... [more]
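A minimal alternating PCA-imputation scheme (in the spirit of, though not identical to, the algorithms reviewed) repeatedly fits a low-rank model to the filled-in matrix and overwrites only the missing entries with the reconstruction. The rank-1 toy data below are synthetic:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)
x_true = np.outer(t, [1.0, 2.0, 3.0])   # exactly rank-1 synthetic data
x = x_true.copy()
x[3, 1] = np.nan                        # knock out two entries
x[7, 2] = np.nan

def pca_impute(data, n_components=1, iters=200):
    mask = np.isnan(data)
    # Start from column means, then alternate: fit a low-rank model,
    # overwrite only the missing entries with the reconstruction, repeat.
    filled = np.where(mask, np.nanmean(data, axis=0), data)
    for _ in range(iters):
        mu = filled.mean(axis=0)
        u, s, vt = np.linalg.svd(filled - mu, full_matrices=False)
        recon = mu + (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
        filled = np.where(mask, recon, data)   # observed entries stay fixed
    return filled

imputed = pca_impute(x)
```

Because the complete data are exactly rank 1, the fixed point of this alternation recovers the deleted entries; on noisy industrial data the iteration converges to a least-squares low-rank fit instead.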
On the Use of Multivariate Methods for Analysis of Data from Biological Networks
Troy Vargason, Daniel P. Howsmon, Deborah L. McGuinness, Juergen Hahn
July 31, 2018 (v1)
Keywords: autism spectrum disorder, classification, Fisher discriminant analysis, Machine Learning, Multivariate Statistics, one carbon metabolism, probability density function, transsulfuration, urine toxic metals
Data analysis used for biomedical research, particularly analysis involving metabolic or signaling pathways, is often based upon univariate statistical analysis. One common approach is to compute means and standard deviations individually for each variable or to determine where each variable falls between upper and lower bounds. Additionally, p-values are often computed to determine if there are differences between data taken from two groups. However, these approaches ignore that the collected data are often correlated in some form, which may be due to these measurements describing quantities that are connected by biological networks. Multivariate analysis approaches are more appropriate in these scenarios, as they can detect differences in datasets that the traditional univariate approaches may miss. This work presents three case studies that involve data from clinical studies of autism spectrum disorder that illustrate the need for and demonstrate the potential impact of multivariate... [more]
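As a small illustration of why multivariate methods can separate groups that univariate tests miss, the sketch below computes the Fisher discriminant direction w = S_w⁻¹(μ₁ − μ₀) for two synthetic, correlated 2-D classes; the data are made up and unrelated to the clinical studies discussed in the paper:

```python
import numpy as np

np.random.seed(1)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])          # strongly correlated features
x0 = np.random.multivariate_normal([0.0, 0.0], cov, 100)
x1 = np.random.multivariate_normal([1.5, 1.5], cov, 100)

def fisher_direction(a, b):
    # w = Sw^-1 (mu_b - mu_a), with Sw the pooled within-class scatter.
    sw = np.cov(a.T) + np.cov(b.T)
    return np.linalg.solve(sw, b.mean(axis=0) - a.mean(axis=0))

w = fisher_direction(x0, x1)
z0, z1 = x0 @ w, x1 @ w    # 1-D projections of the two classes
```

The discriminant direction accounts for the correlation between the two features, so the projected class means are well separated relative to the projected spread.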
Outlier Detection in Dynamic Systems with Multiple Operating Points and Application to Improve Industrial Flare Monitoring
Shu Xu, Bo Lu, Noel Bell, Mark Nixon
July 31, 2018 (v1)
Keywords: dynamic system, flare monitoring, multiple operating points, outlier detection, PLS-DA, pruned exact linear time (PELT), time series Kalman filter (TSKF)
In chemical industries, process operations usually comprise several discrete operating regions with distributions that drift over time. These characteristics complicate outlier detection in the presence of intrinsic process dynamics. In this article, we consider the problem of detecting univariate outliers in dynamic systems with multiple operating points. A novel method combining the time series Kalman filter (TSKF) with the pruned exact linear time (PELT) approach to detect outliers is proposed. The proposed method outperformed benchmark methods in outlier removal performance using simulated data sets of dynamic systems with mean shifts. The method was also able to maintain the integrity of the original data set after performing outlier removal. In addition, the methodology was tested on industrial flaring data to pre-process the flare data for discriminant analysis. The industrial test case shows that performing outlier removal dramatically improves flare monitoring results thr... [more]
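The innovation test at the heart of Kalman-filter-based outlier flagging can be shown in a few lines. The sketch below uses a plain random-walk state model on a made-up univariate series with one injected spike; it omits the TSKF and PELT change-point machinery described in the paper:

```python
import math

# Made-up univariate series with one injected spike at index 8.
series = [0.0, 0.02, -0.01, 0.01, 0.0, -0.02, 0.01, 0.0, 5.0, 0.01, -0.01, 0.02]

def kalman_outliers(data, q=1e-3, r=0.1, n_sigma=4.0):
    """Flag points whose innovation exceeds n_sigma standard deviations under
    a random-walk state model; flagged points do not update the state."""
    x, p = data[0], 1.0
    flags = [False]
    for z in data[1:]:
        p_pred = p + q                   # predict (random-walk dynamics)
        s = p_pred + r                   # innovation variance
        innov = z - x
        if abs(innov) > n_sigma * math.sqrt(s):
            flags.append(True)           # outlier: skip the measurement update
            p = p_pred
        else:
            flags.append(False)
            k = p_pred / s               # Kalman gain
            x += k * innov
            p = (1.0 - k) * p_pred
    return flags

flags = kalman_outliers(series)
```

Skipping the update on flagged points keeps the spike from corrupting the state estimate, which is the "integrity of the original data set" property the abstract highlights.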
A Study of Explorative Moves during Modifier Adaptation with Quadratic Approximation
Weihua Gao, Reinaldo Hernández, Sebastian Engell
July 30, 2018 (v1)
Keywords: modifier adaptation, quadratic approximation, real-time optimization
Modifier adaptation with quadratic approximation (in short MAWQA) can adapt the operating condition of a process to its economic optimum by combining the use of a theoretical process model and of data collected during process operation. The efficiency of the MAWQA algorithm can be attributed to a well-designed mechanism which ensures the improvement of the economic performance by taking necessary explorative moves. This paper gives a detailed study of the mechanism of performing explorative moves during modifier adaptation with quadratic approximation. The necessity of the explorative moves is theoretically analyzed. Simulation results for the optimization of a hydroformylation process are used to illustrate the efficiency of the MAWQA algorithm over the finite-difference-based modifier adaptation algorithm.