Browse
Recent Submissions
New records verified within the last 240 days
226. LAPSE:2025.0391
A Modelling and Simulation Software for Polymerization with Microscopic Resolution
June 27, 2025 (v1)
Subject: Modelling and Simulations
Keywords: Modular Modelling, Polymerization Process, Software Development
In the domain of process systems engineering, software embedding advanced computational methods is in great demand to enhance kinetic understanding and to facilitate industrial applications. Polymer production, characterized by complex reaction mechanisms, represents a particularly intricate process industry. In this work, a scientific software package is developed for polymerization modelling and simulation with microscopic resolution. From a software architecture perspective, the package is built on a self-developed process modelling platform that allows flexible user customization, with a dedicated design for polymer species that carries microscopic chain-structure information. From an algorithm perspective, the software offers high-performance solution strategies for polymerization process modelling by utilizing advanced computational approaches. A Ziegler-Natta copolymerization is presented to demonstrate the software's capability in capturing the microscopic stru... [more]
227. LAPSE:2025.0390
Development of anomaly detection models independent of noise and missing values using graph Laplacian regularization
June 27, 2025 (v1)
Subject: Uncategorized
Keywords: Anomaly detection, Autoencoder, Graph Laplacian regularization, vinyl acetate monomer process
Anomaly detection is a key technique for maintaining process suitability and safety; however, the quality of process data often deteriorates due to missing or noisy values caused by sensor malfunctions. Such data imperfections may obscure real faults. If anomaly detection models are too sensitive to such abnormal data, they may cause false positives resulting in unnecessary alarms, which may obstruct detection of true process faults. Thus, deterioration of the quality of process data may affect process performance and safety. We propose a new anomaly detection method that utilizes graph Laplacian regularization as a loss function considering data-specific temporal relationships. Graph Laplacian regularization is a mathematical tool used in image processing and denoising to smooth data. We assume that successive process data temporally close to each other have similar values and maintain temporal dependencies among variables. In this study, Laplacian regularization imposes significant p... [more]
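The temporal smoothing idea behind this abstract can be sketched with a chain-graph Laplacian: for a path graph over time steps, the penalty tr(XᵀLX) reduces to the sum of squared differences between successive samples, so smooth trajectories are penalized less than noisy ones. The sketch below is illustrative only; the autoencoder architecture, graph construction, and weighting used in the paper are not reproduced here.

```python
import numpy as np

def chain_laplacian(T):
    """Graph Laplacian L = D - A for a path graph over T time steps."""
    A = np.zeros((T, T))
    for t in range(T - 1):
        A[t, t + 1] = A[t + 1, t] = 1.0
    D = np.diag(A.sum(axis=1))
    return D - A

def laplacian_penalty(X, L):
    """tr(X^T L X): penalizes differences between temporally adjacent rows of X."""
    return np.trace(X.T @ L @ X)

T, d = 100, 3
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(scale=0.01, size=(T, d)), axis=0)  # slow drift
noisy = rng.normal(size=(T, d))                                  # white noise
L = chain_laplacian(T)

# For a path graph the penalty equals the sum of squared successive differences.
assert np.isclose(laplacian_penalty(noisy, L),
                  ((noisy[1:] - noisy[:-1]) ** 2).sum())
assert laplacian_penalty(smooth, L) < laplacian_penalty(noisy, L)
```

In a training loop such a term would be added, with a weighting factor, to the autoencoder's reconstruction loss.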
228. LAPSE:2025.0389
A Superstructure Approach for Optimization of Simulated Moving Bed (SMB) Chromatography
June 27, 2025 (v1)
Subject: Process Design
Keywords: Chromatography, gPROMS, Modelling and Simulations, Optimization, Particle Swarm Optimization, Process Design, Simulated Moving Bed, Superstructure
One of the most successful continuous high-performance liquid chromatography (HPLC) processes for drug manufacturing is the Simulated Moving Bed (SMB). SMB is a multi-column, continuous chromatographic process that can handle much higher throughputs than regular batch chromatographic processes. The process is initially transient but eventually arrives at a cyclic steady state, which makes optimization very challenging, even more so when superstructure optimization is involved. To simplify the optimization problem, many researchers have fixed the SMB structure and optimized only the continuous variables, so their formulations cannot be considered superstructure optimization. In this work, an SMB superstructure that can simultaneously optimize column structure and operation is proposed. The results show that the proposed superstructure is reliable and more efficient than current optimization approaches when the optimal column structure has to be identified.
229. LAPSE:2025.0388
Tune Decomposition Schemes for Large-Scale Mixed-Integer Programs by Bayesian Optimization
June 27, 2025 (v1)
Subject: Optimization
Keywords: Derivative Free Optimization, Machine Learning, Mixed-Integer Programming
Heuristic decomposition schemes like moving horizon schemes are a common approach to approximately solve large-scale mixed-integer programs. The authors propose Bayesian optimization as a methodological approach to systematically tune parameters of decomposition schemes for mixed-integer programs. This paper discusses detailed results of three studies of the Bayesian optimization-based approach using hoist scheduling as a case study: Firstly, two objectives of the tuning problem are examined considering sequences of incumbent solutions found by the Bayesian optimization. Secondly, the Bayesian optimization is applied to a set of test instances of the hoist scheduling problem using four types of acquisition functions; they are compared with respect to the convergence of the tuning problem solutions. Thirdly, the scaling behaviour of the Bayesian optimization is studied with respect to the dimension of the space of tuning parameters. The results of the three studies show that the solutio... [more]
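As a rough illustration of the tuning loop described here, the sketch below runs a minimal Bayesian optimization (RBF-kernel Gaussian process plus expected improvement) over a single tuning parameter of a toy objective. The objective, kernel, and acquisition settings are stand-ins, not the paper's hoist-scheduling setup.

```python
import numpy as np
from math import erf, sqrt, pi

def gp_posterior(Xtr, ytr, Xte, ls=0.2, noise=1e-6):
    """GP posterior mean/std with an RBF kernel (unit signal variance)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = k(Xtr, Xte)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ytr
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI for minimization: reward expected decrease below the incumbent."""
    z = (best - mu) / sd
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * cdf + sd * pdf

def tune(objective, n_init=4, n_iter=12, seed=0):
    """Minimal BO loop over one tuning parameter in [0, 1]."""
    rng = np.random.default_rng(seed)
    X = list(rng.uniform(0, 1, n_init))
    y = [objective(x) for x in X]
    grid = np.linspace(0, 1, 201)
    for _ in range(n_iter):
        mu, sd = gp_posterior(np.array(X), np.array(y), grid)
        x_next = grid[np.argmax(expected_improvement(mu, sd, min(y)))]
        X.append(x_next)
        y.append(objective(x_next))
    return X[int(np.argmin(y))], min(y)

# Toy stand-in for "cost of the decomposition scheme at overlap parameter p"
obj = lambda p: (p - 0.62) ** 2
p_best, f_best = tune(obj)
assert 0.0 <= p_best <= 1.0 and f_best < 0.01
```

In the paper's setting, `objective` would instead invoke the decomposition scheme on a hoist-scheduling instance and return a solution-quality metric.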
230. LAPSE:2025.0387
Applying Quality by Design to Digital Twin Supported Scale-Up of Methyl Acetate Synthesis
June 27, 2025 (v1)
Subject: Modelling and Simulations
Keywords: digital twin, quality by design, scale-up
A new method for efficient process development is the direct scale-up from laboratory scale to production scale using mechanistic models [1]. The integration of the Quality by Design approach into this scale-up concept may prove beneficial for a variety of product- and process-related aspects. This paper presents a workflow for the digital twin-supported direct scale-up of processes and process plants, which integrates elements of the Quality by Design methodology. To illustrate the concept, the workflow is implemented for the example of an esterification reaction in a stirred tank reactor. Finally, benefits of the implementation of Quality by Design in the direct scale-up using digital twins regarding the product quality and the process development are discussed as well as its limitations.
231. LAPSE:2025.0386
A Novel Objective Reduction Algorithm for Nonlinear Many-Objective Optimization Problems
June 27, 2025 (v1)
Subject: Optimization
Keywords: Multi-Objective Optimization, Nonlinear Optimization, Outer Approximation
Sustainability is increasingly recognized as a critical global issue. Multi-objective optimization is an important approach for sustainable decision-making, but problems with four or more objectives are hard to interpret due to their high dimensionality. In our group's previous work, an algorithm capable of systematically reducing objective dimensionality for (mixed-integer) linear problems was developed. In this work, we extend the algorithm to tackle nonlinear many-objective problems. An outer approximation-like method is employed to systematically replace nonlinear objectives and constraints. After converting the original nonlinear problem into a linear one, the previous linear algorithm can be applied to reduce the dimensionality. The benchmark DTLZ5(I, M) problem set is used to evaluate the effectiveness of this approach. Our algorithm demonstrates the ability to identify appropriate objective groupings on benchmark problems of up to 20 objectives when algorithm hyperparameters are app... [more]
232. LAPSE:2025.0385
Flexibility Assessment via Affine Bounds Evaluation
June 27, 2025 (v1)
Subject: Process Design
Keywords: Flexibility, Multiparametric Programming, Process Design
Process design deals with the problem of finding the best process set-up subject to a set of constraints defining the design space (DSp). This selection is guided primarily by economic considerations. Flexibility may also play an important role in process design, since it embodies how far the candidate optimal designs are from the design space's bounds; designs too close to these bounds may in some cases lead to off-spec products. This work proposes a novel approach for flexibility assessment. In design problems where the design space is constrained by a set of affine bounds, flexibility may be expressed either as the minimum or the maximum distance with respect to the feasible (design) space bounds. For any point in the DSp, the minimum distance provides a good indicator of the minimum flexibility, as it corresponds to the direction with the highest risk of violating the constraints. An analogous conclusion can be drawn between the maximum distance and the maximum flexibility. These distances can be computed exactly v... [more]
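For a design space given by affine bounds A x ≤ b, the Euclidean distance from an interior point to the i-th bounding hyperplane is (bᵢ − aᵢᵀx)/‖aᵢ‖, so the minimum- and maximum-distance indicators described above can be evaluated directly. A minimal sketch (the box constraints are illustrative):

```python
import numpy as np

def bound_distances(A, b, x):
    """Euclidean distance from x to each hyperplane a_i^T x = b_i,
    assuming x lies inside the design space {x : A x <= b}."""
    slack = b - A @ x  # non-negative for interior points
    return slack / np.linalg.norm(A, axis=1)

# Unit box 0 <= x1, x2 <= 1 written as A x <= b
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
d = bound_distances(A, b, np.array([0.7, 0.5]))

assert np.isclose(d.min(), 0.3)  # nearest bound (x1 <= 1): lowest flexibility
assert np.isclose(d.max(), 0.7)  # farthest bound (x1 >= 0)
```

The minimum entry of `d` plays the role of the worst-case flexibility indicator for the candidate design.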
233. LAPSE:2025.0384
A combined approach to optimization of soft sensor architecture and physical sensor configuration
June 27, 2025 (v1)
Subject: Process Control
Keywords: Digraph, Sensor Configuration, Soft Sensor, Uncertainty Analysis
In the chemical industry, soft sensors are deployed to reduce equipment cost or to allow continuous measurement of process variables. Soft sensors monitor parameters not via physical sensors but infer them from other process variables. On the one hand, the precision of a soft sensor is affected by its architecture, i.e., the choice of parametric equations such as balances and thermodynamic or kinetic dependencies in the soft sensor model. On the other hand, uncertainty inherent to the input variable values propagates through the soft sensor model and impacts the output uncertainty; the latter is affected by the configuration of physical sensors in the chemical process. This paper proposes an approach for the combined optimization of soft sensor architecture and physical sensor configuration. For this purpose, the method combines an automatic extraction of all possible soft sensor architectures from a set of system equations with an uncertainty-based evaluation of sensor configuratio... [more]
234. LAPSE:2025.0383
An Efficient Convex Training Algorithm for Artificial Neural Networks by Utilizing Piecewise Linear Approximations and Semi-Continuous Formulations
June 27, 2025 (v1)
Subject: Optimization
Keywords: Artificial Neural Network, computational complexity, convex formulation, mixed-integer linear programming, piecewise linear functions
Artificial neural networks are widely used as data-driven models for capturing complex, nonlinear systems. However, suboptimal training remains a significant challenge due to the nonlinearity of activation functions and the reliance on local solvers, which makes achieving global solutions difficult. One solution involves reformulating activation functions as piecewise linear approximations to convexify the problem, though this approach often requires substantial CPU time. This study demonstrates that a tailored branch-and-bound algorithm can effectively address these challenges by efficiently navigating the solution space using linear relaxations. The proposed method achieves minimal training error, offering a robust solution to the training bottleneck. Unlike traditional mixed-integer programming approaches, which often struggle to converge within reasonable CPU times, the SOSX algorithm shows superior scalability, with computational demand growing almost linearly rather than exponent... [more]
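The piecewise-linear convexification mentioned above starts from a PWL interpolant of the activation function, whose error shrinks quadratically with the segment width. A minimal sketch using a tanh activation (the segment count and interval are illustrative; the MILP/SOS embedding itself is not shown):

```python
import numpy as np

def pwl_approx(f, lo, hi, n_seg):
    """Piecewise-linear interpolant of f on [lo, hi] with n_seg segments."""
    xs = np.linspace(lo, hi, n_seg + 1)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

tanh_pwl = pwl_approx(np.tanh, -4.0, 4.0, 32)
grid = np.linspace(-4.0, 4.0, 2001)
max_err = np.abs(np.tanh(grid) - tanh_pwl(grid)).max()

assert max_err < 0.01          # refine n_seg to tighten the relaxation
assert np.isclose(tanh_pwl(0.0), 0.0)  # breakpoints are reproduced exactly
```

Each segment of such an interpolant can then be selected by binary or SOS2 variables inside a mixed-integer formulation of the training problem.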
235. LAPSE:2025.0382
Sensitivity Analysis of Key Parameters in LES-DEM Simulations of Fluidized Bed Systems Using Generalized Polynomial Chaos
June 27, 2025 (v1)
Subject: Modelling and Simulations
Keywords: CFD-DEM, gas-solid fluidization, global sensitivity, gPC, linear spring-dashpot model, spring stiffness
In applications involving fine powders and small particles, the accuracy of numerical simulations, particularly those employing the Discrete Element Method (DEM) to predict granular material behavior, can be significantly affected by uncertainties in critical parameters. These uncertainties include the coefficients of restitution for particle-particle and particle-wall collisions, viscous damping coefficients, and other related factors. In this study, we use stochastic expansions based on point-collocation non-intrusive polynomial chaos to perform a sensitivity analysis of a fluidized bed system. We treat four key parameters as random variables, each assigned a specific probability distribution over a designated range. This uncertainty is propagated through high-fidelity Large Eddy Simulation (LES)-DEM simulations to statistically quantify its impact on the results. To effectively explore the four-dimensional parameter space, we analyze a comprehensive database comprising over 1,200 si... [more]
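The point-collocation polynomial chaos workflow can be sketched for a toy two-parameter case: sample collocation points, fit Legendre-basis coefficients by least squares, and read first-order Sobol indices off the coefficient variances. Everything below (the response function, polynomial degree, and sample count) is illustrative, not the LES-DEM setup.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def pce_sobol(f, deg=2, n_pts=60, seed=1):
    """Point-collocation PCE on two uniform inputs in [-1, 1];
    returns first-order Sobol indices from the coefficient groups."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(-1, 1, n_pts), rng.uniform(-1, 1, n_pts)
    idx = [(n, m) for n in range(deg + 1) for m in range(deg + 1 - n)]

    def P(k, t):  # Legendre polynomial P_k evaluated at t
        c = np.zeros(k + 1)
        c[k] = 1.0
        return legval(t, c)

    Phi = np.column_stack([P(n, x) * P(m, y) for n, m in idx])
    coef, *_ = np.linalg.lstsq(Phi, f(x, y), rcond=None)
    # E[(P_n P_m)^2] under the uniform measure on [-1, 1]^2
    norms = np.array([1.0 / ((2 * n + 1) * (2 * m + 1)) for n, m in idx])
    var_terms = coef ** 2 * norms
    total = sum(v for (n, m), v in zip(idx, var_terms) if (n, m) != (0, 0))
    Sx = sum(v for (n, m), v in zip(idx, var_terms) if n > 0 and m == 0) / total
    Sy = sum(v for (n, m), v in zip(idx, var_terms) if m > 0 and n == 0) / total
    return Sx, Sy

# Stand-in response whose variance is dominated by the first input
Sx, Sy = pce_sobol(lambda x, y: x + 0.3 * y ** 2)
assert Sx > 0.9 and Sy < 0.1
```

For the toy response the expansion is exact, so the indices follow analytically (Sx ≈ 0.98); the same machinery applies when `f` is replaced by expensive simulation output at the collocation points.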
236. LAPSE:2025.0381
Unveiling Probability Histograms from Random Signals using a Variable-Order Quadrature Method of Moments
June 27, 2025 (v1)
Subject: Modelling and Simulations
Keywords: Modelling, Population Balances, Probability histogram, Random signals, Simulation, VOQMOM
Random signals are crucial in chemical and process engineering, where industrial plants generate big data that can be used for process understanding and decision-making. This makes it necessary to unveil the underlying probability histograms from these signals with a finite number of bins. However, the search for the optimal number of bins is still based on empirical optimisation and general rules of thumb. In this work, we introduce an alternative and general method to unveil probability histograms. Our method employs a novel variable-order QMOM, which adapts automatically based on the relevance of the information contained in the random data. The number of bins used to recover the underlying histogram is found to be proportional to the information entropy, where a search algorithm is developed that generates bins and assigns probabilities to them. The algorithm terminates when no more significant information is available for assignment to the newly created nodes, up to a user-defined... [more]
237. LAPSE:2025.0380
Linear and non-linear convolutional approaches and XAI for spectral data: classification of waste lubricant oils
June 27, 2025 (v1)
Subject: Numerical Methods and Statistics
Keywords: Classification, CNN, Multiblock analysis, PLS, Waste lubricating oil
Waste lubricant oil (WLO) is a hazardous residue that requires proper management, with regeneration being the preferred approach. However, regeneration is only viable if the WLO does not coagulate in the equipment; otherwise, the process needs to be shut down for cleaning and maintenance. To mitigate this risk, a laboratory test is currently used to assess the WLO coagulation potential before it enters the process. This laboratory test is, however, time-consuming, presents several safety risks, and is subjective. To expedite decision-making, process analytical technology (PAT) and machine learning were used to develop a model to classify WLOs according to their coagulation potential. Three approaches were followed, spanning linear and non-linear models. The first approach (benchmark) uses partial least squares for discriminant analysis (PLS-DA) and interval PLS combined with standard chemometric preprocessing techniques (27 model variants). The second approach uses wavelet transforms to... [more]
238. LAPSE:2025.0379
Handling discrete decisions in bilevel optimization via neural network embeddings
June 27, 2025 (v1)
Subject: Planning & Scheduling
Keywords: Bilevel Optimization, MILP reformulation, Neural Network Embeddings, Supply Chain Planning, Surrogate Modelling
Bilevel optimization is an active area of research within the operations research community due to its ability to capture the interdependencies between two levels of decisions. This study introduces a metamodeling approach for addressing mixed-integer bilevel optimization problems, exploiting the approximation capabilities of neural networks. The proposed methodology employs neural network embeddings to approximate the follower's optimal response, bypassing the inner optimization problem by parametrizing it with the leader's continuous decisions. The use of Rectified Linear Unit (ReLU) activations allows the forward pass of the neural network to be represented as a set of mixed-integer linear constraints, so the bilevel structure is reduced to a single-level optimization model. A case study based on a two-echelon supply chain demonstrates the effectiveness of the approach, with solutions comparable to traditional bilevel optimization methods. The results suggest that neural netw... [more]
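The standard big-M encoding of a ReLU layer introduces, per neuron, a binary a and non-negative slacks z, s with z − s = Wx + b, z ≤ Ma, s ≤ M(1 − a), so that z equals the ReLU output. The sketch below checks that an ordinary forward pass yields a feasible witness for these constraints; no MILP solver is involved, and the shapes and M value are illustrative.

```python
import numpy as np

def relu_bigM_witness(W, b, x, M=1e3):
    """Forward pass of one ReLU layer plus the auxiliary variables (z, s, a)
    that would encode it as mixed-integer linear constraints:
      z - s = W x + b,  z >= 0,  s >= 0,  z <= M a,  s <= M (1 - a),  a binary."""
    pre = W @ x + b
    z = np.maximum(pre, 0.0)          # ReLU output
    s = np.maximum(-pre, 0.0)         # slack for the negative part
    a = (pre > 0).astype(float)       # indicator: neuron active?
    feasible = (np.allclose(z - s, pre)
                and np.all(z <= M * a + 1e-9)
                and np.all(s <= M * (1 - a) + 1e-9))
    return z, feasible

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)
z, feasible = relu_bigM_witness(W, b, x)

assert feasible
assert np.allclose(z, np.maximum(W @ x + b, 0.0))
```

Stacking these constraints layer by layer is what lets the trained network's forward pass be embedded into a single-level MILP.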
239. LAPSE:2025.0378
Comparative and Statistical Study on Aspen Plus Interfaces Used for Stochastic Optimization
June 27, 2025 (v1)
Subject: Numerical Methods and Statistics
New research on complex intensified distillation schemes has popularized the use of several commercial process simulation software packages. The interfaces between process simulation and optimization-oriented software have enabled the use of rigorous and robust models. This type of optimization is referred to in the literature as "Black Box Optimization", since successive evaluations exploit the information from the simulator without altering the model that represents the given process. Among process simulation software, Aspen Plus® has become popular due to its rigorous calculations, model customization, and reliable results. This work proposes a comparative study of the Aspen Plus software with Microsoft Excel VBA, Python, and MATLAB interfaces. Five distillation schemes were analyzed: a conventional column, a reactive column, an extractive column, a column with side rectifier, and a Petlyuk column. The optimization of the TAC (Total Annual Cost) was carried out by a modified Simulated Annealing A... [more]
240. LAPSE:2025.0377
Enhanced Reinforcement Learning-driven Process Design via Quantum Machine Learning
June 27, 2025 (v1)
Subject: Process Design
Keywords: Process Design, Process Synthesis, Quantum Computing, Reinforcement Learning
In this work, we introduce a quantum-enhanced reinforcement learning (RL) framework for process design synthesis. RL-driven methods for generating process designs have gained momentum due to their ability to intelligently identify optimal configurations without requiring pre-defined superstructures or flowsheet configurations. This eliminates reliance on prior expert knowledge, offering a comprehensive and robust design strategy. However, navigating the vast combinatorial design space poses computational challenges. To address this, a novel approach integrating RL with quantum machine learning (QML) is proposed. QML leverages theoretical advantages over classical methods to accelerate searches in large spaces. Built upon our prior work, the approach begins with a maximum set of available unit operations, represented in a flowsheet structure using an input-output stream matrix as RL observations. A Deep Q-Network (DQN) algorithm trains a parameterized quantum circuit (PQC) in place of a... [more]
241. LAPSE:2025.0376
Differentiation between Process and Equipment Drifts in Chemical Plants
June 27, 2025 (v1)
Subject: Process Monitoring
Keywords: Coupled Drifts, Fault Detection, Modelling, Namur Open Architecture, Process Monitoring
The performance of chemical plants is inevitably related to knowledge about the current state of the system. However, both process and equipment drifts may distort state information. Deviations of process values caused by equipment malfunction may be misinterpreted as process drifts and vice versa. Determining the cause of the drift is further complicated by the fact that equipment drifts typically occur in combination with process drifts. This paper presents a method that uses available additional equipment data to reliably detect and decouple combined equipment and process drifts in chemical plants by combining statistical methods with model-based approaches. The utility of additional equipment information is assessed based on its effect on the decoupling of process and equipment drifts. First results demonstrate the feasibility of the approach in a real plant.
242. LAPSE:2025.0375
Soft-Sensor-Enhanced Monitoring of an Alkylation Unit via Multi-Fidelity Model Correction
June 27, 2025 (v1)
Subject: Process Monitoring
Keywords: Industry 4.0, Information Management, Machine Learning, Modelling, Process Monitoring
Industrial process monitoring can benefit from utilizing historical data, providing insights for decision-making and operational efficiency. This study develops a soft-sensor-based approach leveraging multi-fidelity modeling to correct discrepancies between online sensors and laboratory analyses. A Gaussian process-based strategy is used to predict deviations between high-frequency low-fidelity sensor data and less frequent high-fidelity laboratory measurements. By exploring static and dynamic modeling frameworks, we assess their suitability for capturing process dynamics and addressing time-dependent variability. The multi-fidelity soft sensor noticeably improves predictive accuracy, outperforming high-fidelity and low-fidelity methods. This approach demonstrates applicability across various industrial settings where integrating diverse data sources enhances real-time process control and monitoring, reducing reliance on costly laboratory sampling.
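The residual-correction idea can be sketched as a Gaussian process fitted to the lab-minus-sensor deviations at the sparse laboratory sampling times and evaluated at the dense sensor time stamps. The data, kernel, and length scale below are synthetic stand-ins for the alkylation-unit setting.

```python
import numpy as np

def rbf(a, b, ls):
    """RBF kernel matrix between 1-D time-stamp arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def residual_gp(t_lab, resid, t_query, ls=2.0, noise=1e-4):
    """GP regression on residuals (lab - sensor) observed at lab times,
    evaluated at the sensor's denser time stamps."""
    K = rbf(t_lab, t_lab, ls) + noise * np.eye(len(t_lab))
    return rbf(t_query, t_lab, ls) @ np.linalg.solve(K, resid)

# Synthetic example: the online sensor carries a slow drift the lab does not.
t = np.linspace(0.0, 20.0, 200)           # dense low-fidelity sensor times
true = np.sin(0.4 * t)
sensor = true + 0.3 * np.sin(0.1 * t)     # low fidelity: drifting bias
t_lab = t[::20]                           # sparse high-fidelity lab samples
lab = np.sin(0.4 * t_lab)

corrected = sensor + residual_gp(t_lab, lab - sensor[::20], t)
assert np.abs(corrected - true).mean() < np.abs(sensor - true).mean()
```

The corrected signal keeps the sensor's sampling rate while inheriting the lab measurements' accuracy wherever the residual varies smoothly in time.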
243. LAPSE:2025.0374
A Stochastic Techno-Economic Assessment of Emerging Artificial Photosynthetic Bio-Electrochemical Systems for CO2 Conversion
June 27, 2025 (v1)
Subject: Process Design
Keywords: Artificial Photosynthesis, Carbon Conversion, Synthetic Biology, Techno Economic Assessment
Artificial Photosynthetic Bio-Electrochemical Systems (AP-BES) offer a promising approach for converting CO2 to valuable bioproducts, addressing carbon mitigation and sustainable production. This study employs a stochastic techno-economic assessment (TEA) to estimate the viability of rhodopsin-driven AP-BES, from carbon capture to product purification. Unlike traditional deterministic TEAs, this approach uses Monte Carlo simulations to model uncertainties in key techno-economic parameters, including energy consumption, CO2 conversion efficiency, and bioproduct market prices. The analysis generates probability distributions for economic metrics such as Operational Expenditure (OPEX), Capital Expenditure (CAPEX), and profit. Enhancements in light-harvesting efficiency and advancements in reactor materials were predicted to reduce the payback period to just one year, thereby making large-scale deployment a feasible option.
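A stochastic TEA of this kind can be sketched as plain Monte Carlo propagation of parameter distributions into an economic metric. All distributions and the profit model below are invented placeholders, not the paper's AP-BES figures.

```python
import numpy as np

def mc_tea(n=100_000, seed=42):
    """Monte Carlo sketch of a stochastic TEA: propagate assumed parameter
    distributions (all values illustrative) into a profit distribution."""
    rng = np.random.default_rng(seed)
    energy_cost = rng.normal(50.0, 5.0, n)            # $/t product (toy)
    conversion = rng.uniform(0.6, 0.9, n)             # CO2-to-product efficiency
    price = rng.lognormal(np.log(120.0), 0.15, n)     # $/t market price (toy)
    return conversion * price - energy_cost           # $/t profit, toy model

profit = mc_tea()
p5, p50, p95 = np.percentile(profit, [5, 50, 95])

assert p5 < p50 < p95            # a distribution, not a point estimate
assert (profit > 0).mean() > 0.5 # estimated probability of profitability
```

The percentile spread (p5, p95) is what distinguishes this from a deterministic TEA, which would report only a single profit number.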
244. LAPSE:2025.0373
Redefining Stage Efficiency in Liquid-Liquid Extraction: Development and Application of a Modified Murphree Efficiency
June 27, 2025 (v1)
Subject: Modelling and Simulations
Keywords: Aspen Custom Modeler, Extraction column, Liquid-liquid extraction, Murphree efficiency, Process simulation
Liquid-liquid extraction stages often deviate from equilibrium due to factors like insufficient mixing, making accurate efficiency modeling essential for process simulation. This study addresses the limitations of Aspen Plus (AP), which distorts equilibrium calculations by directly multiplying efficiency with the distribution coefficient. A modified Murphree efficiency definition, more suitable for liquid-liquid systems but absent in AP's Extraction Column module, was implemented using Aspen Custom Modeler (ACM). The custom multi-stage extraction column model replaces mole fractions with mole flows to better represent mass transfer and phase interactions, enhancing simulation accuracy when imported into AP. Two test cases validated the custom model's effectiveness. Test Case I, utilizing the UNIQ-RK thermodynamic model, compared the ACM model to AP's built-in module, revealing that the ACM model provides a more realistic representation of extraction processes under varying stage effici... [more]
245. LAPSE:2025.0372
Kolmogorov Arnold Networks (KANs) as surrogate models for global process optimization
June 27, 2025 (v1)
Subject: Optimization
Keywords: Deterministic Global Optimization, Kolmogorov Arnold Networks, Mixed-Integer Nonlinear Programming, Surrogate modeling
Surrogate models are widely used to improve the tractability of process optimization. Some commonly used surrogate models are obtained via machine learning such as multi-layer perceptrons (MLPs), Gaussian processes, and decision trees. Recently, a new class of machine learning models named Kolmogorov Arnold Networks (KANs) have been proposed. Broadly, KANs are similar to MLPs, yet they are based on the Kolmogorov representation theorem instead of the universal approximation theorem for the MLPs. Compared to MLPs, it was reported that KANs require significantly fewer parameters to approximate a given input/output relationship. One of the bottlenecks preventing the embedding of MLPs into optimization formulations is that MLPs with a high number of parameters (larger width or depth) are more challenging to globally optimize. We investigate whether the parameter efficiency of KANs relative to MLPs can be translated to computational benefits when embedding them into optimization problems an... [more]
246. LAPSE:2025.0371
pyDEXPI: A Python framework for piping and instrumentation diagrams (P&IDs) using the DEXPI information model
June 27, 2025 (v1)
Subject: Energy Systems
Keywords: Data model, DEXPI, FAIR data, Open-source, Piping and instrumentation diagram, Software toolbox
Developing piping and instrumentation diagrams (P&IDs) is a fundamental task in process engineering. For designing complex installations, such as petroleum plants, multiple departments across several companies are involved in refining and updating these diagrams, creating significant challenges in data exchange between different software platforms from various vendors. The primary challenge in this context is interoperability, which refers to the seamless exchange and interpretation of information to collectively pursue shared objectives. To enhance the P&ID creation process, a unified, machine-readable data format for P&ID data is essential. A promising candidate is the Data Exchange in the Process Industry (DEXPI) standard. We present pyDEXPI, an open-source implementation of the DEXPI format for P&IDs in Python. pyDEXPI makes P&ID data more efficient to handle, more flexible, and more interoperable. We envision that, with further development, pyDEXPI will act as a central scientific... [more]
247. LAPSE:2025.0369
A Benchmark Simulation Model of Ammonia Production: Enabling Safe Innovation in the Emerging Renewable Hydrogen Economy
June 27, 2025 (v1)
Subject: Process Design
Keywords: Process Safety, Renewable Ammonia Production, Simulation Benchmark Model
The green transition accelerates innovations and developments targeting the integration of green hydrogen into the chemical industry. However, all new hydrogen pathways and process designs must be tested for operability and safety. A major challenge is the typically fluctuating character of green hydrogen supply, which contrasts with the steady-state operation of most conventional chemical processes. Therefore, to adequately assess control and monitoring techniques, a benchmark model tailored to the relevant aspects of the hydrogen economy is required. We introduce a benchmark model based on the production of green ammonia using the Haber-Bosch process that remains operable when coupled to a fluctuating hydrogen supply from water electrolysis. The main section of the process model is an adiabatic, indirectly cooled reactor system that provides realistic modeling of industrial applications. Like the ammonia reactor, all process units and the underlying control structure are precisely dimensioned to... [more]
248. LAPSE:2025.0368
Phenomena-Based Graph Representations and Applications to Chemical Process Simulation
June 27, 2025 (v1)
Subject: Modelling and Simulations
Keywords: Distillation, Flowsheet Convergence, Graph-Theory, Liquid Extraction, Process Simulation
Rapid and robust simulation of chemical production processes is critical to address core scientific questions related to process design, optimization, and sustainability. Efficiently solving a chemical process model, however, remains a challenge due to its highly coupled and nonlinear nature. Graph abstractions of the underlying physical phenomena within unit operations may help identify potential avenues to systematically reformulate the network of equations and enable more robust convergence of flowsheets. To this end, we further refined a flowsheet graph-theoretic abstraction that consists of a mesh of interconnected variable nodes and equation nodes. The new network of equations is formulated at the phenomenological level, agnostic to the thermodynamic property package, by extending equation formulations widely used to solve multistage equilibrium columns. Decomposition of the graph by phenomena linearizes material and energy balances across the flowsheet by decoupling phenomenological n... [more]
249. LAPSE:2025.0367
A Component Property Modeling Framework Utilizing Molecular Similarity for Accurate Predictions and Uncertainty Quantification
June 27, 2025 (v1)
Subject: Numerical Methods and Statistics
Keywords: Molecular design, Property prediction, Similarity coefficient
A key step in developing high-performance industrial products lies in the design of their constituent molecules. Computer-aided molecular design (CAMD) has garnered significant attention for its potential to accelerate and improve the design process. The mainstream method involves using property prediction models to predict the properties of potential molecules and selecting the best candidates based on these predictions. However, prediction errors are inevitable, introducing unreliability into the design. To address this issue, this paper proposes a novel component property modeling framework based on a molecular similarity coefficient. By calculating the similarity between a target molecule and those in an existing database, the framework selects the most similar molecules to form a tailored training dataset. The similarity coefficient also quantifies the reliability of the property predictions. In tests across various properties, this framework not only provides a quantifiable evalu... [more]
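The similarity-based training-set selection can be sketched with Tanimoto coefficients on binary molecular fingerprints: score every database molecule against the target and keep the k most similar ones. The fingerprints here are random bit vectors, purely illustrative of the mechanics.

```python
import numpy as np

def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two binary fingerprint vectors."""
    inter = np.sum(fp1 & fp2)
    union = np.sum(fp1 | fp2)
    return inter / union if union else 0.0

def select_training_set(target_fp, db_fps, k):
    """Indices of the k database molecules most similar to the target."""
    sims = np.array([tanimoto(target_fp, fp) for fp in db_fps])
    return np.argsort(sims)[::-1][:k], sims

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(100, 64))  # 100 random 64-bit fingerprints
target = db[17].copy()                   # target identical to one entry
top, sims = select_training_set(target, db, k=5)

assert top[0] == 17                      # the identical molecule ranks first
assert np.isclose(sims[17], 1.0)
```

In the framework described above, the maximum similarity within the selected set would additionally serve as a reliability score for the resulting property prediction.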
250. LAPSE:2025.0366
Introducing Competition in a Multi-Agent System for Hybrid Optimization
June 27, 2025 (v1)
Subject: Optimization
Keywords: computational resource allocation, hybrid solution methods, multi-agent systems, multiobjective optimization
Process systems engineering optimization problems may be challenging. These problems often exhibit nonlinearity, non-convexity, discontinuity, and uncertainty, and often only the values of objective and constraint functions are accessible. Additionally, some problems may be computationally expensive. In such scenarios, black-box optimization methods may be appropriate to tackle such problems. A general-purpose multi-agent framework for optimization has been developed to automate the configuration and use of hybrid optimization, allowing for multiple optimization solvers, including different instances of the same solver. Solvers can share solutions, leading to better outcomes with the same computational effort. Alongside cooperation, competition is introduced by dynamically allocating more computational resource to solvers best suited to the problem. Each solver is assigned a priority that adapts to the evolution of the search. The scheduler is priority-based and uses similar algorithms... [more]

