BY DAVID C. MILLER, NATIONAL ENERGY TECHNOLOGY LABORATORY; XIN SUN, PACIFIC NORTHWEST NATIONAL LABORATORY; CURTIS B. STORLIE, LOS ALAMOS NATIONAL LABORATORY; DEBANGSU BHATTACHARYYA, WEST VIRGINIA UNIVERSITY
Carbon capture and storage (CCS) is one of several approaches critical to significantly reducing domestic and global CO2 emissions. The U.S. Department of Energy’s (DOE) Clean Coal Technology Program Plan envisions second-generation CO2 capture technologies ready for demonstration-scale testing around 2020, with the goal of enabling commercial deployment by 2025. Third-generation technologies have a similarly aggressive timeline. A major challenge to this plan is that development and scale-up of new technologies in the energy sector historically take up to 15 years to move from the laboratory to pre-deployment, and another 20 to 30 years for widespread industrial-scale deployment. To help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models that reduce the time required to develop and scale up new carbon capture technologies.
SaskPower’s Boundary Dam Integrated Carbon Capture and Storage Project is the world’s first commercial-scale CCS project. Initiatives such as this hope to preserve the viability of the coal-fired power industry. Photo courtesy: SaskPower
The CCSI Toolset (1) enables promising concepts to be more quickly identified through rapid computational screening of processes and devices, (2) reduces required time to design and troubleshoot new devices and processes by using optimization techniques to focus development on the best overall process conditions and by using detailed device-scale models to better understand and improve the internal behavior of complex equipment, and (3) provides quantitative predictions of device and process performance during scale-up based on rigorously validated smaller scale simulations that take into account model and parameter uncertainty. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.
MULTI-SCALE MODEL DEVELOPMENT
Models that predict device and process behavior are actually a series of models that are linked together. These submodels represent physical properties, thermodynamics, chemical reactivity, heat transfer, hydrodynamics, mass transfer, and other aspects of the device/process. Oftentimes, these submodels can be hidden from the user of a macroscopic model (such as a model for an absorber); however, the reliability of the overall model is highly dependent on the predictive capability of all the submodels. Thus, it is essential to rigorously calibrate and validate both the underlying submodels as well as the overall model to ensure that it can accurately represent the physical system.
Submodel parameters are typically calibrated deterministically, one submodel at a time, which causes a number of issues. First, both models and experimental data are imperfect. Given experimental error, many sets of parameters can represent the experimental data equally well, so by taking only the “best fit” set of parameters, the submodel may not be able to represent the “true” behavior. When multiple submodels are combined into an aggregate model of a process or device, these errors can compound, resulting in poor predictive capability.
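The non-uniqueness problem can be shown with a minimal sketch (the model and values below are purely illustrative, not drawn from any CCSI submodel): when a model depends on two parameters only through their product, arbitrarily many parameter pairs fit the same data equally well, and a deterministic “best fit” hides that ambiguity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical submodel: the observed response depends only on the product
# k0 * A, so the data cannot distinguish individual values of k0 and A.
x = np.linspace(0.1, 1.0, 20)
true_k0, true_A = 2.0, 3.0
y = true_k0 * true_A * x + rng.normal(0, 0.05, x.size)

def sse(k0, A):
    """Sum of squared errors for a candidate parameter pair."""
    return float(np.sum((y - k0 * A * x) ** 2))

# Two very different parameter sets with the same product give identical fits.
fit_a = sse(2.0, 3.0)
fit_b = sse(6.0, 1.0)
print(abs(fit_a - fit_b) < 1e-9)  # True: the data cannot tell them apart
```

In a coupled device model, picking one of these equally good fits at random can push a downstream submodel’s regressed parameters far from their true values.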
A second issue arises because submodels are often coupled with one another. If the parameters of one submodel are determined in isolation from the related submodels, the regressed parameter values of a coupled submodel may end up far from their “true” values.
A third issue that can arise is the lack of adequate experimental data to cover the range of conditions over which the model will be used. Sometimes this occurs because prebuilt submodels are utilized from within commercial simulation packages without thoroughly investigating the data and assumptions that went into the model. Other times, this occurs because it is difficult to measure the properties under those conditions. Ultimately, this can be a critical oversight causing the overall model to perform poorly if the submodel is extrapolating well beyond the range of conditions under which it was developed.
To overcome these issues, CCSI has developed a comprehensive, hierarchical model calibration and validation framework, which utilizes Bayesian statistics and other principles of uncertainty quantification, to provide stochastic model predictions that result in a complete probability distribution of expected behavior. This is especially important when using models to help predict scale-up performance since it enables confidence bounds to be placed on simulation results. Furthermore, the sensitivity of the model predictions to uncertainty in specific submodels and parameters can be determined. This allows technology developers to focus additional resources on those aspects of their process that have the biggest influence on uncertainty, which is closely related to technical risk.
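As a rough illustration of the idea (the model, data, and sampler below are simplified stand-ins, not the CCSI framework itself), a Bayesian calibration can be sketched with a random-walk Metropolis sampler: instead of a single best-fit value, it yields a full posterior distribution for the parameter, from which confidence bounds on model predictions follow directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "experimental" data from a toy submodel y = theta * x + noise.
x = np.linspace(0, 1, 15)
theta_true, sigma = 1.8, 0.1
y = theta_true * x + rng.normal(0, sigma, x.size)

def log_post(theta):
    # Uniform prior on [0, 5]; Gaussian measurement-error likelihood.
    if not 0.0 <= theta <= 5.0:
        return -np.inf
    return -0.5 * np.sum((y - theta * x) ** 2) / sigma**2

# Random-walk Metropolis sampling of the posterior.
samples, theta = [], 2.5
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])  # discard burn-in

# Stochastic prediction: 95% credible band on the model output at x = 1.
lo, hi = np.percentile(post * 1.0, [2.5, 97.5])
print(f"theta posterior mean {post.mean():.2f}, 95% band at x=1: [{lo:.2f}, {hi:.2f}]")
```

The spread of the posterior, not just its center, is what carries forward into scale-up predictions and confidence bounds.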
Two examples of this framework are described below. The first is the application of the framework to a high-fidelity computational fluid dynamics (CFD) model of a bubbling fluidized bed (BFB) adsorber for capturing CO2 on a chemically reactive solid sorbent. The second is the application of the framework to a solvent-based CO2 capture system consisting of a packed bed absorber and packed bed stripper.
HIERARCHICAL CALIBRATION AND VALIDATION OF HIGH-FIDELITY CFD MODELS
A BFB is a potential approach for using solid sorbents for carbon capture. Because of the complex gas-solid contacting that occurs in such a system, predictive three-dimensional models can be helpful in understanding and predicting behavior of the equipment as it is scaled up. In this example, the goal is to predict the behavior of a 1-MWe equivalent pilot-scale adsorber based on unit problems drawn from laboratory-scale data.
The validation process proceeds by initially considering only the hydrodynamics of multiphase flow, then adding heat transfer, and finally including a complete reacting system. A fully coupled multiphase flow CFD model for a proposed 1-MW pilot-scale adsorber with chemical reaction, energy, and species transport is then used to predict the device’s overall performance on CO2 capture by using a large number of simulations with input parameters covering the distributions resulting from the calibration of small-scale unit problems; a quantified confidence level can be derived for each operating condition.
Laboratory-scale data is collected from the Carbon Capture Unit (C2U) at the National Energy Technology Laboratory (NETL). Gas flows into the bottom of the unit, through a solid sorbent, creating a bubbling fluidized bed, and then out at the top. The resulting multiphase flow involves complex flow hydrodynamics, heat transfer, and chemical reactions. The experiments run on the C2U to support the hierarchical calibration/validation range from simple to complex. The validation hierarchy increases in complexity, with unit problems progressing from cold flow, to flow with heat transfer, to coupled flow and heat transfer with CO2 adsorption chemistry, and finally to using the calibrated model parameters to quantify prediction confidence levels at scale. Because the C2U has heat-exchanger coils, heat transfer between the multiphase flow and the coils is also considered. When the inlet gas contains CO2, an exothermic chemical reaction takes place, resulting in mass and heat transfer among the sorbent, the gas, and the coils. This example is based on sorbent 32D, developed at NETL.
CFD models have many parameters that must be calibrated (i.e., regressed) from the experimental data collected through the unit problems. The CFD model for the first two unit problems has six model parameters relating to sorbent particle properties that were thought to be important and relevant. The reaction equations in the fully coupled model add another 13 such model parameters. Definitions of all 19 parameters in the fully coupled model are provided. In each unit problem the relevant parameters of the CFD model were calibrated.
Uncertainty quantification is explicitly built into each step via sensitivity analysis and a Bayesian calibration approach. Bayesian model calibration requires that a prior distribution first be placed on the unknown parameters θ to represent the scientists’ subjective belief about what reasonable values of the parameters might be. The prior distribution can be as simple as a uniform range of values, or a more complex parametric distribution with dependencies among parameters. The Bayesian model calibration of unit problem 1 (32D Cold Flow) results in a posterior distribution that describes the remaining uncertainty in the model parameters θ. This posterior distribution of θ is then used as the prior distribution for the Bayesian calibration in unit problem 2 (32D Hot Flow). This results in a more refined posterior distribution for θ, which can then be used as a prior distribution for the final calibration of unit problem 3 (32D Reacting Flow). The end result of the hierarchical calibration is a posterior distribution for the model parameters that has been refined at each layer of the hierarchy; each subsequent calibration reduces the variance of θ. The Bayesian calibration framework also provides an assessment for model validation via an estimate of model lack of fit in each unit problem along the way. Overall, the CFD modeling results demonstrated that the multi-phase reactive flow models can accurately capture the bed pressure drop, bed temperature, and CO2 breakthrough curves of the C2U.
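The hierarchical refinement can be sketched in miniature (a one-parameter grid-based Bayes update with made-up data, standing in for the cold-flow, hot-flow, and reacting-flow calibrations): the posterior from one unit problem becomes the prior for the next, and the parameter variance shrinks at each layer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid-based Bayesian updating of one illustrative parameter theta.
grid = np.linspace(0, 4, 2001)
prior = np.ones_like(grid)           # flat prior over [0, 4]
prior /= prior.sum()

def update(prior, data, sigma):
    """Multiply the prior by the likelihood of the new unit-problem data."""
    like = np.ones_like(grid)
    for d in data:
        like *= np.exp(-0.5 * (d - grid) ** 2 / sigma**2)
    post = prior * like
    return post / post.sum()

def std(p):
    m = np.sum(grid * p)
    return np.sqrt(np.sum((grid - m) ** 2 * p))

theta_true = 2.0
data1 = theta_true + rng.normal(0, 0.3, 5)    # unit problem 1 (e.g., cold flow)
data2 = theta_true + rng.normal(0, 0.3, 5)    # unit problem 2 (e.g., hot flow)

post1 = update(prior, data1, 0.3)             # posterior after unit problem 1
post2 = update(post1, data2, 0.3)             # post1 serves as the new prior

print(std(prior) > std(post1) > std(post2))   # variance shrinks at each layer
```

The real calibrations involve 19 coupled CFD parameters rather than one scalar, but the structure of the update is the same.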
The final posterior distribution at the end of unit problem 3 characterizes the state of knowledge (or uncertainty) in the form of a probability distribution. This uncertainty distribution is then finally used to predict quantities of interest, such as percent CO2 capture, at a larger (1 MW) scale with uncertainty bounds. The posterior distribution from unit problem 3 was used to predict the percent CO2 capture and bed height.
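Prediction with uncertainty bounds amounts to forward propagation: draw many parameter samples from the posterior and run the device model once per draw. The sketch below uses a toy first-order capture model and a stand-in Gaussian posterior (all numbers are illustrative, not CCSI results) to show how percentile bounds on percent CO2 capture emerge.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior samples for a rate parameter, plus an assumed
# operating condition; both are placeholders for illustration only.
posterior_k = rng.normal(1.2, 0.15, 10_000)
residence_time = 3.0

def percent_capture(k, tau):
    """Toy first-order capture model: 1 - exp(-k*tau), as a percentage."""
    return 100.0 * (1.0 - np.exp(-k * tau))

# One model evaluation per posterior draw yields a distribution of outcomes.
capture = percent_capture(posterior_k, residence_time)
lo, med, hi = np.percentile(capture, [2.5, 50, 97.5])
print(f"Predicted capture: {med:.1f}% (95% bounds {lo:.1f}-{hi:.1f}%)")
```

For the actual 1-MW predictions, each “model evaluation” is a full CFD simulation, which is why CCSI relies on carefully designed sets of runs rather than brute-force sampling.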
This hierarchical calibration and validation framework is broadly applicable and can be used to calibrate any complex model of a physical process where multiple relevant experimental data sources can be obtained.
CALIBRATION AND VALIDATION OF PROCESS MODELS FOR SOLVENT-BASED CO2 CAPTURE
Chemical solvents are among the most promising approaches for post-combustion CO2 capture systems. These systems are highly non-ideal due to long-range ion-ion and short-range ion-molecule and molecule-molecule interactions. Thus, the properties of CO2 loaded solvents change nonlinearly with a number of variables such as temperature, solvent concentration, extent of CO2 capture, and presence of other species such as H2S or SO2 in the system. In addition, rapid chemical reaction coupled with mass transfer makes it challenging to distinguish reaction rates from mass transfer rates in actual operating contactors. Thus, it is essential to utilize a rigorous framework to determine submodels for the hydrodynamic, mass transfer, heat transfer, and kinetic and physical properties of solvent systems.
The CCSI Technical Team recently published a systematic approach for determining model parameters with due consideration of uncertainty in the experimental data via a Bayesian approach. As in the example above, this approach ultimately yields a posterior distribution of all the parameters in the model, which can then be propagated through the process model to obtain confidence bounds on the model’s predictions.
Following the calibration of the parameters in the submodels, the model of the entire process (i.e., process model) consisting of the absorber and stripper can be validated against experimental data from a laboratory-scale system or a pilot plant. Validation with data from a pilot plant provides additional information due to change in the scale. Both steady state and dynamic models of the process should be validated so that sufficient trust in the models can be built before advancing to the next level of scale-up.
Model validation can be instrumental in identifying model deficiencies due to lack of knowledge or the inability of the model to capture certain behavior. High-quality validation data from CO2 capture technologies as they transition from one scale to the next are rare, and data under dynamic conditions with actual flue gas are practically nonexistent. Limited steady-state data have been reported; however, these data have typically been collected at small scale over very limited operating ranges. In addition to steady-state data, it is essential to collect dynamic data by introducing step changes in all manipulated and disturbance variables and recording the transients of all key variables. Such tests should be conducted by exciting the process so that all of its frequencies can be observed and the underlying nonlinearity of the process captured.
The transient response of a process provides hundreds of temporal data points in response to a change in an input, providing a much richer set of validation data than steady-state conditions alone. While steady-state responses can be fit by many non-unique combinations of model parameters, the nonlinear transient response enables better model calibration that minimizes uncertainty. Such studies will also show whether any mechanism is being ignored and point to any remaining form of model uncertainty.
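A minimal sketch makes the point concrete (a generic first-order process, not a CCSI model): two processes with the same gain reach the same steady state, so steady-state data alone cannot distinguish them, but their step transients differ strongly and therefore identify the time constant as well.

```python
import numpy as np

# First-order process: after a unit step, y(t) = K * (1 - exp(-t/tau)).
# At steady state y_ss = K, so steady-state data reveal only the gain K;
# the transient additionally pins down the time constant tau.
def step_response(K, tau, t):
    return K * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 10, 200)

# Two processes with the same gain but very different dynamics:
y_fast = step_response(2.0, 0.5, t)
y_slow = step_response(2.0, 2.0, t)

# Nearly identical steady states -> steady-state data cannot separate them...
print(np.isclose(y_fast[-1], y_slow[-1], atol=0.05))  # True

# ...but the transients differ strongly, so dynamic data identifies tau.
print(np.abs(y_fast - y_slow).max() > 0.5)            # True
```

This is the rationale for the carefully designed step tests described below: the transient carries information that no amount of steady-state averaging can recover.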
CCSI recently conducted such steady-state and dynamic tests at U.S. DOE’s National Carbon Capture Center (NCCC) in Alabama. In these test runs, key manipulated variables such as solvent flowrate, reboiler steam flowrate, and disturbance variables like flue gas flowrate and CO2 concentration were varied widely, and dynamic data was collected by introducing carefully-designed step changes and recording the transients of all key variables.
The experimental data shows a number of interesting trends. The temperature profile across the stripper changes for different solvent flowrates and reboiler duties. With a high solvent flowrate and a high reboiler duty, the temperature changes gradually along the tower. However, when both the solvent flowrate and reboiler duty are decreased, the temperature change through most of the tower is minimal with a sudden change at the top and the bottom. The overall CCSI process model, which utilizes submodels developed using the Bayesian calibration framework as described above, can accurately predict these changing patterns. The model also predicts CO2 loading of the solvent very accurately.
We have shown the benefits of applying a Bayesian calibration and validation framework to submodels, device-scale models, and process models to obtain better predictive capability, enabling them to be reliably used to help accelerate the scale-up of new carbon capture technologies. Device and process models are made up of a large number of submodels, each with numerous parameters. Since experimental data contain noise and error, and models are imperfect, model development and parameter fitting must take these uncertainties into consideration to ensure a predictive model. In addition, macroscopic models should be run in a stochastic manner to provide a range of potential solutions that represents the propagation of uncertainties in the underlying submodels and parameters. With these advanced capabilities, we have demonstrated that models can be used to accelerate the development and scale-up of carbon capture technologies.