
CME 500 Series: Autumn 2013

September 23, 2013 - 4:15pm to December 2, 2013 - 5:15pm

CME 500 Series: ICME Colloquium

Organizer: Shela Aboud

The theme of this seminar series is Computational Geosciences.

Models and data go together: Computational isotope geochemistry

9/23/2013               

Donald DePaolo, Associate Laboratory Director for Energy & Environmental Area at Lawrence Berkeley National Laboratory; Geochemistry Professor at UC Berkeley

Much of geochemistry is predicated on the expectation that new analytical tools will allow us to make unprecedented measurements that will then automatically increase our understanding of natural materials and processes.  To a fair degree this is true, and it is unquestionably the case that new measurements fuel scientific progress.  However, when it comes to evaluating what we actually know, models are critical, and for the most part they are undervalued.  In my research I have concentrated on relatively simple, heuristic models that can illustrate key aspects of the behavior of complex systems.  This approach has turned out to have value, because other researchers can understand the models for the most part and use them.  Now it is possible, and with collaborations we have been making progress, to extend the simple models to more systematic numerical approaches that require fewer approximations.  In this talk I will give some examples of the simpler approaches and how we have been able to use them to help guide development of more computationally demanding models, with the applications relating to moisture transport in the atmosphere, diffusion and crystallization in magmas prior to volcanic eruptions, reactive transport in fluid-rock systems where flow is mainly through fractures, and precipitation of minerals from aqueous solutions.


Hessian-based uncertainty quantification in Bayesian inference, with applications to large-scale seismic inversion

9/30/2013               

Georg Stadler, Institute for Computational Engineering and Sciences, The University of Texas at Austin

I will address the problem of quantifying uncertainty in the solution of inverse problems governed by partial differential equations (PDEs) within the framework of Bayesian inference. The posterior probability density is explored using local Gaussian approximations based on gradients and Hessians of the log posterior. Computations with these derivatives for inverse problems governed by expensive-to-solve PDEs are made computationally feasible through the use of adjoint methods and low rank ideas, which exploit the fact that the data are typically informative only about a low-dimensional subspace of the parameter fields. These methods are applied to a synthetic global seismic inversion problem, in which the local wave speed is inferred from seismogram data. This requires the repeated solution of large-scale elastic wave propagation problems, for which a high-order discontinuous Galerkin discretization is used. I will discuss implementation aspects of this method, its parallel scalability to 100,000s of CPU cores and challenges arising in the computation of derivatives with respect to the local wave speed due to the use of the discontinuous Galerkin method. For the global seismic inversion test problem, I will show variance fields and samples drawn from the distribution of the inferred local wave speeds. Parts of this talk are based on joint work with Tan Bui-Thanh, Carsten Burstedde, James Martin, Omar Ghattas and Lucas Wilcox.
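The low-rank idea described above can be seen on a toy problem (a sketch of my own, not the speaker's implementation): with an identity prior, the Gaussian (Laplace) posterior covariance of a linear inverse problem is the inverse of the regularized Hessian, and when the data-misfit Hessian has only r nonzero eigenpairs, that inverse is an exact rank-r update of the prior.

```python
import numpy as np

# Toy illustration of the low-rank Laplace approximation: with an identity
# prior, the posterior covariance is (H_misfit + I)^{-1}. If the data-misfit
# Hessian has rank r, only r directions are informed by the data, and
#   C_post = I - V diag(lam / (lam + 1)) V^T,
# where (lam_i, v_i) are the nonzero eigenpairs of H_misfit.

rng = np.random.default_rng(0)
n, r = 50, 5

A = rng.standard_normal((n, r))
H_misfit = A @ A.T                        # rank-r data-misfit Hessian

lam, V = np.linalg.eigh(H_misfit)
lam, V = lam[-r:], V[:, -r:]              # keep the r data-informed eigenpairs

# Rank-r update: the posterior shrinks below the prior only in span(V)
C_post_lr = np.eye(n) - V @ np.diag(lam / (lam + 1.0)) @ V.T

# Compare against the dense answer
C_post_exact = np.linalg.inv(H_misfit + np.eye(n))
assert np.allclose(C_post_lr, C_post_exact, atol=1e-8)
```

At the scale of the talk's seismic problem the eigenpairs would of course be computed matrix-free (Hessian-vector products via adjoints), never by forming the dense matrix.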


Computational strategies for simulation and analysis of high resolution global climate modeling

10/7/2013               

Katherine J. Evans, Climate Change Science Institute, Oak Ridge National Laboratory

Global Earth system models have been developed with finer spatial resolution to enable global simulations that resolve regional and local climate. Recent improvements to component dynamical cores and effective utilization of DOE leadership class computing systems have enabled these model configurations to be run with reasonable throughput. However, additional near-term improvements to these high-resolution models, which include increasingly multiscale and coupled equations, more complete process models that handle multiple spatial resolutions around the globe, and the transport of numerous additional tracers of chemical and moisture-based variables, will increase computational costs and create unknown accuracy issues when coupled together. I will present progress towards the incorporation of implicit-based time-stepping methods designed to improve efficiency, accuracy, and robustness within several components of the Community Earth System Model and show some recent analysis of high-resolution atmosphere simulations that motivate this work.
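The appeal of implicit time-stepping for stiff, multiscale systems can be seen on a one-line model problem (my example, not from the talk): explicit Euler is stable only for small steps, while backward Euler remains stable at step sizes far beyond the explicit limit.

```python
# Stiff model ODE y' = lam * y with lam = -50: explicit Euler is unstable
# whenever |1 + dt*lam| > 1, while backward (implicit) Euler is
# unconditionally stable and decays like the exact solution.
lam, dt, nsteps = -50.0, 0.1, 100
y_exp = y_imp = 1.0
for _ in range(nsteps):
    y_exp = y_exp + dt * lam * y_exp      # explicit: amplification 1 + dt*lam = -4
    y_imp = y_imp / (1.0 - dt * lam)      # implicit: amplification 1/6

assert abs(y_exp) > 1e10                  # explicit solution blows up
assert 0.0 < y_imp < 1e-10                # implicit solution decays toward 0
```

The price of the implicit update is a (here trivial, in practice large nonlinear) solve per step, which is exactly the efficiency/robustness trade-off the talk addresses.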


Spectral-element and adjoint methods for structural tomography and seismic source parameter inversion

10/14/2013

Christina Morency, Computational Geosciences Group, Lawrence Livermore National Laboratory

The development of high-performance computing and numerical techniques has enabled global and regional tomography to reach high levels of precision, and seismic adjoint tomography has become a state-of-the-art tomographic technique.

Adjoint tomography uses full waveform simulations and back projection to compute finite frequency sensitivity kernels. These kernels describe the variation of the discrepancy (or misfit) between observed seismic data and modeled synthetics as a function of the model parameters. They are used in an iterative inversion aiming at minimizing the misfit function, thereby recovering model parameters.
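In standard notation, the waveform misfit and its kernel representation described above take the form

```latex
\chi(\mathbf{m}) \;=\; \frac{1}{2}\sum_{r}\int_{0}^{T}
\bigl\lVert \mathbf{s}(\mathbf{x}_{r},t;\mathbf{m})-\mathbf{d}(\mathbf{x}_{r},t)\bigr\rVert^{2}\,dt,
\qquad
\delta\chi \;=\; \int_{V} K_{m}(\mathbf{x})\,\delta\ln m(\mathbf{x})\,d^{3}\mathbf{x},
```

where s denotes the synthetics at receiver locations x_r, d the observed seismograms, and K_m the finite-frequency sensitivity kernel assembled from the interaction of the forward and adjoint wavefields.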

This inverse approach relies, first of all, on an accurate numerical technique for solving the seismic wave equation in complex 3D media. Here I use a spectral-element method, which, in contrast to classical finite-element methods (FEM), uses high-degree Lagrange polynomials. The technique therefore not only handles complex geometries, like the FEM, but also retains the exponential convergence and accuracy afforded by high-degree polynomials. After describing spectral-element and adjoint methods, I will discuss two applications: (1) a 3D adjoint tomography of the Middle East to improve seismic waveform predictions in the area, and (2) results on seismic source parameter inversion for seismic monitoring.


Seismic Modeling & Applications of FD Modeling to Rock Physics and Geomechanics

10/21/2013

Joseph Stefani, Senior Consultant at Chevron Energy Technology Company, AAPG Foundation J. Ben Carsey and AAPG/SEG Inter-Society Distinguished Lecturer

Earth modeling, from the construction of subsurface structure and stratigraphy, to the accurate understanding of rock physics, through the simulation of seismic and nonseismic responses, is an enabling technology to guide decisions in acquisition, processing, imaging, inversion and reservoir property inference, for both static and time-lapse understanding. So it is crucial to capture those earth elements that most influence the geophysical phenomena we seek to study. This is notoriously difficult, probably because we regularly underestimate how clever the earth can be in producing various geophysical phenomena.

The main part of the talk focuses on methods we have used in building complex earth models (both overburden and reservoirs) and their seismic simulations, emphasizing the challenge to reproduce the appropriate features observed in real data. Questions to consider are the quality of the seismic data that will act as a guide in the model building, and that of the well logs used to quantify the rock physics. Another consideration is the amount of physics to include in the geophysical response simulation, which is a tradeoff between computational load and acceptable characterization of the data features.

Finally, the industry workhorse for seismic modeling continues to be the time-domain finite-difference (FD) algorithm, mainly because of its balance between accuracy and efficiency, simple concept and gridding, and ease of programming on various hardware platforms. Because of this simplicity, and the growing interest in time-lapse and geomechanical problems, a short treatment is included of how FD modeling can be adapted to problems in rock physics and geomechanics from core to basin scales.
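The simplicity the speaker credits to the FD workhorse is easy to demonstrate: a minimal 1-D acoustic scheme, second order in space and time, fits in a few lines. The grid sizes, velocity, and Ricker source below are illustrative choices of mine, not values from the talk.

```python
import numpy as np

# Minimal 1-D acoustic time-domain finite-difference scheme:
# u_next = 2u - u_prev + (c*dt/dx)^2 * Laplacian(u) + source
nx, nt = 300, 600
dx, dt, c = 5.0, 1e-3, 2000.0             # grid spacing (m), step (s), velocity (m/s)
assert c * dt / dx <= 1.0                 # CFL stability condition

def ricker(t, f0=25.0, t0=0.04):
    """Ricker wavelet source time function."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

u_prev = np.zeros(nx)                     # wavefield at time step n-1
u = np.zeros(nx)                          # wavefield at time step n
src = nx // 2

for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]        # discrete Laplacian
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_next[src] += dt ** 2 * ricker(it * dt)          # inject the source
    u_prev, u = u, u_next

assert np.isfinite(u).all() and np.abs(u).max() > 0.0  # stable, nonzero wavefield
```

Production codes differ mainly in dimensionality, higher-order stencils, absorbing boundaries, and hardware-specific tuning, but the update loop keeps this structure.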

Joe Stefani received degrees in engineering and geophysics from Cal and Stanford. Since 1984, he has worked for Chevron Energy Technology Company, during which time he has been involved in a range of geophysical R&D, including high fidelity earth and seismic modeling, acquisition, anisotropy, inversion, and general Aki & Richards stuff. Most recently he has helped to build the SEG SEAM Phase 1 and Phase 2 earth models.


Creating a Framework to Address General Computational Geoscience Problems

10/28/2013

Scott Johnson, Postdoctoral Fellow, Atmospheric, Earth, and Energy Division, Lawrence Livermore National Laboratory

An often substantial barrier faced by technical staff in developing and applying new algorithms, or in addressing novel problems at scale, is the work needed to create the part of the code infrastructure that is common to many simulation codes, from handling user input to generating and analyzing simulation output. The problem is compounded by the manifold concerns inherent in migrating between platforms and in addressing scalable, distributed-memory parallelization. This talk will discuss the development of the GEOS framework at LLNL and present detailed examples of how the framework has been used to address several different types of computational geoscience problems, from understanding the mechanisms and sensitivity of injection-induced seismicity to the modeling of hydraulic fracture stimulation. The aim here is to demonstrate not only how the process of software design for massively parallel physical simulation can be simplified and modularized, but also how such an approach can accelerate research in computational geosciences and facilitate interaction between researchers.


Computational Molecular Geochemistry for Understanding Iron and Contaminant Cycling in the Environment

11/4/2013

Kevin M. Rosso, Pacific Northwest National Laboratory

Life on earth has evolved utilizing the unique redox chemistry of iron. It is an essential element for virtually all life forms due to its presence in heme-containing proteins, with biological functions including oxygen transport, chemical catalysis, and electron transfer. Electron exchange between ferrous and ferric iron determines the availability of iron in the biosphere by influencing the form of key iron-bearing minerals and their interactions with trace polyvalent contaminant metals. Under environmentally relevant conditions this exchange involves interaction between aqueous ferrous iron and solid-phase ferric iron oxides and oxyhydroxides, with complex involvement of solid-state charge migration. Examples include Fe(II)-catalyzed recrystallization of hematite and goethite, and mixed-valent spinels such as magnetite acting as a mineralogic source and sink for reactive Fe(II) due to its topotactic solid-solution property. Ferrous-ferric electron exchange is also essential for microbial respiration via the evolution-optimized molecular machinery present in metal-respiring bacteria. This machinery transmits current across their cell membranes using redox metalloprotein modules comprised of distinct multiheme c-type cytochromes. This presentation explores the dynamics of ferrous-ferric electron exchange in such systems at the atomic and nanoscale levels from theory, computational molecular simulation, spectroscopy, and microscopy. It will particularly emphasize the use of molecular simulation and modeling strategies for predicting the thermodynamics and kinetics of ferrous-ferric electron transfer reactions, based on the Marcus theory framework. More generally, the topic also illustrates the importance of computational molecular science for making fundamental advances in understanding the environmental biogeochemistry of the earth's near-surface.
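The Marcus theory framework referred to above gives the nonadiabatic electron-transfer rate in its standard form, in terms of the electronic coupling H_AB, the reorganization energy λ, and the driving force ΔG°:

```latex
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{AB}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]
```

Molecular simulation enters by supplying λ, ΔG°, and H_AB for specific mineral surface and cytochrome geometries, which is what makes rate predictions for these systems possible.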


A multiscale modeling approach for deformation and failure of geologic materials in extreme loading environments

11/11/2013

Tarabay Antoun, Lawrence Livermore National Lab

Geologic materials are heterogeneous and their macroscopic response is often dominated by changes in the material at lower scales. For this reason, development of predictive modeling capabilities that represent deformation and failure in geologic materials remains a significant scientific challenge. In part this is due to a lack of comprehensive understanding of the complex physical processes that underlie this behavior, and in part due to numerical challenges that prohibit accurate representation of the heterogeneities that influence the material response. Recent advances in modeling capabilities coupled with modern high-performance computing platforms enable physics-based simulations of heterogeneous media with unprecedented detail, offering the prospect of significant advances in the state of the art. This presentation provides an overview of some of the modern computational approaches under development at LLNL, discusses their advantages and limitations, and presents simulation results demonstrating the application of these numerical methods to practical problems involving heterogeneous geologic materials subjected to extreme dynamic loading environments.


Model Reduction for Parametric and Nonlinear Problems via Reduced Basis and Kernel Methods

11/18/2013

Bernard Haasdonk, University of Stuttgart

In this presentation I will address some recent results in model reduction of parametric and nonlinear problems, focusing on two classes of methods. The first class consists of Reduced Basis (RB) methods, which aim at complexity reduction for parametric partial differential equations via projection techniques. Basic ingredients are an offline/online decomposition, rigorous and rapid a-posteriori error estimation for certification of the reduced simulation results, and Greedy/POD-Greedy procedures for constructing provably quasi-optimal approximation spaces. We show some results for nonlinear problems, including hyperbolic transport problems, two-phase flow in porous media, and contact problems involving inequality constraints.
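The offline/online decomposition at the heart of RB methods can be sketched on a toy parametric problem. This is my own schematic, using POD (an SVD of snapshots) rather than a certified greedy algorithm, and a parametric linear system A(mu) x = b standing in for a discretized PDE; all names and sizes are illustrative choices.

```python
import numpy as np

# Offline/online split: expensive snapshot solves once, then cheap
# Galerkin-projected solves for every new parameter value.
rng = np.random.default_rng(1)
n = 200
A0 = np.diag(np.linspace(1.0, 10.0, n))   # parameter-independent part
G = 0.1 * rng.standard_normal((n, n))
A1 = G @ G.T                              # symmetric parametric perturbation
b = rng.standard_normal(n)

def solve_full(mu):
    """Expensive high-fidelity solve for one parameter value."""
    return np.linalg.solve(A0 + mu * A1, b)

# --- offline: sample the parameter, collect snapshots, compress via POD ---
snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.0, 1.0, 20)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :8]                              # 8-dimensional reduced basis

# --- online: Galerkin projection onto span(V); each new mu costs an 8x8 solve ---
def solve_reduced(mu):
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ b)

mu_test = 0.37
x_full = solve_full(mu_test)
rel = np.linalg.norm(solve_reduced(mu_test) - x_full) / np.linalg.norm(x_full)
assert rel < 1e-2                         # reduced solution is accurate at a new mu
```

What the talk adds beyond this sketch is certification: a-posteriori error bounds that guarantee the reduced solution's accuracy, and greedy snapshot selection driven by those bounds.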

A second class of methods comprises kernel methods for generating approximate models of nonlinear functions. These powerful machine learning techniques can be used for sparse vectorial function approximation, for example via vectorial support vector regression or vectorial kernel greedy procedures. The resulting approximants allow efficient complexity reduction in projection-based model order reduction and in multiscale problems, as demonstrated in applications from biomechanics and porous media flow.


From CO2 sequestration in shale beds to the Origins of Life: Applications of electronic structure theory to clay minerals

12/2/2013

Dawn Geatches, Stanford University

Examining the electronic structure of a material requires quantum mechanical methods; in my studies, this means density functional theory (DFT), a computationally intensive technique that originated in traditional physics fields such as semiconductor research but is increasingly applied in fields such as the earth sciences and biology. Studying the behavior of electrons is important because electrons are responsible for all chemical reactivity, and understanding this leads to control over processes. For example, understanding how CO2 interacts with the materials present in underground storage environments contributes to determining the conditions required for secure CO2 sequestration and long-term storage.

In this presentation I will show you the tools of my trade and how I have applied DFT to the investigation of clay minerals within gas shale, in the wider context of CO2 sequestration; to the tentative exploration of the formation of fossil oils; to the undesirable behavior of clays within cement; and finally, to a preliminary investigation into the formation of peptides within the context of Origins of Life studies.