<p>Ensemble-based data assimilation approaches, such as the Ensemble Kalman Filter (EnKF), have been widely and successfully implemented to combine observations with dynamic models to investigate the evolution of a system’s state. Such inversions are powerful tools for providing forecasts as well as for “hindcasting” events such as volcanic eruptions to investigate source parameters and triggering mechanisms. In this study, a high-performance computing (HPC) adaptation of the EnKF is used to assimilate ground deformation observations from interferometric synthetic-aperture radar (InSAR) into high-fidelity, multiphysics finite element models to evaluate the prolonged unrest and June 26, 2018 eruption of Sierra Negra volcano, Galápagos. The stability of the Sierra Negra magma system is evaluated at each time step by estimating variations in reservoir overpressure, Mohr-Coulomb failure in the host rock, and tensile stress and failure along the reservoir boundary. The deformation of Sierra Negra is tracked over a decade, during which almost 5 meters of surface uplift was recorded. The EnKF reveals that the evolution of the stress state in the host rock surrounding the Sierra Negra magma reservoir likely controlled the timing of the eruption. While increases in magma reservoir overpressure remained modest (< 10 MPa) throughout the data assimilation time period, significant Mohr-Coulomb failure is indicated in the lead-up to the eruption, coincident with increased seismicity along both trapdoor faults within Sierra Negra’s caldera and the caldera’s ring faults. During the final stages of pre-eruptive unrest, the EnKF models indicate limited tensile failure, with no tensile failure along the northern portion of the magma system where the eruption commenced. Most strikingly, model calculations of significant through-going Mohr-Coulomb failure correspond in space and time with a Mw 5.4 earthquake recorded in the hours preceding the 2018 eruption.
Subsequent stress modeling implicates the Mw 5.4 earthquake along the southern intra-caldera trapdoor fault as the potential catalyst for tensile failure and dike initiation along the reservoir to the north. In conclusion, the volcano EnKF approach successfully tracked the evolving stability of Sierra Negra, indicating great potential for future forecasting efforts.</p>
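The analysis step at the heart of the EnKF approach described above can be sketched as a stochastic ensemble update. The one-parameter state (reservoir overpressure), the linear uplift-per-overpressure observation operator, and the noise levels below are hypothetical stand-ins for the study's high-fidelity finite element models; a minimal sketch, assuming a linear relation between overpressure and InSAR-observed uplift:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step (perturbed-observations scheme).
    X : (n_state, n_ens) ensemble of model states
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + R      # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)           # state-observation cross covariance
    K = P_xy @ np.linalg.inv(P_yy)          # Kalman gain
    # perturb the observations so posterior spread is statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - H @ X)

# toy example: estimate a scalar overpressure (MPa) from two uplift observations
n_ens = 50
X = rng.normal(5.0, 2.0, size=(1, n_ens))   # prior ensemble
H = np.array([[0.3], [0.5]])                # hypothetical uplift per unit overpressure
R = 0.01 * np.eye(2)
y = np.array([2.4, 4.0])                    # observed uplift, consistent with ~8 MPa
Xa = enkf_update(X, y, H, R)
```

With accurate observations the posterior ensemble contracts around the overpressure that best explains the uplift; in the study this same update acts on far higher-dimensional finite element states.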
The field of optimal design of linear elastic structures has seen many exciting successes that have resulted in new architected materials and designs. With the availability of cloud computing, including high-performance computing, machine learning, and simulation, searching for optimal nonlinear structures is now within reach. In this study, we develop two convolutional neural network models to predict optimized designs for a given set of boundary conditions, loads, and volume constraints. The first model addresses materials with a linear elastic response, while the second addresses a hyperelastic response in which material and geometric nonlinearities are involved; for the nonlinear elastic case, the neo-Hookean model is utilized. For this purpose, we generate datasets composed of optimized designs paired with the corresponding boundary conditions, loads, and constraints, using a topology optimization framework, to train and validate both models. The developed models are capable of accurately predicting the optimized designs without requiring an iterative scheme and with negligible computational time. The suggested pipeline can be generalized to other nonlinear mechanics scenarios and design domains.
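The input encoding implied above, boundary conditions, loads, and a volume-fraction constraint supplied as image-like channels to a convolutional network, can be illustrated with a single untrained convolutional layer in plain NumPy. The grid size, channel definitions, and kernel below are hypothetical, not the architecture from the study:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid-mode 2-D cross-correlation of a single-channel map (naive loops)."""
    H, W = x.shape
    k = w.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

# hypothetical channels on a 32x32 design grid
grid = 32
load_x = np.zeros((grid, grid)); load_x[16, -1] = 1.0  # point load at right edge
bc     = np.zeros((grid, grid)); bc[:, 0] = 1.0        # fixed left edge
vol    = np.full((grid, grid), 0.4)                    # volume-fraction constraint

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=(3, 3))                  # one untrained 3x3 kernel
# sum the per-channel responses, as a multi-channel conv layer would
feat = relu(sum(conv2d(c, w, 0.0) for c in (load_x, bc, vol)))
```

A trained network stacks many such layers (plus a decoder back to the full grid) so that the output map can be read directly as a predicted material-density layout.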
Model extrapolation to unseen flows is one of the biggest challenges facing data-driven turbulence modeling, especially for models with high-dimensional inputs that involve many flow features. In this study we review previous efforts on data-driven Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling and model extrapolation, with a focus on popular methods from the field of transfer learning. Several potential metrics to measure the dissimilarity between training flows and testing flows are examined. Different machine learning (ML) models are compared to understand how the capacity or complexity of a model affects its behavior in the face of dataset shift. Data preprocessing schemes that are robust to covariate shift, such as normalization, transformation, and importance-reweighted likelihood, are studied to understand whether it is possible to find projections of the data that attenuate the differences between the training and test distributions while preserving predictability. Three metrics are proposed to assess the dissimilarity between training and testing datasets. To attenuate the dissimilarity, a distribution matching framework is used to align the statistics of the distributions. These modifications also allow regression tasks to achieve better accuracy in forecasting under-represented extreme values of the target variable. These findings are useful for future ML-based turbulence models to evaluate their predictability and provide guidance for systematically generating a diversified high-fidelity simulation database.
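The dissimilarity-metric and distribution-matching ideas above can be sketched on a single synthetic flow feature: fit Gaussians to the training and testing samples, score their mismatch with a KL divergence, and align the first two moments of the test feature to the training distribution. The feature values are synthetic, and moment matching is one simple instance of such a framework, not the exact method used in the study:

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL divergence KL(P || Q) between 1-D Gaussians fitted to feature
    samples, used here as a simple train/test dissimilarity metric."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def match_moments(x_test, x_train):
    """Align the first two moments of the test feature to the training ones."""
    z = (x_test - x_test.mean()) / x_test.std()
    return z * x_train.std() + x_train.mean()

rng = np.random.default_rng(2)
x_tr = rng.normal(0.0, 1.0, 5000)     # training-flow feature (synthetic)
x_te = rng.normal(2.0, 2.0, 5000)     # shifted testing-flow feature (synthetic)

d_before = kl_gaussian(x_te.mean(), x_te.var(), x_tr.mean(), x_tr.var())
x_al = match_moments(x_te, x_tr)
d_after = kl_gaussian(x_al.mean(), x_al.var(), x_tr.mean(), x_tr.var())
```

After matching, the dissimilarity metric collapses to nearly zero, indicating the covariate shift in this feature has been attenuated before the regression model ever sees it.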
As a type of architectured material, knitted textiles exhibit global mechanical behavior that is governed by their microstructure, defined at the scale at which yarns are arranged topologically for a given type of textile. To relate local geometrical, interfacial, material, kinematic, and kinetic properties to global mechanical behavior, a first-order, two-scale homogenization scheme was developed and applied in this investigation. In this approach, the equivalent stress at the far field and the consistent material stiffness are explicitly derived from the microstructure. In addition, the macrofield is linked to the microstructural properties by a user subroutine that computes stresses and stiffness within a looped finite element (FE) code. This multiscale homogenization scheme is computationally efficient and capable of predicting the mechanical behavior at the macroscopic level while accounting directly for the deformation-induced evolution of the underlying microstructure.
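The two-scale idea, macroscopic stress and a consistent tangent obtained from the microstructure, can be shown in its simplest form with a one-dimensional, two-phase representative volume under the Voigt (iso-strain) assumption. The moduli and volume fractions below are hypothetical, and this scalar rule of mixtures stands in for the full yarn-level homogenization of the study:

```python
import numpy as np

def homogenize_1d(E, phi, strain_macro):
    """First-order homogenization of a two-phase 1-D RVE.
    Under the Voigt (iso-strain) assumption, the macroscopic stress and
    consistent tangent are volume averages of the microscopic ones.
    E   : phase Young's moduli
    phi : phase volume fractions (must sum to 1)
    """
    E = np.asarray(E, dtype=float)
    phi = np.asarray(phi, dtype=float)
    C_eff = np.sum(phi * E)          # effective (consistent) tangent stiffness
    sigma = C_eff * strain_macro     # far-field (macroscopic) stress
    return sigma, C_eff

# hypothetical stiff/compliant phase pair at equal volume fractions
sigma, C_eff = homogenize_1d(E=[100.0, 1.0], phi=[0.5, 0.5], strain_macro=0.01)
```

In the FE implementation, a routine of this kind is called at every macroscopic integration point, returning the stress and tangent that the outer FE loop needs, which is exactly the role of the user subroutine mentioned above.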
Direct numerical simulations (DNS) of knitted textile mechanical behavior are for the first time conducted on high-performance computing (HPC) resources using both explicit and implicit finite element analysis (FEA) to directly assess effective ways to model the behavior of such complex material systems. Yarn-level models including interyarn interactions are used as a benchmark computational problem to enable a direct comparison of computational efficiency between explicit and implicit methods. The need for such a comparison stems both from a significant increase in the degrees of freedom (DOFs) with increasing size of the computational models considered and from memory and numerical stability issues due to the highly complex three-dimensional (3D) mechanical behavior of such 3D architectured materials. Mesh and size dependency, as well as parallelization in an HPC environment, are investigated. The results demonstrate satisfactory accuracy combined with higher computational efficiency and much lower memory requirements for the explicit method, which could be leveraged in the modeling and design of such novel materials.
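The memory advantage of the explicit method noted above comes from its central-difference update, which needs no global matrix factorization, at the price of a conditionally stable time step. A minimal one-degree-of-freedom sketch (a spring-mass stand-in for a full FEA system; the parameters are hypothetical):

```python
import numpy as np

def explicit_central_difference(m, k, u0, v0, dt, n_steps):
    """Explicit central-difference integration of an undamped oscillator,
    the same update structure used in explicit FEA: each step needs only
    the current internal force, never a factorized stiffness matrix.
    Stability requires dt < 2/omega, with omega = sqrt(k/m)."""
    u = np.empty(n_steps + 1)
    u[0] = u0
    # special first step from the initial velocity and acceleration
    u[1] = u0 + dt * v0 + 0.5 * dt**2 * (-k * u0) / m
    for n in range(1, n_steps):
        a = -k * u[n] / m                     # acceleration from internal force
        u[n + 1] = 2.0 * u[n] - u[n - 1] + dt**2 * a
    return u

# unit oscillator released from u = 1; exact solution is cos(t)
u = explicit_central_difference(m=1.0, k=1.0, u0=1.0, v0=0.0, dt=0.01, n_steps=100)
```

An implicit scheme would instead solve a (possibly nonlinear) system at every step; for the contact-dominated yarn-level models above, that trade-off is exactly what the explicit/implicit benchmark quantifies.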
Only time and resource constraints limit the size and complexity of the implicit analyses that LS-DYNA users would like to perform. Rolls-Royce is one such user, challenging its suppliers of computers and mechanical computer-aided engineering (MCAE) software to run ever-larger models, with more physics, in shorter periods of time. This will allow CAE to have a greater impact on the design cycle for new engines and is a step toward the long-term vision of digital twins. Toward this end, Rolls-Royce created a family of representative engine models with as many as 66 million finite elements. Figure 1 depicts a cross-section of the representative engine model.