Iterative methods are widely used for solving sparse linear systems of equations and eigenvalue problems. Their performance depends strongly on the conditioning of the linear systems. This work explores factors that affect the conditioning of the discretized system, including material heterogeneity, different constitutive characteristics, and element size, and reveals how solver performance depends on the conditioning of the linear systems. Results show that multiple materials can alter the eigenvalue distribution significantly, while lowering Young's modulus results in higher condition numbers but has less effect on the spectral range; in addition, the condition number scales approximately with the inverse square of the element size. These entangled effects, together with the chosen preconditioners, mean that there is no simple monotonically increasing relationship between condition number and solution time, except under specific conditions. It is hoped that this work will provide a better understanding of the behavior of iterative sparse linear solvers used in similar structural problems.
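To make the element-size dependence concrete, the following minimal sketch (not taken from the paper; the 1D fixed-fixed bar with unit stiffness is assumed purely for illustration) assembles a linear-element stiffness matrix and shows the condition number growing roughly as the inverse square of the element size h:

```python
# Minimal sketch (not from the paper): condition number of a 1D linear-element
# stiffness matrix grows roughly as 1/h^2 as the element size h is reduced.
import numpy as np

def stiffness_1d(n_elems, length=1.0, EA=1.0):
    """Assemble the stiffness matrix of a fixed-fixed bar with linear elements."""
    h = length / n_elems
    k = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k
    return K[1:-1, 1:-1], h  # drop the two fixed end DOFs

for n in (10, 20, 40, 80):
    K, h = stiffness_1d(n)
    print(f"h = {h:.4f}  cond(K) = {np.linalg.cond(K):.1f}")
```

Halving h roughly quadruples cond(K), consistent with the inverse-square scaling reported above.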
The solidifying steel follows highly nonlinear thermo-mechanical behavior depending on the loading history, temperature, and metallurgical phase fractions (liquid, ferrite, and austenite). Numerical modeling with a computationally challenging multiphysics approach is run on high-performance computing resources to generate sufficient training and testing data for subsequent deep learning. We demonstrate how sequence deep learning methods can learn from multiphysics modeling data of a solidifying slice traveling through a continuous caster and correctly and instantly capture the complex history- and temperature-dependent phenomena in test samples never seen by the deep learning networks.
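As an illustration only, history-dependent responses of this kind are typically learned with recurrent sequence models; the sketch below is a generic PyTorch LSTM whose layer sizes and input channels are assumptions, not the paper's architecture, mapping a per-slice temperature/loading history to a time-resolved response:

```python
# Minimal sketch (architecture and feature names are assumptions, not the
# paper's network): an LSTM that maps a per-slice history of temperature and
# mechanical loading to a response at every time step, capturing path dependence.
import torch
import torch.nn as nn

class HistoryModel(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_outputs=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):            # x: (batch, time, features)
        h, _ = self.lstm(x)
        return self.head(h)          # prediction at every time step

model = HistoryModel()
history = torch.randn(8, 200, 4)     # 8 slices, 200 time steps, 4 input channels
print(model(history).shape)          # -> torch.Size([8, 200, 1])
```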
This article introduces a computational design framework for obtaining three-dimensional (3D) periodic elastoplastic architected materials with enhanced performance, subject to uniaxial or shear strain. A nonlinear finite element model accounting for plastic deformation is developed, where a Lagrange multiplier approach is utilized to impose periodicity constraints. The analysis assumes that the material obeys a von Mises plasticity model with linear isotropic hardening. The finite element model is combined with a corresponding path-dependent adjoint sensitivity formulation, which is derived analytically. The optimization problem is parametrized using the solid isotropic material penalization (SIMP) method. Designs are optimized for either end compliance or toughness for a given prescribed displacement. The framework produces materials with enhanced performance through much better utilization of the elastoplastic material. Several 3D examples are used to demonstrate the effectiveness of the mathematical framework.
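For reference, the two model ingredients named above can be stated compactly; the notation below (penalization power p, hardening modulus H, initial yield stress sigma_y0) is common usage assumed here, not taken from the article:

```latex
% SIMP interpolation of the element Young's modulus (common form, symbols assumed):
E(\rho_e) = E_{\min} + \rho_e^{\,p}\left(E_0 - E_{\min}\right), \qquad 0 \le \rho_e \le 1,
% von Mises yield function with linear isotropic hardening:
f(\boldsymbol{\sigma}, \bar{\varepsilon}^{p}) = \sigma_{\mathrm{vm}}(\boldsymbol{\sigma})
  - \left(\sigma_{y0} + H\,\bar{\varepsilon}^{p}\right) \le 0.
```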
Deep learning (DL) and the collocation method are merged and used to solve partial differential equations (PDEs) describing structures' deformation. We have considered different types of materials: linear elasticity, hyperelasticity (neo-Hookean) with large deformation, and von Mises plasticity with isotropic and kinematic hardening. The performance of this deep collocation method (DCM) depends on the architecture of the neural network and the corresponding hyperparameters. The presented DCM is meshfree and avoids any spatial discretization, which is usually needed for the finite element method (FEM). We show that the DCM can capture the response qualitatively and quantitatively, without the need for any data generation using other numerical methods such as the FEM. Data generation is usually the main bottleneck in most data-driven models. The DL model is trained to learn the network parameters that yield accurate approximate solutions. Once the model is properly trained, solutions can be obtained almost instantly at any point in the domain, given its spatial coordinates. Therefore, the DCM is potentially a promising standalone technique to solve PDEs involved in the deformation of materials and structural systems as well as other physical phenomena.
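A minimal sketch of the collocation idea follows, assuming a 1D bar problem u'' + f = 0 on [0,1] with homogeneous Dirichlet ends and a small fully connected network; none of these choices come from the article:

```python
# Minimal sketch of collocation training: penalize the PDE residual of
# u'' + f = 0 (f = 1 here) at random interior points plus a boundary term.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.rand(256, 1, requires_grad=True)   # interior collocation points
xb = torch.tensor([[0.0], [1.0]])            # boundary points, u = 0 there

for step in range(5000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    loss_pde = ((d2u + 1.0) ** 2).mean()     # residual of u'' + f = 0
    loss_bc = (net(xb) ** 2).mean()          # Dirichlet boundary conditions
    loss = loss_pde + loss_bc
    opt.zero_grad(); loss.backward(); opt.step()
# exact solution for comparison: u(x) = x * (1 - x) / 2
```

The network is driven toward zero PDE residual at randomly sampled points plus a boundary penalty, which is the essence of the meshfree, data-free training described above.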
Fluid-Structure Interaction (FSI) simulations have applications across a wide range of engineering areas. One popular technique for solving FSI problems is the Arbitrary Lagrangian-Eulerian (ALE) method. Both academic and industrial communities have developed codes that implement the ALE method; one of them is Alya, a Finite Element Method (FEM) based code developed at the Barcelona Supercomputing Center (BSC). By analyzing a simplified artery case and comparing against a commercial Finite Volume Method (FVM) based code, this paper discusses the mathematical background of the solvers for both domains and carries out verification of Alya's FSI capability. The results show that while both codes provide comparable FSI results, Alya exhibits better robustness due to its Subgrid Scale (SGS) technique for stabilizing the convective term and the associated numerical treatments. This code thus opens the door to more extensive use of higher-fidelity finite element based FSI methods in the future.
Among advanced manufacturing techniques for Fiber-Reinforced Polymer-matrix Composites (FRPCs), which are critical to the aerospace, marine, automotive, and energy industries, Frontal Polymerization (FP) has recently been proposed to save orders of magnitude in time and energy. However, the cure kinetics of the matrix phase, usually a thermosetting polymer, complicates the design and control of the process. Here, we develop a deep learning model, ChemNet, to solve an inverse problem: predicting and optimizing the cure kinetics parameters of thermosetting FRPCs for a desired fabrication strategy. ChemNet consists of a fully connected, feedforward, nine-layer deep neural network trained on one million examples, and predicts the activation energy and reaction enthalpy given front characteristics such as speed and maximum temperature. ChemNet provides highly accurate predictions as measured by the mean square error (MSE) and maximum absolute error metrics; its MSE on the training and test sets attains values of 1E-4 and 2E-4, respectively.
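A ChemNet-like regressor can be sketched as below; only the nine-layer fully connected shape and the input/output quantities follow the abstract, while the layer widths, activations, units, and data normalization are assumptions:

```python
# Minimal sketch of a ChemNet-like regressor: front characteristics in,
# cure kinetics parameters out. Widths and activations are assumed.
import torch
import torch.nn as nn

layers = []
widths = [2] + [64] * 8 + [2]          # 9 weight layers: inputs -> ... -> outputs
for i in range(len(widths) - 1):
    layers.append(nn.Linear(widths[i], widths[i + 1]))
    if i < len(widths) - 2:
        layers.append(nn.ReLU())
chemnet = nn.Sequential(*layers)

front = torch.tensor([[1.2, 210.0]])   # [front speed, max temperature] (assumed units)
pred = chemnet(front)                  # -> [activation energy, reaction enthalpy]
loss_fn = nn.MSELoss()                 # the error metric reported in the abstract
```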
LS-DYNA is a well-known multiphysics code with both explicit and implicit time stepping capabilities. Implicit simulations rely heavily on sparse matrix computations, in particular direct solvers, and are notoriously much harder to scale than explicit simulations. In this paper, we investigate the scalability challenges of the implicit structural mode of LS-DYNA. In particular, we focus on linear constraint analysis, sparse matrix reordering, symbolic factorization, and numerical factorization. Our problem of choice for this study is a thermomechanical simulation of jet engine models built by Rolls-Royce with up to 200 million degrees of freedom, or equations. The models are used for engine performance analysis and design optimization, in particular optimization of tip clearances in the compressor and turbine sections of the engine. We present results using as many as 131,072 cores on the Blue Waters Cray XE6/XK7 supercomputer at NCSA and the Titan Cray XK7 supercomputer at OLCF. Since the main focus is on general linear algebra problems, this work is of interest for all linear algebra practitioners, not only developers of implicit finite element codes.
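The factorization phases listed above can be illustrated on a toy problem with SciPy; this is not LS-DYNA code, SuperLU merely stands in for the distributed direct solver, and the explicit RCM step only mimics the fill-reducing reordering phase:

```python
# Minimal sketch (not LS-DYNA): reordering, factorization, and triangular solve
# on a small sparse tridiagonal system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # reordering phase
Ap = A[perm, :][:, perm].tocsc()                       # permuted matrix P A P^T
lu = splu(Ap)                 # symbolic + numerical factorization (SuperLU)
x = np.empty(n)
x[perm] = lu.solve(b[perm])   # forward/back substitution, then undo permutation
```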