Metamaterials, synthetic materials with customized properties, have emerged as a promising field due to advancements in additive manufacturing. These materials derive unique mechanical properties from their internal lattice structures, which often combine multiple materials in repeating geometric patterns. While traditional inverse design approaches have shown potential, they struggle to map a nonlinear material response to the multiple structural configurations that can produce it. This paper presents a novel framework leveraging video diffusion models, a type of generative artificial intelligence (AI), for inverse multi-material design based on nonlinear stress-strain responses. Our approach consists of two key components: (1) a fields generator using a video diffusion model to create solution fields conditioned on a target nonlinear stress-strain response, and (2) a structure identifier employing two UNet models to determine the corresponding multi-material 2D design. By incorporating multiple materials, plasticity, and large deformation, this design method allows enhanced control over the highly nonlinear mechanical behavior of metamaterials commonly seen in real-world applications and offers a promising route toward next-generation metamaterials with finely tuned mechanical characteristics.
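The abstract describes a two-stage pipeline; the sketch below illustrates only the second stage as a single toy 2D UNet (the paper uses two UNet models plus a video diffusion field generator, neither of which is reproduced here). Channel counts, grid size, and the number of material classes are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): a "structure identifier" stage as a
# small 2D UNet mapping generated solution fields (stress/strain frames stacked
# as channels) to a per-pixel multi-material label map. All sizes are assumed.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=8, n_materials=3, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(base, n_materials, 1)   # per-pixel material logits

    def forward(self, x):
        e1 = self.enc1(x)                              # full-resolution features
        e2 = self.enc2(self.pool(e1))                  # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

# Usage: 8 solution-field frames on a 64x64 design domain -> 3-material logits.
fields = torch.randn(1, 8, 64, 64)
material_logits = TinyUNet()(fields)
print(material_logits.shape)  # torch.Size([1, 3, 64, 64])
```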
Data-driven models that act as surrogates for computationally costly 3D topology optimization are popular because they avoid repeated time-consuming 3D finite element analyses during optimization. In this study, one such 3D CNN-based surrogate model for the topology optimization of Schoen’s gyroid triply periodic minimal surface unit cell is investigated. Gyroid-like unit cells are designed using a voxel algorithm and homogenization-based topology optimization codes in MATLAB. A modest set of such optimization results is used as input–output data for supervised learning of the topology-optimization process via a 3D CNN model implemented in Python. The trained model can then instantaneously predict the optimized unit cell geometry for any topology parameters. The high accuracy of the model is demonstrated by a low mean square error and a high Dice coefficient. The main disadvantage of the approach is that generating training data requires numerous costly topology optimization runs; its advantages are that the trained model can be reused for different cases of TO and that the methodology for the accelerated design of 3D metamaterials can be extended to other complex, computationally costly metamaterial design problems with multi-objective properties or multiscale applications. The main purpose of this paper is to provide the complete associated MATLAB and Python codes for optimizing the topology of any cellular structure and predicting new topologies with deep learning for educational purposes.
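As a minimal illustration of how such a voxel-based surrogate could be evaluated, the sketch below pairs a toy 3D convolutional encoder-decoder with the two metrics named in the abstract, mean square error and the Dice coefficient. The input encoding, grid resolution, and binarization threshold are assumptions; the paper's actual architecture and released MATLAB/Python codes are not reproduced here.

```python
# Illustrative sketch only: a toy 3D convolutional surrogate mapping an assumed
# input voxel field (e.g., a parameter/initial-density grid) to a predicted
# optimized density grid, scored with the MSE and Dice metrics named above.
import torch
import torch.nn as nn
import torch.nn.functional as F

surrogate = nn.Sequential(                 # minimal 3D encoder-decoder stand-in
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 1), nn.Sigmoid(),      # voxel densities in [0, 1]
)

def dice_coefficient(pred, target, threshold=0.5, eps=1e-8):
    """Volumetric overlap between binarized predicted and reference geometries."""
    p = (pred > threshold).float()
    t = (target > threshold).float()
    return (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)

x = torch.rand(1, 1, 32, 32, 32)                          # assumed 32^3 input grid
reference = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()  # stand-in for a TO result
pred = surrogate(x)
print(F.mse_loss(pred, reference).item(), dice_coefficient(pred, reference).item())
```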
Unlike classical artificial neural networks, which require retraining for each new set of parametric inputs, the Deep Operator Network (DeepONet), a recently introduced deep learning framework, approximates linear and nonlinear solution operators by taking parametric functions (infinite-dimensional objects) as inputs and mapping them to complete solution fields. In this paper, two newly devised DeepONet formulations with sequential learning and Residual U-Net (ResUNet) architectures are trained for the first time to simultaneously predict complete thermal and mechanical solution fields under variable loading, loading histories, process parameters, and even variable geometries. Two real-world applications are demonstrated: (1) coupled thermo-mechanical analysis of steel continuous casting with multiple visco-plastic constitutive laws and (2) sequentially coupled directed energy deposition for additive manufacturing. Despite highly challenging, spatially variable target stress distributions, DeepONets can infer reasonably accurate full-field temperature and stress solutions several orders of magnitude faster than traditional, highly optimized finite-element analysis (FEA), even when FEA simulations are run on the latest high-performance computing platforms. The proposed DeepONet model's ability to provide field predictions almost instantly for unseen input parameters opens the door for future preliminary evaluation and design optimization of these vital industrial processes.
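For readers unfamiliar with the operator-learning setup, the sketch below shows the standard DeepONet structure (not the paper's sequential-learning or ResUNet variants): a branch network encodes the input function sampled at fixed sensor points, a trunk network encodes query coordinates, and their inner product gives the predicted field value. Sensor count, layer widths, and latent dimension are illustrative assumptions.

```python
# Minimal DeepONet sketch: branch net for the sampled input function, trunk net
# for query coordinates (x, y, t), inner product for the predicted field value.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m_sensors=100, coord_dim=3, p=64, width=128):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, width), nn.Tanh(),
                                   nn.Linear(width, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, coords):
        b = self.branch(u_sensors)             # (batch, p) function encoding
        t = self.trunk(coords)                 # (n_points, p) coordinate encoding
        return b @ t.T + self.bias             # (batch, n_points) field values

# Usage: one loading history sampled at 100 sensors, queried on a 50x50 space-time grid.
u = torch.randn(1, 100)
xy = torch.rand(2500, 3)
field = DeepONet()(u, xy)
print(field.shape)  # torch.Size([1, 2500])
```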
Frontal polymerization (FP) is a self-sustaining curing process that enables rapid and energy-efficient manufacturing of thermoset polymers and composites. Computational methods conventionally used to simulate the FP process are time-consuming, and repeated simulations are required for sensitivity analysis, uncertainty quantification, or optimization of the manufacturing process. In this work, we develop an adaptive surrogate deep-learning model for FP of dicyclopentadiene (DCPD) that predicts the evolution of temperature and degree of cure orders of magnitude faster than the finite-element method (FEM). The adaptive algorithm provides a strategy to select training samples efficiently and to save computational cost by reducing the redundancy of FEM-based training samples. It calculates the residual error of the FP governing equations using automatic differentiation of the deep neural network, and a probability density function expressed in terms of this residual error is used to select training samples from the Sobol sequence space. The temperature and degree-of-cure evolution of each training sample is obtained by a 2D FEM simulation. The adaptive method is more efficient and has better prediction accuracy than random sampling. With the well-trained surrogate neural network, the FP characteristics (front speed, shape, and temperature) can be extracted quickly from the predicted temperature and degree-of-cure fields.
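A compact sketch of the residual-weighted sampling idea follows, under stated assumptions: candidate points come from a Sobol sequence, a toy heat-equation-style residual is evaluated with automatic differentiation of a placeholder network, and new FEM training points are drawn from a probability density proportional to that residual. The network, the residual form, and all sample counts are illustrative, not the DCPD cure-kinetics model used in the paper.

```python
# Illustrative sketch only: residual-weighted adaptive selection of new training
# samples. The surrogate and residual are placeholders (a generic 1D heat-equation
# residual), not the paper's DCPD thermo-chemical governing equations.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import qmc

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # maps (x, t) -> T

def residual(points):
    """Toy residual |dT/dt - d2T/dx2| evaluated via automatic differentiation."""
    pts = points.clone().requires_grad_(True)
    T = net(pts)
    grads = torch.autograd.grad(T.sum(), pts, create_graph=True)[0]
    T_x, T_t = grads[:, 0:1], grads[:, 1:2]
    T_xx = torch.autograd.grad(T_x.sum(), pts, create_graph=True)[0][:, 0:1]
    return (T_t - T_xx).abs().squeeze(1)

# Candidate points from a Sobol sequence; higher residual -> higher pick probability.
candidates = torch.tensor(qmc.Sobol(d=2, scramble=True).random(256), dtype=torch.float32)
r = residual(candidates).detach().numpy().astype(np.float64) + 1e-12
prob = r / r.sum()                                    # residual-based density
chosen = np.random.choice(len(candidates), size=8, replace=False, p=prob)
new_training_points = candidates[torch.from_numpy(chosen)]  # next FEM simulations
print(new_training_points.shape)  # torch.Size([8, 2])
```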
A state-of-the-art large eddy simulation code has been developed to solve compressible flows in turbomachinery. The code has been engineered with a high degree of scalability, enabling it to effectively leverage the many-core architecture of the new Sunway system. A consistent performance of 115.8 DP-PFLOPs has been achieved on a high-pressure turbine cascade consisting of over 1.69 billion mesh elements and 865 billion degrees of freedom (DOFs). By leveraging a high-order unstructured solver and its portability to large heterogeneous parallel systems, we have progressed towards solving the
This paper introduces Cybershuttle, a new type of user-facing cyberinfrastructure that provides seamless access to a range of resources for researchers, enhancing their productivity. The Cybershuttle Research Environment is built on open source Apache Airavata software and uses a hybrid approach that integrates locally deployed agent programs with centrally hosted middleware. This enables end-to-end integration of computational science and engineering research across a range of resources, including users’ local resources, centralized university computing and data resources, computational clouds, and NSF-funded national-scale computing centers. The scientific user environments are designed following established user-centered design practices.