Volume 2B: 44th Design Automation Conference
Latest Publications

Total documents: 63 (five years: 0)
H-index: 3 (five years: 0)

Published by the American Society of Mechanical Engineers
ISBN: 9780791851760

Author(s): Shanglong Zhang, Julián A. Norato

Topology optimization problems are typically non-convex, and as such, multiple local minima exist. Depending on the initial design, the type of optimization algorithm, and the optimization parameters, a gradient-based optimizer converges to one of these minima. Unfortunately, these minima can be highly suboptimal, particularly when the structural response is very non-linear or when multiple constraints are present. The issue is more pronounced in the topology optimization of geometric primitives, because the design representation is more compact and restricted than in free-form topology optimization. In this paper, we investigate the use of tunneling in topology optimization to move from a poor local minimum to a better one. The tunneling method used in this work is a gradient-based, deterministic method that sequentially finds a minimum better than the previous one. We demonstrate this approach via numerical examples and show that coupling the tunneling method with topology optimization leads to better designs.
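The minimize-then-tunnel loop described above can be sketched on a toy problem. The sketch below is an illustration under stated assumptions, not the paper's formulation: it uses a 1-D multimodal function, the classical tunneling function T(x) = (f(x) − f(x*)) / |x − x*|^(2λ), and SciPy's Nelder-Mead in place of the paper's gradient-based topology-optimization machinery.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D multimodal objective standing in for a topology-optimization
# objective (the real design space is high-dimensional).
def f(x):
    x = np.atleast_1d(x)[0]
    return 0.05 * x**2 + np.sin(3.0 * x)

def tunnel(f, x_star, f_star, lam=1.0, offsets=(-6.0, -3.0, 3.0, 6.0)):
    """Tunneling phase: minimize T(x) = (f(x) - f*) / |x - x*|^(2*lam)
    from several perturbed starts; success means f(x) < f*."""
    def T(x):
        d = abs(np.atleast_1d(x)[0] - x_star) + 1e-12
        return (f(x) - f_star) / d**(2.0 * lam)
    for s in offsets:
        res = minimize(T, x0=[x_star + s], method="Nelder-Mead")
        if f(res.x) < f_star - 1e-9:
            return res.x[0]
    return None  # no better basin found: x_star is accepted

# Sequential local minimization + tunneling.
x0 = 4.0
for _ in range(5):
    res = minimize(f, x0=[x0], method="Nelder-Mead")
    x_star, f_star = res.x[0], res.fun
    x_next = tunnel(f, x_star, f_star)
    if x_next is None:
        break
    x0 = x_next
print(round(x_star, 3), round(f_star, 3))
```

Starting from x = 4, a plain local search stops in a shallow basin; each tunneling phase relocates the search to a basin with a strictly lower minimum, mirroring the sequential improvement the abstract describes.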


Author(s): Vincenzo Castorani, Paolo Cicconi, Michele Germani, Sergio Bondi, Maria Grazia Marronaro, ...

Modularization is a current issue in plant design: a modular system aims to reduce lead time and cost in the design phases. An oil & gas plant consists of many engineered-to-order solutions that must be submitted and approved during the negotiation phase. In this context, design tools and methods are necessary to support the design life cycle from the conceptual study to the detailed project. This paper proposes an approach to optimizing the design of modularized oil & gas plants, with a focus on the related steel structures. A test case shows the configuration workflow applied to a modular steel structure of about 400 tons. The modularized layout is optimized using genetic algorithms, and a knowledge base supports the configuration phase related to the conceptual design. Design rules and metrics are formalized from the analysis of past solutions.
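As an illustration of layout optimization with a genetic algorithm, the sketch below evolves section sizes for a hypothetical six-module steel structure, trading mass against a stiffness-like limit. Every function and number here is invented for the example; the paper's actual objective, constraints, and knowledge-base rules are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in: choose section heights (mm) for 6 structural modules to
# minimize mass while keeping a deflection-like measure within a limit.
def mass(h):
    return float(np.sum(0.05 * h))           # mass grows with section size

def deflection(h):
    return float(np.sum(1.0e6 / h**3))       # stiffness grows with h^3

def fitness(h):
    penalty = max(0.0, deflection(h) - 2.0) * 1e3   # constraint penalty
    return mass(h) + penalty

pop = rng.uniform(50, 300, size=(40, 6))     # initial random population
for _ in range(100):
    f = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(f)[:20]]        # truncation selection (elitist)
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, 2)]
        child = np.where(rng.random(6) < 0.5, a, b)            # uniform crossover
        child = child + rng.normal(0, 5, 6) * (rng.random(6) < 0.2)  # mutation
        kids.append(np.clip(child, 50, 300))
    pop = np.vstack([parents, kids])
best = pop[np.argmin([fitness(h) for h in pop])]
print(round(mass(best), 1), round(deflection(best), 2))
```

The population converges toward the lightest layout that still satisfies the stiffness limit, which is the role the genetic algorithm plays in the paper's configuration workflow.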


Author(s): Xiaolin Li, Zijiang Yang, L. Catherine Brinson, Alok Choudhary, Ankit Agrawal, ...

In Computational Materials Design (CMD), it is well recognized that identifying key microstructure characteristics is crucial for determining material design variables. However, existing microstructure characterization and reconstruction (MCR) techniques have limitations when applied to materials design. Some MCR approaches are not applicable to microstructural design because they expose no parameters that can serve as design variables, while others introduce significant information loss in microstructure representation, dimensionality reduction, or both. In this work, we present a deep adversarial learning methodology that overcomes these limitations. In the proposed methodology, generative adversarial networks (GAN) are trained to learn the mapping between latent variables and microstructures. Thereafter, the low-dimensional latent variables serve as design variables, and a Bayesian optimization framework is applied to obtain microstructures with the desired material property. Owing to the special design of the network architecture, the proposed methodology can identify the latent (design) variables with the desired dimensionality and capture complex material microstructural characteristics. The validity of the proposed methodology is tested numerically on a synthetic microstructure dataset, and its effectiveness for materials design is evaluated through a case study of optimizing optical performance for energy absorption. Additional features, such as scalability and transferability, are also demonstrated. In essence, the proposed methodology provides an end-to-end solution for microstructural design, in which the GAN reduces information loss and preserves more microstructural characteristics, and GP-Hedge optimization improves the efficiency of design exploration.
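The optimization half of such a framework can be illustrated with a minimal Gaussian-process Bayesian optimization loop over a low-dimensional latent space. Everything below is a stand-in: `property_of` plays the role of the generator-plus-property-simulation pipeline, the latent space is 2-D, and a plain expected-improvement acquisition is used instead of the paper's GP-Hedge portfolio.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(0)

# Stand-in for property evaluation: in the paper this would be the
# generator G(z) -> microstructure, followed by a physics simulation.
def property_of(z):
    return -np.sum((z - 0.3) ** 2) + 0.1 * np.sin(5 * z).sum()

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    sol = np.linalg.solve(K, y)
    mu = Ks.T @ sol
    cov = rbf(Xq, Xq) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))   # Gaussian CDF
    phi = np.exp(-0.5 * z**2) / sqrt(2 * pi)           # Gaussian PDF
    return (mu - best) * Phi + sigma * phi

# Bayesian optimization loop over a 2-D latent space [-1, 1]^2.
X = rng.uniform(-1, 1, size=(5, 2))
y = np.array([property_of(z) for z in X])
for _ in range(20):
    Zq = rng.uniform(-1, 1, size=(256, 2))             # candidate latents
    mu, sigma = gp_posterior(X, y, Zq)
    z_next = Zq[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, z_next])
    y = np.append(y, property_of(z_next))
print(round(y.max(), 3))
```

The loop spends its evaluation budget where the surrogate predicts high property values or high uncertainty, which is the efficiency gain the abstract attributes to the Bayesian optimization stage.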


Author(s): Xiangxue Zhao, Shapour Azarm, Balakumar Balachandran

Online prediction of dynamical system behavior based on a combination of simulation data and sensor measurement data has numerous applications. Examples include predicting safe flight configurations, forecasting storms and wildfire spread, and estimating railway track and pipeline health conditions. In such applications, high-fidelity simulations may be used to accurately predict a system’s dynamical behavior offline (“non-real time”). However, due to their computational expense, these simulations are of limited use for online (“real-time”) prediction of a system’s behavior. To remedy this, one possible approach is to allocate a significant portion of the computational effort to obtaining data through offline simulations. The offline data can then be combined with online sensor measurements for online estimation of the system’s behavior, with accuracy comparable to that of the offline high-fidelity simulation. The main contribution of this paper is the construction of a fast, data-driven spatiotemporal prediction framework that can be used to estimate general parametric dynamical system behavior. This is achieved through three steps. First, high-order singular value decomposition is applied to map high-dimensional offline simulation datasets into a subspace. Second, Gaussian processes are constructed to approximate model parameters in the subspace. Finally, reduced-order particle filtering is used to assimilate sparsely located sensor data to further improve the prediction. The effectiveness of the proposed approach is demonstrated through a case study in which aeroelastic response data obtained for an aircraft through simulations are integrated with measurement data from a few sparsely located sensors. Through this case study, the authors show that, along with dynamic enhancement of the state estimates, one can also realize a reduction in the uncertainty of the estimates.
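The offline-reduction-plus-assimilation idea can be miniaturized as follows. In this sketch the "simulations" are analytic fields of a single parameter, so the HOSVD reduces to an ordinary SVD, the Gaussian-process step is omitted, and a single particle-filter assimilation step is shown; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)

# Offline: high-fidelity "simulations" over a grid of a model parameter.
thetas = np.linspace(2.0, 8.0, 40)
snapshots = np.stack([np.sin(t * x) * np.exp(-t * x / 8) for t in thetas], axis=1)

# Step 1: SVD maps the snapshot set into a low-dimensional subspace
# (HOSVD reduces to plain SVD for matrix-valued snapshot data).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :8]                                # 8 retained modes

# Online: a few sparse, noisy sensors observe the true (unseen) state.
u_true = np.sin(5.3 * x) * np.exp(-5.3 * x / 8)
sensors = np.array([20, 60, 100, 140, 180])
y = u_true[sensors] + 0.01 * rng.standard_normal(len(sensors))

# Step 3: one particle-filter assimilation step in the reduced space.
coords = (Phi.T @ snapshots).T                # reduced coords of snapshots
particles = coords[rng.integers(0, len(coords), 500)]
particles = particles + 0.05 * rng.standard_normal(particles.shape)
resid = particles @ Phi[sensors].T - y        # predicted minus measured
e = 0.5 * (resid ** 2).sum(axis=1) / 0.05 ** 2
w = np.exp(e.min() - e)                       # stabilized likelihood weights
w /= w.sum()
u_hat = Phi @ (w @ particles)                 # assimilated full-field estimate
err = np.abs(u_hat - u_true).max()
print(round(err, 4))
```

Five noisy point measurements are enough to recover the full 200-point field because the particles live in the low-dimensional subspace learned offline, which is the core of the framework's speed.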


Author(s): Aniket N. Chitale, Joseph K. Davidson, Jami J. Shah

The purpose of math models for tolerances is to aid a designer in assessing the relationships between the tolerances that contribute to variations of a dependent dimension, one that must be controlled to achieve some design function and that identifies a target (functional) feature. The T-Maps model for representing the limits of allowable manufacturing variations is applied to identify the sensitivity of the dependent dimension to each of the contributing tolerances. The method is to choose from a library of T-Maps the one that represents, in its own local (canonical) reference frame, each contributing feature and the tolerances specified on it; transform this T-Map to a coordinate frame centered at the target feature; obtain the accumulation T-Map for the assembly with the Minkowski sum; and fit a circumscribing functional T-Map to it. The fitting is accomplished numerically to determine the associated functional tolerance value. The sensitivity for each contributing tolerance-and-feature combination is determined by perturbing the tolerance, refitting the functional map to the accumulation map, and forming a ratio of the incremental tolerance values from the two functional T-Maps. By perturbing the tolerance-feature combinations one at a time, the sensitivities for an entire stack of contributing tolerances can be built. For certain classes of loop equations, the same sensitivities result from fitting the functional T-Map to the T-Map for each feature, one by one, and forming the overall result as a scalar sum. Sensitivities help a designer optimize tolerance assignments by identifying the tolerances that most strongly influence the dependent dimension at the target feature. Since the fitting of the functional T-Map is accomplished by intersection of geometric shapes, all the T-Maps are constructed with linear half-spaces.
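Although the full T-Maps construction involves Minkowski sums and fits of higher-dimensional polytopes, the perturb-and-refit sensitivity idea can be shown on a 1-D worst-case stack-up analogue. The loop equation, coefficients, and tolerance values below are hypothetical.

```python
import numpy as np

# Hypothetical four-feature stack: dependent gap g = 2*x1 + x2 - 1.5*x3 - x4,
# with the coefficients playing the role of loop-equation sensitivities.
coeffs = np.array([2.0, 1.0, -1.5, -1.0])
tols = np.array([0.02, 0.05, 0.01, 0.03])    # contributing tolerances (mm)

def functional_tol(t):
    # Worst-case accumulation: the 1-D analogue of fitting a functional
    # T-Map to the Minkowski sum of the contributors' T-Maps.
    return float(np.sum(np.abs(coeffs) * t))

base = functional_tol(tols)
sens = []
for i in range(len(tols)):
    t = tols.copy()
    t[i] *= 1.01                              # perturb one tolerance by 1%
    # Ratio of incremental functional tolerance to incremental input tolerance.
    sens.append((functional_tol(t) - base) / (0.01 * tols[i]))
print(base, [round(s, 3) for s in sens])
```

Here the largest sensitivity (2.0) flags the first tolerance as the strongest lever on the dependent dimension, which is exactly how the paper proposes a designer would prioritize tolerance assignments.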


Author(s): Sacha W. Ruzzante, Amy M. Bilton

Agricultural technology transfer to people in the developing world is a potentially powerful tool to raise productivity and improve livelihoods. Despite this, many technologies are not adopted by their intended beneficiaries. Qualitative studies have identified guidelines to follow in the design and dissemination of agricultural technology, but there has been comparatively little synthesis of quantitative studies of adoption. This study presents a meta-analysis of adoption studies of agricultural technologies in the developing world. The results confirm most earlier findings, but cast doubt on the importance of some classic predictors of adoption, such as education and landholding size. Contact with extension services and membership in farming associations are found to be the most important variables in predicting adoption. Attributes of the technologies are found to modify the relationships of predictor variables to adoption. Membership in farming associations and farmer experience are found to be positively linked to adoption in general, but for technologies that reduce labour the effect is amplified. The findings have potential implications for researchers, extension workers, and policy makers.
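Random-effects pooling of per-study effect sizes is the standard machinery behind such a meta-analysis; a minimal DerSimonian-Laird version is sketched below with invented study data (not the paper's).

```python
import numpy as np

# Hypothetical per-study correlations between extension contact and
# adoption, with sample sizes (made-up numbers, not the paper's data).
r = np.array([0.21, 0.35, 0.14, 0.28, 0.40, 0.18])
n = np.array([120, 340, 90, 210, 150, 400])

z = np.arctanh(r)                 # Fisher z-transform of each correlation
v = 1.0 / (n - 3)                 # within-study variance of z
w = 1.0 / v

# DerSimonian-Laird estimate of between-study variance tau^2.
z_fe = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fe) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

# Random-effects pooled correlation (back-transformed to r).
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
r_pooled = np.tanh(z_re)
print(round(r_pooled, 3), round(tau2, 4))
```

A positive tau² indicates real between-study heterogeneity, which is what motivates the paper's use of technology attributes as moderators of the predictor-adoption relationships.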


Author(s): Zixi Han, Mian Li, Zixian Jiang, Zuoxing Min, Sophie Bourmich

The strength requirement is one of the most important criteria in the design of gas turbine casings. Traditionally, strength is assessed with deterministic analyses, with boundary conditions and loads set to fixed design values. However, the real boundary conditions and loads in operation often differ from these fixed values, so the mechanical integrity of the casing can deviate from the strength and fatigue calculations. In this work, the effect of variability in the boundary conditions and loads on the static thermal stress of gas turbine casings is investigated using a probabilistic approach. The stress probability distribution is estimated with a Monte Carlo simulation based on distributions of boundary conditions and loads obtained from field measurements. Finite element analysis is used to calculate the stress for different boundary conditions, and a surrogate model is built to reduce the computational time of the Monte Carlo simulations. The methodology is applied to a real engineering case, where it better quantifies the strength assessment result.
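The surrogate-accelerated Monte Carlo idea can be sketched generically: fit a cheap response surface to a handful of "finite-element" evaluations, then propagate the measured load scatter through it. The stress function, load distributions, and limit below are all invented stand-ins, not casing data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a finite-element stress evaluation (expensive in reality).
def fe_stress(T, p):              # temperature (deg C), pressure (MPa)
    return 0.8 * T + 45.0 * p + 0.002 * T * p

# Small design of experiments -> quadratic response-surface surrogate.
T_doe, p_doe = np.meshgrid(np.linspace(300, 500, 5), np.linspace(1, 3, 5))
T_doe, p_doe = T_doe.ravel(), p_doe.ravel()
A = np.column_stack([np.ones_like(T_doe), T_doe, p_doe,
                     T_doe * p_doe, T_doe**2, p_doe**2])
coef, *_ = np.linalg.lstsq(A, fe_stress(T_doe, p_doe), rcond=None)

def surrogate(T, p):
    return coef @ np.array([np.ones_like(T), T, p, T * p, T**2, p**2])

# Monte Carlo on the cheap surrogate, with load scatter playing the role
# of distributions estimated from field measurements.
T = rng.normal(400.0, 20.0, 200_000)
p = rng.normal(2.0, 0.15, 200_000)
stress = surrogate(T, p)
p_exceed = float(np.mean(stress > 450.0))    # probability of exceeding limit
print(round(p_exceed, 3))
```

Only 25 "FE" runs are spent on the surrogate, after which 200,000 Monte Carlo samples cost essentially nothing, which is the computational saving the abstract describes.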


Author(s): Erica Gralla, Zoe Szajnfarber

It has long been recognized that games are useful in engineering education, and more recently they have also become a common setting for empirical research. Games are useful for both teaching and research because they mimic aspects of reality and require participants to reason within that realistic context, and they allow researchers to study phenomena empirically that are hard to observe in reality. This paper explores what can be learned by students and by researchers, based on the authors’ experience with two sets of games. These games vary in both the experience level of the participants and the “fidelity” or realism of the game itself. Our experience suggests that what can be learned by participants and by researchers depends on both these dimensions. For teaching purposes, inexperienced participants may struggle to connect lessons from medium-fidelity games to the real world. On the other hand, experienced participants may learn more from medium-fidelity games that provide the time and support to practice and reflect on new skills. For research purposes, high-fidelity games are best due to their higher ecological validity, even with inexperienced participants, although experienced participants may enable strong validity in medium-fidelity settings. These findings are based on experience with two games, but provide promising directions for future research.


Author(s): Jennifer Ventrella, Nordica MacCarty

Accurate, accessible methods for monitoring and evaluating improved cookstoves are necessary to optimize designs, quantify impacts, and ensure programmatic success. Despite recent advances in cookstove monitoring technologies, no existing device autonomously measures fuel use in a household over time, so this important metric continues to rely on in-person visits to conduct measurements by hand. To address this need, researchers at Oregon State University and Waltech Systems have developed the Fuel, Usage, and Emissions Logger (FUEL), an integrated sensor platform that quantifies fuel consumption and cookstove use by monitoring the mass of the household’s fuel supply with a load cell and the cookstove body temperature with a thermocouple. Following a proof-of-concept study of five prototypes in Honduras, a one-month pilot study of one hundred prototypes was conducted in the Apac District of northern Uganda. The results were used to evaluate user engagement with the system, verify technical performance, and develop algorithms to quantify fuel consumption and stove usage over time. Due to external hardware malfunctions, 31% of the deployed FUEL sensors did not record data. However, results from the remaining 69% of sensors indicated that 82% of households used the sensor consistently, for a cumulative 2188 days of data. Preliminary results show an average daily fuel consumption of 6.3 ± 1.9 kg across households. Detailed analysis algorithms are still under development. With higher-quality external hardware, it is expected that FUEL will perform as anticipated, providing long-term quantitative data on cookstove adoption, fuel consumption, and emissions.
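Since the abstract notes that the analysis algorithms are still under development, the following is only a plausible baseline, not FUEL's actual method: estimate daily fuel use from a load-cell mass series by summing downward mass steps and treating large upward jumps as refills. The readings are simulated.

```python
import numpy as np

# Simulated hourly load-cell readings (kg) over three days: gradual
# removals for cooking, one refill (large positive jump), sensor noise.
rng = np.random.default_rng(3)
mass = np.concatenate([
    np.linspace(20.0, 14.0, 24),   # day 1: 6.0 kg removed
    np.linspace(14.0, 8.5, 24),    # day 2: 5.5 kg removed
    np.linspace(28.0, 21.5, 24),   # day 3: refill to 28 kg, 6.5 kg removed
]) + 0.02 * rng.standard_normal(72)

steps = np.diff(mass)
removed = -steps[steps < -0.1]     # mass drops; threshold rejects noise
refills = steps[steps > 1.0]       # refill events show as large jumps
daily_use = removed.sum() / 3
print(round(daily_use, 2))
```

On this synthetic trace the estimate recovers the roughly 6 kg/day that was simulated; a deployed algorithm would also need to handle missing samples, wind loading, and partial fuel returns.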


Author(s): Eliot Rudnick-Cohen, Jeffrey W. Herrmann, Shapour Azarm

Feasibility robust optimization techniques solve optimization problems with uncertain parameters that appear only in the constraint functions. Solving such problems requires finding an optimal solution that is feasible for all realizations of the uncertain parameters. This paper presents a new feasibility robust optimization approach for uncertain parameters defined on continuous domains without any known probability distributions. The proposed approach integrates a new sampling-based scenario generation scheme with a new scenario reduction approach to solve feasibility robust optimization problems. An analysis was performed to provide worst-case bounds on the computational cost of the proposed approach. The approach was applied to three test problems and compared against other scenario-based robust optimization approaches. A test conducted on one of the test problems demonstrates that the computational cost of the proposed approach does not increase significantly as additional uncertain parameters are introduced. The results show that the proposed approach converges to a robust solution faster than conventional robust optimization approaches that discretize the uncertain parameters.
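A generic sampling-based scenario-generation loop (not the authors' specific scheme, and without their reduction step) can be sketched as follows: solve the problem on the current scenario set, search sampled parameter values for the worst constraint violation, and append that scenario until none is found. The toy problem below has a known robust optimum of x1 + x2 = 2.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def g(x, u):                       # g(x, u) <= 0 must hold for all u
    return 1.0 - u[0] * x[0] - u[1] * x[1]

def solve_for(scenarios):
    # Deterministic problem enforcing the constraint at each scenario.
    cons = [{"type": "ineq", "fun": (lambda x, u=u: -g(x, u))} for u in scenarios]
    res = minimize(lambda x: x[0] + x[1], x0=[1.0, 1.0],
                   bounds=[(0, 5), (0, 5)], constraints=cons, method="SLSQP")
    return res.x

scenarios = [rng.uniform(0.5, 1.0, 2)]       # start with one sampled scenario
for _ in range(15):
    x = solve_for(scenarios)
    U = rng.uniform(0.5, 1.0, size=(2000, 2))    # sample-based feasibility check
    worst = U[np.argmax([g(x, u) for u in U])]
    if g(x, worst) <= 1e-6:
        break                                # robust to all sampled realizations
    scenarios.append(worst)                  # scenario generation step
print(np.round(x, 2), len(scenarios))
```

Only a handful of scenarios end up in the working set, which is why such approaches can beat a dense discretization of the uncertain parameter domain.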

