Assessing the Effectiveness of Using Graveyard Data for Generating Design Alternatives

Author(s):  
Garrett Foster ◽  
Scott Ferguson

Modeling to Generate Alternatives (MGA) is a technique used to identify variant designs that maximize design space distance from an initial point while satisfying performance loss constraints. Recent work has explored the application of this technique to nonlinear design problems, where the design space was investigated using an exhaustive sampling procedure. While computational cost concerns were noted, the main focus was determining how scaling and distance metric selection influenced alternative discovery. To increase the viability of MGA for engineering design problems, this work seeks to reduce the computational overhead needed to identify design alternatives. This paper investigates and quantifies the effectiveness of using previously sampled designs, i.e., a graveyard, from a multiobjective genetic algorithm as a means of reducing computational expense. Computational savings and the expected error are quantified to assess the effectiveness of this approach. These results are compared to other, more common “search” techniques, namely Latin hypercube sampling, grid search, and the Nelder-Mead simplex method. The performance of these “search” techniques is subsequently explored in two case study problems, the design of a two-bar truss and an I-beam, to find the most unique alternative design over a range of thresholds. Results from this work show that the graveyard can be used to inexpensively generate alternatives that are close to ideal, especially near the starting design. Additionally, this paper demonstrates that graveyard information can be used to improve the performance of the Nelder-Mead simplex method when searching for alternative designs.
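
A minimal sketch of the graveyard idea described above: previously evaluated designs are filtered by a performance-loss threshold and the surviving design farthest (in design space) from the starting point is returned. The variable names, minimization convention, and Euclidean distance metric are assumptions of this sketch, not the paper's implementation.

```python
# Sketch: mine a GA "graveyard" of previously evaluated designs for alternatives.
# Assumes minimization; all names here are illustrative.
import numpy as np

def most_distant_alternative(x0, f0, graveyard_x, graveyard_f, loss_threshold):
    """Return the graveyard design farthest from x0 (in design space)
    whose objective is within `loss_threshold` of the starting value f0."""
    graveyard_x = np.asarray(graveyard_x, dtype=float)
    graveyard_f = np.asarray(graveyard_f, dtype=float)
    # Keep only designs that satisfy the performance-loss constraint.
    feasible = graveyard_f <= f0 + loss_threshold
    if not np.any(feasible):
        return None
    candidates = graveyard_x[feasible]
    distances = np.linalg.norm(candidates - np.asarray(x0, dtype=float), axis=1)
    return candidates[np.argmax(distances)]

# Toy usage with a random "graveyard"
rng = np.random.default_rng(0)
gx = rng.uniform(0.0, 1.0, size=(500, 2))      # previously sampled designs
gf = np.sum((gx - 0.3) ** 2, axis=1)           # their stored objective values
x0 = np.array([0.3, 0.3]); f0 = 0.0
print(most_distant_alternative(x0, f0, gx, gf, loss_threshold=0.05))
```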

Author(s):  
ZAHED SIDDIQUE ◽  
DAVID W. ROSEN

For typical optimization problems, the design space of interest is well defined: It is a subset of R^n, where n is the number of (continuous) variables. Constraints are often introduced to eliminate infeasible regions of this space from consideration. Many engineering design problems can be formulated as search in such a design space. For configuration design problems, however, the design space is much more difficult to define precisely, particularly when constraints are present. Configuration design spaces are discrete and combinatorial in nature, but not necessarily purely combinatorial, as certain combinations represent infeasible designs. One of our primary design objectives is to drastically reduce the effort to explore large combinatorial design spaces. We believe it is imperative to develop methods for mathematically defining design spaces for configuration design. The purpose of this paper is to outline our approach to defining configuration design spaces for engineering design, with an emphasis on the mathematics of the spaces and their combinations into larger spaces that more completely capture design requirements. Specifically, we introduce design spaces that model physical connectivity, functionality, and assemblability considerations for a representative product family, a class of coffeemakers. Then, we show how these spaces can be combined into a “common” product variety design space. We demonstrate how constraints can be defined and applied to these spaces so that feasible design regions can be directly modeled. Additionally, we explore the topological and combinatorial properties of these spaces. The application of this design space modeling methodology is illustrated using the coffeemaker product family.
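
A minimal sketch of a discrete, combinatorial configuration design space with feasibility constraints, loosely patterned on a coffeemaker family. The component options and constraint rules below are invented for illustration and are not the paper's actual model.

```python
# Sketch: enumerate a combinatorial configuration space and filter infeasible designs.
from itertools import product

options = {
    "reservoir": ["small", "large"],
    "filter":    ["basket", "cone"],
    "carafe":    ["glass", "thermal"],
    "heater":    ["plate", "none"],
}

def feasible(cfg):
    # Example (assumed) constraints: a thermal carafe needs no warming plate,
    # and a large reservoir is only paired with a basket filter.
    if cfg["carafe"] == "thermal" and cfg["heater"] == "plate":
        return False
    if cfg["reservoir"] == "large" and cfg["filter"] != "basket":
        return False
    return True

keys = list(options)
all_configs = [dict(zip(keys, combo)) for combo in product(*options.values())]
feasible_configs = [c for c in all_configs if feasible(c)]
print(len(all_configs), "combinations,", len(feasible_configs), "feasible")
```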


2004 ◽  
Vol 126 (6) ◽  
pp. 945-949 ◽  
Author(s):  
Maarten Franssen ◽  
Louis L. Bucciarelli

Rationality has different meanings within different contexts. In engineering design, to be rational usually means to be instrumentally rational, that is, to take a measured decision aimed at the realization of a particular goal, as in attempts to optimize an objective function. But in many engineering design problems, especially those that involve several engineers collaborating on a design task, there is no obvious or uncontested, unique objective function. An alternative approach then takes the locus of optimization to be individual engineers’ utility functions. In this paper, we address an argument which claimed that unless the engineers hold a common utility function over design alternatives, a suboptimal, and hence irrational, design is bound to ensue. We challenge this claim and show that, while sticking to the utility-function approach but adopting a game-theoretic perspective, rational outcomes to the problem at issue are possible.
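
A toy illustration of the game-theoretic point (not the paper's example or argument): two engineers with different, hypothetical utility functions over joint design choices can still reach a stable, defensible outcome, here a pure-strategy Nash equilibrium.

```python
# Sketch: enumerate pure-strategy Nash equilibria for two engineers with
# different (made-up) utilities over a small set of joint design choices.
import numpy as np

# Engineer A picks the row (e.g., a structural concept), engineer B the column
# (e.g., an actuation concept). Payoff entries are hypothetical utilities.
u_a = np.array([[3, 1],
                [2, 2]])
u_b = np.array([[2, 1],
                [1, 3]])

equilibria = []
for i in range(u_a.shape[0]):
    for j in range(u_a.shape[1]):
        best_for_a = u_a[i, j] >= u_a[:, j].max()   # A cannot gain by deviating
        best_for_b = u_b[i, j] >= u_b[i, :].max()   # B cannot gain by deviating
        if best_for_a and best_for_b:
            equilibria.append((i, j))
print("Pure-strategy Nash equilibria:", equilibria)
```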


Author(s):  
Peter Simov ◽  
Scott Ferguson

Significant research has focused on multiobjective design optimization and negotiating trade-offs between conflicting objectives. Many times, this research has referred to the possibility of attaining similar performance from multiple, unique design combinations. While such occurrences may allow for greater design freedom, their significance has yet to be quantified for trade-off decisions made in the design space (DS). In this paper, we computationally explore which regions of the performance space (PS) exhibit “one-to-many” mappings back to the DS, and examine the behavior and validity of the corresponding region associated with this mapping. Regions of interest in the PS and DS are identified and generated using indifference thresholds to effectively “discretize” both spaces. The properties analyzed in this work are a mapped region’s location in the PS and DS and the total hypervolume of the mappings. Our proposed approach is demonstrated on two different multiobjective engineering problems. The results indicate that one-to-many mappings occur in engineering design problems, and that while these mappings can result in significant design space freedom, they often result in notable performance sacrifice.
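
A minimal sketch of detecting “one-to-many” mappings: sampled designs are binned in the performance space using indifference thresholds, and the design-space spread of each bin is measured. The test functions, thresholds, and bounding-box volume measure are assumptions of this sketch, not the paper's procedure.

```python
# Sketch: discretize the performance space with indifference thresholds and
# measure design-space spread per performance bin.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(5000, 2))                    # sampled design space
F = np.column_stack([X[:, 0] ** 2 + X[:, 1] ** 2,             # objective 1
                     (X[:, 0] - 1.0) ** 2 + X[:, 1] ** 2])    # objective 2

thresholds = np.array([0.5, 0.5])          # assumed indifference per objective
bins = np.floor(F / thresholds).astype(int)

groups = defaultdict(list)
for idx, b in enumerate(map(tuple, bins)):
    groups[b].append(idx)

# Design-space bounding-box "volume" of each performance bin
for b, members in sorted(groups.items())[:5]:
    pts = X[members]
    span = pts.max(axis=0) - pts.min(axis=0)
    print(b, "designs:", len(members), "DS box volume:", float(np.prod(span)))
```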


2017 ◽  
Vol 2017 ◽  
pp. 1-23 ◽  
Author(s):  
Yuting Lu ◽  
Yongquan Zhou ◽  
Xiuli Wu

In this paper, a novel hybrid lightning search algorithm-simplex method (LSA-SM) is proposed to address the shortcomings of the lightning search algorithm (LSA), namely premature convergence and low computational accuracy, and it is applied to function optimization and constrained engineering design optimization problems. The improvement adds two major optimization strategies. The simplex method (SM) iteratively optimizes the current worst step leaders to prevent the population from searching only at the edge of the design space, thus improving the convergence accuracy and rate of the algorithm. Elite opposition-based learning (EOBL) increases the diversity of the population to prevent the algorithm from falling into local optima. LSA-SM is tested on 18 benchmark functions and five constrained engineering design problems. The results show that LSA-SM has higher computational accuracy, a faster convergence rate, and stronger stability than other algorithms and can effectively solve constrained nonlinear optimization problems in practice.
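
A sketch of the two add-ons described above, applied to a generic population: (1) the worst members are refined with a Nelder-Mead simplex step and (2) elite opposition-based candidates are generated. This is not the full LSA, and the population size, test function, and parameter choices are assumptions.

```python
# Sketch: simplex refinement of the worst individuals plus elite opposition-based
# learning, on a toy sphere function (minimization).
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(2)
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, size=(20, 3))
fitness = np.array([sphere(p) for p in pop])

# (1) Nelder-Mead refinement of the two worst individuals
for idx in np.argsort(fitness)[-2:]:
    res = minimize(sphere, pop[idx], method="Nelder-Mead",
                   options={"maxiter": 50, "xatol": 1e-6, "fatol": 1e-6})
    if res.fun < fitness[idx]:
        pop[idx], fitness[idx] = res.x, res.fun

# (2) Elite opposition-based learning: reflect elites inside their dynamic bounds
elite = pop[np.argsort(fitness)[:5]]
a, b = elite.min(axis=0), elite.max(axis=0)
k = rng.uniform()                                # random reflection coefficient
opposition = np.clip(k * (a + b) - elite, lb, ub)
print("best after one hybrid step:",
      min(fitness.min(), min(sphere(o) for o in opposition)))
```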


2019 ◽  
pp. 51-60
Author(s):  
Mustafa Teksoy ◽  
Onur Dursun

Determining the control parameters of kinetic shading devices introduces a dynamic problem to designers, one that is best tackled with computational tools. Yet the excessive computational cost inherent in reaching near-optimum solutions has led to the exclusion of many design alternatives and weather conditions. Addressing this issue, the current study aims to explore the design space adequately and evaluate the performance of responsive kinetic shading devices (RKSD) by proposing a novel framework. The framework adopts a surrogate-based technique for the multiobjective optimization of the control parameters of an RKSD over randomly sampled daylight hours. To test the plausibility of the results obtained with the proposed framework, a controlled experiment is designed. Empirical evidence suggests that the RKSD outperforms a static device in daylighting and view performance metrics; however, no significant differences in indoor temperature are observed.
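
A minimal sketch of the sampling-plus-multiobjective idea: daylight hours are sampled at random and, for each hour, candidate control settings of a kinetic shading device are screened for Pareto optimality. The two objective models below are crude placeholders, not the study's daylight and view simulations or its surrogate models.

```python
# Sketch: Pareto screening of slat angles over randomly sampled daylight hours,
# with toy objective models standing in for the real simulations.
import numpy as np

rng = np.random.default_rng(3)
hours = rng.choice(np.arange(8, 18), size=5, replace=False)   # sampled daylight hours
angles = np.linspace(0.0, 90.0, 19)                           # candidate slat angles [deg]

def objectives(angle, hour):
    sun = np.sin(np.pi * (hour - 6) / 12)        # toy solar intensity profile
    glare = sun * np.cos(np.radians(angle))      # to be minimized
    view_loss = angle / 90.0                     # to be minimized
    return np.array([glare, view_loss])

for h in hours:
    F = np.array([objectives(a, h) for a in angles])
    nondominated = [i for i in range(len(F))
                    if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                               for j in range(len(F)))]
    print(f"hour {h}: {len(nondominated)} Pareto-optimal slat angles")
```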


Author(s):  
Tomonori Honda ◽  
Erik K. Antonsson

The Method of Imprecision (MOI) is a multi-objective design method that maximizes the overall degree of both design and performance preferences. Sets of design variables are iteratively selected, and the corresponding performances are approximately computed. The designer’s judgment (expressed as preferences) is combined (aggregated) with the customer’s preferences to determine the overall preference for sets of points in the design space. In addition to degrees of preference for values of the design and performance variables, engineering design problems also typically include uncertainties caused by uncontrolled variations, for example, measurement and fabrication limitations. This paper illustrates the computation of expected preference for cases where the uncertainties are uncorrelated, and also where the uncertainties are correlated. The result is a “best” set of design variable values for engineering problems, where the overall aggregated preference is maximized. As is illustrated by the examples shown here, where both preferences and uncontrolled variations are present, the presence of uncertainties can have an important effect on the choice of the overall best set of design variable values.
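
A sketch of computing an expected aggregated preference under uncontrolled variation, once with uncorrelated and once with correlated perturbations. The preference functions, toy performance model, and min() aggregation are illustrative choices, not necessarily the aggregation or models used by the MOI in the paper.

```python
# Sketch: Monte Carlo expected preference under uncorrelated vs. correlated variation.
import numpy as np

def design_pref(x):                     # designer preference on the design variable
    return np.clip(1.0 - np.abs(x - 2.0), 0.0, 1.0)

def perf_pref(p):                       # customer preference on the performance
    return np.clip((10.0 - p) / 10.0, 0.0, 1.0)

def performance(x1, x2):
    return x1 ** 2 + x2                 # toy performance model

def expected_preference(x, cov, n=20000, seed=4):
    rng = np.random.default_rng(seed)
    noise = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    x1, x2 = x[0] + noise[:, 0], x[1] + noise[:, 1]
    aggregated = np.minimum(design_pref(x1), perf_pref(performance(x1, x2)))
    return aggregated.mean()

x = np.array([2.0, 1.0])
uncorrelated = [[0.04, 0.0], [0.0, 0.04]]
correlated   = [[0.04, 0.03], [0.03, 0.04]]
print("E[pref], uncorrelated:", expected_preference(x, uncorrelated))
print("E[pref], correlated:  ", expected_preference(x, correlated))
```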


Author(s):  
W. Hu ◽  
K. H. Saleh ◽  
S. Azarm

Approximation Assisted Optimization (AAO) is widely used in engineering design problems to replace computationally intensive simulations with metamodeling. Traditional AAO approaches employ global metamodeling to explore an entire design space. Recent research in AAO reports using local metamodeling to focus on promising regions of the design space. However, very few works have been reported that combine local and global metamodeling within AAO. In this paper, a new approximation assisted multiobjective optimization approach is developed. In the proposed approach, both global and local metamodels of the objective and constraint functions are used. The approach starts with global metamodels of the objective and constraint functions and uses them to select the most promising points from a large number of randomly generated points. These selected points are then “observed”, meaning their actual objective/constraint function values are computed. Based on these values, the “best” points are grouped into multiple clustered regions of the design space, and local metamodels of the objective/constraint functions are constructed in each region. All observed points are also used to iteratively update the metamodels. In this way, the predictive capabilities of the metamodels progressively improve as the optimizer approaches the Pareto optimum frontier. An advantage of the proposed approach is that the most promising points are observed, so there is no need to verify the final solutions separately. Several numerical examples are used to compare the proposed approach with previous approaches in the literature. Additionally, the proposed approach is applied to a CFD-based engineering design example. It is found that the proposed approach is able to estimate Pareto optimum points reasonably well while significantly reducing the number of function evaluations.
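
One illustrative iteration of the global-then-local metamodeling loop, reduced to a single unconstrained objective for brevity (the actual approach handles multiple objectives and constraints). The library choices (scikit-learn Gaussian process and k-means) and all sizes are assumptions of this sketch, not the authors' implementation.

```python
# Sketch: screen candidates with a global metamodel, observe the promising ones,
# then cluster the best observed points and fit local metamodels per region.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.cluster import KMeans

def expensive_objective(X):                        # stand-in for a simulation
    return np.sum((X - 0.5) ** 2, axis=1)

rng = np.random.default_rng(5)
X_obs = rng.uniform(0.0, 1.0, size=(20, 2))        # initially observed points
y_obs = expensive_objective(X_obs)

# 1) Global metamodel over all observed points
global_gp = GaussianProcessRegressor().fit(X_obs, y_obs)

# 2) Screen a large random candidate set with the (cheap) global model
candidates = rng.uniform(0.0, 1.0, size=(2000, 2))
promising = candidates[np.argsort(global_gp.predict(candidates))[:30]]

# 3) "Observe" the promising points with the expensive function
X_obs = np.vstack([X_obs, promising])
y_obs = np.concatenate([y_obs, expensive_objective(promising)])

# 4) Cluster the best observed points and fit a local metamodel per region
best = X_obs[np.argsort(y_obs)[:30]]
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(best)
local_gps = [GaussianProcessRegressor().fit(best[labels == k],
             expensive_objective(best[labels == k])) for k in range(3)]
print("observed points so far:", len(X_obs))
```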


Author(s):  
Sanga Lee ◽  
Saeil Lee ◽  
Kyu-Hong Kim ◽  
Dong-Ho Lee ◽  
Young-Seok Kang ◽  
...  

For simple optimization problems, direct search methods are sufficiently accurate and practical. However, for more complicated problems that contain many design variables and demand high computational cost, surrogate model methods are recommended instead of direct search. In such cases, surrogate models must be reliable not only in the accuracy of the optimum value but also in the global quality of the solution. In this paper, the Kriging method was used to construct a surrogate model for finding an aerodynamically improved three-dimensional single-stage turbine. First, the nozzle was optimized coupled with the baseline rotor blade; the rotor was then optimized with the optimized nozzle vane. The Kriging method is well known for its ability to describe nonlinear design spaces. For this reason, it is appropriate for describing the turbine design space, which involves complicated physical phenomena and requires many design variables to define the optimum three-dimensional blade shape. To construct the airfoil shape, the Pritchard topology was used. The blade was divided into three sections, each with nine design variables. Considering the computational cost, a subset of design variables was selected using sensitivity analysis. For selecting the experimental points, the D-optimal method, which scatters the experimental points to achieve maximum dispersion, was used. Model validation was performed by comparing values estimated by the Kriging model at random points with values evaluated by computation. The surrogate model was refined repeatedly, by supplying additional experimental points, until it reached the convergence criteria. Once the surrogate model satisfied the reliability condition and was sufficiently developed, the optimum point was found and validated. If any variable was located on the boundary of the design space, the design space was shifted to avoid that boundary; this process was repeated until an appropriate design space was found. As a result, the optimized design has a more complicated blade shape than the baseline design but higher aerodynamic efficiency than the baseline turbine stage.
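
A sketch of the build-validate-refine surrogate loop described above, with the CFD evaluation replaced by a cheap analytic stand-in and Kriging provided by scikit-learn's GaussianProcessRegressor. The tolerances, sample sizes, and refinement rule are assumptions of this sketch.

```python
# Sketch: fit a Kriging surrogate, validate at random check points, and refine
# with additional experimental points until a convergence criterion is met.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(X):                                   # placeholder for a CFD run
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(6)
X_train = rng.uniform(0.0, 1.0, size=(15, 2))      # initial experimental points
y_train = simulate(X_train)

for iteration in range(10):
    kriging = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
    # Validate against random check points, as in the model-validation step
    X_check = rng.uniform(0.0, 1.0, size=(50, 2))
    errors = np.abs(kriging.predict(X_check) - simulate(X_check))
    if errors.max() < 0.05:                        # convergence criterion (assumed)
        break
    # Refine: add the worst-predicted check point as a new experimental point
    worst = np.argmax(errors)
    X_train = np.vstack([X_train, X_check[worst]])
    y_train = np.append(y_train, simulate(X_check[worst:worst + 1]))

print(f"stopped after {iteration + 1} iterations, "
      f"max validation error {errors.max():.3f}")
```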


Author(s):  
David C. Zimmerman

The overall objective of this study is to formulate and study a generic procedure for navigating expensive and complex design spaces. The term generic is meant to imply that the procedure would be equally valid for exploring design problems in a multitude of fields. The term expensive design space implies that the computational cost, or burden, associated with a single function evaluation is considered “large”. What is desired is a methodology that can identify “promising regions” of the design space using as few function evaluations as possible. To approach this problem, a neural network approach is developed to serve as an inexpensive and generic function approximation procedure. The genetic algorithm was selected as the optimization technique based on its ability to search multi-modal, discontinuous, mixed-parameter, and noisy design spaces.
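
A minimal sketch of the idea: a neural network approximates the expensive function from a limited set of samples, and a simple genetic algorithm searches the cheap approximation for promising regions. The GA operators, network settings, and test function are illustrative assumptions, not the paper's formulation.

```python
# Sketch: neural-network function approximation plus a simple GA search (minimization).
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive(X):                                   # stand-in for the costly analysis
    return np.sum(X ** 2 - 0.3 * np.cos(3 * np.pi * X), axis=1)

rng = np.random.default_rng(7)
X_samples = rng.uniform(-1.0, 1.0, size=(60, 2))
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_samples, expensive(X_samples))

# Simple GA on the surrogate
pop = rng.uniform(-1.0, 1.0, size=(40, 2))
for gen in range(50):
    fit = surrogate.predict(pop)
    parents = pop[np.argsort(fit)[:20]]                          # truncation selection
    alpha = rng.uniform(size=(40, 1))
    children = (alpha * parents[rng.integers(0, 20, 40)] +
                (1 - alpha) * parents[rng.integers(0, 20, 40)])  # blend crossover
    children += rng.normal(scale=0.05, size=children.shape)      # mutation
    pop = np.clip(children, -1.0, 1.0)

best = pop[np.argmin(surrogate.predict(pop))]
print("promising region near:", best, "true value:", expensive(best[None, :])[0])
```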


Author(s):  
Conner Sharpe ◽  
Clinton Morris ◽  
Benjamin Goldsberry ◽  
Carolyn Conner Seepersad ◽  
Michael R. Haberman

Modern design problems present both opportunities and challenges, including multifunctionality, high dimensionality, highly nonlinear multimodal responses, and multiple levels or scales. These factors are particularly important in materials design problems, where they make it difficult for traditional optimization algorithms to search the space effectively and where designer intuition is often insufficient. Efficient machine learning algorithms can map complex design spaces to help designers quickly identify promising regions. In particular, Bayesian network classifiers (BNCs) have been demonstrated as effective tools for the top-down design of complex multilevel problems. The most common instantiations of BNCs assume that all design variables are independent. This assumption reduces computational cost but can limit accuracy, especially in engineering problems with interacting factors. The ability to learn representative network structures from data could provide accurate maps of the design space with limited computational expense. Population-based stochastic optimization techniques such as genetic algorithms (GAs) are ideal for optimizing networks because they accommodate discrete, combinatorial, and multimodal problems. Our approach utilizes GAs to identify optimal networks based on limited training sets so that future test points can be classified as accurately and efficiently as possible. This method is first tested on a common machine learning data set, and then demonstrated on a sample design problem of a composite material subjected to a planar sound wave.
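
A highly simplified sketch of the GA-over-classifier-structures idea: here the “structure” is reduced to a binary mask over which features a naive Bayes classifier conditions on, and a small GA searches for the mask with the best cross-validated accuracy. Real BNC structure learning is richer than this; the dataset, operators, and settings are assumptions of this sketch.

```python
# Sketch: a small GA searches binary feature masks for a naive Bayes classifier,
# a restricted stand-in for searching over network structures.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(8)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, n_features)).astype(bool)
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                     # keep the best half
    cut = rng.integers(1, n_features, size=10)
    children = np.array([np.concatenate([parents[rng.integers(10)][:c],
                                         parents[rng.integers(10)][c:]])
                         for c in cut])                         # one-point crossover
    flips = rng.random(children.shape) < 0.02                   # bit-flip mutation
    children ^= flips
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "accuracy:", round(fitness(best), 3))
```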

