Surrogate-Based Modeling and Optimization: Applications in Engineering


As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. The similarity between the responses of the models, together with the number and placement of the mapping points, strongly influences the computational efficiency of the proposed method.

As a proof of concept, a realistic WFLO (wind farm layout optimization) of a small 7-turbine wind farm is performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models.
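For context, here is a minimal sketch of the classic Jensen (top-hat) wake model, in which fine and coarse variants can differ only in the wake decay coefficient k; all numerical values below are illustrative assumptions, not those of the study:

```python
import numpy as np

def jensen_deficit(x, r0, ct, k):
    """Fractional velocity deficit at distance x downstream of a turbine
    with rotor radius r0, thrust coefficient ct, and wake decay
    coefficient k (classic Jensen top-hat wake model)."""
    a = 0.5 * (1.0 - np.sqrt(1.0 - ct))       # axial induction factor
    return 2.0 * a / (1.0 + k * x / r0) ** 2

# Coarse and fine variants can differ only in k: a larger k spreads
# (and weakens) the wake faster. All values here are illustrative.
x = np.linspace(2, 20, 50) * 40.0                    # downstream distances [m]
fine   = jensen_deficit(x, r0=40.0, ct=0.8, k=0.05)  # assumed "fine" k
coarse = jensen_deficit(x, r0=40.0, ct=0.8, k=0.10)  # assumed "coarse" k
```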

Typically, a metamodel fitted on a set of points collected in a previous optimization attempt would be biased toward the already explored regions of the feasible space (probably containing local optima) and could be quite misleading; therefore, the unexplored regions would likely remain unexplored when optimizing on such metamodels [Jin et al.]. In such a case, a metamodel that is only accurate in small regions might be completely misleading in the remaining parts, which may contain the global optimum [Broad et al.].

The framework through which the metamodel is used (see section 2) is one determinant of the appropriate size of the initial DoE. The optimal size also varies across function approximation techniques, depending on their level of flexibility and conformability. The only prior knowledge is usually the minimum limit on the number of design sites (see section 2). There are some suggestions in the metamodeling literature on the size of initial DoEs when the approximation techniques are kriging and RBFs, as summarized in the following (e.g., by Jones et al. and Sobester et al.).

For relatively small n values, equation 8 is equivalent to equation 7, but when n becomes larger, a larger initial DoE is called for in order to design a more detailed metamodel with better global accuracy. Sobester et al. also demonstrate that performance deteriorates if the size of the initial DoE takes up too large a share of the total computational budget. Overall, as the size of the initial DoE and the total number of function evaluations cannot typically be large when the original function is computationally expensive, there is no guarantee that the initial design sites are distributed well enough to effectively represent the shape of the underlying function.
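As a concrete example of constructing such an initial DoE, the following builds a space-filling Latin hypercube sample sized by a common 10-points-per-dimension rule of thumb; the dimension and bounds below are hypothetical:

```python
from scipy.stats import qmc

n = 6                                     # number of decision variables
sampler = qmc.LatinHypercube(d=n, seed=0)
unit_sample = sampler.random(n=10 * n)    # rule-of-thumb size: 10 points/dim

# Hypothetical variable bounds; replace with the problem's actual ranges.
lower, upper = [0.0] * n, [1.0] * n
design_sites = qmc.scale(unit_sample, lower, upper)
```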

Generally, the computational time required for refitting a metamodel increases, usually nonlinearly, with the size of the set of design sites. The type of function approximation technique used to build the metamodel is also a main factor in determining the appropriate refitting frequency. As such, the computational time required for metamodel refitting varies substantially across function approximation techniques and data sizes, and may become computationally demanding, sometimes even prohibitively long.

Neural networks may suffer the most in this regard, as the neural network training process is typically computationally demanding relative to the other alternatives, even for small sets of design sites. Kriging refitting may also become computationally demanding for large numbers (more than a few hundred) of design sites [Gano et al.]. The maximum likelihood estimation methodology for correlation parameter tuning is the main computational effort in the kriging (re)fitting procedure [Razavi et al.].

Refitting polynomials and RBFs with no correlation parameters is very fast even for moderately large sets of design sites (SVMs are addressed separately in section 2). The appropriate metamodel refitting frequency for a given problem is also a function of the computational demand of the original computationally expensive model, as the budget required for metamodel refitting may sometimes be negligible and easily justified when compared to the demands of the original model.
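A minimal sketch of why such refits are cheap: with the shape (correlation) parameter held fixed, fitting a Gaussian RBF surrogate reduces to a single dense linear solve; the kernel choice and ridge term are illustrative assumptions:

```python
import numpy as np

def fit_rbf(X, y, eps=1.0, ridge=1e-10):
    """Fit a Gaussian RBF interpolant with a fixed shape parameter eps.
    With no correlation parameters to tune, refitting is one O(m^3)
    linear solve for m design sites; the tiny ridge guards conditioning."""
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-eps * r2) + ridge * np.eye(len(X))
    return np.linalg.solve(Phi, y)

def eval_rbf(Xq, X, weights, eps=1.0):
    """Evaluate the fitted RBF surrogate at query points Xq."""
    r2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-eps * r2) @ weights
```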

The discussions in this paper mainly apply to surrogate modeling when emulating the objective functions, and also when constraints are included in the objective function through penalty function approaches. Broad et al. note the importance of training the surrogate model on both feasible and infeasible design sites. Yan and Minsker report, for their ANN surrogate model of constraints, that their penalty function parameters were determined by trial-and-error experiments. There are also various approaches in the broader research community to handle constraints with surrogates more accurately.

Multiobjective optimization algorithms that utilize multiple surrogates [e.g., Viana et al.] commonly assume that the approximation errors (uncertainties) of these surrogates are independent (no correlation), despite the fact that the objectives are typically conflicting [Wagner et al.]. The issue of multiple correlated outputs being approximated by surrogate models is discussed in section 2.

Recent research by Bautista and Svenson addresses the dependencies between multiple objective functions when these functions are emulated with multivariate Gaussian processes. When using the basic sequential framework, at the end of Step 3, all approximate tradeoff solutions should be evaluated with the original computationally expensive objective functions to determine which of these solutions are actually nondominated (i.e., not dominated by any other evaluated solution) [see, e.g., Li et al.]. These limitations and considerations are discussed below in section 2.
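To make that nondomination check concrete, here is a minimal sketch, assuming all objectives are minimized and that F already holds the re-evaluated, expensive objective values:

```python
import numpy as np

def nondominated(F):
    """Boolean mask of nondominated rows of F; each row holds the original
    (minimized) objective values of one candidate tradeoff solution after
    re-evaluation with the computationally expensive model."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # Solution j dominates i if j is no worse in all objectives
        # and strictly better in at least one.
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep
```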

Response surface surrogate modeling becomes less attractive, or even infeasible, when the number of explanatory variables is large. In such problems, the primary issue is that the minimum number of design sites required to develop some function approximation models can be excessively large [Koch et al.]. As such, the number of design sites required to reasonably cover the space becomes extremely large as the number of variables grows [Behzadian et al.].

However, di Pierro et al. point out that they could not improve the final solution quality of ParEGO even when increasing the total number of original function evaluations. Shan and Wang develop a new function approximation model that is computationally efficient for larger numbers of decision variables. Exact emulation, also referred to as interpolation in numerical analysis, aims to construct a response surface surrogate representing the underlying function that goes through all design sites (i.e., it reproduces the design-site responses exactly). Kriging for computer experiments, RBFs, and Gaussian emulator machines [O'Hagan] are examples of exact emulators.

Unlike exact emulators, there are emulation techniques that are inexact in that they produce a varying bias (deviations from the true values) that is sometimes unpredictable across design sites. For example, a polynomial can exactly reproduce all design sites only when the degrees of freedom of the polynomial regression are zero, that is, when there are as many coefficients in the polynomial as there are design sites.
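A small numerical illustration of this distinction, using an arbitrary stand-in for the expensive response: with six design sites, a degree-5 polynomial (six coefficients, zero degrees of freedom) reproduces the sites exactly, while a quadratic leaves a bias:

```python
import numpy as np

x = np.linspace(0, 1, 6)            # six design sites
y = np.sin(2 * np.pi * x)           # stand-in for expensive-model responses

exact   = np.polyfit(x, y, deg=5)   # 6 coefficients, 6 sites: zero residual
inexact = np.polyfit(x, y, deg=2)   # quadratic: smooths, leaves a bias

assert np.allclose(np.polyval(exact, x), y)        # reproduces all sites
print(np.abs(np.polyval(inexact, x) - y).max())    # nonzero deviations
```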

Nevertheless, neither SVMs nor ANNs have been developed to be applied as exact emulators, and such applications would be impractical. Although ANNs are inexact emulators, their smoothing properties are usually unclear to the user and very hard to manipulate [Razavi and Tolson]. Any smoothing capability usually has an associated tuning parameter that controls the extent of smoothing.

Inexact emulators are more suitable for physical experiments than for computer experiments, as the usual objective there is an approximation that is insensitive, or less sensitive, to noise. Conversely, exact emulators are usually more advisable when approximating the deterministic response of a computer model. Figure 6a shows a case where the set of design sites is relatively well distributed and includes a point very close to a local optimum; nevertheless, the quadratic polynomial fitted on this set is quite misleading and returns a point (the surrogate function minimizer) on the plateau while ignoring the already found local region of attraction. Evaluating the surrogate function minimizer and refitting would not noticeably change the polynomial.

In this case, the set of design sites is intentionally very well distributed, such that there are design sites located in both regions of attraction (one global and one local). As can be seen in our experiment with this neural network, the local region of attraction (the local mode on the left) is easily ignored despite the fact that there is a design site very close to the local minimum, and the location of the global region is misinterpreted.

Notably, evaluating and adding the surrogate function minimizer to the set of design sites and refitting might not properly change the shape of the surrogate.


In contrast, inexact emulators can be very misleading in the regions of attraction (i.e., the regions surrounding local and global optima). Combining the two behaviors (i.e., exact fitting and smoothing) is also conceivable. To our knowledge, the issues and shortcomings of inexact emulation for response surface modeling as described above have not been fully addressed in the literature, although the associated problems have been acknowledged to some extent in some publications. For example, Jones points out that inexact emulators are unreliable because they might not sufficiently capture the shape of the deterministic underlying function.

In particular, it is not clear to us from the water resources literature on surrogates for constraints why inexact emulation of a penalty function (which, for large and sometimes continuous regions of decision space, can be zero) is preferred or selected over an exact emulator.

In this case, the choice between exact and inexact emulation is not so clear. The appropriate range (lower and upper bounds) for this number varies from one function approximation technique to another.

Generally, the more design sites used for metamodel fitting, the higher the computational expense incurred in the fitting process. The computational expense associated with metamodel development and fitting should be taken into account in any metamodeling application. In these techniques, except for SVMs, the number of correlation functions is typically as large as the number of design sites; as such, their structures, and the associated computations, become excessive for large sets of design sites.

GEM may suffer the most in this regard, as the maximum number of design sites utilized in GEM applications in the literature remains modest [Ratto et al.]. Kriging also has limited applicability when the number of design sites is large, mostly because determining the kriging correlation parameters through the maximum likelihood estimation methodology can become computationally demanding for large sets. Practical numbers of design sites in kriging applications are typically less than a few thousand. Least squares methods can efficiently fit RBFs even on large sets of design sites.

SVMs are also capable of handling larger numbers of design sites more efficiently, as the operator associated with the design-site vectors in the SVM formulation is the dot product [Yu et al.]. However, both RBFs and SVMs may involve a relatively computationally demanding tuning process for the correlation parameters and, in the case of SVMs, two further technique-specific parameters.

As such, even for large sets of design sites, ANNs may have relatively limited numbers of hidden neurons, forming reasonably sized ANN structures. There are also ANN applications for very large sets of design sites [e.g., Broad et al.]. Polynomial structure, by contrast, does not expand as the number of design sites increases.

Polynomials can be fitted very quickly even over very large sets of design sites. Similar to polynomials, the structure and complexity of multivariate adaptive regression splines (MARS) is not a function of the number of design sites; instead, it is a function of the shape and complexity of the underlying function represented by the design sites. MARS builds multiple piecewise linear and nonlinear regression models (basis functions) to emulate the underlying function at the design sites [Friedman], and its main computational effort is to search over a variety of combinations, first adding basis functions to the model (forward pass) and then extensively pruning the model (backward pass) to find a parsimonious model with satisfactory generalization ability.
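As a minimal illustration of the basis functions involved (the greedy forward/backward search itself is omitted here), with arbitrary knots and weights:

```python
import numpy as np

def hinge(x, knot, sign=+1):
    """MARS hinge basis function: max(0, +(x - knot)) or max(0, -(x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# A MARS model is a weighted sum of such (possibly interacting) hinges;
# the forward pass greedily adds the knot/variable pair that most reduces
# residual error, and the backward pass prunes terms to avoid overfitting.
x = np.linspace(0, 10, 200)
y_hat = 1.0 + 0.8 * hinge(x, 3.0) - 1.5 * hinge(x, 3.0, sign=-1)
```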

For a polynomial, the minimum number of design sites equals the number of coefficients in the polynomial. As stated in section 2, analogous minimum limits apply to the other approximation techniques.
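For instance, a full quadratic polynomial in n variables has (n+1)(n+2)/2 coefficients (intercept, linear, squared, and pairwise interaction terms), so the minimum number of design sites grows quadratically with dimension:

```python
from math import comb

def quadratic_coeff_count(n):
    """Coefficients in a full quadratic polynomial in n variables:
    intercept + linear + squared + pairwise interaction terms."""
    return 1 + n + n + comb(n, 2)   # equals (n + 1) * (n + 2) // 2

# Minimum design sites for a full quadratic fit at several dimensions:
print([quadratic_coeff_count(n) for n in (2, 5, 10, 20)])  # [6, 21, 66, 231]
```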

For ANNs, although mathematically there is no minimum limit on the number of design sites, it is commonly accepted that neural networks require relatively large sets of design sites to be properly trained. The studies listed in Table 1 are consistent with this fact, as the minimum number of initial design sites for neural network training in these studies is comparatively large [from Zou et al.]. Importantly, when a function approximation model exhibits a good fit to the design sites (i.e., low training error), it is not necessarily an accurate approximation of the underlying function away from those sites; this is what validation is meant to assess.

The importance of validation differs for different approximation techniques.

For example, the process of developing polynomial, RBF, or kriging approximation models is less dependent on validation, as there are studies utilizing them without conducting a validation step, whereas validation is an inseparable step in developing SVMs and ANNs. Bastos and O'Hagan claim that there has been little research on validating emulators before using them. While this statement is accurate for some emulators (in particular, the Gaussian process emulators studied in the work of Bastos and O'Hagan), it is much less accurate when emulators such as SVMs and ANNs are considered.

Bastos and O'Hagan propose some diagnostics to validate Gaussian process emulators. All approximation models are prone to overfitting. In statistics, overfitting is usually described as the model predominantly fitting the noise existing in the data rather than the underlying function. However, as discussed in the work of Razavi et al., overfitting can also arise from the conformability of the approximation model. Overfitting due to conformability is more likely when the approximation model has a large degree of freedom (is overparameterized) compared to the amount of available data. In highly flexible approximation models, including ANNs, the problem associated with the conformability factor can be substantial.

Seven studies listed in Table 1 [Broad et al., among others] rely on early stopping. The main problem with early stopping is that the available design sites have to be split into a training set, a testing set, and sometimes a validation set, leaving fewer data available to train the ANNs. In contrast, the Bayesian regularization procedure does not require this splitting of design sites; all can be used for training.
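A minimal sketch of the early stopping variant using scikit-learn's MLPRegressor, which carves the validation split described above out of the design sites automatically; the hyperparameter values are illustrative, and X_design/y_design are placeholders (scikit-learn itself does not offer Bayesian regularization):

```python
from sklearn.neural_network import MLPRegressor

# early_stopping=True holds out validation_fraction of the design sites
# and halts training when the validation score stops improving: this is
# exactly the data-splitting cost described above.
ann = MLPRegressor(hidden_layer_sizes=(20,),
                   early_stopping=True,
                   validation_fraction=0.2,
                   n_iter_no_change=20,
                   max_iter=5000,
                   random_state=0)
# ann.fit(X_design, y_design)   # X_design, y_design: your design sites
```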


The risk of overfitting is higher when there are very few design sites relative to the number of kriging and RBF parameters to be tuned. As overfitting in kriging is not a major challenge, it has not been directly addressed in most kriging studies; an approach to mitigate the possible overfitting problem in kriging is proposed by Welch et al.
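One common mitigation, not necessarily the approach proposed by Welch et al., is to relax exact interpolation with a small nugget (noise) term so that maximum likelihood estimation of the correlation lengths is not forced to contort around every design site; a sketch using scikit-learn, with illustrative kernel settings:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# The WhiteKernel acts as a nugget: it absorbs small fluctuations so the
# RBF length scales stay smooth instead of overfitting each design site.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(
    noise_level=1e-6, noise_level_bounds=(1e-10, 1e-2))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
# gp.fit(X_design, y_design)   # X_design, y_design: your design sites
```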

Recently, Bayesian emulation techniques have appeared that are tailored to approximating a time series of outputs [e.g., Fricker et al.]. Among the other function approximation techniques reviewed in detail in section 2, none explicitly models the correlation structure among multiple outputs, although an ANN with multiple output neurons shares its hidden-layer weights across those outputs. Thus, a single ANN model of multiple correlated outputs should conceptually be able to account for these correlations among outputs.

Based on the work by Conti and O'Hagan, we believe that accounting in the response surface surrogate for correlations among outputs that are significantly correlated (even multiple output functions, such as two model calibration objective functions) should conceptually lead to increased surrogate accuracy. However, there are multiple studies demonstrating the need for multiple ANN surrogates to model multiple outputs; e.g., Yan and Minsker approximate six outputs with three independent ANN surrogate models [see also Broad et al.].

Kourakos and Mantoglou utilize ANNs to approximate 34 outputs, and they explain how a single ANN modeling all outputs would lead to a practically infeasible ANN training procedure, as a very large number of ANN parameters would have to be specified. Instead, they built and trained 34 modular subnetworks to circumvent this computational bottleneck in ANN training, assuming that the correlations between the outputs are negligible (justified based on the physics of their case study).
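A sketch contrasting the two designs discussed above: one joint network whose outputs share hidden-layer features (and can therefore exploit output correlations) versus independently trained modular networks; sizes are illustrative, and X_design/Y_design are placeholders for the design sites:

```python
from sklearn.neural_network import MLPRegressor

# One joint network: outputs share the hidden-layer features, so
# correlations among outputs can be exploited, but all weights are
# coupled in a single (potentially very large) training problem.
joint_ann = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000)
# joint_ann.fit(X_design, Y_design)        # Y_design shape: (m, 34)

# Modular alternative: 34 small, independently trained subnetworks,
# which assumes negligible correlation between the outputs.
modular = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000)
           for _ in range(34)]
# for k, net in enumerate(modular):
#     net.fit(X_design, Y_design[:, k])
```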

If this assumption is violated, the surrogate modeling framework would not work, or the gains would be minimal [see also Forrester et al.]. Model order reduction (MOR) aims to reduce the complexity of models by deriving substitute approximations of the original complex equations involved in the original model. These substitute approximations are obtained systematically by rigorous mathematical techniques without the need to know the underlying system. These frameworks have mostly arisen from the optimization context but have also been applied for other purposes, including uncertainty analysis. This process is referred to in the multifidelity modeling literature as correction, tuning, scaling, or alignment.

Different approaches and tools have been proposed to develop the approximate correction function [e.g., Eldred et al.]. More flexible function approximation models have also been used for this purpose, including kriging [Gano et al.]. However, the limitations of response surface surrogates used in this context for high-dimensional problems (see section 2) still apply. As a result, less complex approximate correction functions may be more desirable in practice. Building correction functions that are correlated across multiple outputs is similarly not as important (see section 2).
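A minimal sketch of an additive correction function along these lines: a cheap surrogate emulates the discrepancy between high- and low-fidelity responses at shared design sites, and the corrected model adds its prediction to the coarse model (a multiplicative variant would emulate the ratio instead); the RBF interpolant is an illustrative choice:

```python
from scipy.interpolate import RBFInterpolator

def build_corrected_model(X, y_hi, y_lo, coarse_model):
    """Additive correction: emulate the discrepancy d(x) = f_hi(x) - f_lo(x)
    with a cheap surrogate, then predict f_hi(x) ~ f_lo(x) + d_hat(x).
    X: design sites evaluated by both fidelities (rows are points)."""
    d_hat = RBFInterpolator(X, y_hi - y_lo)      # discrepancy surrogate
    return lambda Xq: coarse_model(Xq) + d_hat(Xq)
```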

In Step 4, the candidate points from Step 3 are evaluated by the original function.

If needed, the framework goes back to Step 2 to modify the correction function and repeat the analyses in Step 3. In such a framework, an initial DoE is not required, and the framework may start with any, though desirably a good quality, initial solution (Step 1). The initial trust region size is also specified in this step.

In Step 3, the correction function is locally fitted around the current best solution. Steps 3 through 5 are repeated until convergence or stopping criteria are met. Space mapping, initially introduced by Bandler et al., relates the variable spaces of the fine and coarse models; many approaches have been proposed to address the problem of nonuniqueness in this mapping [Bakr et al.]. Once the corresponding points in the two spaces are available, different linear or nonlinear functions may be used to relate the two spaces by fitting over these points [Bandler et al.]. The space mapping relationships can be updated as the algorithm progresses. These strategies may be used with any of the frameworks utilizing response surface surrogates detailed in section 2.
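Returning to the trust-region correction framework outlined above, here is a minimal sketch of one iteration covering Steps 3 through 5; the acceptance threshold, resizing factors, and the `surrogate_argmin` sub-search are hypothetical placeholders rather than the specific rules of any cited framework:

```python
def trust_region_step(f_expensive, surrogate_argmin, x_best, f_best, delta,
                      eta=0.1, shrink=0.5, grow=2.0):
    """One iteration of a generic trust-region surrogate framework:
    minimize the locally corrected surrogate within the current region
    (Step 3), evaluate the candidate with the expensive model (Step 4),
    and accept/resize based on predicted vs. actual improvement (Step 5)."""
    x_new, f_pred = surrogate_argmin(x_best, delta)       # surrogate sub-search
    f_new = f_expensive(x_new)                            # expensive evaluation
    rho = (f_best - f_new) / max(f_best - f_pred, 1e-12)  # agreement ratio
    if rho < eta:                                         # poor prediction:
        return x_best, f_best, delta * shrink             #   reject and shrink
    new_delta = delta * (grow if rho > 0.75 else 1.0)     # good prediction:
    return x_new, f_new, new_delta                        #   accept, maybe grow
```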

These two sets of design sites are used either to build a single response surface surrogate formed from both sets, or to build two response surface surrogates representing the two sets independently [e.g., Leary et al.; Huang et al.; Vitali et al.]. Since their application is not trivial, multifidelity models defined on different variable spaces have been less appealing in the surrogate modeling literature [Simpson et al.].

However, when such knowledge is not available, empirical relationships must be derived; the space mapping approach, explained in section 3, can serve this purpose. The main procedure for space mapping between different spaces is essentially the same as the space mapping procedure when the spaces are identical [Robinson et al.]. The terms in quotations below represent the terminology used in the associated publications [Ulanicki et al.; Shamir and Salomons; Preis et al.].
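As a minimal sketch of deriving such an empirical relationship, an affine map between corresponding points in the fine and coarse variable spaces can be fitted by least squares; the affine form is just one of the possible linear or nonlinear choices mentioned above:

```python
import numpy as np

def fit_linear_space_mapping(X_fine, X_coarse):
    """Fit an affine map x_c ~ A @ x_f + b between corresponding points in
    the fine and coarse spaces (rows are matched points) by least squares."""
    m = len(X_fine)
    A_aug = np.hstack([X_fine, np.ones((m, 1))])       # append intercept column
    coef, *_ = np.linalg.lstsq(A_aug, X_coarse, rcond=None)
    A, b = coef[:-1].T, coef[-1]
    return lambda x_f: x_f @ A.T + b                   # maps fine -> coarse

# The mapped coarse model c(P(x_f)) then stands in for the fine model, and
# the mapping can be refitted as new corresponding points accumulate.
```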

Related applications are reported by Vermeulen et al., Cheng et al., Crout et al., Keating et al., and Efendiev et al. In the methodology of Efendiev et al., both the surrogate and original models are defined over the same parameter space, and the surrogate is evaluated first to determine whether the original model is worth evaluating for a given solution [see also Mondal et al.; Cui et al.]. As such, there is always a risk that surrogate models yield misleading results; this risk is higher when the original response landscape is complex and deceptive, and is minimal for simple original response landscapes.

A thorough discussion of this matter in the context of optimization is available in the work of Razavi et al. [see also Broad et al.]. In a sampling context, one may take fewer samples than one would prefer, while in a search context, the algorithm can be terminated before it converges. Computational budgets may be quantified as the total CPU clock time, as in the work of Behzadian et al. As stated in Table 1, 15 out of 32 studies present quantitative information demonstrating the efficiency of the surrogate modeling strategies used.

Some of these studies report the associated computational savings explicitly; in the others, savings are not clearly reported, and we interpreted them based on the published results. Under very limited computational budgets, surrogate modeling is expected to be very helpful, whereas when the computational budget is not severely limited, surrogate modeling might not be as helpful, as equivalent or better solutions can be achieved by the benchmark optimizer.


