diff --git a/vignettes/acquisition_functions.Rmd b/vignettes/acquisition_functions.Rmd
index 667d3bfb..4bccf274 100644
--- a/vignettes/acquisition_functions.Rmd
+++ b/vignettes/acquisition_functions.Rmd
@@ -91,7 +91,7 @@ knitr::include_graphics("figures/trade_off_20.svg", auto_pdf = FALSE)
 
 There are two main strategies for _dynamic trade-offs_ during the optimization:
 
- * Use a function to specify the parameter(s) for the acquisition functions. For expected improvement, this can be done using `exp_improve(trade_off = foo())`. `foo()` should be a function whose first parameter is the current iteration number. When `tune` invokes this function, only the first argument is used. A good strategy might be to set `trade_off` to some non-zero value at the start of the search and incrementally approach zero after a reasonable period.
+ * Use a function to specify the parameter(s) for the acquisition functions. For expected improvement, this can be done using `exp_improve(trade_off = foo())`. `foo()` should be a function whose first parameter is the current iteration number. When tune invokes this function, only the first argument is used. A good strategy might be to set `trade_off` to some non-zero value at the start of the search and incrementally approach zero after a reasonable period.
 
  * `control_bayes()` has an option for doing an additional _uncertainty sample_ when no improvements have been found. This is a technique from the active learning literature where new data points are sampled that most help the model. In this case, the candidate points are scored only on variance and a candidate is chosen from a set of the _most_ variable design points. This may find a location in the parameter space to help the optimization make improvements.
 
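
Both strategies map directly onto `tune_bayes()` arguments. Below is a minimal sketch of the two in use; the `decay()` schedule, the simulated data set, and the SVM specification are illustrative stand-ins, while `exp_improve()`, `expo_decay()`, `control_bayes()`, and its `uncertain` argument come from tune itself.

```r
library(tidymodels)

# Illustrative trade-off schedule: exploratory at the start, approaching
# zero (pure exploitation) as the search progresses. tune calls this with
# the current iteration number; only the first argument is used.
decay <- function(iter) 0.1 * exp(-0.3 * (iter - 1))
# tune's expo_decay() is a ready-made alternative for this kind of schedule.

# Placeholder data and model; any tunable workflow would do here.
set.seed(1)
dat   <- modeldata::sim_regression(num_samples = 200)
folds <- vfold_cv(dat, v = 5)

svm_spec <- svm_rbf(cost = tune(), rbf_sigma = tune()) |>
  set_mode("regression")

res <- tune_bayes(
  svm_spec,
  outcome ~ .,
  resamples = folds,
  initial = 6,
  iter = 20,
  # Strategy 1: pass a function, rather than a constant, as the trade-off.
  objective = exp_improve(trade_off = decay),
  # Strategy 2: take an uncertainty sample after 5 iterations without
  # improvement, choosing a candidate from the most variable design points.
  control = control_bayes(uncertain = 5, no_improve = 15)
)
```

Setting `uncertain` below `no_improve` lets the search probe a high-variance region of the parameter space before the no-improvement stopping rule ends the optimization.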