% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/embed.R
\name{step_embed}
\alias{step_embed}
\alias{tidy.step_embed}
\alias{embed_control}
\title{Encoding Factors into Multiple Columns}
\usage{
step_embed(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  outcome = NULL,
  predictors = NULL,
  num_terms = 2,
  hidden_units = 0,
  options = embed_control(),
  mapping = NULL,
  history = NULL,
  keep_original_cols = FALSE,
  skip = FALSE,
  id = rand_id("embed")
)

embed_control(
  loss = "mse",
  metrics = NULL,
  optimizer = "sgd",
  epochs = 20,
  validation_split = 0,
  batch_size = 32,
  verbose = 0,
  callbacks = NULL
)
}
\arguments{
\item{recipe}{A recipe object. The step will be added to the sequence of
operations for this recipe.}
\item{...}{One or more selector functions to choose variables. For
\code{step_embed}, this indicates the variables to be encoded into a numeric
format. See \code{\link[recipes:selections]{recipes::selections()}} for more details. For the \code{tidy}
method, these are not currently used.}
\item{role}{For model terms created by this step, what analysis role should
they be assigned? By default, the function assumes that the embedding
variables created will be used as predictors in a model.}
\item{trained}{A logical to indicate if the quantities for preprocessing have
been estimated.}
\item{outcome}{A call to \code{vars} to specify which variable is used as the
outcome in the neural network.}
\item{predictors}{An optional call to \code{vars} to specify any variables to be
added as additional predictors in the neural network. These variables
should be numeric and perhaps centered and scaled.}
\item{num_terms}{An integer for the number of resulting variables.}
\item{hidden_units}{An integer for the number of hidden units in a dense ReLU
layer between the embedding and output layers. Use a value of zero for no
intermediate layer (see Details below).}
\item{options}{A list of options for the model fitting process.}
\item{mapping}{A list of tibble results that define the encoding. This is
\code{NULL} until the step is trained by \code{\link[recipes:prep]{recipes::prep()}}.}
\item{history}{A tibble with the convergence statistics for each term. This
is \code{NULL} until the step is trained by \code{\link[recipes:prep]{recipes::prep()}}.}
\item{keep_original_cols}{A logical to keep the original variables in the
output. Defaults to \code{FALSE}.}
\item{skip}{A logical. Should the step be skipped when the recipe is baked by
\code{\link[recipes:bake]{recipes::bake()}}? While all operations are baked when \code{\link[recipes:prep]{recipes::prep()}} is
run, some operations may not be able to be conducted on new data (e.g.
processing the outcome variable(s)). Care should be taken when using \code{skip = TRUE} as it may affect the computations for subsequent operations.}
\item{id}{A character string that is unique to this step to identify it.}
\item{optimizer, loss, metrics}{Arguments to pass to \code{keras::compile()}.}
\item{epochs, validation_split, batch_size, verbose, callbacks}{Arguments to
pass to \code{keras::fit()}.}
}
\value{
An updated version of \code{recipe} with the new step added to the
sequence of existing steps (if any). For the \code{tidy} method, a tibble with
columns \code{terms} (the selectors or variables for encoding), \code{levels} (the
factor levels), and several columns containing \code{embed} in the name.
}
\description{
\code{step_embed()} creates a \emph{specification} of a recipe step that will convert a
nominal (i.e., factor) predictor into a set of scores derived from a
TensorFlow word-embedding model. \code{embed_control} is a simple wrapper for
setting default options.
}
\details{
Factor levels are initially assigned at random to the new variables and these
variables are used in a neural network to optimize both the allocation of
levels to new columns and the estimation of a model to predict the outcome.
See Section 6.1.2 of Francois and Allaire (2018) for more details.

The new variables are mapped to the specific levels seen at the time of model
training, and an extra instance of the variables is used for new levels of
the factor.

One model is created for each call to \code{step_embed}. All terms given to the
step are estimated and encoded in the same model, which also contains any
predictors given in \code{predictors}.
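
For example, a sketch of a call that also feeds a numeric predictor into the
network (using columns from the \code{grants} data shown in the Examples):

\if{html}{\out{<div class="sourceCode">}}\preformatted{  recipe(class ~ num_ci + sponsor_code, data = grants_other) \%>\%
    step_embed(
      sponsor_code,
      outcome = vars(class),
      # numeric columns passed here enter the network as extra inputs
      predictors = vars(num_ci)
    )
}\if{html}{\out{</div>}}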
When the outcome is numeric, a linear activation function is used in the last
layer while softmax is used for factor outcomes (with any number of levels).
For example, the \code{keras} code for a numeric outcome, one categorical
predictor, and no hidden units used here would be

\if{html}{\out{<div class="sourceCode">}}\preformatted{  keras_model_sequential() \%>\%
    layer_embedding(
      input_dim = num_factor_levels_x + 1,
      output_dim = num_terms,
      input_length = 1
    ) \%>\%
    layer_flatten() \%>\%
    layer_dense(units = 1, activation = "linear")
}\if{html}{\out{</div>}}

If a factor outcome is used and hidden units were requested, the code would
be

\if{html}{\out{<div class="sourceCode">}}\preformatted{  keras_model_sequential() \%>\%
    layer_embedding(
      input_dim = num_factor_levels_x + 1,
      output_dim = num_terms,
      input_length = 1
    ) \%>\%
    layer_flatten() \%>\%
    layer_dense(units = hidden_units, activation = "relu") \%>\%
    layer_dense(units = num_factor_levels_y, activation = "softmax")
}\if{html}{\out{</div>}}

Other variables specified by \code{predictors} are added as an additional dense
layer after \code{layer_flatten} and before the hidden layer.

Also note that it may be difficult to obtain reproducible results using this
step due to the nature of TensorFlow (see the link in References).

TensorFlow models cannot be run in parallel within the same session (via
\code{foreach}, \code{futures}, or the \code{parallel} package). If using a recipe with
this step with \code{caret}, avoid parallel processing.
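
Exact reproducibility is not guaranteed, but seeding TensorFlow before
training can help; a minimal sketch, assuming the \code{tensorflow} R package
is available:

\if{html}{\out{<div class="sourceCode">}}\preformatted{  # Sketch: seed TensorFlow's generators before prep(); results can
  # still differ across platforms and library versions.
  tensorflow::set_random_seed(1234)
  rec <- prep(rec)
}\if{html}{\out{</div>}}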
}
\section{Tidying}{
When you \code{\link[recipes:tidy.recipe]{tidy()}} this step, a tibble is returned with
several columns of embedding information as well as the columns \code{terms},
\code{levels}, and \code{id}:
\describe{
\item{terms}{character, the selectors or variables selected}
\item{levels}{character, levels in variable}
\item{id}{character, id of this step}
}
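
For example, a sketch of tidying a trained recipe (here \code{rec} is assumed
to be a prepped recipe whose first step is \code{step_embed()}):

\if{html}{\out{<div class="sourceCode">}}\preformatted{  # One row per factor level, plus the estimated embedding columns
  tidy(rec, number = 1)
}\if{html}{\out{</div>}}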
}
\section{Tuning Parameters}{
This step has 2 tuning parameters:
\itemize{
\item \code{num_terms}: # Model Terms (type: integer, default: 2)
\item \code{hidden_units}: # Hidden Units (type: integer, default: 0)
}
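
A sketch of marking both parameters for optimization with the \code{tune}
package (the selector and outcome names follow the Examples below):

\if{html}{\out{<div class="sourceCode">}}\preformatted{  # Sketch: tune() placeholders are later filled in by a grid or
  # Bayesian search
  step_embed(
    sponsor_code,
    outcome = vars(class),
    num_terms = tune(),
    hidden_units = tune()
  )
}\if{html}{\out{</div>}}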
}
\section{Case weights}{
The underlying operation does not allow for case weights.
}
\examples{
\dontshow{if (!embed:::is_cran_check() && rlang::is_installed(c("modeldata", "keras"))) (if (getRversion() >= "3.4") withAutoprint else force)(\{ # examplesIf}
library(recipes)
library(dplyr)

data(grants, package = "modeldata")
set.seed(1)
grants_other <- sample_n(grants_other, 500)

rec <- recipe(class ~ num_ci + sponsor_code, data = grants_other) \%>\%
  step_embed(sponsor_code,
    outcome = vars(class),
    options = embed_control(epochs = 10)
  )
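
# A sketch of the usual next steps: estimate the embeddings, then apply
# them (prep() fits a small keras model, so this can take a few seconds)
rec <- prep(rec)
bake(rec, new_data = NULL)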
\dontshow{\}) # examplesIf}
}
\references{
Francois C and Allaire JJ (2018) \emph{Deep Learning with R}, Manning.

"Concatenate Embeddings for Categorical Variables with Keras"
}
\concept{preprocessing encoding}
\keyword{datagen}