% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/woe.R
\name{dictionary}
\alias{dictionary}
\title{Weight of evidence dictionary}
\usage{
dictionary(.data, outcome, ..., Laplace = 1e-06)
}
\arguments{
\item{.data}{A tbl. The data frame from which the variables come.}
\item{outcome}{The bare name of the outcome variable with exactly 2 distinct
values.}
\item{...}{Bare names of predictor variables or selectors accepted by
\code{dplyr::select()}.}
\item{Laplace}{Defaults to 1e-6. The \code{pseudocount} parameter of the
Laplace smoothing estimator, used to avoid -Inf/Inf woe values when a
predictor category contains only one outcome class. Set to 0 to allow
Inf/-Inf.}
}
\value{
A tibble with summaries and woe values for every given predictor variable,
with the rows for all variables stacked together.
}
\description{
Builds the woe dictionary for a set of predictor variables with respect to a
given binary outcome. Convenient for creating woe versions of the given
predictor variables and for tweaking some woe values by hand.
}
\details{
You can pass a custom dictionary to \code{step_woe()}. It must have exactly
the same structure as the output of \code{\link[=dictionary]{dictionary()}}.
One easy way to do this is to tweak an output returned by it.
}
\examples{
# Build the woe dictionary for cyl and gear:carb against the binary outcome am
mtcars \%>\% dictionary("am", cyl, gear:carb)
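
# Setting Laplace = 0 (a documented argument of dictionary()) allows Inf/-Inf
# woe values when a predictor category has only one outcome class
mtcars \%>\% dictionary("am", cyl, Laplace = 0)

# A minimal sketch of the workflow described in Details: tweak a returned
# dictionary by hand and pass it to step_woe(). Illustrative only; it assumes
# the recipes package is installed, that the returned tibble has a `woe`
# column, and that step_woe() accepts `outcome` and `dictionary` arguments.
# Check the documentation of step_woe() in your installed version.
\dontrun{
library(recipes)
mtcars2 <- transform(mtcars, am = factor(am))
custom_dict <- mtcars2 \%>\% dictionary("am", cyl, gear)
custom_dict$woe[1] <- 0  # hand-tweaked woe value (illustrative)
recipe(am ~ cyl + gear, data = mtcars2) \%>\%
  step_woe(cyl, gear, outcome = dplyr::vars(am), dictionary = custom_dict)
}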
}
\references{
Kullback, S. (1959). \emph{Information Theory and Statistics.} Wiley, New York.

Hastie, T., Tibshirani, R. and Friedman, J. (2009). \emph{The Elements of
Statistical Learning}, Second Edition. Springer, New York.

Good, I. J. (1985), "Weight of evidence: A brief survey", \emph{Bayesian
Statistics}, 2, pp. 249-270.
}