% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/linear_reg_stan_glmer.R
\name{details_linear_reg_stan_glmer}
\alias{details_linear_reg_stan_glmer}
\title{Linear regression via hierarchical Bayesian methods}
\description{
The \code{"stan_glmer"} engine estimates hierarchical regression parameters using
Bayesian estimation.
}
\details{
For this engine, there is a single mode: regression.
\subsection{Tuning Parameters}{
This model has no tuning parameters.
}
\subsection{Important engine-specific options}{
Some relevant arguments that can be passed to \code{set_engine()}:
\itemize{
\item \code{chains}: A positive integer specifying the number of Markov chains.
The default is 4.
\item \code{iter}: A positive integer specifying the number of iterations for
each chain (including warmup). The default is 2000.
\item \code{seed}: The seed for random number generation.
\item \code{cores}: Number of cores to use when executing the chains in parallel.
\item \code{prior}: The prior distribution for the (non-hierarchical) regression
coefficients.
\item \code{prior_intercept}: The prior distribution for the intercept (after
centering all predictors).
}
See \code{?rstanarm::stan_glmer} and \code{?rstan::sampling} for more information.
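For example, these options can be passed directly to \code{set_engine()}
(the values shown here are illustrative; \code{chains = 4} and
\code{iter = 2000} are the defaults):
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(multilevelmod)
linear_reg() \%>\%
  set_engine("stan_glmer", chains = 4, iter = 2000, seed = 123, cores = 2)
}\if{html}{\out{</div>}}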
}
\subsection{Translation from parsnip to the original package}{
The \strong{multilevelmod} extension package is required to fit this model.
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(multilevelmod)
linear_reg() \%>\%
set_engine("stan_glmer") \%>\%
set_mode("regression") \%>\%
translate()
}\if{html}{\out{</div>}}
\if{html}{\out{<div class="sourceCode">}}\preformatted{## Linear Regression Model Specification (regression)
##
## Computational engine: stan_glmer
##
## Model fit template:
## rstanarm::stan_glmer(formula = missing_arg(), data = missing_arg(),
## weights = missing_arg(), family = stats::gaussian, refresh = 0)
}\if{html}{\out{</div>}}
}
\subsection{Predicting new samples}{
This model can use subject-specific coefficient estimates to make
predictions (i.e. partial pooling). For example, this equation shows the
linear predictor (\verb{\eta}) for a random intercept:
\if{html}{\out{<div class="sourceCode">}}\preformatted{\eta_\{i\} = (\beta_0 + b_\{0i\}) + \beta_1x_\{i1\}
}\if{html}{\out{</div>}}
where \code{i} denotes the \code{i}th independent experimental unit
(e.g., a subject). When the model has seen subject \code{i}, it can use that
subject’s data to adjust the \emph{population} intercept to be more specific
to that subject’s results.
What happens when data are being predicted for a subject that was not
used in the model fit? In that case, this package uses \emph{only} the
population parameter estimates for prediction:
\if{html}{\out{<div class="sourceCode">}}\preformatted{\hat\{\eta\}_\{i'\} = \hat\{\beta\}_0 + \hat\{\beta\}_1x_\{i'1\}
}\if{html}{\out{</div>}}
Depending on what covariates are in the model, this might have the
effect of making the same prediction for all new samples. The population
parameters are the “best estimate” for a subject that was not included
in the model fit.
The tidymodels framework deliberately constrains predictions for new
data to not use the training set or other data (to prevent information
leakage).
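As a sketch, predicting for an unseen subject level might look like the
following (\code{fitted_model} is a hypothetical model object fit with a
random intercept for \code{subject}, and \code{"new_person"} is a subject
level not present in the training data, so only the population estimates
are used):
\if{html}{\out{<div class="sourceCode r">}}\preformatted{# "new_person" was not in the training set, so the prediction relies
# solely on the population-level parameter estimates.
new_data <- tibble::tibble(week = 2, subject = "new_person")
predict(fitted_model, new_data)
}\if{html}{\out{</div>}}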
}
\subsection{Preprocessing requirements}{
There are no specific preprocessing needs. However, it is helpful to
keep the clustering/subject identifier column as a factor or character
variable (instead of converting it to dummy variables). See the examples
in the next section.
}
\subsection{Other details}{
The model can accept case weights.
With parsnip, we suggest using the formula method when fitting:
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(tidymodels)
data("riesby")
linear_reg() \%>\%
set_engine("stan_glmer") \%>\%
fit(depr_score ~ week + (1|subject), data = riesby)
}\if{html}{\out{</div>}}
When using tidymodels infrastructure, it may be better to use a
workflow. In this case, you can add the appropriate columns using
\code{add_variables()} then supply the typical formula when adding the model:
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(tidymodels)
glmer_spec <-
linear_reg() \%>\%
set_engine("stan_glmer")
glmer_wflow <-
workflow() \%>\%
# The data are included as-is using:
add_variables(outcomes = depr_score, predictors = c(week, subject)) \%>\%
add_model(glmer_spec, formula = depr_score ~ week + (1|subject))
fit(glmer_wflow, data = riesby)
}\if{html}{\out{</div>}}
For prediction, the \code{"stan_glmer"} engine can compute posterior
intervals analogous to confidence and prediction intervals. In these
instances, the units are the original outcome. When \code{std_error = TRUE},
the standard deviation of the posterior distribution (or posterior
predictive distribution as appropriate) is returned.
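For example, these intervals can be requested through \code{predict()}
(\code{fitted_model} and \code{new_data} are hypothetical placeholders):
\if{html}{\out{<div class="sourceCode r">}}\preformatted{# Posterior intervals for the mean (analogous to confidence intervals),
# with the posterior standard deviation included:
predict(fitted_model, new_data, type = "conf_int", std_error = TRUE)
# Posterior predictive intervals (analogous to prediction intervals):
predict(fitted_model, new_data, type = "pred_int")
}\if{html}{\out{</div>}}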
}
\subsection{Case weights}{
This model can utilize case weights during model fitting. To use them,
see the documentation in \link{case_weights} and the examples
on \code{tidymodels.org}.
The \code{fit()} and \code{fit_xy()} functions have arguments called
\code{case_weights} that expect vectors of case weights.
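A minimal sketch of fitting with case weights follows; the unit weights
here are purely illustrative:
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(multilevelmod)
data("riesby")
# Illustrative weights only; in practice these would reflect the
# importance or frequency of each row.
wts <- hardhat::importance_weights(rep(1, nrow(riesby)))
linear_reg() \%>\%
  set_engine("stan_glmer") \%>\%
  fit(depr_score ~ week + (1|subject), data = riesby, case_weights = wts)
}\if{html}{\out{</div>}}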
}
\subsection{References}{
\itemize{
\item McElreath, R. 2020. \emph{Statistical Rethinking}. CRC Press.
\item Sorensen, T., and Vasishth, S. 2016. Bayesian linear mixed models
using Stan: A tutorial for psychologists, linguists, and cognitive
scientists. arXiv:1506.06201.
}
}
}
\keyword{internal}