diff --git a/main/404.html b/main/404.html

This file outlines how to propose and make changes to rbmi, as well as providing details about the more obscure aspects of the package’s development process.

Setup

In order to develop or contribute to rbmi you will need access to a C/C++ compiler. If you are on Windows you should install rtools, or if you are on macOS you should install Xcode. Likewise, you will also need to install all of the package’s development dependencies. This can be done by launching R from within the project root and executing:

devtools::install_dev_deps()
Code changes

If you want to make a code contribution, it’s a good idea to first file an issue and make sure someone from the team agrees that it’s needed. If you’ve found a bug, please file an issue that illustrates the bug with a minimal reprex (this will also help you write a unit test, if needed).

Pull request process

Coding Considerations

Unit Testing & CI/CD

This project uses testthat to perform unit testing in combination with GitHub Actions for CI/CD.

Scheduled Testing

Due to the stochastic nature of this package some unit tests take a considerable amount of time to execute. To avoid issues with usability, unit tests that take more than a couple of seconds to run should be deferred to the scheduled testing. These tests are only run on a periodic basis (currently twice a month) and not on every pull request / push event.

To defer a test to the scheduled build, simply add skip_if_not(is_full_test()) at the top of the test_that() block, i.e.
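A sketch of what such a deferred test might look like (the test name and the long-running helper are hypothetical):

```r
test_that("imputation is stable across many repetitions", {
    # Deferred to the scheduled build; skipped on ordinary PR/push runs
    skip_if_not(is_full_test())

    results <- run_long_simulation()  # hypothetical long-running helper
    expect_true(all(is.finite(results)))
})
```

On a regular pull request run skip_if_not() reports the test as skipped rather than failed, so the fast test suite stays green.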


The scheduled tests can also be triggered manually via “https://github.com/insightsengineering/rbmi” -> “Actions” -> “Bi-Weekly” -> “Run Workflow”. It is advisable to do this before releasing to CRAN.

Docker Images

To support CI/CD, in terms of reducing setup time, a Docker image has been created which contains all the packages and system dependencies required for this project. The image can be found at:

  • ghcr.io/insightsengineering/rbmi:latest

This image is automatically re-built once a month to contain the latest version of R and its packages. The code to create this image can be found in misc/docker.

To build the image locally run the following from the project root directory:

docker build -f misc/docker/Dockerfile -t rbmi:latest .
Reproducibility, Print Tests & Snaps

A particular issue with testing this package is reproducibility. For the most part this is handled well via set.seed(); however, stan/rstan does not guarantee reproducibility even with the same seed if run on different hardware.

This issue surfaces when testing the print messages of the pool object, which displays treatment estimates that are thus not identical when run on different machines. To address this, pre-made pool objects have been generated and stored in R/sysdata.rda (which itself is generated by data-raw/create_print_test_data.R). The generated print messages are compared to expected values which are stored in tests/testthat/_snaps/ (which themselves are automatically created by testthat::expect_snapshot()).
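As a hedged illustration (the object name is hypothetical), such a print test combines the pre-made objects with testthat's snapshot machinery:

```r
test_that("print method for pool objects is stable", {
    # `pool_object` would be one of the pre-made objects from R/sysdata.rda,
    # so the printed estimates are identical regardless of hardware
    expect_snapshot(print(pool_object))
})
```

The first run records the output under tests/testthat/_snaps/; subsequent runs fail if the printed text changes.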

Fitting MMRMs

This package currently uses the mmrm package to fit MMRM models. The mmrm package is still fairly new but has so far proven to be very stable, fast and reliable. If you spot any issues with it, please raise them in its GitHub repository.

As the mmrm package uses TMB it is not uncommon to see warnings about inconsistent versions between what TMB and the Matrix package were compiled against. In order to resolve this you may wish to re-compile these packages from source using:

install.packages(c("TMB", "mmrm"), type = "source")

Note that you will need to have rtools installed if you are on a Windows machine or Xcode if you are running macOS (or otherwise have access to a C/C++ compiler).

rstan

The Bayesian models fitted by this package are implemented via stan/rstan. The code for this can be found in inst/stan/MMRM.stan. Note that the package will automatically take care of compiling this code when you install it or run devtools::load_all(). Please note that the package won’t recompile the code unless you have changed the source code or you delete the src directory.
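A hedged sketch of forcing a recompile — this simply applies the mechanism described above (deleting the src directory) rather than any dedicated rbmi command:

```r
# From the project root: remove the compiled artefacts so the Stan model
# is rebuilt on the next load
unlink("src", recursive = TRUE)
devtools::load_all()
```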

Vignettes

CRAN imposes a 10-minute run limit on building, compiling and testing your package. To keep to this limit the vignettes are pre-built; that is to say that simply changing the source code will not automatically update the vignettes, you will need to manually re-build them.

To do this you need to run:

Rscript vignettes/build.R
Misc & Local Folders

The misc/ folder in this project is used to hold useful scripts, analyses, simulations & infrastructure code that we wish to keep but isn’t essential to the build or deployment of the package. Feel free to store additional stuff in here that you feel is worth keeping.

Likewise, local/ has been added to the .gitignore file meaning anything stored in this folder won’t be committed to the repository. For example, you may find this useful for storing personal scripts for testing or more generally exploring the package during development.

diff --git a/main/LICENSE.html b/main/LICENSE.html

Apache License

Version 2.0, January 2004 <http://www.apache.org/licenses/>

Terms and Conditions for use, reproduction, and distribution

1. Definitions

“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License

Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License

Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution

You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

6. Trademarks

This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty

Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability

While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets [] replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]
 
diff --git a/main/articles/CondMean_Inference.html b/main/articles/CondMean_Inference.html

All vignettes

rbmi: Advanced Functionality
rbmi: Inference with Conditional Mean Imputation
rbmi: Frequently Asked Questions
rbmi: Quickstart
rbmi: Implementation of retrieved-dropout models using rbmi
rbmi: Statistical Specifications
diff --git a/main/articles/quickstart.html b/main/articles/quickstart.html

Authors

-
+

Citation

diff --git a/main/index.html b/main/index.html
rbmi 1.3.1

rbmi 1.3.0

CRAN release: 2024-10-16

Breaking Changes

New Features

Miscellaneous Bug Fixes

rbmi 1.2.6

CRAN release: 2023-11-24

rbmi 1.2.5

CRAN release: 2023-09-20

rbmi 1.2.3

CRAN release: 2022-11-14

rbmi 1.2.1

CRAN release: 2022-10-25

rbmi 1.1.4

CRAN release: 2022-05-18

rbmi 1.1.1 & 1.1.3

CRAN release: 2022-03-08

rbmi 1.1.0

CRAN release: 2022-03-02
diff --git a/main/pkgdown.yml b/main/pkgdown.yml
-last_built: 2024-12-10T16:15Z
+last_built: 2024-12-10T16:35Z

diff --git a/main/reference/QR_decomp.html b/main/reference/QR_decomp.html
Usage

QR_decomp(mat)

Arguments

mat

A matrix to perform the QR decomposition on.
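QR_decomp() is internal, so any direct call goes through the triple-colon accessor; a minimal sketch (the structure of the returned object is not documented here and is deliberately left uninspected):

```r
# A small random matrix to decompose
m <- matrix(rnorm(20), nrow = 5, ncol = 4)
decomp <- rbmi:::QR_decomp(m)
```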

diff --git a/main/reference/Stack.html b/main/reference/Stack.html
Public fields

stack
A list containing the current stack

Methods

Public methods

Method add()
Adds content to the end of the stack (must be a list)

Usage
Stack$add(x)

Arguments
x
Content to add to the stack

Method pop()
Retrieve content from the stack

Usage
Stack$pop(i)

Arguments
i
The number of items to retrieve from the stack. If there are fewer than i items left on the stack it will just return everything that is left.

Method clone()
The objects of this class are cloneable with this method.

Usage
Stack$clone(deep = FALSE)

Arguments
deep
Whether to make a deep clone.
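Since only the method signatures are documented, here is a hedged sketch of how this internal class might be used (Stack is not exported, so the triple-colon accessor is assumed):

```r
s <- rbmi:::Stack$new()
s$add(list("a", "b", "c"))   # append three items to the end of the stack
first_two <- s$pop(2)        # retrieve two items in FIFO order
```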

diff --git a/main/reference/add_class.html b/main/reference/add_class.html
Usage

add_class(x, cls)

Arguments

x

object to add a class to.

cls

the class to be added.
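As a hedged illustration of this internal helper (add_class is not exported, and whether the class is prepended or appended is assumed rather than documented here):

```r
x <- list(est = 1.23)
x <- rbmi:::add_class(x, "myresult")  # "myresult" is a hypothetical class name
inherits(x, "myresult")               # expected to be TRUE after the call
```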

diff --git a/main/reference/adjust_trajectories.html b/main/reference/adjust_trajectories.html
Usage

adjust_trajectories(
   distr_pars_group,
   outcome,
   ids,
   ind_ice,
   strategy_fun,
   distr_pars_ref
)
Arguments

distr_pars_group
Named list containing the simulation parameters of the multivariate normal distribution assumed for the given treatment group. It contains the following elements:
  • mu: Numeric vector indicating the mean outcome trajectory. It should include the outcome at baseline.
  • sigma: Covariance matrix of the outcome trajectory.

outcome
Numeric variable that specifies the longitudinal outcome.

ids
Factor variable that specifies the id of each subject.

ind_ice
A binary variable that takes value 1 if the corresponding outcome is affected by the ICE and 0 otherwise.

strategy_fun
Function implementing trajectories after the intercurrent event (ICE). Must be one of getStrategies(). See getStrategies() for details.

distr_pars_ref
Optional. Named list containing the simulation parameters of the reference arm. It contains the following elements:
  • mu: Numeric vector indicating the mean outcome trajectory assuming no ICEs. It should include the outcome at baseline.
  • sigma: Covariance matrix of the outcome trajectory assuming no ICEs.

Value

A numeric vector containing the adjusted trajectories.

See also

diff --git a/main/reference/adjust_trajectories_single.html b/main/reference/adjust_trajectories_single.html
Usage

adjust_trajectories_single(
   distr_pars_group,
   outcome,
   strategy_fun,
   distr_pars_ref
)
Arguments

distr_pars_group
Named list containing the simulation parameters of the multivariate normal distribution assumed for the given treatment group. It contains the following elements:
  • mu: Numeric vector indicating the mean outcome trajectory. It should include the outcome at baseline.
  • sigma: Covariance matrix of the outcome trajectory.

outcome
Numeric variable that specifies the longitudinal outcome.

strategy_fun
Function implementing trajectories after the intercurrent event (ICE). Must be one of getStrategies(). See getStrategies() for details.

distr_pars_ref
Optional. Named list containing the simulation parameters of the reference arm. It contains the following elements:
  • mu: Numeric vector indicating the mean outcome trajectory assuming no ICEs. It should include the outcome at baseline.
  • sigma: Covariance matrix of the outcome trajectory assuming no ICEs.

Value

A numeric vector containing the adjusted trajectory for a single subject.

Details

outcome should be specified such that all, and only, the post-ICE observations (i.e. the observations to be adjusted) are set to NA.
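For instance (a purely hypothetical trajectory), if the ICE occurred after the third visit the input might look like:

```r
# Baseline + 4 visits; the two post-ICE observations are set to NA
# so that adjust_trajectories_single() knows which values to adjust
outcome <- c(10.2, 9.8, 9.1, NA, NA)
```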

- + - + + + - - diff --git a/main/reference/analyse.html b/main/reference/analyse.html index 005aa337..0631cc37 100644 --- a/main/reference/analyse.html +++ b/main/reference/analyse.html @@ -1,24 +1,9 @@ - - - - - - -Analyse Multiple Imputed Datasets — analyse • rbmi - - - - - -Analyse Multiple Imputed Datasets — analyse • rbmi - - - - +each of them."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,8 +52,7 @@
Usage

analyse(
   imputations,
   fun = ancova,
   delta = NULL,
   ...,
   ncores = 1,
   .validate = TRUE
)

-

Arguments

imputations
An imputations object as created by impute().

fun
An analysis function to be applied to each imputed dataset. See details.

delta
A data.frame containing the delta transformation to be applied to the imputed datasets prior to running fun. See details.

...
Additional arguments passed onto fun.

ncores
The number of parallel processes to use when running this function. Can also be a cluster object created by make_rbmi_cluster(). See the parallelisation section below.

.validate
Should imputations be checked to ensure it conforms to the required format (default = TRUE)? Can gain a small performance increase if this is set to FALSE when analysing a large number of samples.

Details

This function works by performing the following steps:

  1. Extract a dataset from the imputations object.
  2. Apply any delta adjustments as specified by the delta argument.
  3. Run the analysis function fun on the dataset.
  4. Repeat steps 1-3 across all of the datasets inside the imputations object.
  5. Collect and return all of the analysis results.
    The analysis function fun must take a data.frame as its first +

The analysis function fun must take a data.frame as its first argument. All other options to analyse() are passed onto fun via .... fun must return a named list with each element itself being a @@ -157,9 +111,7 @@

Detailsest (or additionally se and df if you had originally specified method_bayes() or method_approxbayes()) i.e.:

-

-
-
myfun <- function(dat, ...) {
+

myfun <- function(dat, ...) {
     mod_1 <- lm(data = dat, outcome ~ group)
     mod_2 <- lm(data = dat, outcome ~ group + covar)
     x <- list(
@@ -175,9 +127,7 @@ 

Details ) ) return(x) - }

-

-
+ }

Please note that the vars$subjid column (as defined in the original call to draws()) will be scrambled in the data.frames that are provided to fun. This is to say they will not contain the original subject values and as such @@ -198,11 +148,7 @@

The delta data.frame must contain the columns vars$subjid, vars$visit (as specified in the original call to draws()) and delta. Essentially this data.frame is merged onto the imputed dataset by vars$subjid and vars$visit and then the outcome variable is modified by:

imputed_data[[vars$outcome]] <- imputed_data[[vars$outcome]] + imputed_data[["delta"]]

Please note that in order to provide maximum flexibility, the delta argument can be used to modify any/all outcome values including those that were not imputed. Care must be taken when defining offsets.
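As a hedged sketch (the subject ids, visit labels and offset values are hypothetical, and the column names must match the vars$subjid and vars$visit variables from the original draws() call), a delta data.frame matching the merge behaviour described above might look like:

```r
delta_df <- data.frame(
    subjid = c("pt1", "pt1", "pt2"),      # matched against vars$subjid
    visit  = c("visit_1", "visit_2", "visit_1"),  # matched against vars$visit
    delta  = c(5, 5, -5)                  # offset added to the outcome
)
```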


Parallelisation

To speed up the evaluation of analyse() you can use the ncores argument to enable parallelisation. @@ -220,9 +165,7 @@

Parallelisationmake_rbmi_cluster() function for example:

-

-
-
my_custom_fun <- function(...) <some analysis code>
+

my_custom_fun <- function(...) <some analysis code>
 cl <- make_rbmi_cluster(
     4,
     objects = list("my_custom_fun" = my_custom_fun),
    fun = my_custom_fun,
    ncores = cl
)
parallel::stopCluster(cl)

-

-
+parallel::stopCluster(cl)

Note that there is significant overhead both with setting up the sub-processes and with transferring data back-and-forth between the main process and the sub-processes. As such parallelisation of the analyse() function tends to only be worth it when you have @@ -251,9 +192,7 @@

Parallelisationrbmi run.

Finally, if you are doing a tipping point analysis you can get a reasonable performance improvement by re-using the cluster between each call to analyse() e.g.

-

-
-
cl <- make_rbmi_cluster(4)
+

cl <- make_rbmi_cluster(4)
 ana_1 <- analyse(
     imputations = imputeObj,
     delta = delta_plan_1,
@@ -269,24 +208,18 @@ 

    delta = delta_plan_3,
    ncores = cl
)
parallel::stopCluster(cl)

-

-
+parallel::clusterStop(cl)

-

See also -

-
-

extract_imputed_dfs() for manually extracting imputed +

See also

+

extract_imputed_dfs() for manually extracting imputed datasets.

delta_template() for creating delta data.frames.

-

ancova() for the default analysis function.

-
+

ancova() for the default analysis function.

-

Examples

if (FALSE) { # \dontrun{
 vars <- set_vars(
     subjid = "subjid",
@@ -316,8 +249,7 @@ 

Examples -

+
diff --git a/main/reference/ancova.html b/main/reference/ancova.html
Usage

ancova(
   data,
   vars,
   visits = NULL,
   weights = "counterfactual"
)

Arguments

data
A data.frame containing the data to be used in the model.

vars
A vars object as generated by set_vars(). Only the group, visit, outcome and covariates elements are required. See details.

visits
An optional character vector specifying which visits to fit the ancova model at. If NULL, a separate ancova model will be fit to the outcomes for each visit (as determined by unique(data[[vars$visit]])). See details.

weights
Character, either "counterfactual" (default), "equal", "proportional_em" or "proportional". Specifies the weighting strategy to be used when calculating the lsmeans. See the weighting section for more details.
Details

The function works as follows:

  1. Select the first value from visits.
  2. Subset the data to only the observations that occurred on this visit.
  3. Fit a linear model as vars$outcome ~ vars$group + vars$covariates.
  4. Extract the "treatment effect" & least square means for each treatment group.
  5. Repeat points 2-4 for all other values in visits.

    If no value for visits is provided then it will be set to +

If no value for visits is provided then it will be set to unique(data[[vars$visit]]).

In order to meet the formatting standards set by analyse() the results will be collapsed into a single list suffixed by the visit name, e.g.:

-

-
-
list(
+

list(
    trt_visit_1 = list(est = ...),
    lsm_ref_visit_1 = list(est = ...),
    lsm_alt_visit_1 = list(est = ...),
    lsm_ref_visit_2 = list(est = ...),
    lsm_alt_visit_2 = list(est = ...),
    ...
)

-

-
+)

Please note that "ref" refers to the first factor level of vars$group which does not necessarily coincide with the control arm. Analogously, "alt" refers to the second factor level of vars$group. "trt" refers to the model contrast translating the mean difference between the second level and first level.

Interaction terms can be included by providing them to the covariates element of set_vars(), e.g. set_vars(covariates = c("sex*age")).

Weighting

Counterfactual

For weights = "counterfactual" (the default) the lsmeans are obtained by averaging the model's predicted values for every patient under each treatment level in turn. In comparison to emmeans this approach is equivalent to:


emmeans::emmeans(model, specs = "<treatment>", counterfactual = "<treatment>")

Note that to ensure backwards compatibility with previous versions of rbmi weights = "proportional" is an alias for weights = "counterfactual". To get results consistent with emmeans's weights = "proportional" please use weights = "proportional_em".

Counterfactual
Equal

For weights = "equal" the lsmeans are obtained by taking the model fitted value of a hypothetical patient whose covariates are defined as follows:

  • Continuous covariates are set to mean(X)
  • Dummy categorical variables are set to 1/N where N is the number of levels
  • Continuous * continuous interactions are set to mean(X) * mean(Y)
  • Continuous * categorical interactions are set to mean(X) * 1/N
  • Dummy categorical * categorical interactions are set to 1/N * 1/M

In comparison to emmeans this approach is equivalent to:

emmeans::emmeans(model, specs = "<treatment>", weights = "equal")
Proportional

For weights = "proportional_em" the lsmeans are obtained as per weights = "equal" except instead of weighting each observation equally they are weighted by the proportion in which the given combination of categorical values occurred in the data. In comparison to emmeans this approach is equivalent to:

-

-
-
emmeans::emmeans(model, specs = "<treatment>", weights = "proportional")
-

-
+

emmeans::emmeans(model, specs = "<treatment>", weights = "proportional")

Note that this is not to be confused with weights = "proportional" which is an alias for weights = "counterfactual".

-

See also -

-
-
+ - + + + - - diff --git a/main/reference/ancova_single.html b/main/reference/ancova_single.html index e80fd05c..0125dd73 100644 --- a/main/reference/ancova_single.html +++ b/main/reference/ancova_single.html @@ -1,20 +1,5 @@ - - - - - - -Implements an Analysis of Covariance (ANCOVA) — ancova_single • rbmi - - - - - - - - - - + +Implements an Analysis of Covariance (ANCOVA) — ancova_single • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,8 +46,7 @@
-

Usage -

+

Usage

ancova_single(
   data,
   outcome,
@@ -90,58 +57,45 @@ 

Usage

-

Arguments -

+

Arguments

-
-
data -
+
data

A data.frame containing the data to be used in the model.

-
outcome -
+
outcome

Character, the name of the outcome variable in data.

-
group -
+
group

Character, the name of the group variable in data.

-
covariates -
+
covariates

Character vector containing the names of any additional covariates to be included in the model as well as any interaction terms.

-
weights -
+
weights

Character, either "counterfactual" (default), "equal", "proportional_em" or "proportional". Specifies the weighting strategy to be used when calculating the lsmeans. See the weighting section for more details.

-
-
+
-

Details -

+

Details

-
    -
  • group must be a factor variable with only 2 levels.

  • +
    • group must be a factor variable with only 2 levels.

    • outcome must be a continuous numeric variable.

    • -
    -
+
-

Weighting -

+

Weighting

-

Counterfactual -

+

Counterfactual

For weights = "counterfactual" (the default) the lsmeans are obtained by @@ -149,11 +103,7 @@

Counterfactualemmeans this approach is equivalent to:

-

-
-
emmeans::emmeans(model, specs = "<treatment>", counterfactual = "<treatment>")
-

-
+

emmeans::emmeans(model, specs = "<treatment>", counterfactual = "<treatment>")

Note that to ensure backwards compatibility with previous versions of rbmi weights = "proportional" is an alias for weights = "counterfactual". To get results consistent with emmeans's weights = "proportional" @@ -161,55 +111,40 @@

Counterfactual
-

Equal -

+

Equal

For weights = "equal" the lsmeans are obtained by taking the model fitted -value of a hypothetical patient whose covariates are defined as follows:

-
    -
  • Continuous covariates are set to mean(X)

  • +value of a hypothetical patient whose covariates are defined as follows:

    • Continuous covariates are set to mean(X)

    • Dummy categorical variables are set to 1/N where N is the number of levels

    • Continuous * continuous interactions are set to mean(X) * mean(Y)

    • Continuous * categorical interactions are set to mean(X) * 1/N

    • Dummy categorical * categorical interactions are set to 1/N * 1/M

    • -
    -

    In comparison to emmeans this approach is equivalent to:

    -

    -
    -
    emmeans::emmeans(model, specs = "<treatment>", weights = "equal")
    -

    -
    +

In comparison to emmeans this approach is equivalent to:

+

emmeans::emmeans(model, specs = "<treatment>", weights = "equal")

-

Proportional -

+

Proportional

For weights = "proportional_em" the lsmeans are obtained as per weights = "equal", except that instead of weighting each observation equally they are weighted by the proportion in which the given combination of categorical values occurred in the data. In comparison to emmeans this approach is equivalent to:

-

-
-
emmeans::emmeans(model, specs = "<treatment>", weights = "proportional")
-

-
+

emmeans::emmeans(model, specs = "<treatment>", weights = "proportional")

Note that this is not to be confused with weights = "proportional" which is an alias for weights = "counterfactual".

-

See also -

+

See also

-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 iris2 <- iris[ iris$Species %in% c("versicolor", "virginica"), ]
 iris2$Species <- factor(iris2$Species)
@@ -218,8 +153,7 @@ 

Examples -

+
-
+ + + - - diff --git a/main/reference/antidepressant_data.html b/main/reference/antidepressant_data.html index dbac79b4..2b4338a8 100644 --- a/main/reference/antidepressant_data.html +++ b/main/reference/antidepressant_data.html @@ -1,16 +1,5 @@ - - - - - - -Antidepressant trial data — antidepressant_data • rbmi - - - - - - - - - - +three non-placebo arms.'> Skip to contents @@ -48,38 +33,21 @@ + @@ -105,17 +73,13 @@
-

Usage -

+

Usage

antidepressant_data
-

Format -

-

A data.frame with 608 rows and 11 variables:

-
    -
  • PATIENT: patients IDs.

  • +

    Format

    +

    A data.frame with 608 rows and 11 variables:

    • PATIENT: patient IDs.

    • HAMATOTL: total score on the Hamilton Anxiety Rating Scale.

    • PGIIMP: patient's Global Impression of Improvement Rating Scale.

    • RELDAYS: number of days between visit and baseline.

    • @@ -127,11 +91,9 @@

      Format
    • BASVAL: baseline outcome value.

    • HAMDTL17: Hamilton 17-item rating scale value.

    • CHANGE: change from baseline in the Hamilton 17-item rating scale.

    • -

    -
+
-

Details -

+

Details

The relevant endpoint is the Hamilton 17-item rating scale for depression (HAMD17) for which baseline and weeks 1, 2, 4, and 6 assessments are included. Study drug discontinuation occurred in 24% of subjects from the active drug and 26% from @@ -140,16 +102,14 @@

Details

-

References -

+

References

Goldstein, Lu, Detke, Wiltse, Mallinckrodt, Demitrack. Duloxetine in the treatment of depression: a double-blind placebo-controlled comparison with paroxetine. J Clin Psychopharmacol 2004;24: 389-399.

- +
-
- + + + - - diff --git a/main/reference/apply_delta.html b/main/reference/apply_delta.html index 7eaafce3..6ef27fc7 100644 --- a/main/reference/apply_delta.html +++ b/main/reference/apply_delta.html @@ -1,22 +1,7 @@ - - - - - - -Applies delta adjustment — apply_delta • rbmi - - - - - - - - - - + +Applies delta adjustment — apply_delta • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,43 +49,34 @@
-

Usage -

+

Usage

apply_delta(data, delta = NULL, group = NULL, outcome = NULL)
-

Arguments -

+

Arguments

-
-
data -
+
data

data.frame which will have its outcome column adjusted.

-
delta -
+
delta

data.frame (must contain a column called delta).

-
group -
+
group

character vector of variables in both data and delta that will be used to merge the two data.frames together by.

-
outcome -
+
outcome

character, name of the outcome variable in data.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_analysis.html b/main/reference/as_analysis.html index bcee2d5e..a1da24bb 100644 --- a/main/reference/as_analysis.html +++ b/main/reference/as_analysis.html @@ -1,22 +1,7 @@ - - - - - - -Construct an analysis object — as_analysis • rbmi - - - - - - - - - - + +Construct an analysis object — as_analysis • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,50 +49,40 @@
-

Usage -

+

Usage

as_analysis(results, method, delta = NULL, fun = NULL, fun_name = NULL)
-

Arguments -

+

Arguments

-
-
results -
+
results

A list of lists containing the analysis results for each imputation. See analyse() for details on what this object should look like.

-
method -
+
method

The method object as specified in draws().

-
delta -
+
delta

The delta dataset used. See analyse() for details on how this should be specified.

-
fun -
+
fun

The analysis function that was used.

-
fun_name -
+
fun_name

The character name of the analysis function (used for printing purposes).

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_ascii_table.html b/main/reference/as_ascii_table.html index 15663867..c071d263 100644 --- a/main/reference/as_ascii_table.html +++ b/main/reference/as_ascii_table.html @@ -1,26 +1,11 @@ - - - - - - -as_ascii_table — as_ascii_table • rbmi - - - - - -as_ascii_table — as_ascii_table • rbmi - - - - +in order to cast them to character."> Skip to contents @@ -36,38 +21,21 @@ + @@ -87,37 +55,29 @@
-

Usage -

+

Usage

as_ascii_table(dat, line_prefix = "  ", pcol = NULL)
-

Arguments -

+

Arguments

-
-
dat -
+
dat

Input dataset to convert into an ASCII table

-
line_prefix -
+
line_prefix

Symbols to prefix in front of every line of the table

-
pcol -
+
pcol

name of column to be handled as a p-value. Sets the value to <0.001 if the value is 0 after rounding

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_class.html b/main/reference/as_class.html index 147c5828..0d087d93 100644 --- a/main/reference/as_class.html +++ b/main/reference/as_class.html @@ -1,20 +1,5 @@ - - - - - - -Set Class — as_class • rbmi - - - - - - - - - - + +Set Class — as_class • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,32 +46,25 @@
-

Usage -

+

Usage

as_class(x, cls)
-

Arguments -

+

Arguments

-
-
x -
+
x

object to set the class of.

-
cls -
+
cls

the class to be set.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_cropped_char.html b/main/reference/as_cropped_char.html index 4a90540a..43eb050f 100644 --- a/main/reference/as_cropped_char.html +++ b/main/reference/as_cropped_char.html @@ -1,22 +1,7 @@ - - - - - - -as_cropped_char — as_cropped_char • rbmi - - - - - - - - - - + +as_cropped_char — as_cropped_char • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,37 +49,29 @@
-

Usage -

+

Usage

as_cropped_char(inval, crop_at = 30, ndp = 3)
-

Arguments -

+

Arguments

-
-
inval -
+
inval

a single element value

-
crop_at -
+
crop_at

character limit

-
ndp -
+
ndp

Number of decimal places to display

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_dataframe.html b/main/reference/as_dataframe.html index cc1efd69..9547f942 100644 --- a/main/reference/as_dataframe.html +++ b/main/reference/as_dataframe.html @@ -1,20 +1,5 @@ - - - - - - -Convert object to dataframe — as_dataframe • rbmi - - - - - - - - - - + +Convert object to dataframe — as_dataframe • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,31 +46,23 @@
-

Usage -

+

Usage

as_dataframe(x)
-

Arguments -

+

Arguments

-
-
x -
-
-

a data.frame like object

+
x
+

a data.frame like object

Utility function to convert a "data.frame-like" object to an actual data.frame -to avoid issues with inconsistency on methods (such as [() and dplyr's grouped dataframes)

-
+to avoid issues with inconsistency on methods (such as [() and dplyr's grouped dataframes)

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_draws.html b/main/reference/as_draws.html index c749a885..e0f86786 100644 --- a/main/reference/as_draws.html +++ b/main/reference/as_draws.html @@ -1,20 +1,5 @@ - - - - - - -Creates a draws object — as_draws • rbmi - - - - - - - - - - + +Creates a draws object — as_draws • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,79 +46,60 @@
-

Usage -

+

Usage

as_draws(method, samples, data, formula, n_failures = NULL, fit = NULL)
-

Arguments -

+

Arguments

-
-
method -
+
method

A method object as generated by either method_bayes(), method_approxbayes(), method_condmean() or method_bmlmi().

-
samples -
+
samples

A list of sample_single objects. See sample_single().

-
data -
+
data

R6 longdata object containing all relevant input data information.

-
formula -
+
formula

Fixed effects formula object used for the model specification.

-
n_failures -
+
n_failures

Absolute number of failures of the model fit.

-
fit -
+
fit

If method_bayes() is chosen, returns the MCMC Stan fit object. Otherwise NULL.

-
-
+
-

Value -

-

A draws object which is a named list containing the following:

-
    -
  • data: R6 longdata object containing all relevant input data information.

  • +

    Value

    +

    A draws object which is a named list containing the following:

    • data: R6 longdata object containing all relevant input data information.

    • method: A method object as generated by either method_bayes(), method_approxbayes() or method_condmean().

    • -
    • -

      samples: list containing the estimated parameters of interest. -Each element of samples is a named list containing the following:

      -
        -
      • ids: vector of characters containing the ids of the subjects included in the original dataset.

      • +
      • samples: list containing the estimated parameters of interest. +Each element of samples is a named list containing the following:

        • ids: vector of characters containing the ids of the subjects included in the original dataset.

        • beta: numeric vector of estimated regression coefficients.

        • sigma: list of estimated covariance matrices (one for each level of vars$group).

        • theta: numeric vector of transformed covariances.

        • failed: Logical. TRUE if the model fit failed.

        • ids_samp: vector of characters containing the ids of the subjects included in the given sample.

        • -
        -
      • +
    • fit: if method_bayes() is chosen, returns the MCMC Stan fit object. Otherwise NULL.

    • n_failures: absolute number of failures of the model fit. Relevant only for method_condmean(type = "bootstrap"), method_approxbayes() and method_bmlmi().

    • formula: fixed effects formula object used for the model specification.

    • -
    -
+ - +
-
- + + + - - diff --git a/main/reference/as_imputation.html b/main/reference/as_imputation.html index f7599d5c..a4c8a9bd 100644 --- a/main/reference/as_imputation.html +++ b/main/reference/as_imputation.html @@ -1,24 +1,9 @@ - - - - - - -Create an imputation object — as_imputation • rbmi - - - - - -Create an imputation object — as_imputation • rbmi - - - - +set and that the class is added as expected."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,44 +52,35 @@
-

Usage -

+

Usage

as_imputation(imputations, data, method, references)
-

Arguments -

+

Arguments

-
-
imputations -
+
imputations

A list of imputations_list's as created by imputation_df()

-
data -
+
data

A longdata object as created by longDataConstructor()

-
method -
+
method

A method object as created by method_condmean(), method_bayes() or method_approxbayes()

-
references -
+
references

A named vector. Identifies the references to be used when generating the imputed values. Should be of the form c("Group" = "Reference", "Group" = "Reference").

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_indices.html b/main/reference/as_indices.html index 8a1b0e08..b7757236 100644 --- a/main/reference/as_indices.html +++ b/main/reference/as_indices.html @@ -1,22 +1,7 @@ - - - - - - -Convert indicator to index — as_indices • rbmi - - - - - - - - - - + +Convert indicator to index — as_indices • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,38 +49,27 @@
-

Usage -

+

Usage

as_indices(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

a character vector whose values are all either "0" or "1". All elements of the vector must be the same length

-
-
+
-

Details -

+

Details

i.e.

-

-
-
patmap(c("1101", "0001"))  ->   list(c(1,2,4,999), c(4,999, 999, 999))
-

-
+

patmap(c("1101", "0001"))  ->   list(c(1,2,4,999), c(4,999, 999, 999))
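The indicator-to-index mapping shown above can be sketched as follows (a hedged Python illustration; patmap here is a stand-in for the internal helper, and the 999 padding value is taken from the example above):

```python
def patmap(patterns, pad=999):
    """Convert indicator strings like "1101" into index vectors.

    Each "1" becomes its 1-based position; the result is right-padded
    with `pad` up to the pattern length.
    """
    out = []
    for p in patterns:
        idx = [i + 1 for i, ch in enumerate(p) if ch == "1"]
        out.append(idx + [pad] * (len(p) - len(idx)))
    return out

print(patmap(["1101", "0001"]))  # [[1, 2, 4, 999], [4, 999, 999, 999]]
```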

- + - + + + - - diff --git a/main/reference/as_mmrm_df.html b/main/reference/as_mmrm_df.html index b6c7855e..7491f560 100644 --- a/main/reference/as_mmrm_df.html +++ b/main/reference/as_mmrm_df.html @@ -1,16 +1,5 @@ - - - - - - -Creates a "MMRM" ready dataset — as_mmrm_df • rbmi - - - - - -Creates a "MMRM" ready dataset — as_mmrm_df • rbmi - - - - +"> Skip to contents @@ -46,38 +31,21 @@ + @@ -91,61 +59,48 @@

Converts a design matrix + key variables into a common format -In particular this function does the following:

-
    -
  • Renames all covariates as V1, V2, etc to avoid issues of special characters in variable names

  • +In particular this function does the following:

    • Renames all covariates as V1, V2, etc to avoid issues of special characters in variable names

    • Ensures all key variables are of the right type

    • Inserts the outcome, visit and subjid variables into the data.frame naming them as outcome, visit and subjid

    • If provided will also insert the group variable into the data.frame named as group

    • -
    -
+
-

Usage -

+

Usage

as_mmrm_df(designmat, outcome, visit, subjid, group = NULL)
-

Arguments -

+

Arguments

-
-
designmat -
+
designmat

a data.frame or matrix containing the covariates to use in the MMRM model. Dummy variables must already be expanded out, i.e. via stats::model.matrix(). Cannot contain any missing values

-
outcome -
+
outcome

a numeric vector. The outcome value to be regressed on in the MMRM model.

-
visit -
+
visit

a character / factor vector. Indicates which visit the outcome value occurred on.

-
subjid -
+
subjid

a character / factor vector. The subject identifier used to link separate visits that belong to the same subject.

-
group -
+
group

a character / factor vector. Indicates which treatment group the patient belongs to.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_mmrm_formula.html b/main/reference/as_mmrm_formula.html index c08eb71e..6cd5ede9 100644 --- a/main/reference/as_mmrm_formula.html +++ b/main/reference/as_mmrm_formula.html @@ -1,22 +1,7 @@ - - - - - - -Create MMRM formula — as_mmrm_formula • rbmi - - - - - - - - - - + +Create MMRM formula — as_mmrm_formula • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,42 +49,30 @@
-

Usage -

+

Usage

as_mmrm_formula(mmrm_df, cov_struct)
-

Arguments -

+

Arguments

-
-
mmrm_df -
+
mmrm_df

an mmrm data.frame as created by as_mmrm_df()

-
cov_struct -
+
cov_struct

Character - The covariance structure to be used, must be one of "us" (default), "ad", "adh", "ar1", "ar1h", "cs", "csh", "toep", or "toeph")

-
-
+
-

Details -

-

-
-
outcome ~ 0 + V1 + V2 + V4 + ... + us(visit | group / subjid)
-

-
+

Details

+

outcome ~ 0 + V1 + V2 + V4 + ... + us(visit | group / subjid)
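Assembling that formula string from the renamed covariate columns and a covariance structure keyword can be sketched as (illustrative Python only; this mimics the string shown above, not rbmi's actual R implementation, and "V1", "V2", "V4" are example column names):

```python
def as_mmrm_formula(covariate_names, cov_struct="us"):
    """Build the mmrm formula string: no intercept (0), the renamed
    covariates, then the covariance term for visit within group/subject."""
    rhs = " + ".join(["0"] + covariate_names)
    return f"outcome ~ {rhs} + {cov_struct}(visit | group / subjid)"

print(as_mmrm_formula(["V1", "V2", "V4"]))
# outcome ~ 0 + V1 + V2 + V4 + us(visit | group / subjid)
```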

- + - + + + - - diff --git a/main/reference/as_model_df.html b/main/reference/as_model_df.html index 2a95b86f..d230bf3e 100644 --- a/main/reference/as_model_df.html +++ b/main/reference/as_model_df.html @@ -1,24 +1,9 @@ - - - - - - -Expand data.frame into a design matrix — as_model_df • rbmi - - - - - -Expand data.frame into a design matrix — as_model_df • rbmi - - - - +the first column of the return object."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,38 +52,30 @@
-

Usage -

+

Usage

as_model_df(dat, frm)
-

Arguments -

+

Arguments

-
-
dat -
+
dat

a data.frame

-
frm -
+
frm

a formula

-
-
+
-

Details -

+

Details

The outcome column may contain NA's but none of the other variables listed in the formula should contain missing values

- + - + + + - - diff --git a/main/reference/as_simple_formula.html b/main/reference/as_simple_formula.html index d9d80ae5..900915b7 100644 --- a/main/reference/as_simple_formula.html +++ b/main/reference/as_simple_formula.html @@ -1,20 +1,5 @@ - - - - - - -Creates a simple formula object from a string — as_simple_formula • rbmi - - - - - - - - - - + +Creates a simple formula object from a string — as_simple_formula • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,37 +46,29 @@
-

Usage -

+

Usage

as_simple_formula(outcome, covars)
-

Arguments -

+

Arguments

-
-
outcome -
+
outcome

character (length 1 vector). Name of the outcome variable

-
covars -
+
covars

character (vector). Name of covariates

-
-
+
-

Value -

+

Value

A formula

- +
-
- + + + - - diff --git a/main/reference/as_stan_array.html b/main/reference/as_stan_array.html index 6ff0c7e3..7e9cfdca 100644 --- a/main/reference/as_stan_array.html +++ b/main/reference/as_stan_array.html @@ -1,24 +1,9 @@ - - - - - - -As array — as_stan_array • rbmi - - - - - -As array — as_stan_array • rbmi - - - - +are provided by R for stan::vector inputs"> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,27 +52,21 @@
-

Usage -

+

Usage

as_stan_array(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

a numeric vector

-
-
+ - +
-
- + + + - - diff --git a/main/reference/as_strata.html b/main/reference/as_strata.html index c06ea766..e81ed0b2 100644 --- a/main/reference/as_strata.html +++ b/main/reference/as_strata.html @@ -1,32 +1,17 @@ - - - - - - -Create vector of Stratas — as_strata • rbmi - - - - - -Create vector of Stratas — as_strata • rbmi - - - - +"> Skip to contents @@ -42,38 +27,21 @@ + @@ -88,49 +56,34 @@

Collapse multiple categorical variables into distinct unique categories. e.g.

-

-
-
as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5))
-

-
+

as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5))

would return

-

-
-
c(1,2,3,3,4,1)
-

-
+

c(1,2,3,3,4,1)

-

Usage -

+

Usage

as_strata(...)
-

Arguments -

+

Arguments

-
-
... -
+
...

numeric/character/factor vectors of the same length

-
-
+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5))
 } # }
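The collapsing rule can be sketched outside R as well (a hedged Python illustration; strata are numbered in order of first appearance, matching the example above):

```python
def as_strata(*vectors):
    """Collapse parallel categorical vectors into distinct strata:
    each unique combination of values gets a number, assigned in
    order of first appearance."""
    seen = {}
    strata = []
    for combo in zip(*vectors):
        if combo not in seen:
            seen[combo] = len(seen) + 1
        strata.append(seen[combo])
    return strata

print(as_strata([1, 1, 2, 2, 2, 1], [5, 6, 5, 5, 6, 5]))  # [1, 2, 3, 3, 4, 1]
```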
 
- +
-
- + + + - - diff --git a/main/reference/assert_variables_exist.html b/main/reference/assert_variables_exist.html index 7ef788d3..15138d0a 100644 --- a/main/reference/assert_variables_exist.html +++ b/main/reference/assert_variables_exist.html @@ -1,20 +1,5 @@ - - - - - - -Assert that all variables exist within a dataset — assert_variables_exist • rbmi - - - - - - - - - - + +Assert that all variables exist within a dataset — assert_variables_exist • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,32 +46,25 @@
-

Usage -

+

Usage

assert_variables_exist(data, vars)
-

Arguments -

+

Arguments

-
-
data -
+
data

a data.frame

-
vars -
+
vars

a character vector of variable names

-
-
+ - +
-
- + + + - - diff --git a/main/reference/char2fct.html b/main/reference/char2fct.html index a0c6907a..1bd19bf6 100644 --- a/main/reference/char2fct.html +++ b/main/reference/char2fct.html @@ -1,24 +1,9 @@ - - - - - - -Convert character variables to factor — char2fct • rbmi - - - - - -Convert character variables to factor — char2fct • rbmi - - - - +factor variables"> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,32 +52,25 @@
-

Usage -

+

Usage

char2fct(data, vars = NULL)
-

Arguments -

+

Arguments

-
-
data -
+
data

A data.frame

-
vars -
+
vars

a character vector of variables in data

-
-
+ - +
-
- + + + - - diff --git a/main/reference/check_ESS.html b/main/reference/check_ESS.html index ca5f4d4c..ac98de94 100644 --- a/main/reference/check_ESS.html +++ b/main/reference/check_ESS.html @@ -1,22 +1,7 @@ - - - - - - -Diagnostics of the MCMC based on ESS — check_ESS • rbmi - - - - - - - - - - + +Diagnostics of the MCMC based on ESS — check_ESS • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,55 +49,42 @@
-

Usage -

+

Usage

check_ESS(stan_fit, n_draws, threshold_lowESS = 0.4)
-

Arguments -

+

Arguments

-
-
stan_fit -
+
stan_fit

A stanfit object.

-
n_draws -
+
n_draws

Number of MCMC draws.

-
threshold_lowESS -
+
threshold_lowESS

A number in [0,1] indicating the minimum acceptable value of the relative ESS. See details.

-
-
+
-

Value -

+

Value

A warning message in case of detected problems.

-

Details -

-

check_ESS() works as follows:

-
    -
  1. Extract the ESS from stan_fit for each parameter of the model.

  2. +

    Details

    +

    check_ESS() works as follows:

    1. Extract the ESS from stan_fit for each parameter of the model.

    2. Compute the relative ESS (i.e. the ESS divided by the number of draws).

    3. Check whether for any of the parameters the relative ESS is lower than the threshold. If for at least one parameter the relative ESS is below the threshold, a warning is thrown.

    4. -
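The steps above can be sketched as follows (illustrative Python; the function name, inputs, and return value are assumptions — rbmi raises an R warning rather than returning the offending parameters):

```python
def check_ess(ess_per_param, n_draws, threshold_low_ess=0.4):
    """Return parameters whose relative ESS (ESS / n_draws) falls below
    the threshold; a warning would be raised when this is non-empty."""
    rel = {p: e / n_draws for p, e in ess_per_param.items()}
    return {p: r for p, r in rel.items() if r < threshold_low_ess}

print(check_ess({"beta[1]": 900, "sigma[1]": 150}, n_draws=1000))
# {'sigma[1]': 0.15}
```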
    -
+ - + - + + + - - diff --git a/main/reference/check_hmc_diagn.html b/main/reference/check_hmc_diagn.html index c8411684..35a4b7d0 100644 --- a/main/reference/check_hmc_diagn.html +++ b/main/reference/check_hmc_diagn.html @@ -1,32 +1,17 @@ - - - - - - -Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn • rbmi - - - - - -Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn • rbmi - - - - +Please see rstan::check_hmc_diagnostics() for details."> Skip to contents @@ -42,38 +27,21 @@ + @@ -86,42 +54,32 @@
-

Check that:

-
    -
  1. There are no divergent iterations.

  2. +

    Check that:

    1. There are no divergent iterations.

    2. The Bayesian Fraction of Missing Information (BFMI) is sufficiently high (a low BFMI indicates problematic sampling).

    3. The number of iterations that saturated the max treedepth is zero.

    4. -
    -

    Please see rstan::check_hmc_diagnostics() for details.

    +

Please see rstan::check_hmc_diagnostics() for details.

-

Usage -

+

Usage

check_hmc_diagn(stan_fit)
-

Arguments -

+

Arguments

-
-
stan_fit -
+
stan_fit

A stanfit object.

-
-
+
-

Value -

+

Value

A warning message in case of detected problems.

- +
-
- + + + - - diff --git a/main/reference/check_mcmc.html b/main/reference/check_mcmc.html index 0ff35d94..d87dae00 100644 --- a/main/reference/check_mcmc.html +++ b/main/reference/check_mcmc.html @@ -1,20 +1,5 @@ - - - - - - -Diagnostics of the MCMC — check_mcmc • rbmi - - - - - - - - - - + +Diagnostics of the MCMC — check_mcmc • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,49 +46,39 @@
-

Usage -

+

Usage

check_mcmc(stan_fit, n_draws, threshold_lowESS = 0.4)
-

Arguments -

+

Arguments

-
-
stan_fit -
+
stan_fit

A stanfit object.

-
n_draws -
+
n_draws

Number of MCMC draws.

-
threshold_lowESS -
+
threshold_lowESS

A number in [0,1] indicating the minimum acceptable value of the relative ESS. See details.

-
-
+
-

Value -

+

Value

A warning message in case of detected problems.

-

Details -

+

Details

Performs checks of the quality of the MCMC. See check_ESS() and check_hmc_diagn() for details.

- + - + + + - - diff --git a/main/reference/compute_sigma.html b/main/reference/compute_sigma.html index bd14936c..90c7c8d1 100644 --- a/main/reference/compute_sigma.html +++ b/main/reference/compute_sigma.html @@ -1,26 +1,11 @@ - - - - - - -Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma • rbmi - - - - - -Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma • rbmi - - - - +et al. (2013)"> Skip to contents @@ -36,38 +21,21 @@ + @@ -87,40 +55,32 @@
-

Usage -

+

Usage

compute_sigma(sigma_group, sigma_ref, index_mar)
-

Arguments -

+

Arguments

-
-
sigma_group -
+
sigma_group

the covariance matrix with dimensions equal to index_mar for the subject's original group

-
sigma_ref -
+
sigma_ref

the covariance matrix with dimensions equal to index_mar for the subject's reference group

-
index_mar -
+
index_mar

A logical vector indicating which visits meet the MAR assumption for the subject; i.e. the FALSE entries identify the observations that occur after a non-MAR intercurrent event (ICE).

-
-
+
-

References -

+

References

Carpenter, James R., James H. Roger, and Michael G. Kenward. "Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation." Journal of Biopharmaceutical statistics 23.6 (2013): @@ -128,8 +88,7 @@

References -

+ - + + + - - diff --git a/main/reference/convert_to_imputation_list_df.html b/main/reference/convert_to_imputation_list_df.html index 220388a6..16c6f1e8 100644 --- a/main/reference/convert_to_imputation_list_df.html +++ b/main/reference/convert_to_imputation_list_df.html @@ -1,22 +1,7 @@ - - - - - - -Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df • rbmi - - - - - - - - - - + +Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,26 +49,20 @@
-

Usage -

+

Usage

convert_to_imputation_list_df(imputes, sample_ids)
-

Arguments -

+

Arguments

-
-
imputes -
+
imputes

a list of imputation_list_single() objects

-
sample_ids -
-
-

A list with 1 element per required imputation_df. Each element +

sample_ids
+

A list with 1 element per required imputation_df. Each element must contain a vector of "ID"'s which correspond to the imputation_single() ID's that are required for that dataset. The total number of ID's must be equal to the total number of rows within all of imputes$imputations

@@ -113,9 +75,7 @@

Arguments -
imputes = list(
+

imputes = list(
     imputation_list_single(
         id = "Tom",
         imputations = matrix(
@@ -135,13 +95,9 @@ 

Argumentssample_ids <- list( c("Tom", "Harry", "Tom"), c("Tom") -)

-

-
+)

Then convert_to_imputation_df(imputes, sample_ids) would result in:

-

-
-
imputation_list_df(
+

imputation_list_df(
     imputation_df(
         imputation_single_t_1_1,
         imputation_single_h_1_1,
@@ -158,19 +114,14 @@ 

Arguments imputation_df( imputation_single_t_3_2 ) -)

-

-
+)

Note that the different repetitions (i.e. the value set for D) are grouped together -sequentially.
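The regrouping described above can be sketched as follows (a hedged Python illustration, not rbmi's implementation; each subject holds an ordered stack of imputed values, and sample_ids says which subject rows make up each imputed dataset, consumed in order — which is why repetitions group sequentially):

```python
def convert_to_imputation_list_df(imputes, sample_ids):
    """Regroup per-subject imputations into per-dataset lists."""
    # copy each subject's stack of imputed values so pops don't mutate input
    stacks = {sid: list(vals) for sid, vals in imputes.items()}
    datasets = []
    for ids in sample_ids:
        # take the next unused imputation for each requested subject, in order
        datasets.append([(sid, stacks[sid].pop(0)) for sid in ids])
    return datasets

imputes = {"Tom": [1.2, 1.3, 1.4], "Harry": [2.1]}
out = convert_to_imputation_list_df(imputes, [["Tom", "Harry", "Tom"], ["Tom"]])
print(out)  # [[('Tom', 1.2), ('Harry', 2.1), ('Tom', 1.3)], [('Tom', 1.4)]]
```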

- +sequentially.

- - + - +
-
- + + + - - diff --git a/main/reference/d_lagscale.html b/main/reference/d_lagscale.html index e8fbba4b..4413b810 100644 --- a/main/reference/d_lagscale.html +++ b/main/reference/d_lagscale.html @@ -1,22 +1,7 @@ - - - - - - -Calculate delta from a lagged scale coefficient — d_lagscale • rbmi - - - - - - - - - - + +Calculate delta from a lagged scale coefficient — d_lagscale • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,46 +49,37 @@
-

Usage -

+

Usage

d_lagscale(delta, dlag, is_post_ice)
-

Arguments -

+

Arguments

-
-
delta -
+
delta

a numeric vector. Determines the baseline amount of delta to be applied to each visit.

-
dlag -
+
dlag

a numeric vector. Determines the scaling to be applied to delta based upon which visit the ICE occurred on. Must be the same length as delta.

-
is_post_ice -
+
is_post_ice

logical vector. Indicates whether a visit is "post-ICE" or not.

-
-
+
-

Details -

+

Details

See delta_template() for full details on how this calculation is performed.

- + - + + + - - diff --git a/main/reference/delta_template.html b/main/reference/delta_template.html index 4fdfedaf..39e86f6a 100644 --- a/main/reference/delta_template.html +++ b/main/reference/delta_template.html @@ -1,22 +1,7 @@ - - - - - - -Create a delta data.frame template — delta_template • rbmi - - - - - - - - - - + +Create a delta data.frame template — delta_template • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,49 +49,40 @@
-

Usage -

+

Usage

delta_template(imputations, delta = NULL, dlag = NULL, missing_only = TRUE)
-

Arguments -

+

Arguments

-
-
imputations -
+
imputations

an imputation object as created by impute().

-
delta -
+
delta

NULL or a numeric vector. Determines the baseline amount of delta to be applied to each visit. See details. If a numeric vector it must have the same length as the number of unique visits in the original dataset.

-
dlag -
+
dlag

NULL or a numeric vector. Determines the scaling to be applied to delta based upon which visit the ICE occurred on. See details. If a numeric vector it must have the same length as the number of unique visits in the original dataset.

-
missing_only -
+
missing_only

Logical, if TRUE then non-missing post-ICE data will have a delta value of 0 assigned. Note that the calculation (as described in the details section) is performed first and then overwritten with 0's at the end (i.e. the delta values for missing post-ICE visits will stay the same regardless of this option).

-
-
+
-

Details -

+

Details

To apply a delta adjustment the analyse() function expects a delta data.frame with 3 variables: vars$subjid, vars$visit and delta (where vars is the object supplied in the original call to draws() @@ -139,49 +98,37 @@

DetailsLet delta = c(5,6,7,8) and dlag=c(1,2,3,4) (i.e. assuming there are 4 visits) and let's say that the subject had an ICE on visit 2. The calculation would then be as follows:

-

-
-
v1  v2  v3  v4
+

v1  v2  v3  v4
 --------------
  5   6   7   8  # delta assigned to each visit
  0   1   2   3  # lagged scaling starting from the first visit after the subjects ICE
 --------------
  0   6  14  24  # delta * lagged scaling
 --------------
- 0   6  20  44  # accumulative sum of delta to be applied to each visit
-

-
+ 0 6 20 44 # accumulative sum of delta to be applied to each visit

That is to say, the subject would have a delta offset of 0 applied for visit-1, 6 for visit-2, 20 for visit-3 and 44 for visit-4. As a comparison, let's say that the subject instead had their ICE on visit 3; the calculation would then be as follows:

-

-
-
v1  v2  v3  v4
+

v1  v2  v3  v4
 --------------
  5   6   7   8  # delta assigned to each visit
  0   0   1   2  # lagged scaling starting from the first visit after the subjects ICE
 --------------
  0   0   7  16  # delta * lagged scaling
 --------------
- 0   0   7  23  # accumulative sum of delta to be applied to each visit
-

-
+ 0   0   7  23  # cumulative sum of delta to be applied to each visit

In terms of practical usage, let's say that you wanted a delta of 5 to be used for all post-ICE visits regardless of their proximity to the ICE visit. This can be achieved by setting delta = c(5,5,5,5) and dlag = c(1,0,0,0). For example let's say a subject had their ICE on visit-1, then the calculation would be as follows:

-

-
-
v1  v2  v3  v4
+

v1  v2  v3  v4
 --------------
  5   5   5   5  # delta assigned to each visit
  1   0   0   0  # lagged scaling starting from the first visit after the subjects ICE
 --------------
  5   0   0  0  # delta * lagged scaling
 --------------
- 5   5   5  5  # accumulative sum of delta to be applied to each visit
-

-
+ 5   5   5  5  # cumulative sum of delta to be applied to each visit

Another way of using these arguments is to set delta to be the difference in time between visits and dlag to be the amount of delta per unit of time. For example let's say that we have a visit on weeks @@ -189,49 +136,39 @@

Detailsdelta = c(0,4,1,3) (the difference in weeks between each visit) and dlag = c(3, 3, 3, 3). For example let's say we have a subject who had their ICE on week-5 (i.e. visit-2), then the calculation would be:

-

-
-
v1  v2  v3  v4
+

v1  v2  v3  v4
 --------------
  0   4   1   3  # delta assigned to each visit
  0   0   3   3  # lagged scaling starting from the first visit after the subjects ICE
 --------------
  0   0   3   9  # delta * lagged scaling
 --------------
- 0   0   3  12  # accumulative sum of delta to be applied to each visit
-

-
+ 0   0   3  12  # cumulative sum of delta to be applied to each visit

i.e. on week-6 (1 week after the ICE) they have a delta of 3 and on week-9 (4 weeks after the ICE) they have a delta of 12.

Please note that this function also returns several utility variables so that the user can create their own custom logic for defining what delta -should be set to. These additional variables include:

-
    -
  • is_mar - If the observation was missing would it be regarded as MAR? This variable +should be set to. These additional variables include:

    • is_mar - If the observation was missing, would it be regarded as MAR? This variable is set to FALSE for observations that occurred after a non-MAR ICE; otherwise it is set to TRUE.

    • is_missing - Is the outcome variable for this observation missing.

    • is_post_ice - Does the observation occur after the patient's ICE as defined by the data_ice dataset supplied to draws().

    • strategy - What imputation strategy was assigned to this subject.

    • -
    -

    The design and implementation of this function is largely based upon the same functionality +

The design and implementation of this function is largely based upon the same functionality as implemented in the so-called "five macros" by James Roger. See Roger (2021).

-

References -

+

References

Roger, James. Reference-based mi via multivariate normal rm (the “five macros” and miwithd), 2021. URL https://www.lshtm.ac.uk/research/centres-projects-groups/missing-data#dia-missing-data.

-

See also -

+

See also

-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 delta_template(imputeObj)
 delta_template(imputeObj, delta = c(5,6,7,8), dlag = c(1,2,3,4))
@@ -239,8 +176,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/draws.html b/main/reference/draws.html index 7a0ad3ae..243320ae 100644 --- a/main/reference/draws.html +++ b/main/reference/draws.html @@ -1,16 +1,5 @@ - - - - - - -Fit the base imputation model and get parameter estimates — draws • rbmi - - - - - -Fit the base imputation model and get parameter estimates — draws • rbmi - - - - +In any case the covariance matrix can be assumed to be the same or different across each group."> Skip to contents @@ -60,38 +45,21 @@ + @@ -123,8 +91,7 @@
-

Usage -

+

Usage

draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE)
 
 # S3 method for class 'approxbayes'
@@ -141,84 +108,64 @@ 

Usage

-

Arguments -

+

Arguments

-
-
data -
+
data

A data.frame containing the data to be used in the model. See details.

-
data_ice -
+
data_ice

A data.frame that specifies the information related to the ICEs and the imputation strategies. See details.

-
vars -
+
vars

A vars object as generated by set_vars(). See details.

-
method -
+
method

A method object as generated by either method_bayes(), method_approxbayes(), method_condmean() or method_bmlmi(). It specifies the multiple imputation methodology to be used. See details.

-
ncores -
+
ncores

A single numeric specifying the number of cores to use in creating the draws object. Note that this parameter is ignored for method_bayes() (Default = 1). Can also be a cluster object generated by make_rbmi_cluster()

-
quiet -
+
quiet

Logical, if TRUE will suppress printing of progress information that is printed to the console.

-
-
+
-

Value -

-

A draws object which is a named list containing the following:

-
    -
  • data: R6 longdata object containing all relevant input data information.

  • +

    Value

    +

    A draws object which is a named list containing the following:

    • data: R6 longdata object containing all relevant input data information.

    • method: A method object as generated by either method_bayes(), method_approxbayes() or method_condmean().

    • -
    • -

      samples: list containing the estimated parameters of interest. -Each element of samples is a named list containing the following:

      -
        -
      • ids: vector of characters containing the ids of the subjects included in the original dataset.

      • +
      • samples: list containing the estimated parameters of interest. +Each element of samples is a named list containing the following:

        • ids: vector of characters containing the ids of the subjects included in the original dataset.

        • beta: numeric vector of estimated regression coefficients.

        • sigma: list of estimated covariance matrices (one for each level of vars$group).

        • theta: numeric vector of transformed covariances.

        • failed: Logical. TRUE if the model fit failed.

        • ids_samp: vector of characters containing the ids of the subjects included in the given sample.

        • -
        -
      • +
    • fit: if method_bayes() is chosen, returns the MCMC Stan fit object. Otherwise NULL.

    • n_failures: absolute number of failures of the model fit. Relevant only for method_condmean(type = "bootstrap"), method_approxbayes() and method_bmlmi().

    • formula: fixed effects formula object used for the model specification.

    • -
    -
+
-

Details -

+

Details

draws performs the first step of the multiple imputation (MI) procedure: fitting the base imputation model. The goal is to estimate the parameters of interest needed for the imputation phase (i.e. the regression coefficients and the covariance matrices from a MMRM model).

-

The function distinguishes between the following methods:

-

Bayesian MI based on MCMC sampling has been proposed in Carpenter, Roger, and Kenward (2013) who first introduced reference-based imputation methods. Approximate Bayesian MI is discussed in Little and Rubin (2002). Conditional mean imputation methods are discussed in Wolbers et al (2022). Bootstrapped Maximum Likelihood MI is described in Von Hippel & Bartlett (2021).

-

The argument data contains the longitudinal data. It must have at least the following variables:

-
    -
  • subjid: a factor vector containing the subject ids.

  • +

    The argument data contains the longitudinal data. It must have at least the following variables:

    • subjid: a factor vector containing the subject ids.

    • visit: a factor vector containing the visit the outcome was observed on.

    • group: a factor vector containing the group that the subject belongs to.

    • outcome: a numeric vector containing the outcome variable. It might contain missing values. Additional baseline or time-varying covariates must be included in data.

    • -
    -

    data must have one row per visit per subject. This means that incomplete +

data must have one row per visit per subject. This means that incomplete outcome data must be set as NA instead of having the related row missing. Missing values in the covariates are not allowed. If data is incomplete @@ -268,21 +211,16 @@

Detailsdraws().

The argument data_ice contains information about the occurrence of ICEs. It is a -data.frame with 3 columns:

-

The data_ice argument is necessary at this stage since (as explained in Wolbers et al (2022)), the model is fitted after removing the observations which are incompatible with the imputation model, i.e. any observed data on or after data_ice[[vars$visit]] that are addressed with an imputation strategy different from MAR are excluded for the model fit. However such observations @@ -307,9 +243,7 @@

Detailsimpute(); this means that subjects who didn't have a record in data_ice will always have their missing data imputed under the MAR assumption even if their strategy is updated.

The vars argument is a named list that specifies the names of key variables within -data and data_ice. This list is created by set_vars() and contains the following named elements:

-
    -
  • subjid: name of the column in data and data_ice which contains the subject ids variable.

  • +data and data_ice. This list is created by set_vars() and contains the following named elements:

    -

    In our experience, Bayesian MI (method = method_bayes()) with a relatively low number of +

In our experience, Bayesian MI (method = method_bayes()) with a relatively low number of samples (e.g. n_samples below 100) frequently triggers STAN warnings about R-hat such as "The largest R-hat is X.XX, indicating chains have not mixed". In many instances, this warning might be spurious, i.e. standard diagnostic analyses of the MCMC samples do not indicate any

Details

-

References -

+

References

James R Carpenter, James H Roger, and Michael G Kenward. Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation. Journal of Biopharmaceutical Statistics, 23(6):1352–1371, 2013.

@@ -349,20 +281,16 @@

References -

See also -

-
-

method_bayes(), method_approxbayes(), method_condmean(), method_bmlmi() for setting method.

+

See also

+

method_bayes(), method_approxbayes(), method_condmean(), method_bmlmi() for setting method.

set_vars() for setting vars.

expand_locf() for expanding data in case of missing rows.

For more details see the quickstart vignette: -vignette("quickstart", package = "rbmi").

-
+vignette("quickstart", package = "rbmi").

- + - + + + - - diff --git a/main/reference/ensure_rstan.html b/main/reference/ensure_rstan.html index cf7c10a8..81414f3d 100644 --- a/main/reference/ensure_rstan.html +++ b/main/reference/ensure_rstan.html @@ -1,20 +1,5 @@ - - - - - - -Ensure rstan exists — ensure_rstan • rbmi - - - - - - - - - - + +Ensure rstan exists — ensure_rstan • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,14 +46,12 @@
-

Usage -

+

Usage

ensure_rstan()
- - + - + + + - - diff --git a/main/reference/eval_mmrm.html b/main/reference/eval_mmrm.html index d79c8c54..b31d7f00 100644 --- a/main/reference/eval_mmrm.html +++ b/main/reference/eval_mmrm.html @@ -1,30 +1,15 @@ - - - - - - -Evaluate a call to mmrm — eval_mmrm • rbmi - - - - - -Evaluate a call to mmrm — eval_mmrm • rbmi - - - - +without the program exiting."> Skip to contents @@ -40,38 +25,21 @@ + @@ -93,40 +61,32 @@
-

Usage -

+

Usage

eval_mmrm(expr)
-

Arguments -

+

Arguments

-
-
expr -
+
expr

An expression to be evaluated. Should be a call to mmrm::mmrm().

-
-
+
-

Details -

+

Details

This function was originally developed for use with glmmTMB, which needed more hand-holding and dropping of false-positive warnings. It is not as important now but is kept around in case we need to catch false-positive warnings again in the future.

-

See also -

+

See also

-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 eval_mmrm({
     mmrm::mmrm(formula, data)
@@ -135,8 +95,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/expand.html b/main/reference/expand.html index 11171fb3..ac594e97 100644 --- a/main/reference/expand.html +++ b/main/reference/expand.html @@ -1,24 +1,9 @@ - - - - - - -Expand and fill in missing data.frame rows — expand • rbmi - - - - - -Expand and fill in missing data.frame rows — expand • rbmi - - - - +covariate values of newly created rows."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,8 +52,7 @@
-

Usage -

+

Usage

expand(data, ...)
 
 fill_locf(data, vars, group = NULL, order = NULL)
@@ -94,43 +61,34 @@ 

Usage

-

Arguments -

+

Arguments

-
-
data -
+
data

dataset to expand or fill in.

-
... -
+
...

variables and the levels that should be expanded out (note that duplicate entries of levels will result in multiple rows for that level).

-
vars -
+
vars

character vector containing the names of variables that need to be filled in.

-
group -
+
group

character vector containing the names of variables to group by when performing LOCF imputation of var.

-
order -
+
order

character vector containing the names of additional variables to sort the data.frame by before performing LOCF.

-
-
+
-

Details -

+

Details

The draws() function makes the assumption that all subjects and visits are present in the data.frame and that all covariate values are non missing; expand(), fill_locf() and expand_locf() are utility functions to support users in ensuring @@ -144,10 +102,8 @@

Detailsc(group, order) before performing the LOCF imputation; the data.frame will be returned in the original sort order however.

expand_locf() is a simple composition function of fill_locf() and expand(), i.e. -fill_locf(expand(...)).

-
-

Missing First Values -

+fill_locf(expand(...)).

+

Missing First Values

The fill_locf() function performs last observation carried forward imputation. @@ -158,9 +114,7 @@

Missing First Values -
library(dplyr)
+

library(dplyr)
 
 dat_expanded <- expand(
     data = dat,
@@ -169,16 +123,13 @@ 

Missing First Values) dat_filled <- dat_expanded %>% - left_join(baseline_covariates, by = "subject")

-

-
+ left_join(baseline_covariates, by = "subject")

-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 dat_expanded <- expand(
     data = dat,
@@ -207,8 +158,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/extract_covariates.html b/main/reference/extract_covariates.html index 3f92300c..d70dd6c4 100644 --- a/main/reference/extract_covariates.html +++ b/main/reference/extract_covariates.html @@ -1,22 +1,7 @@ - - - - - - -Extract Variables from string vector — extract_covariates • rbmi - - - - - - - - - - + +Extract Variables from string vector — extract_covariates • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,32 +49,25 @@
-

Usage -

+

Usage

extract_covariates(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

string of variable names potentially including interaction terms

-
-
+
-

Details -

+

Details

i.e. c("v1", "v2", "v2*v3", "v1:v2") becomes c("v1", "v2", "v3")

- + - + + + - - diff --git a/main/reference/extract_data_nmar_as_na.html b/main/reference/extract_data_nmar_as_na.html index d5f4f048..f5b64dfd 100644 --- a/main/reference/extract_data_nmar_as_na.html +++ b/main/reference/extract_data_nmar_as_na.html @@ -1,22 +1,7 @@ - - - - - - -Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na • rbmi - - - - - - - - - - + +Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,33 +49,26 @@
-

Usage -

+

Usage

extract_data_nmar_as_na(longdata)
-

Arguments -

+

Arguments

-
-
longdata -
+
longdata

R6 longdata object containing all relevant input data information.

-
-
+
-

Value -

+

Value

A data.frame containing longdata$get_data(longdata$ids), but MNAR outcome values are set to NA.

- +
-
- + + + - - diff --git a/main/reference/extract_draws.html b/main/reference/extract_draws.html index 99a1cc93..8e0bcaa3 100644 --- a/main/reference/extract_draws.html +++ b/main/reference/extract_draws.html @@ -1,26 +1,11 @@ - - - - - - -Extract draws from a stanfit object — extract_draws • rbmi - - - - - -Extract draws from a stanfit object — extract_draws • rbmi - - - - +and then convert the arrays into lists."> Skip to contents @@ -36,38 +21,21 @@ + @@ -87,40 +55,30 @@
-

Usage -

+

Usage

extract_draws(stan_fit)
-

Arguments -

+

Arguments

-
-
stan_fit -
+
stan_fit

A stanfit object.

-
-
+
-

Value -

-

A named list of length 2 containing:

-
    -
  • beta: a list of length equal to the number of draws containing +

    Value

    +

    A named list of length 2 containing:

    • beta: a list of length equal to the number of draws containing the draws from the posterior distribution of the regression coefficients.

    • sigma: a list of length equal to the number of draws containing the draws from the posterior distribution of the covariance matrices. Each element of the list is a list with length equal to 1 if same_cov = TRUE or equal to the number of groups if same_cov = FALSE.

    • -
    -
+ - +
-
- + + + - - diff --git a/main/reference/extract_imputed_df.html b/main/reference/extract_imputed_df.html index 0e1c9a9e..45e3e12c 100644 --- a/main/reference/extract_imputed_df.html +++ b/main/reference/extract_imputed_df.html @@ -1,32 +1,17 @@ - - - - - - -Extract imputed dataset — extract_imputed_df • rbmi - - - - - -Extract imputed dataset — extract_imputed_df • rbmi - - - - +values."> Skip to contents @@ -42,38 +27,21 @@ + @@ -96,49 +64,39 @@
-

Usage -

+

Usage

extract_imputed_df(imputation, ld, delta = NULL, idmap = FALSE)
-

Arguments -

+

Arguments

-
-
imputation -
+
imputation

An imputation object as generated by imputation_df().

-
ld -
+
ld

A longdata object as generated by longDataConstructor().

-
delta -
+
delta

Either NULL or a data.frame. Is used to offset outcome values in the imputed dataset.

-
idmap -
+
idmap

Logical. If TRUE an attribute called "idmap" is attached to the return object which contains a list that maps the old subject ids to the new subject ids.

-
-
+
-

Value -

+

Value

A data.frame.

- +
-
- + + + - - diff --git a/main/reference/extract_imputed_dfs.html b/main/reference/extract_imputed_dfs.html index fa8a0435..ad55896c 100644 --- a/main/reference/extract_imputed_dfs.html +++ b/main/reference/extract_imputed_dfs.html @@ -1,22 +1,7 @@ - - - - - - -Extract imputed datasets — extract_imputed_dfs • rbmi - - - - - - - - - - + +Extract imputed datasets — extract_imputed_dfs • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

extract_imputed_dfs(
   imputations,
   index = seq_along(imputations$imputations),
@@ -92,55 +59,43 @@ 

Usage

-

Arguments -

+

Arguments

-
-
imputations -
+
imputations

An imputations object as created by impute().

-
index -
+
index

The indexes of the imputed datasets to return. By default, all datasets within the imputations object will be returned.

-
delta -
+
delta

A data.frame containing the delta transformation to be applied to the imputed dataset. See analyse() for details on the format and specification of this data.frame.

-
idmap -
+
idmap

Logical. The subject IDs in the imputed data.frame's are replaced with new IDs to ensure they are unique. Setting this argument to TRUE attaches an attribute, called idmap, to the returned data.frame's that will provide a map from the new subject IDs to the old subject IDs.

-
-
+
-

Value -

+

Value

A list of data.frames equal in length to the index argument.

-

See also -

-
-

delta_template() for creating delta data.frames.

-

analyse().

-
+

See also

+

delta_template() for creating delta data.frames.

+

analyse().

-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 extract_imputed_dfs(imputeObj)
 extract_imputed_dfs(imputeObj, c(1:3))
@@ -148,8 +103,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/extract_params.html b/main/reference/extract_params.html index 76cb6ac7..94fe87d1 100644 --- a/main/reference/extract_params.html +++ b/main/reference/extract_params.html @@ -1,22 +1,7 @@ - - - - - - -Extract parameters from a MMRM model — extract_params • rbmi - - - - - - - - - - + +Extract parameters from a MMRM model — extract_params • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,27 +49,21 @@
-

Usage -

+

Usage

extract_params(fit)
-

Arguments -

+

Arguments

-
-
fit -
+
fit

an object created by mmrm::mmrm()

-
-
+ - +
-
- + + + - - diff --git a/main/reference/fit_mcmc.html b/main/reference/fit_mcmc.html index 9520dcc4..39494c84 100644 --- a/main/reference/fit_mcmc.html +++ b/main/reference/fit_mcmc.html @@ -1,30 +1,15 @@ - - - - - - -Fit the base imputation model using a Bayesian approach — fit_mcmc • rbmi - - - - - -Fit the base imputation model using a Bayesian approach — fit_mcmc • rbmi - - - - +and returns warnings in case of any detected issues."> Skip to contents @@ -40,38 +25,21 @@ + @@ -93,80 +61,61 @@
-

Usage -

+

Usage

fit_mcmc(designmat, outcome, group, subjid, visit, method, quiet = FALSE)
-

Arguments -

+

Arguments

-
-
designmat -
+
designmat

The design matrix of the fixed effects.

-
outcome -
+
outcome

The response variable. Must be numeric.

-
group -
+
group

Character vector containing the group variable.

-
subjid -
+
subjid

Character vector containing the subjects IDs.

-
visit -
+
visit

Character vector containing the visit variable.

-
method -
+
method

A method object as generated by method_bayes().

-
quiet -
+
quiet

Specify whether the stan sampling log should be printed to the console.

-
-
+
-

Value -

-

A named list composed by the following:

-
    -
  • samples: a named list containing the draws for each parameter. It corresponds to the output of extract_draws().

  • +

    Value

    +

    A named list composed by the following:

    • samples: a named list containing the draws for each parameter. It corresponds to the output of extract_draws().

    • fit: a stanfit object.

    • -
    -
+
-

Details -

+

Details

The Bayesian model assumes a multivariate normal likelihood function and weakly-informative priors for the model parameters: in particular, uniform priors are assumed for the regression coefficients and inverse-Wishart priors for the covariance matrices. The chain is initialized using the REML parameter estimates from MMRM as starting values.

-

The function performs the following steps:

-
    -
  1. Fit MMRM using a REML approach.

  2. +

    The function performs the following steps:

    1. Fit MMRM using a REML approach.

    2. Prepare the input data for the MCMC fit as described in the data{} block of the Stan file. See prepare_stan_data() for details.

    3. Run the MCMC according the input arguments and using as starting values the REML parameter estimates estimated at point 1.

    4. Performs diagnostics checks of the MCMC. See check_mcmc() for details.

    5. Extract the draws from the model fit.

    6. -
    -

    The chains perform method$n_samples draws by keeping one every method$burn_between iterations. Additionally +

The chains perform method$n_samples draws by keeping one every method$burn_between iterations. Additionally the first method$burn_in iterations are discarded. The total number of iterations will then be method$burn_in + method$burn_between*method$n_samples. The purpose of method$burn_in is to ensure that the samples are drawn from the stationary @@ -175,8 +124,7 @@

Details -

+ - + + + - - diff --git a/main/reference/fit_mmrm.html b/main/reference/fit_mmrm.html index e2769952..77f92d65 100644 --- a/main/reference/fit_mmrm.html +++ b/main/reference/fit_mmrm.html @@ -1,26 +1,11 @@ - - - - - - -Fit a MMRM model — fit_mmrm • rbmi - - - - - -Fit a MMRM model — fit_mmrm • rbmi - - - - +beta and sigma will not be present."> Skip to contents @@ -36,38 +21,21 @@ + @@ -87,8 +55,7 @@
-

Usage -

+

Usage

fit_mmrm(
   designmat,
   outcome,
@@ -102,61 +69,49 @@ 

Usage

-

Arguments -

+

Arguments

-
-
designmat -
+
designmat

a data.frame or matrix containing the covariates to use in the MMRM model. Dummy variables must already be expanded out, i.e. via stats::model.matrix(). Cannot contain any missing values

-
outcome -
+
outcome

a numeric vector. The outcome value to be regressed on in the MMRM model.

-
subjid -
+
subjid

a character / factor vector. The subject identifier used to link separate visits that belong to the same subject.

-
visit -
+
visit

a character / factor vector. Indicates which visit the outcome value occurred on.

-
group -
+
group

a character / factor vector. Indicates which treatment group the patient belongs to.

-
cov_struct -
+
cov_struct

a character value. Specifies which covariance structure to use. Must be one of "us" (default), "ad", "adh", "ar1", "ar1h", "cs", "csh", "toep", or "toeph")

-
REML -
+
REML

logical. Specifies whether restricted maximum likelihood should be used

-
same_cov -
+
same_cov

logical. Used to specify if a shared or individual covariance matrix should be used per group

-
-
+
- +
-
- + + + - - diff --git a/main/reference/generate_data_single.html b/main/reference/generate_data_single.html index cd3ecb09..b8f2cf4b 100644 --- a/main/reference/generate_data_single.html +++ b/main/reference/generate_data_single.html @@ -1,20 +1,5 @@ - - - - - - -Generate data for a single group — generate_data_single • rbmi - - - - - - - - - - + +Generate data for a single group — generate_data_single • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,51 +46,37 @@
-

Usage -

+

Usage

generate_data_single(pars_group, strategy_fun = NULL, distr_pars_ref = NULL)
-

Arguments -

+

Arguments

-
-
pars_group -
+
pars_group

A simul_pars object as generated by set_simul_pars(). It specifies the simulation parameters of the given group.

-
strategy_fun -
+
strategy_fun

Function implementing trajectories after the intercurrent event (ICE). Must be one of getStrategies(). See getStrategies() for details. If NULL then post-ICE outcomes are untouched.

-
distr_pars_ref -
-
-

Optional. Named list containing the simulation parameters of the -reference arm. It contains the following elements:

-
    -
  • mu: Numeric vector indicating the mean outcome trajectory assuming no ICEs. It should +

    distr_pars_ref
    +

    Optional. Named list containing the simulation parameters of the +reference arm. It contains the following elements:

    • mu: Numeric vector indicating the mean outcome trajectory assuming no ICEs. It should include the outcome at baseline.

    • sigma Covariance matrix of the outcome trajectory assuming no ICEs. If NULL, then these parameters are inherited from pars_group.

    • -
    -
    +
-
-
+
-

Value -

-

A data.frame containing the simulated data. It includes the following variables:

-
    -
  • id: Factor variable that specifies the id of each subject.

  • +

    Value

    +

    A data.frame containing the simulated data. It includes the following variables:

    • id: Factor variable that specifies the id of each subject.

    • visit: Factor variable that specifies the visit of each assessment. Visit 0 denotes the baseline visit.

    • group: Factor variable that specifies which treatment group each subject belongs to.

    • @@ -137,17 +91,14 @@

      Value by ICE2.

    • outcome: Numeric variable that specifies the longitudinal outcome including ICE1, ICE2 and the intermittent missing values.

    • -

    -
+
-

See also -

+

See also

- + - + + + - - diff --git a/main/reference/getStrategies.html b/main/reference/getStrategies.html index ffe57bb3..bbf28f1a 100644 --- a/main/reference/getStrategies.html +++ b/main/reference/getStrategies.html @@ -1,24 +1,9 @@ - - - - - - -Get imputation strategies — getStrategies • rbmi - - - - - -Get imputation strategies — getStrategies • rbmi - - - - +group and reference group per patient."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,27 +52,21 @@
-

Usage -

+

Usage

getStrategies(...)
-

Arguments -

+

Arguments

-
-
... -
+
...

User defined methods to be added to the return list. Input must be a function.

-
-
+
-

Details -

+

Details

By default Jump to Reference (JR), Copy Reference (CR), Copy Increments in Reference (CIR), Last Mean Carried Forward (LMCF) and Missing at Random (MAR) are defined.

@@ -121,8 +83,7 @@

Details
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 getStrategies()
 getStrategies(
@@ -134,8 +95,7 @@ 

Examples -

+
-

+ + + - - diff --git a/main/reference/get_ESS.html b/main/reference/get_ESS.html index d8fc0030..adf3c957 100644 --- a/main/reference/get_ESS.html +++ b/main/reference/get_ESS.html @@ -1,20 +1,5 @@ - - - - - - -Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS • rbmi - - - - - - - - - - + +Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,32 +46,25 @@
-

Usage -

+

Usage

get_ESS(stan_fit)
-

Arguments -

+

Arguments

-
-
stan_fit -
+
stan_fit

A stanfit object.

-
-
+
-

Value -

+

Value

A named vector containing the ESS for each parameter of the model.

- +
-
- + + + - - diff --git a/main/reference/get_bootstrap_stack.html b/main/reference/get_bootstrap_stack.html index daac36c0..fabeecb5 100644 --- a/main/reference/get_bootstrap_stack.html +++ b/main/reference/get_bootstrap_stack.html @@ -1,22 +1,7 @@ - - - - - - -Creates a stack object populated with bootstrapped samples — get_bootstrap_stack • rbmi - - - - - - - - - - + +Creates a stack object populated with bootstrapped samples — get_bootstrap_stack • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,37 +49,29 @@
-

Usage -

+

Usage

get_bootstrap_stack(longdata, method, stack = Stack$new())
-

Arguments -

+

Arguments

-
-
longdata -
+
longdata

A longDataConstructor() object

-
method -
+
method

A method object

-
stack -
+
stack

A Stack() object (this is only exposed for unit testing purposes)

-
-
+ - +
-
- + + + - - diff --git a/main/reference/get_conditional_parameters.html b/main/reference/get_conditional_parameters.html index 71e3a47f..da26d544 100644 --- a/main/reference/get_conditional_parameters.html +++ b/main/reference/get_conditional_parameters.html @@ -1,22 +1,7 @@ - - - - - - -Derive conditional multivariate normal parameters — get_conditional_parameters • rbmi - - - - - - - - - - + +Derive conditional multivariate normal parameters — get_conditional_parameters • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,43 +49,32 @@
-

Usage -

+

Usage

get_conditional_parameters(pars, values)
-

Arguments -

+

Arguments

-
-
pars -
+
pars

a list with elements mu and sigma defining the mean vector and covariance matrix respectively.

-
values -
+
values

a vector of observed values to condition on, must be same length as pars$mu. Missing values must be represented by an NA.

-
-
+
-

Value -

-

A list with the conditional distribution parameters:

-
    -
  • mu - The conditional mean vector.

  • +

    Value

    +

    A list with the conditional distribution parameters:

    • mu - The conditional mean vector.

    • sigma - The conditional covariance matrix.

    • -
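The conditional parameters follow the standard multivariate normal conditioning formulas. As a hedged sketch in base R (not the package's internal implementation), with hypothetical `pars` and `values` objects matching the argument descriptions above:

```r
# Sketch: condition a bivariate normal on one observed value, mirroring
# the documented contract of get_conditional_parameters(pars, values).
pars <- list(
  mu    = c(0, 0),
  sigma = matrix(c(1, 0.5, 0.5, 1), nrow = 2)
)
values <- c(NA, 2)  # NA marks the missing element, per the docs

obs <- !is.na(values)
s12 <- pars$sigma[!obs, obs, drop = FALSE]
s22 <- pars$sigma[obs, obs, drop = FALSE]

# Conditional mean: mu1 + Sigma12 %*% solve(Sigma22) %*% (x2 - mu2)  ->  1
cond_mu <- pars$mu[!obs] + s12 %*% solve(s22, values[obs] - pars$mu[obs])
# Conditional covariance: Sigma11 - Sigma12 %*% solve(Sigma22) %*% Sigma21  ->  0.75
cond_sigma <- pars$sigma[!obs, !obs, drop = FALSE] - s12 %*% solve(s22) %*% t(s12)
```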
    -
+ - +
-
- + + + - - diff --git a/main/reference/get_delta_template.html b/main/reference/get_delta_template.html index 595498c7..244857b4 100644 --- a/main/reference/get_delta_template.html +++ b/main/reference/get_delta_template.html @@ -1,24 +1,9 @@ - - - - - - -Get delta utility variables — get_delta_template • rbmi - - - - - -Get delta utility variables — get_delta_template • rbmi - - - - +for defining delta. See delta_template() for full details."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,27 +52,21 @@
-

Usage -

+

Usage

get_delta_template(imputations)
-

Arguments -

+

Arguments

-
-
imputations -
+
imputations

an imputations object created by impute().

-
-
+ - +
-
- + + + - - diff --git a/main/reference/get_draws_mle.html b/main/reference/get_draws_mle.html index 1b7e3bc2..999f9c6e 100644 --- a/main/reference/get_draws_mle.html +++ b/main/reference/get_draws_mle.html @@ -1,22 +1,7 @@ - - - - - - -Fit the base imputation model on bootstrap samples — get_draws_mle • rbmi - - - - - - - - - - + +Fit the base imputation model on bootstrap samples — get_draws_mle • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

get_draws_mle(
   longdata,
   method,
@@ -97,34 +64,27 @@ 

Usage

-

Arguments -

+

Arguments

-
-
longdata -
+
longdata

R6 longdata object containing all relevant input data information.

-
method -
+
method

A method object as generated by either method_approxbayes() or method_condmean() with argument type = "bootstrap".

-
sample_stack -
+
sample_stack

A stack object containing the subject ids to be used on each mmrm iteration.

-
n_target_samples -
+
n_target_samples

Number of samples that need to be created

-
first_sample_orig -
+
first_sample_orig

Logical. If TRUE the function returns method$n_samples + 1 samples where the first sample contains the parameter estimates from the original dataset and method$n_samples samples contain the parameter estimates from bootstrap samples. @@ -132,59 +92,45 @@

Argumentsuse_samp_ids - +
use_samp_ids

Logical. If TRUE, the sampled subject ids are returned. Otherwise the subject ids from the original dataset are returned. These values are used to tell impute() what subjects should be used to derive the imputed dataset.

-
failure_limit -
+
failure_limit

Number of failed samples that are allowed before throwing an error

-
ncores -
+
ncores

Number of processes to parallelise the job over

-
quiet -
+
quiet

Logical. If TRUE, suppresses the printing of progress information to the console.

-

-
+
-

Value -

-

A draws object which is a named list containing the following:

-
    -
  • data: R6 longdata object containing all relevant input data information.

  • +

    Value

    +

    A draws object which is a named list containing the following:

    • data: R6 longdata object containing all relevant input data information.

    • method: A method object as generated by either method_bayes(), method_approxbayes() or method_condmean().

    • -
    • -

      samples: list containing the estimated parameters of interest. -Each element of samples is a named list containing the following:

      -
        -
      • ids: vector of characters containing the ids of the subjects included in the original dataset.

      • +
      • samples: list containing the estimated parameters of interest. +Each element of samples is a named list containing the following:

        • ids: vector of characters containing the ids of the subjects included in the original dataset.

        • beta: numeric vector of estimated regression coefficients.

        • sigma: list of estimated covariance matrices (one for each level of vars$group).

        • theta: numeric vector of transformed covariances.

        • failed: Logical. TRUE if the model fit failed.

        • ids_samp: vector of characters containing the ids of the subjects included in the given sample.

        • -
        -
      • +
    • fit: if method_bayes() is chosen, returns the MCMC Stan fit object. Otherwise NULL.

    • n_failures: absolute number of failures of the model fit. Relevant only for method_condmean(type = "bootstrap"), method_approxbayes() and method_bmlmi().

    • formula: fixed effects formula object used for the model specification.

    • -
    -
+
-

Details -

+

Details

This function takes a Stack object which contains multiple lists of patient ids. The function pulls one set of ids from the Stack and then constructs a dataset consisting of just these patients (i.e. potentially a bootstrap or a jackknife sample).

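To make the Stack contract concrete, here is a hypothetical sketch (ids and sampling scheme invented purely for illustration, not taken from the package internals) of what one bootstrap draw and the full set of jackknife samples of patient ids look like:

```r
# Hypothetical patient ids; not taken from any real dataset.
ids <- c("pt1", "pt2", "pt3", "pt4")

set.seed(1)
bootstrap_sample <- sample(ids, replace = TRUE)  # one bootstrap set of ids

# Jackknife: one leave-one-out sample per patient.
jackknife_samples <- lapply(seq_along(ids), function(i) ids[-i])
length(jackknife_samples)   # 4 samples, each of size 3
```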
@@ -195,8 +141,7 @@

Details -

+ - + + + - - diff --git a/main/reference/get_ests_bmlmi.html b/main/reference/get_ests_bmlmi.html index d9c8215a..c5381fc4 100644 --- a/main/reference/get_ests_bmlmi.html +++ b/main/reference/get_ests_bmlmi.html @@ -1,24 +1,9 @@ - - - - - - -Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi • rbmi - - - - - -Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi • rbmi - - - - +Multiple Imputation (BMLMI)."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,51 +52,41 @@
-

Usage -

+

Usage

get_ests_bmlmi(ests, D)
-

Arguments -

+

Arguments

-
-
ests -
+
ests

numeric vector containing estimates from the analysis of the imputed datasets.

-
D -
+
D

numeric representing the number of imputations between each bootstrap sample in the BMLMI method.

-
-
+
-

Value -

+

Value

a list containing point estimate, standard error and degrees of freedom.

-

Details -

+

Details

ests must be provided in the following order: the first D elements relate to analyses from random imputation of one bootstrap sample. The second set of D elements (i.e. from D+1 to 2*D) relate to the second bootstrap sample, and so on.

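As a hedged numeric illustration of this ordering (all values invented; this is not package code): with D = 3 imputations per bootstrap sample, elements 1-3 of ests belong to the first bootstrap sample and elements 4-6 to the second.

```r
D <- 3
ests <- c(
  1.10, 1.30, 1.20,  # analyses of the D imputations of bootstrap sample 1
  0.90, 1.00, 0.80   # analyses of the D imputations of bootstrap sample 2
)
boot_id <- rep(seq_len(length(ests) / D), each = D)
tapply(ests, boot_id, mean)   # per-bootstrap-sample means: 1.2 and 0.9
```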
-

References -

+

References

Von Hippel, Paul T and Bartlett, Jonathan W. Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws. 2021

- +
-
- + + + - - diff --git a/main/reference/get_example_data.html b/main/reference/get_example_data.html index fb703901..5c301c4f 100644 --- a/main/reference/get_example_data.html +++ b/main/reference/get_example_data.html @@ -1,22 +1,7 @@ - - - - - - -Simulate a realistic example dataset — get_example_data • rbmi - - - - - - - - - - + +Simulate a realistic example dataset — get_example_data • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,21 +49,17 @@
-

Usage -

+

Usage

get_example_data()
-

Details -

+

Details

get_example_data() simulates a 1:1 randomized trial of an active drug (intervention) versus placebo (control) with 100 subjects per group and 6 post-baseline assessments (bi-monthly visits until 12 months). One intercurrent event corresponding to treatment discontinuation is also simulated. -Specifically, data are simulated under the following assumptions:

-
+ - + - + + + - - diff --git a/main/reference/get_jackknife_stack.html b/main/reference/get_jackknife_stack.html index 886ac288..325c3dcf 100644 --- a/main/reference/get_jackknife_stack.html +++ b/main/reference/get_jackknife_stack.html @@ -1,22 +1,7 @@ - - - - - - -Creates a stack object populated with jackknife samples — get_jackknife_stack • rbmi - - - - - - - - - - + +Creates a stack object populated with jackknife samples — get_jackknife_stack • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,37 +49,29 @@
-

Usage -

+

Usage

get_jackknife_stack(longdata, method, stack = Stack$new())
-

Arguments -

+

Arguments

-
-
longdata -
+
longdata

A longDataConstructor() object

-
method -
+
method

A method object

-
stack -
+
stack

A Stack() object (this is only exposed for unit testing purposes)

-
-
+ - +
-
- + + + - - diff --git a/main/reference/get_mmrm_sample.html b/main/reference/get_mmrm_sample.html index 95d17e8f..445f7504 100644 --- a/main/reference/get_mmrm_sample.html +++ b/main/reference/get_mmrm_sample.html @@ -1,22 +1,7 @@ - - - - - - -Fit MMRM and returns parameter estimates — get_mmrm_sample • rbmi - - - - - - - - - - + +Fit MMRM and returns parameter estimates — get_mmrm_sample • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,51 +49,39 @@
-

Usage -

+

Usage

get_mmrm_sample(ids, longdata, method)
-

Arguments -

+

Arguments

-
-
ids -
+
ids

vector of characters containing the ids of the subjects.

-
longdata -
+
longdata

R6 longdata object containing all relevant input data information.

-
method -
+
method

A method object as generated by either method_approxbayes() or method_condmean().

-
-
+
-

Value -

-

A named list of class sample_single. It contains the following:

-
    -
  • ids vector of characters containing the ids of the subjects included in the original dataset.

  • +

    Value

    +

    A named list of class sample_single. It contains the following:

    • ids vector of characters containing the ids of the subjects included in the original dataset.

    • beta numeric vector of estimated regression coefficients.

    • sigma list of estimated covariance matrices (one for each level of vars$group).

    • theta numeric vector of transformed covariances.

    • failed logical. TRUE if the model fit failed.

    • ids_samp vector of characters containing the ids of the subjects included in the given sample.

    • -
    -
+ - +
-
- + + + - - diff --git a/main/reference/get_pattern_groups.html b/main/reference/get_pattern_groups.html index 7becf147..63e518df 100644 --- a/main/reference/get_pattern_groups.html +++ b/main/reference/get_pattern_groups.html @@ -1,24 +1,9 @@ - - - - - - -Determine patients missingness group — get_pattern_groups • rbmi - - - - - -Determine patients missingness group — get_pattern_groups • rbmi - - - - +the patient belongs to (based upon their missingness pattern and treatment group)"> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,35 +52,26 @@
-

Usage -

+

Usage

get_pattern_groups(ddat)
-

Arguments -

+

Arguments

-
-
ddat -
+
ddat

a data.frame with columns subjid, visit, group, is_avail

-
-
+
-

Details -

+

Details

-
    -
  • The column is_avail must be a character or numeric 0 or 1

  • -
-
+
  • The column is_avail must be a character or numeric 0 or 1

  • +
- + - + + + - - diff --git a/main/reference/get_pattern_groups_unique.html b/main/reference/get_pattern_groups_unique.html index 460d11d8..33a75f49 100644 --- a/main/reference/get_pattern_groups_unique.html +++ b/main/reference/get_pattern_groups_unique.html @@ -1,22 +1,7 @@ - - - - - - -Get Pattern Summary — get_pattern_groups_unique • rbmi - - - - - - - - - - + +Get Pattern Summary — get_pattern_groups_unique • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,39 +49,30 @@
-

Usage -

+

Usage

get_pattern_groups_unique(patterns)
-

Arguments -

+

Arguments

-
-
patterns -
+
patterns

A data.frame with the columns pgroup, pattern and group

-
-
+
-

Details -

+

Details

-
    -
  • The column pgroup must be a numeric vector indicating which pattern group the patient belongs to

  • +
    • The column pgroup must be a numeric vector indicating which pattern group the patient belongs to

    • The column pattern must be a character string of 0's or 1's. It must be identical for all rows within the same pgroup

    • The column group must be a character / numeric vector indicating which covariance group the observation belongs to. It must be identical within the same pgroup

    • -
    -
+ - + - + + + - - diff --git a/main/reference/get_pool_components.html b/main/reference/get_pool_components.html index eed98829..48885fdb 100644 --- a/main/reference/get_pool_components.html +++ b/main/reference/get_pool_components.html @@ -1,22 +1,7 @@ - - - - - - -Expected Pool Components — get_pool_components • rbmi - - - - - - - - - - + +Expected Pool Components — get_pool_components • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,28 +49,22 @@
-

Usage -

+

Usage

get_pool_components(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

Character name of the analysis method; must be one of "rubin", "jackknife", "bootstrap" or "bmlmi".

-
-
+ - +
-
- + + + - - diff --git a/main/reference/get_session_hash.html b/main/reference/get_session_hash.html index eba196de..9c1ff8a3 100644 --- a/main/reference/get_session_hash.html +++ b/main/reference/get_session_hash.html @@ -1,20 +1,5 @@ - - - - - - -Get session hash — get_session_hash • rbmi - - - - - - - - - - + +Get session hash — get_session_hash • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,14 +46,12 @@
-

Usage -

+

Usage

get_session_hash()
- - + - + + + - - diff --git a/main/reference/get_stan_model.html b/main/reference/get_stan_model.html index 99b1526f..a201152e 100644 --- a/main/reference/get_stan_model.html +++ b/main/reference/get_stan_model.html @@ -1,20 +1,5 @@ - - - - - - -Get Compiled Stan Object — get_stan_model • rbmi - - - - - - - - - - + +Get Compiled Stan Object — get_stan_model • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,14 +46,12 @@
-

Usage -

+

Usage

get_stan_model()
- - + - + + + - - diff --git a/main/reference/get_visit_distribution_parameters.html b/main/reference/get_visit_distribution_parameters.html index 53cfd4a8..4364b55a 100644 --- a/main/reference/get_visit_distribution_parameters.html +++ b/main/reference/get_visit_distribution_parameters.html @@ -1,28 +1,13 @@ - - - - - - -Derive visit distribution parameters — get_visit_distribution_parameters • rbmi - - - - - -Derive visit distribution parameters — get_visit_distribution_parameters • rbmi - - - - +(namely list(list(mu = ..., sigma = ...), list(mu = ..., sigma = ...)))."> Skip to contents @@ -38,38 +23,21 @@ + @@ -90,41 +58,33 @@
-

Usage -

+

Usage

get_visit_distribution_parameters(dat, beta, sigma)
-

Arguments -

+

Arguments

-
-
dat -
+
dat

Patient level dataset with 1 row per visit. Columns must be in the same order as beta, and the number of columns must match the length of beta

-
beta -
+
beta

List of model beta coefficients. There should be 1 element for each sample, e.g. if there were 3 samples and the models each had 4 beta coefficients then this argument should be of the form list(c(1,2,3,4), c(5,6,7,8), c(9,10,11,12)). All elements of beta must be the same length, and must match the length and order of dat.

-
sigma -
+
sigma

List of sigma. Must have the same number of entries as beta.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/has_class.html b/main/reference/has_class.html index 365dddf9..56875fd3 100644 --- a/main/reference/has_class.html +++ b/main/reference/has_class.html @@ -1,24 +1,9 @@ - - - - - - -Does object have a class ? — has_class • rbmi - - - - - -Does object have a class ? — has_class • rbmi - - - - +have."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,38 +52,30 @@
-

Usage -

+

Usage

has_class(x, cls)
-

Arguments -

+

Arguments

-
-
x -
+
x

the object we want to check the class of.

-
cls -
+
cls

the class we want to know if it has or not.

-
-
+
-

Value -

+

Value

TRUE if the object has the class. FALSE if the object does not have the class.

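The documented contract can be sketched in one line; this is an assumption about the behaviour (for S3 classes, base R's inherits() answers the same question), not the package source:

```r
# Sketch of has_class(x, cls): TRUE when cls appears in class(x).
has_class_sketch <- function(x, cls) cls %in% class(x)

has_class_sketch(data.frame(), "data.frame")  # TRUE
has_class_sketch(1:3, "data.frame")           # FALSE
```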
- +
-
- + + + - - diff --git a/main/reference/ife.html b/main/reference/ife.html index c11d4db7..82e6fd99 100644 --- a/main/reference/ife.html +++ b/main/reference/ife.html @@ -1,22 +1,7 @@ - - - - - - -if else — ife • rbmi - - - - - - - - - - + +if else — ife • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,44 +49,35 @@
-

Usage -

+

Usage

ife(x, a, b)
-

Arguments -

+

Arguments

-
-
x -
+
x

True / False

-
a -
+
a

value to return if True

-
b -
+
b

value to return if False

-
-
+
-

Details -

+

Details

By default ifelse() will convert factor variables to their underlying numeric values, which is often undesirable. This convenience function avoids that problem

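A short base-R illustration of the problem described above; `ife_sketch` is a hypothetical stand-in for the documented contract, not the package source:

```r
x <- factor(c("low", "high"))

# Base ifelse() strips the factor class, leaving the underlying integer code:
y1 <- ifelse(TRUE, x, "other")
is.factor(y1)   # FALSE

# ife(x, a, b) returns `a` or `b` unmodified, so the factor survives:
ife_sketch <- function(x, a, b) if (isTRUE(x)) a else b
y2 <- ife_sketch(TRUE, x, factor("other"))
is.factor(y2)   # TRUE
```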
- + - + + + - - diff --git a/main/reference/imputation_df.html b/main/reference/imputation_df.html index 6a0498bc..0c691450 100644 --- a/main/reference/imputation_df.html +++ b/main/reference/imputation_df.html @@ -1,20 +1,5 @@ - - - - - - -Create a valid imputation_df object — imputation_df • rbmi - - - - - - - - - - + +Create a valid imputation_df object — imputation_df • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,27 +46,21 @@
-

Usage -

+

Usage

imputation_df(...)
-

Arguments -

+

Arguments

-
-
... -
+
...

a list of imputation_single.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/imputation_list_df.html b/main/reference/imputation_list_df.html index 0f1bda91..92d7be33 100644 --- a/main/reference/imputation_list_df.html +++ b/main/reference/imputation_list_df.html @@ -1,20 +1,5 @@ - - - - - - -List of imputations_df — imputation_list_df • rbmi - - - - - - - - - - + +List of imputations_df — imputation_list_df • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,27 +46,21 @@
-

Usage -

+

Usage

imputation_list_df(...)
-

Arguments -

+

Arguments

-
-
... -
+
...

objects of class imputation_df

-
-
+ - +
-
- + + + - - diff --git a/main/reference/imputation_list_single.html b/main/reference/imputation_list_single.html index cb9bba1e..5979bd7d 100644 --- a/main/reference/imputation_list_single.html +++ b/main/reference/imputation_list_single.html @@ -1,20 +1,5 @@ - - - - - - -A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single • rbmi - - - - - - - - - - + +A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,41 +46,32 @@
-

Usage -

+

Usage

imputation_list_single(imputations, D = 1)
-

Arguments -

+

Arguments

-
-
imputations -
+
imputations

a list of imputation_single() objects ordered so that repetitions are grouped sequentially

-
D -
-
-

the number of repetitions that were performed which determines how many columns +

D
+

the number of repetitions that were performed which determines how many columns the imputation matrix should have

This is a constructor function to create an imputation_list_single object which contains a matrix of imputation_single() objects grouped by a single id. The matrix is split so that it has D columns (i.e. for non-bmlmi methods this will always be 1)

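A hedged sketch of the splitting described above, using stand-in strings in place of real imputation_single() objects (the exact internal layout is an assumption made for illustration):

```r
# Four repetition results for one subject, grouped sequentially, with D = 2:
imps <- list("rep1_a", "rep1_b", "rep2_a", "rep2_b")
D <- 2
m <- matrix(imps, ncol = D, byrow = TRUE)
dim(m)   # 2 rows, D = 2 columns
```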
The id attribute is determined by extracting the id attribute from the contributing -imputation_single() objects. An error is thrown if multiple ids are detected

-
+imputation_single() objects. An error is thrown if multiple ids are detected

-
-
+ - +
-
- + + + - - diff --git a/main/reference/imputation_single.html b/main/reference/imputation_single.html index b482892d..e7411481 100644 --- a/main/reference/imputation_single.html +++ b/main/reference/imputation_single.html @@ -1,20 +1,5 @@ - - - - - - -Create a valid imputation_single object — imputation_single • rbmi - - - - - - - - - - + +Create a valid imputation_single object — imputation_single • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,32 +46,25 @@
-

Usage -

+

Usage

imputation_single(id, values)
-

Arguments -

+

Arguments

-
-
id -
+
id

a character string specifying the subject id.

-
values -
+
values

a numeric vector indicating the imputed values.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/impute.html b/main/reference/impute.html index 5467fb33..15f047de 100644 --- a/main/reference/impute.html +++ b/main/reference/impute.html @@ -1,24 +1,9 @@ - - - - - - -Create imputed datasets — impute • rbmi - - - - - - - - - - +draws().'> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,8 +52,7 @@
-

Usage -

+

Usage

impute(
   draws,
   references = NULL,
@@ -111,18 +78,14 @@ 

Usage

-

Arguments -

+

Arguments

-
-
draws -
+
draws

A draws object created by draws().

-
references -
+
references

A named vector. Identifies the references to be used for reference-based imputation methods. Should be of the form c("Group1" = "Reference1", "Group2" = "Reference2"). If NULL (default), the references are assumed to be of the form @@ -130,24 +93,20 @@

Argumentsdraws) other than MAR is set.

-
update_strategy -
+
update_strategy

An optional data.frame. Updates the imputation method that was originally set via the data_ice option in draws(). See the details section for more information.

-
strategies -
+
strategies

A named list of functions. Defines the imputation functions to be used. The names of the list should mirror the values specified in strategy column of data_ice. Default = getStrategies(). See getStrategies() for more details.

-
-
+
-

Details -

+

Details

impute() uses the imputation model parameter estimates, as generated by draws(), to first calculate the marginal (multivariate normal) distribution of a subject's longitudinal outcome variable @@ -159,9 +118,7 @@

Detailsdraws().

The exact manner in how missing values are imputed from this conditional imputation distribution depends -on the method object that was provided to draws(), in particular:

-
    -
  • Bayes & Approximate Bayes: each imputed dataset contains 1 row per subject & visit +on the method object that was provided to draws(), in particular:

    • Bayes & Approximate Bayes: each imputed dataset contains 1 row per subject & visit from the original dataset with missing values imputed by taking a single random sample from the conditional imputation distribution.

    • Conditional Mean: each imputed dataset contains 1 row per subject & visit from the @@ -174,8 +131,7 @@

      Detailsdraws(). A total number of B*D imputed datasets is provided, where B is the number of bootstrapped datasets. Missing values are imputed by taking a random sample from the conditional imputation distribution.

    • -
    -

    The update_strategy argument can be used to update the imputation strategy that was +

The update_strategy argument can be used to update the imputation strategy that was originally set via the data_ice option in draws(). This avoids having to re-run the draws() function when changing the imputation strategy in certain circumstances (as detailed below). The data.frame provided to update_strategy argument must contain two columns, @@ -194,8 +150,7 @@

Details

-

References -

+

References

James R Carpenter, James H Roger, and Michael G Kenward. Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation. Journal of Biopharmaceutical Statistics, @@ -203,8 +158,7 @@

References -

Examples -

+

Examples

if (FALSE) { # \dontrun{
 
 impute(
@@ -227,8 +181,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/impute_data_individual.html b/main/reference/impute_data_individual.html index 0eacfacc..981a3460 100644 --- a/main/reference/impute_data_individual.html +++ b/main/reference/impute_data_individual.html @@ -1,22 +1,7 @@ - - - - - - -Impute data for a single subject — impute_data_individual • rbmi - - - - - - - - - - + +Impute data for a single subject — impute_data_individual • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

impute_data_individual(
   id,
   index,
@@ -97,68 +64,55 @@ 

Usage

-

Arguments -

+

Arguments

-
-
id -
+
id

Character string identifying the subject.

-
index -
+
index

The sample indexes to which the subject belongs, e.g. c(1,1,1,2,2,4).

-
beta -
+
beta

A list of beta coefficients for each sample, i.e. beta[[1]] is the set of beta coefficients for the first sample.

-
sigma -
+
sigma

A list of the sigma coefficients for each sample split by group i.e. sigma[[1]][["A"]] would give the sigma coefficients for group A for the first sample.

-
data -
+
data

A longdata object created by longDataConstructor()

-
references -
+
references

A named vector. Identifies the references to be used when generating the imputed values. Should be of the form c("Group" = "Reference", "Group" = "Reference").

-
strategies -
+
strategies

A named list of functions. Defines the imputation functions to be used. The names of the list should mirror the values specified in method column of data_ice. Default = getStrategies(). See getStrategies() for more details.

-
condmean -
+
condmean

Logical. If TRUE will impute using the conditional mean values, if FALSE will impute by taking a random draw from the multivariate normal distribution.

-
n_imputations -
+
n_imputations

When condmean = FALSE numeric representing the number of random imputations to be performed for each sample. Default is 1 (one random imputation per sample).

-
-
+
-

Details -

+

Details

Note that this function performs all of the required imputations for a subject at the same time. I.e. if a subject is included in samples 1,3,5,9 then all imputations (using sample-dependent imputation model parameters) are performed in one step in order to avoid @@ -169,8 +123,7 @@

Details -

+ - + + + - - diff --git a/main/reference/impute_internal.html b/main/reference/impute_internal.html index 8f30aba7..2488bd77 100644 --- a/main/reference/impute_internal.html +++ b/main/reference/impute_internal.html @@ -1,22 +1,7 @@ - - - - - - -Create imputed datasets — impute_internal • rbmi - - - - - - - - - - + +Create imputed datasets — impute_internal • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

impute_internal(
   draws,
   references = NULL,
@@ -93,18 +60,14 @@ 

Usage

-

Arguments -

+

Arguments

-
-
draws -
+
draws

A draws object created by draws().

-
references -
+
references

A named vector. Identifies the references to be used for reference-based imputation methods. Should be of the form c("Group1" = "Reference1", "Group2" = "Reference2"). If NULL (default), the references are assumed to be of the form @@ -112,31 +75,26 @@

Argumentsdraws) other than MAR is set.

-
update_strategy -
+
update_strategy

An optional data.frame. Updates the imputation method that was originally set via the data_ice option in draws(). See the details section for more information.

-
strategies -
+
strategies

A named list of functions. Defines the imputation functions to be used. The names of the list should mirror the values specified in strategy column of data_ice. Default = getStrategies(). See getStrategies() for more details.

-
condmean -
+
condmean

logical. If TRUE will impute using the conditional mean values, if FALSE will impute by taking a random draw from the multivariate normal distribution.

-
-
+
- +
-
- + + + - - diff --git a/main/reference/impute_outcome.html b/main/reference/impute_outcome.html index 2a18611f..1e738141 100644 --- a/main/reference/impute_outcome.html +++ b/main/reference/impute_outcome.html @@ -1,20 +1,5 @@ - - - - - - -Sample outcome value — impute_outcome • rbmi - - - - - - - - - - + +Sample outcome value — impute_outcome • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,40 +46,32 @@
-

Usage -

+

Usage

impute_outcome(conditional_parameters, n_imputations = 1, condmean = FALSE)
-

Arguments -

+

Arguments

-
-
conditional_parameters -
+
conditional_parameters

a list with elements mu and sigma which contain the mean vector and covariance matrix to sample from.

-
n_imputations -
+
n_imputations

numeric representing the number of random samples from the multivariate normal distribution to be performed. Default is 1.

-
condmean -
+
condmean

should conditional mean imputation be performed (as opposed to random sampling)

-
-
+ - +
-
- + + + - - diff --git a/main/reference/index.html b/main/reference/index.html index d937a6c3..d4eb6b72 100644 --- a/main/reference/index.html +++ b/main/reference/index.html @@ -1,18 +1,5 @@ - - - - - - -Package index • rbmi - - - - - - - - + +Package index • rbmi Skip to contents @@ -28,38 +15,21 @@ + @@ -70,1095 +40,822 @@
-

All functions -

+

All functions

-
-
+
-
-
+
QR_decomp()
QR decomposition
-
-
-
+
Stack
R6 Class for a FIFO stack
-
-
-
+
add_class()
Add a class
-
-
-
+
adjust_trajectories()
Adjust trajectories due to the intercurrent event (ICE)
-
-
-
+
adjust_trajectories_single()
Adjust trajectory of a subject's outcome due to the intercurrent event (ICE)
-
-
-
+
analyse()
Analyse Multiple Imputed Datasets
-
-
-
+
ancova()
Analysis of Covariance
-
-
-
+
ancova_single()
Implements an Analysis of Covariance (ANCOVA)
-
-
-
+
antidepressant_data
Antidepressant trial data
-
-
-
+
apply_delta()
Applies delta adjustment
-
-
-
+
as_analysis()
Construct an analysis object
-
-
-
+
as_ascii_table()
as_ascii_table
-
-
-
+
as_class()
Set Class
-
-
-
+
as_cropped_char()
as_cropped_char
-
-
-
+
as_dataframe()
Convert object to dataframe
-
-
-
+
as_draws()
Creates a draws object
-
-
-
+
as_imputation()
Create an imputation object
-
-
-
+
as_indices()
Convert indicator to index
-
-
-
+
as_mmrm_df()
Creates a "MMRM" ready dataset
-
-
-
+
as_mmrm_formula()
Create MMRM formula
-
-
-
+
as_model_df()
Expand data.frame into a design matrix
-
-
-
+
as_simple_formula()
Creates a simple formula object from a string
-
-
-
+
as_stan_array()
As array
-
-
-
+
as_strata()
Create vector of Stratas
-
-
-
+
assert_variables_exist()
Assert that all variables exist within a dataset
-
-
-
+
char2fct()
Convert character variables to factor
-
-
-
+
check_ESS()
Diagnostics of the MCMC based on ESS
-
-
-
+
check_hmc_diagn()
Diagnostics of the MCMC based on HMC-related measures.
-
-
-
+
check_mcmc()
Diagnostics of the MCMC
-
-
-
+
compute_sigma()
Compute covariance matrix for some reference-based methods (JR, CIR)
-
-
-
+
convert_to_imputation_list_df()
Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's)
-
-
-
+
d_lagscale()
Calculate delta from a lagged scale coefficient
-
-
-
+
delta_template()
Create a delta data.frame template
-
-
-
+
draws()
Fit the base imputation model and get parameter estimates
-
-
-
+
eval_mmrm()
Evaluate a call to mmrm
-
-
-
+
expand() fill_locf() expand_locf()
Expand and fill in missing data.frame rows
-
-
-
+
extract_covariates()
Extract Variables from string vector
-
-
-
+
extract_data_nmar_as_na()
Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy)
-
-
-
+
extract_draws()
Extract draws from a stanfit object
-
-
-
+
extract_imputed_df()
Extract imputed dataset
-
-
-
+
extract_imputed_dfs()
Extract imputed datasets
-
-
-
+
extract_params()
Extract parameters from a MMRM model
-
-
-
+
fit_mcmc()
Fit the base imputation model using a Bayesian approach
-
-
-
+
fit_mmrm()
Fit a MMRM model
-
-
-
+
generate_data_single()
Generate data for a single group
-
-
-
+
getStrategies()
Get imputation strategies
-
-
-
+
get_ESS()
Extract the Effective Sample Size (ESS) from a stanfit object
-
-
-
+
get_bootstrap_stack()
Creates a stack object populated with bootstrapped samples
-
-
-
+
get_conditional_parameters()
Derive conditional multivariate normal parameters
-
-
-
+
get_delta_template()
Get delta utility variables
-
-
-
+
get_draws_mle()
Fit the base imputation model on bootstrap samples
-
-
-
+
get_ests_bmlmi()
Von Hippel and Bartlett pooling of BMLMI method
-
-
-
+
get_example_data()
Simulate a realistic example dataset
-
-
-
+
get_jackknife_stack()
Creates a stack object populated with jackknife samples
-
-
-
+
get_mmrm_sample()
Fit MMRM and returns parameter estimates
-
-
-
+
get_pattern_groups()
Determine patients missingness group
-
-
-
+
get_pattern_groups_unique()
Get Pattern Summary
-
-
-
+
get_pool_components()
Expected Pool Components
-
-
-
+
get_visit_distribution_parameters()
Derive visit distribution parameters
-
-
-
+
has_class()
Does object have a class ?
-
-
-
+
ife()
if else
-
-
-
+
imputation_df()
Create a valid imputation_df object
-
-
-
+
imputation_list_df()
List of imputations_df
-
-
-
+
imputation_list_single()
A collection of imputation_singles() grouped by a single subjid ID
-
-
-
+
imputation_single()
Create a valid imputation_single object
-
-
-
+
impute()
Create imputed datasets
-
-
-
+
impute_data_individual()
Impute data for a single subject
-
-
-
+
impute_internal()
Create imputed datasets
-
-
-
+
impute_outcome()
Sample outcome value
-
-
-
+
invert()
invert
-
-
-
+
invert_indexes()
Invert and derive indexes
-
-
-
+
is_absent()
Is value absent
-
-
-
+
is_char_fact()
Is character or factor
-
-
-
+
is_char_one()
Is single character
-
-
-
+
is_in_rbmi_development()
Is package in development mode?
-
-
-
+
is_num_char_fact()
Is character, factor or numeric
-
-
-
+
locf()
Last Observation Carried Forward
-
-
-
+
longDataConstructor
R6 Class for Storing / Accessing & Sampling Longitudinal Data
-
-
-
+
ls_design_equal() ls_design_counterfactual() ls_design_proportional()
Calculate design vector for the lsmeans
-
-
-
+
lsmeans()
Least Square Means
-
-
-
+
make_rbmi_cluster()
Create a rbmi ready cluster
-
-
-
+
method_bayes() method_approxbayes() method_condmean() method_bmlmi()
Set the multiple imputation methodology
-
-
-
+
par_lapply()
Parallelise Lapply
-
-
-
+
parametric_ci()
Calculate parametric confidence intervals
-
-
-
+
pool() as.data.frame(<pool>) print(<pool>)
Pool analysis results obtained from the imputed datasets
-
-
-
+
pool_bootstrap_normal()
Bootstrap Pooling via normal approximation
-
-
-
+
pool_bootstrap_percentile()
Bootstrap Pooling via Percentiles
-
-
-
+
pool_internal()
Internal Pool Methods
-
-
-
+
prepare_stan_data()
Prepare input data to run the Stan model
-
-
-
+
print(<analysis>)
Print analysis object
-
-
-
+
print(<draws>)
Print draws object
-
-
-
+
print(<imputation>)
Print imputation object
-
-
-
+
progressLogger
R6 Class for printing current sampling progress
-
-
-
+
pval_percentile()
P-value of percentile bootstrap
-
-
-
+
random_effects_expr()
Construct random effects formula
-
-
-
+
set_options()
rbmi settings
-
-
-
+
record()
Capture all Output
-
-
-
+
recursive_reduce()
recursive_reduce
-
-
-
+
remove_if_all_missing()
Remove subjects from dataset if they have no observed values
-
-
-
+
rubin_df()
Barnard and Rubin degrees of freedom adjustment
-
-
-
+
rubin_rules()
Combine estimates using Rubin's rules
-
-
-
+
sample_ids()
Sample Patient Ids
-
-
-
+
sample_list()
Create and validate a sample_list object
-
-
-
+
sample_mvnorm()
Sample random values from the multivariate normal distribution
-
-
-
+
sample_single()
Create object of sample_single class
-
-
-
+
scalerConstructor
R6 Class for scaling (and un-scaling) design matrices
-
-
-
+
set_simul_pars()
Set simulation parameters of a study group.
-
-
-
+
set_vars()
Set key variables
-
-
-
+
simulate_data()
Generate data
-
-
-
+
simulate_dropout()
Simulate drop-out
-
-
-
+
simulate_ice()
Simulate intercurrent event
-
-
-
+
simulate_test_data() as_vcov()
Create simulated datasets
-
-
-
+
sort_by()
-
Sort data.frame -
-
-
-
+
Sort data.frame
+
split_dim()
Transform array into list of arrays
-
-
-
+
split_imputations()
Split a flat list of imputation_single() into multiple imputation_df()'s by ID
-
-
-
+
str_contains()
Does a string contain a substring
-
-
-
+
strategy_MAR() strategy_JR() strategy_CR() strategy_CIR() strategy_LMCF()
Strategies
-
-
-
+
string_pad()
string_pad
-
-
-
+
transpose_imputations()
Transpose imputations
-
-
-
+
transpose_results()
Transpose results object
-
-
-
+
transpose_samples()
Transpose samples
-
-
-
+
validate()
Generic validation method
-
-
-
+
validate(<analysis>)
Validate analysis objects
-
-
-
+
validate(<draws>)
Validate draws object
-
-
-
+
validate(<is_mar>)
Validate is_mar for a given subject
-
-
-
+
validate(<ivars>)
-
Validate inputs for vars -
-
-
-
+
Validate inputs for vars
+
validate(<references>)
Validate user supplied references
-
-
-
+
validate(<sample_list>)
Validate sample_list object
-
-
-
+
validate(<sample_single>)
Validate sample_single object
-
-
-
+
validate(<simul_pars>)
Validate a simul_pars object
-
-
-
+
validate(<stan_data>)
Validate a stan_data object
-
-
-
+
validate_analyse_pars()
Validate analysis results
-
-
-
+
validate_datalong() validate_datalong_varExists() validate_datalong_types() validate_datalong_notMissing() validate_datalong_complete() validate_datalong_unifromStrata() validate_dataice()
Validate a longdata object
-
-
-
+
validate_strategies()
Validate user specified strategies
-
-
- - + +
-
- + + + - - diff --git a/main/reference/invert.html b/main/reference/invert.html index 048e7c38..7493f39b 100644 --- a/main/reference/invert.html +++ b/main/reference/invert.html @@ -1,22 +1,7 @@ - - - - - - -invert — invert • rbmi - - - - - - - - - - + +invert — invert • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,27 +49,21 @@
-

Usage -

+

Usage

invert(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

list

-
-
+ - +
-
- + + + - - diff --git a/main/reference/invert_indexes.html b/main/reference/invert_indexes.html index 3863c237..2dd83343 100644 --- a/main/reference/invert_indexes.html +++ b/main/reference/invert_indexes.html @@ -1,24 +1,9 @@ - - - - - - -Invert and derive indexes — invert_indexes • rbmi - - - - - -Invert and derive indexes — invert_indexes • rbmi - - - - +the indexes of which original elements it occurred in."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,44 +52,29 @@
-

Usage -

+

Usage

invert_indexes(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

list of elements to invert and calculate index from (see details).

-
-
+
-

Details -

+

Details

This function's purpose is best illustrated by an example:

input:

-

-
-
list( c("A", "B", "C"), c("A", "A", "B"))}
-

-
+

list( c("A", "B", "C"), c("A", "A", "B"))

becomes:

-

-
-
list( "A" = c(1,2,2), "B" = c(1,2), "C" = 1 )
-

-
+

list( "A" = c(1,2,2), "B" = c(1,2), "C" = 1 )

- + - + + + - - diff --git a/main/reference/is_absent.html b/main/reference/is_absent.html index 7f5ff21b..e1d327af 100644 --- a/main/reference/is_absent.html +++ b/main/reference/is_absent.html @@ -1,24 +1,9 @@ - - - - - - -Is value absent — is_absent • rbmi - - - - - - - - - - +for x to be regarded as absent.'> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,37 +52,29 @@
-

Usage -

+

Usage

is_absent(x, na = TRUE, blank = TRUE)
-

Arguments -

+

Arguments

-
-
x -
+
x

a value to check if it is absent or not

-
na -
+
na

do NAs count as absent

-
blank -
+
blank

do blanks i.e. "" count as absent

-
-
+ - +
-
- + + + - - diff --git a/main/reference/is_char_fact.html b/main/reference/is_char_fact.html index e0e114e9..5bcb0539 100644 --- a/main/reference/is_char_fact.html +++ b/main/reference/is_char_fact.html @@ -1,20 +1,5 @@ - - - - - - -Is character or factor — is_char_fact • rbmi - - - - - - - - - - + +Is character or factor — is_char_fact • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,27 +46,21 @@
-

Usage -

+

Usage

is_char_fact(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

a character or factor vector

-
-
+ - +
-
- + + + - - diff --git a/main/reference/is_char_one.html b/main/reference/is_char_one.html index a0eca886..7473de47 100644 --- a/main/reference/is_char_one.html +++ b/main/reference/is_char_one.html @@ -1,20 +1,5 @@ - - - - - - -Is single character — is_char_one • rbmi - - - - - - - - - - + +Is single character — is_char_one • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,27 +46,21 @@
-

Usage -

+

Usage

is_char_one(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

a character vector

-
-
+ - +
-
- + + + - - diff --git a/main/reference/is_in_rbmi_development.html b/main/reference/is_in_rbmi_development.html index 7c7489c7..d96da7e3 100644 --- a/main/reference/is_in_rbmi_development.html +++ b/main/reference/is_in_rbmi_development.html @@ -1,24 +1,9 @@ - - - - - - -Is package in development mode? — is_in_rbmi_development • rbmi - - - - - -Is package in development mode? — is_in_rbmi_development • rbmi - - - - +Returns FALSE otherwise"> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,22 +52,19 @@
-

Usage -

+

Usage

is_in_rbmi_development()
-

Details -

+

Details

The main use of this function is in parallel processing: it indicates whether the sub-processes need to load the current development version of the code or whether they should load the main installed package on the system.

- + - + + + - - diff --git a/main/reference/is_num_char_fact.html b/main/reference/is_num_char_fact.html index 6ad2f7cd..7a31071c 100644 --- a/main/reference/is_num_char_fact.html +++ b/main/reference/is_num_char_fact.html @@ -1,20 +1,5 @@ - - - - - - -Is character, factor or numeric — is_num_char_fact • rbmi - - - - - - - - - - + +Is character, factor or numeric — is_num_char_fact • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,27 +46,21 @@
-

Usage -

+

Usage

is_num_char_fact(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

a character, numeric or factor vector

-
-
+ - +
-
- + + + - - diff --git a/main/reference/locf.html b/main/reference/locf.html index 5f430a90..a3003fa5 100644 --- a/main/reference/locf.html +++ b/main/reference/locf.html @@ -1,20 +1,5 @@ - - - - - - -Last Observation Carried Forward — locf • rbmi - - - - - - - - - - + +Last Observation Carried Forward — locf • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,35 +46,28 @@
-

Usage -

+

Usage

locf(x)
-

Arguments -

+

Arguments

-
-
x -
+
x

a vector.

-
-
+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 locf(c(NA, 1, 2, 3, NA, 4)) # Returns c(NA, 1, 2, 3, 3, 4)
 } # }
 
- +
-
- + + + - - diff --git a/main/reference/longDataConstructor.html b/main/reference/longDataConstructor.html index 17d7910b..8be77f91 100644 --- a/main/reference/longDataConstructor.html +++ b/main/reference/longDataConstructor.html @@ -1,24 +1,9 @@ - - - - - - -R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor • rbmi - - - - - -R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor • rbmi - - - - +thus enabling efficient lookup."> Skip to contents @@ -34,38 +19,21 @@ + @@ -85,8 +53,7 @@
-

Details -

+

Details

The object also handles multiple other operations specific to rbmi, such as defining whether an outcome value is MAR / missing or not, as well as tracking which imputation strategy is assigned to each subject.

@@ -95,12 +62,8 @@

Details

-

Public fields -

-

-
-
-
data
+

Public fields

+

data

The original dataset passed to the constructor (sorted by id and visit)

@@ -179,22 +142,16 @@

Public fieldsself$data[self$indexes[["pt3"]],]. This may seem redundant over filtering the data directly however it enables efficient bootstrap sampling of the data i.e.

-

-
-
indexes <- unlist(self$indexes[c("pt3", "pt3")])
-self$data[indexes,]
-

-
-

This list is populated during the object initialisation.

- +

indexes <- unlist(self$indexes[c("pt3", "pt3")])
+self$data[indexes,]

+

This list is populated during the object initialisation.

is_missing
@@ -212,20 +169,15 @@

Public fields -

Methods -

+

Methods

-

-
-
-

Method get_data() -

+


+

Method get_data()

Returns a data.frame based upon required subject IDs. Replaces missing -values with new ones if provided.

-
-

Usage -

-

-
-
longDataConstructor$get_data(
+values with new ones if provided.

+

Usage

+

longDataConstructor$get_data(
   obj = NULL,
   nmar.rm = FALSE,
   na.rm = FALSE,
   idmap = FALSE
-)
-

-
+)

-

Arguments -

-

-
-
-
obj
+

Arguments

+

obj

Either NULL, a character vector of subject IDs or an imputation list object. See details.

@@ -286,13 +223,10 @@

Arguments -

Details -

+

Details

If obj is NULL then the full original dataset is returned.

If obj is a character vector then a new dataset consisting of just those subjects is returned; if the character vector contains duplicate entries then that subject will be @@ -300,16 +234,12 @@

Detailsimputation_df()) then the subject ids specified in the object will be returned and missing values will be filled in by those specified in the imputation list object. i.e.

-

-
-
obj <- imputation_df(
+

obj <- imputation_df(
   imputation_single( id = "pt1", values = c(1,2,3)),
   imputation_single( id = "pt1", values = c(4,5,6)),
   imputation_single( id = "pt3", values = c(7,8))
 )
-longdata$get_data(obj)
-

-
+longdata$get_data(obj)

Will return a data.frame consisting of all observations for pt1 twice and all of the observations for pt3 once. The first set of observations for pt1 will have missing values filled in with c(1,2,3) and the second set will be filled in by c(4,5,6). The @@ -322,198 +252,107 @@

Details -

Returns -

+

Returns

A data.frame.

-
-

-
-
-

Method add_subject() -

+


+

Method add_subject()

This function decomposes a patient data from self$data and populates all the corresponding lists i.e. self$is_missing, self$values, self$group, etc. -This function is only called upon the objects initialization.

-
-

Usage -

-

-
-
longDataConstructor$add_subject(id)
-

-
+This function is only called upon the objects initialization.

+

Usage

+

longDataConstructor$add_subject(id)

-

Arguments -

-

-
-
-
id
+

Arguments

+

id

Character subject id that exists within self$data.

-
-

-
+

-
-

-
-
-

Method validate_ids() -

-

Throws an error if any element of ids is not within the source data self$data.

-
-

Usage -

-

-
-
longDataConstructor$validate_ids(ids)
-

-
+


+

Method validate_ids()

+

Throws an error if any element of ids is not within the source data self$data.

+

Usage

+

longDataConstructor$validate_ids(ids)

-

Arguments -

-

-
-
-
ids
+

Arguments

+

ids

A character vector of ids.

-
-

-
+

-

Returns -

+

Returns

TRUE

-
-

-
-
-

Method sample_ids() -

+


+

Method sample_ids()

Performs random stratified sampling of patient ids (with replacement). Each patient has an equal weight of being picked within their strata (i.e. is not dependent on -how many non-missing visits they had).

-
-

Usage -

-

-
-
longDataConstructor$sample_ids()
-

-
+how many non-missing visits they had).

+

Usage

+

longDataConstructor$sample_ids()

-

Returns -

+

Returns

Character vector of ids.

-
-

-
-
-

Method extract_by_id() -

+


+

Method extract_by_id()

Returns a list of key information for a given subject. It is a convenience wrapper -to save having to manually grab each element.

-
-

Usage -

-

-
-
longDataConstructor$extract_by_id(id)
-

-
+to save having to manually grab each element.

+

Usage

+

longDataConstructor$extract_by_id(id)

-

Arguments -

-

-
-
-
id
+

Arguments

+

id

Character subject id that exists within self$data.

-
-

-
+

-
-

-
-
-

Method update_strategies() -

+


+

Method update_strategies()

Convenience function to run self$set_strategies(dat_ice, update=TRUE) -kept for legacy reasons.

-
-

Usage -

-

-
-
longDataConstructor$update_strategies(dat_ice)
-

-
+kept for legacy reasons.

+

Usage

+

longDataConstructor$update_strategies(dat_ice)

-

Arguments -

-

-
-
-
dat_ice
+

Arguments

+

dat_ice

A data.frame containing ICE information see impute() for the format of this dataframe.

-
-

-
+

-
-

-
-
-

Method set_strategies() -

+


+

Method set_strategies()

Updates the self$strategies, self$is_mar, self$is_post_ice variables based upon the provided ICE -information.

-
-

Usage -

-

-
-
longDataConstructor$set_strategies(dat_ice = NULL, update = FALSE)
-

-
+information.

+

Usage

+

longDataConstructor$set_strategies(dat_ice = NULL, update = FALSE)

-

Arguments -

-

-
-
-
dat_ice
+

Arguments

+

dat_ice

a data.frame containing ICE information. See details.

@@ -521,13 +360,10 @@

Arguments -

Details -

+

Details

See draws() for the specification of dat_ice if update=FALSE. See impute() for the format of dat_ice if update=TRUE. If update=TRUE this function ensures that MAR strategies cannot be changed to non-MAR in the presence @@ -535,70 +371,36 @@

Details
-

Method check_has_data_at_each_visit() -

+


+

Method check_has_data_at_each_visit()

Ensures that all visits have at least 1 observed "MAR" observation. Throws an error if this criterion is not met. This is to ensure that the initial -MMRM can be resolved.

-
-

Usage -

-

-
-
longDataConstructor$check_has_data_at_each_visit()
-

-
+MMRM can be resolved.

+

Usage

+

longDataConstructor$check_has_data_at_each_visit()

-
-

-
-
-

Method set_strata() -

+


+

Method set_strata()

Populates the self$strata variable. If the user has specified stratification variables, the first visit is used to determine the value of those variables. If no stratification variables -have been specified then everyone is defined as being in strata 1.

-
-

Usage -

-

-
-
longDataConstructor$set_strata()
-

-
+have been specified then everyone is defined as being in strata 1.

+

Usage

+

longDataConstructor$set_strata()

-
-

-
-
-

Method new() -

-

Constructor function.

-
-

Usage -

-

-
-
longDataConstructor$new(data, vars)
-

-
+


+

Method new()

+

Constructor function.

+

Usage

+

longDataConstructor$new(data, vars)

-

Arguments -

-

-
-
-
data
+

Arguments

+

data

longitudinal dataset.

@@ -606,41 +408,23 @@

Argumentsset_vars().

-

-

-
+

-
-

-
-
-

Method clone() -

-

The objects of this class are cloneable with this method.

-
-

Usage -

-

-
-
longDataConstructor$clone(deep = FALSE)
-

-
+


+

Method clone()

+

The objects of this class are cloneable with this method.

+

Usage

+

longDataConstructor$clone(deep = FALSE)

-

Arguments -

-

-
-
-
deep
+

Arguments

+

deep

Whether to make a deep clone.

-
-

-
+

@@ -648,8 +432,7 @@

Arguments -

+
-
+
+ + - - diff --git a/main/reference/ls_design.html b/main/reference/ls_design.html index b36cc428..74b2c1f3 100644 --- a/main/reference/ls_design.html +++ b/main/reference/ls_design.html @@ -1,30 +1,15 @@ - - - - - - -Calculate design vector for the lsmeans — ls_design • rbmi - - - - - -Calculate design vector for the lsmeans — ls_design • rbmi - - - - +in the actual dataset."> Skip to contents @@ -40,38 +25,21 @@ +

@@ -93,8 +61,7 @@
-

Usage -

+

Usage

ls_design_equal(data, frm, fix)
 
 ls_design_counterfactual(data, frm, fix)
@@ -103,31 +70,24 @@ 

Usage

-

Arguments -

+

Arguments

-
-
data -
+
data

A data.frame

-
frm -
+
frm

Formula used to fit the original model

-
fix -
+
fix

A named list of variables with fixed values

-
-
+
-
+
-
- + + + - - diff --git a/main/reference/lsmeans.html b/main/reference/lsmeans.html index 5f238ac9..8cc074cd 100644 --- a/main/reference/lsmeans.html +++ b/main/reference/lsmeans.html @@ -1,24 +1,9 @@ - - - - - - -Least Square Means — lsmeans • rbmi - - - - - -Least Square Means — lsmeans • rbmi - - - - +information."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,8 +52,7 @@
-

Usage -

+

Usage

lsmeans(
   model,
   ...,
@@ -94,40 +61,32 @@ 

Usage

-

Arguments -

+

Arguments

-
-
model -
+
model

A model created by lm.

-
... -
+
...

Fixes specific variables to specific values i.e. trt = 1 or age = 50. The name of the argument must be the name of the variable within the dataset.

-
.weights -
+
.weights

Character, either "counterfactual" (default), "equal", "proportional_em" or "proportional". Specifies the weighting strategy to be used when calculating the lsmeans. See the weighting section for more details.

-
-
+
-

Weighting -

+

Weighting

-

Counterfactual -

+

Counterfactual

For weights = "counterfactual" (the default) the lsmeans are obtained by @@ -135,11 +94,7 @@

Counterfactualemmeans this approach is equivalent to:

-

-
-
emmeans::emmeans(model, specs = "<treatment>", counterfactual = "<treatment>")
-

-
+

emmeans::emmeans(model, specs = "<treatment>", counterfactual = "<treatment>")

Note that to ensure backwards compatibility with previous versions of rbmi weights = "proportional" is an alias for weights = "counterfactual". To get results consistent with emmeans's weights = "proportional" @@ -147,49 +102,35 @@

Counterfactual
-

Equal -

+

Equal

For weights = "equal" the lsmeans are obtained by taking the model fitted -value of a hypothetical patient whose covariates are defined as follows:

-
    -
  • Continuous covariates are set to mean(X)

  • +value of a hypothetical patient whose covariates are defined as follows:

    • Continuous covariates are set to mean(X)

    • Dummy categorical variables are set to 1/N where N is the number of levels

    • Continuous * continuous interactions are set to mean(X) * mean(Y)

    • Continuous * categorical interactions are set to mean(X) * 1/N

    • Dummy categorical * categorical interactions are set to 1/N * 1/M

    • -
    -

    In comparison to emmeans this approach is equivalent to:

    -

    -
    -
    emmeans::emmeans(model, specs = "<treatment>", weights = "equal")
    -

    -
    +

In comparison to emmeans this approach is equivalent to:

+

emmeans::emmeans(model, specs = "<treatment>", weights = "equal")

-

Proportional -

+

Proportional

For weights = "proportional_em" the lsmeans are obtained as per weights = "equal" except instead of weighting each observation equally they are weighted by the proportion in which the given combination of categorical values occurred in the data. In comparison to emmeans this approach is equivalent to:

-

-
-
emmeans::emmeans(model, specs = "<treatment>", weights = "proportional")
-

-
+

emmeans::emmeans(model, specs = "<treatment>", weights = "proportional")

Note that this is not to be confused with weights = "proportional" which is an alias for weights = "counterfactual".

-

Fixing -

+

Fixing

@@ -201,15 +142,13 @@

Fixing in R via the emmeans package.

-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 mod <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
 lsmeans(mod)
@@ -220,8 +159,7 @@ 

Examples -

+
-
+ + + - - diff --git a/main/reference/make_rbmi_cluster.html b/main/reference/make_rbmi_cluster.html index f37f60d6..be4932c2 100644 --- a/main/reference/make_rbmi_cluster.html +++ b/main/reference/make_rbmi_cluster.html @@ -1,20 +1,5 @@ - - - - - - -Create a rbmi ready cluster — make_rbmi_cluster • rbmi - - - - - - - - - - + +Create a rbmi ready cluster — make_rbmi_cluster • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,45 +46,35 @@
-

Usage -

+

Usage

make_rbmi_cluster(ncores = 1, objects = NULL, packages = NULL)
-

Arguments -

+

Arguments

-
-
ncores -
+
ncores

Number of parallel processes to use or an existing cluster to make use of

-
objects -
+
objects

a named list of objects to export into the sub-processes

-
packages -
-
-

a character vector of libraries to load in the sub-processes

+
packages
+

a character vector of libraries to load in the sub-processes

This function is a wrapper around parallel::makePSOCKcluster() but takes care of configuring rbmi to be used in the sub-processes as well as loading user defined objects and libraries and setting the seed for reproducibility.

If ncores is 1 this function will return NULL.

If ncores is a cluster created via parallel::makeCluster() then this function -just takes care of inserting the relevant rbmi objects into the existing cluster.

-
+just takes care of inserting the relevant rbmi objects into the existing cluster.

-
-
+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 # Basic usage
 make_rbmi_cluster(5)
@@ -135,8 +93,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/method.html b/main/reference/method.html index f1dbd8d5..967c6ca4 100644 --- a/main/reference/method.html +++ b/main/reference/method.html @@ -1,22 +1,7 @@ - - - - - - -Set the multiple imputation methodology — method • rbmi - - - - - - - - - - + +Set the multiple imputation methodology — method • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

method_bayes(
   burn_in = 200,
   burn_between = 50,
@@ -119,88 +86,73 @@ 

Usage

-

Arguments -

+

Arguments

-
-
burn_in -
+
burn_in

a numeric that specifies how many observations should be discarded prior to extracting actual samples. Note that the sampler is initialized at the maximum likelihood estimates and a weakly informative prior is used; thus, in theory, this value should not need to be that high.

-
burn_between -
+
burn_between

a numeric that specifies the "thinning" rate i.e. how many observations should be discarded between each sample. This is used to prevent issues associated with autocorrelation between the samples.

-
same_cov -
+
same_cov

a logical, if TRUE the imputation model will be fitted using a single shared covariance matrix for all observations. If FALSE a separate covariance matrix will be fit for each group as determined by the group argument of set_vars().

-
n_samples -
+
n_samples

a numeric that determines how many imputed datasets are generated. In the case of method_condmean(type = "jackknife") this argument must be set to NULL. See details.

-
seed -
+
seed

deprecated. Please use set.seed() instead.

-
covariance -
+
covariance

a character string that specifies the structure of the covariance matrix to be used in the imputation model. Must be one of "us" (default), "ad", "adh", "ar1", "ar1h", "cs", "csh", "toep", or "toeph"). See details.

-
threshold -
+
threshold

a numeric between 0 and 1, specifies the proportion of bootstrap datasets that can fail to produce valid samples before an error is thrown. See details.

-
REML -
+
REML

a logical indicating whether to use REML estimation rather than maximum likelihood.

-
type -
+
type

a character string that specifies the resampling method used to perform inference when a conditional mean imputation approach (set via method_condmean()) is used. Must be one of "bootstrap" or "jackknife".

-
B -
+
B

a numeric that determines the number of bootstrap samples for method_bmlmi.

-
D -
+
D

a numeric that determines the number of random imputations for each bootstrap sample. Needed for method_bmlmi().

-
-
+
-

Details -

+

Details

In the case of method_condmean(type = "bootstrap") there will be n_samples + 1 imputation models and datasets generated as the first sample will be based on the original dataset whilst the other n_samples samples will be @@ -208,9 +160,7 @@

Detailslength(unique(data$subjid)) + 1 imputation models and datasets generated. In both cases this is represented by n + 1 being displayed in the print message.

The user is able to specify different covariance structures using the covariance -argument. Currently supported structures include:

-

For full details please see mmrm::cov_types().

Note that at present Bayesian methods only support unstructured.

In the case of method_condmean(type = "bootstrap"), method_approxbayes() and method_bmlmi() repeated bootstrap samples of the original dataset are taken with an MMRM fitted to each sample. @@ -242,8 +191,7 @@

Details -

+ - + + + - - diff --git a/main/reference/par_lapply.html b/main/reference/par_lapply.html index 657e53da..3d454be6 100644 --- a/main/reference/par_lapply.html +++ b/main/reference/par_lapply.html @@ -1,22 +1,7 @@ - - - - - - -Parallelise Lapply — par_lapply • rbmi - - - - - - - - - - + +Parallelise Lapply — par_lapply • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,42 +49,33 @@
-

Usage -

+

Usage

par_lapply(cl, fun, x, ...)
-

Arguments -

+

Arguments

-
-
cl -
+
cl

Cluster created by parallel::makeCluster() or NULL

-
fun -
+
fun

Function to be run

-
x -
+
x

object to be looped over

-
... -
+
...

extra arguments passed to fun

-
-
+ - +
-
- + + + - - diff --git a/main/reference/parametric_ci.html b/main/reference/parametric_ci.html index 0c30ebad..6e24a1a5 100644 --- a/main/reference/parametric_ci.html +++ b/main/reference/parametric_ci.html @@ -1,22 +1,7 @@ - - - - - - -Calculate parametric confidence intervals — parametric_ci • rbmi - - - - - - - - - - + +Calculate parametric confidence intervals — parametric_ci • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,59 +49,47 @@
-

Usage -

+

Usage

parametric_ci(point, se, alpha, alternative, qfun, pfun, ...)
-

Arguments -

+

Arguments

-
-
point -
+
point

The point estimate.

-
se -
+
se

The standard error of the point estimate. If using a non-"normal" distribution this should be set to 1.

-
alpha -
+
alpha

The type 1 error rate, should be a value between 0 and 1.

-
alternative -
+
alternative

a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less".

-
qfun -
+
qfun

The quantile function for the assumed distribution i.e. qnorm.

-
pfun -
+
pfun

The CDF function for the assumed distribution i.e. pnorm.

-
... -
+
...

additional arguments passed on to qfun and pfun i.e. df = 102.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/pool.html b/main/reference/pool.html index 1a055e53..70b98473 100644 --- a/main/reference/pool.html +++ b/main/reference/pool.html @@ -1,20 +1,5 @@ - - - - - - -Pool analysis results obtained from the imputed datasets — pool • rbmi - - - - - - - - - - + +Pool analysis results obtained from the imputed datasets — pool • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,8 +46,7 @@
-

Usage -

+

Usage

pool(
   results,
   conf.level = 0.95,
@@ -95,30 +62,24 @@ 

Usage

-

Arguments -

+

Arguments

-
-
results -
+
results

an analysis object created by analyse().

-
conf.level -
+
conf.level

confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95.

-
alternative -
+
alternative

a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less".

-
type -
+
type

a character string of either "percentile" (default) or "normal". Determines what method should be used to calculate the bootstrap confidence intervals. See details. @@ -126,25 +87,19 @@

Argumentsdraws().

-
x -
+
x

a pool object generated by pool().

-
... -
+
...

not used.

-
-
+
-

Details -

+

Details

The calculation used to generate the point estimate, standard errors and confidence interval depends upon the method specified in the original -call to draws(); In particular:

-
+
-

References -

+

References

Bradley Efron and Robert J Tibshirani. An introduction to the bootstrap. CRC press, 1994. [Section 11]

Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing @@ -169,8 +122,7 @@

References -

+ - + + + - - diff --git a/main/reference/pool_bootstrap_normal.html b/main/reference/pool_bootstrap_normal.html index 163c29cc..b77eb45f 100644 --- a/main/reference/pool_bootstrap_normal.html +++ b/main/reference/pool_bootstrap_normal.html @@ -1,22 +1,7 @@ - - - - - - -Bootstrap Pooling via normal approximation — pool_bootstrap_normal • rbmi - - - - - - - - - - + +Bootstrap Pooling via normal approximation — pool_bootstrap_normal • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,45 +49,36 @@
-

Usage -

+

Usage

pool_bootstrap_normal(est, conf.level, alternative)
-

Arguments -

+

Arguments

-
-
est -
+
est

a numeric vector of point estimates from each bootstrap sample.

-
conf.level -
+
conf.level

confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95.

-
alternative -
+
alternative

a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less".

-
-
+
-

Details -

+

Details

The point estimate is taken to be the first element of est. The remaining n-1 values of est are then used to generate the confidence intervals.

- + - + + + - - diff --git a/main/reference/pool_bootstrap_percentile.html b/main/reference/pool_bootstrap_percentile.html index ac9402db..82d1764d 100644 --- a/main/reference/pool_bootstrap_percentile.html +++ b/main/reference/pool_bootstrap_percentile.html @@ -1,24 +1,9 @@ - - - - - - -Bootstrap Pooling via Percentiles — pool_bootstrap_percentile • rbmi - - - - - - - - - - +see stats::quantile() for details.'> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,45 +52,36 @@
-

Usage -

+

Usage

pool_bootstrap_percentile(est, conf.level, alternative)
-

Arguments -

+

Arguments

-
-
est -
+
est

a numeric vector of point estimates from each bootstrap sample.

-
conf.level -
+
conf.level

confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95.

-
alternative -
+
alternative

a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less".

-
-
+
-

Details -

+

Details

The point estimate is taken to be the first element of est. The remaining n-1 values of est are then used to generate the confidence intervals.

- + - + + + - - diff --git a/main/reference/pool_internal.html b/main/reference/pool_internal.html index 3860f9f0..298fd6b1 100644 --- a/main/reference/pool_internal.html +++ b/main/reference/pool_internal.html @@ -1,22 +1,7 @@ - - - - - - -Internal Pool Methods — pool_internal • rbmi - - - - - - - - - - + +Internal Pool Methods — pool_internal • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

pool_internal(results, conf.level, alternative, type, D)
 
 # S3 method for class 'jackknife'
@@ -105,31 +72,25 @@ 

Usage

-

Arguments -

+

Arguments

-
-
results -
+
results

a list of results i.e. the x$results element of an analyse object created by analyse()).

-
conf.level -
+
conf.level

confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95.

-
alternative -
+
alternative

a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less".

-
type -
+
type

a character string of either "percentile" (default) or "normal". Determines what method should be used to calculate the bootstrap confidence intervals. See details. @@ -137,16 +98,13 @@

Argumentsdraws().

-
D -
+
D

numeric representing the number of imputations between each bootstrap sample in the BMLMI method.

-
-
+
- +
-
- + + + - - diff --git a/main/reference/prepare_stan_data.html b/main/reference/prepare_stan_data.html index 7da64886..1e9b63e6 100644 --- a/main/reference/prepare_stan_data.html +++ b/main/reference/prepare_stan_data.html @@ -1,22 +1,7 @@ - - - - - - -Prepare input data to run the Stan model — prepare_stan_data • rbmi - - - - - - - - - - + +Prepare input data to run the Stan model — prepare_stan_data • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,49 +49,37 @@
-

Usage -

+

Usage

prepare_stan_data(ddat, subjid, visit, outcome, group)
-

Arguments -

+

Arguments

-
-
ddat -
+
ddat

A design matrix

-
subjid -
+
subjid

Character vector containing the subjects IDs.

-
visit -
+
visit

Vector containing the visits.

-
outcome -
+
outcome

Numeric vector containing the outcome variable.

-
group -
+
group

Vector containing the group variable.

-
-
+
-

Value -

-

A stan_data object. A named list as per data{} block of the related Stan file. In particular it returns:

-
    -
  • N - The number of rows in the design matrix

  • +

    Value

    +

    A stan_data object. A named list as per data{} block of the related Stan file. In particular it returns:

    • N - The number of rows in the design matrix

    • P - The number of columns in the design matrix

    • G - The number of distinct covariance matrix groups (i.e. length(unique(group)))

    • n_visit - The number of unique outcome visits

    • @@ -135,21 +91,16 @@

      Value
    • y - The outcome variable

    • Q - design matrix (after QR decomposition)

    • R - R matrix from the QR decomposition of the design matrix

    • -

    -
+
-

Details -

+

Details

-
    -
  • The group argument determines which covariance matrix group the subject belongs to. If you +

    • The group argument determines which covariance matrix group the subject belongs to. If you want all subjects to use a shared covariance matrix then set group to "1" for everyone.

    • -
    -
+ - + - + + + - - diff --git a/main/reference/print.analysis.html b/main/reference/print.analysis.html index fb6fecc5..24a658ed 100644 --- a/main/reference/print.analysis.html +++ b/main/reference/print.analysis.html @@ -1,20 +1,5 @@ - - - - - - -Print analysis object — print.analysis • rbmi - - - - - - - - - - + +Print analysis object — print.analysis • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'analysis'
 print(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

An analysis object generated by analyse().

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/print.draws.html b/main/reference/print.draws.html index f057bf89..bb2d5a37 100644 --- a/main/reference/print.draws.html +++ b/main/reference/print.draws.html @@ -1,20 +1,5 @@ - - - - - - -Print draws object — print.draws • rbmi - - - - - - - - - - + +Print draws object — print.draws • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'draws'
 print(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

A draws object generated by draws().

-
... -
+
...

not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/print.imputation.html b/main/reference/print.imputation.html index 4954b02f..d64b11f8 100644 --- a/main/reference/print.imputation.html +++ b/main/reference/print.imputation.html @@ -1,20 +1,5 @@ - - - - - - -Print imputation object — print.imputation • rbmi - - - - - - - - - - + +Print imputation object — print.imputation • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'imputation'
 print(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

An imputation object generated by impute().

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/progressLogger.html b/main/reference/progressLogger.html index 6b1b8219..12c12421 100644 --- a/main/reference/progressLogger.html +++ b/main/reference/progressLogger.html @@ -1,28 +1,13 @@ - - - - - - -R6 Class for printing current sampling progress — progressLogger • rbmi - - - - - -R6 Class for printing current sampling progress — progressLogger • rbmi - - - - +Use the quiet argument to prevent the object from printing anything at all"> Skip to contents @@ -38,38 +23,21 @@ + @@ -91,12 +59,8 @@
-

Public fields -

-

-
-
-
step
+

Public fields

+

step

real, percentage of iterations to allow before printing the progress to the console

@@ -119,48 +83,28 @@

Public fields -

Methods -

+

Methods

-

-
-
-

Method new() -

-

Create progressLogger object

-
-

Usage -

-

-
-
progressLogger$new(n_max, quiet = FALSE, step = 0.1)
-

-
+


+

Method new()

+

Create progressLogger object

+

Usage

+

progressLogger$new(n_max, quiet = FALSE, step = 0.1)

-

Arguments -

-

-
-
-
n_max
+

Arguments

+

n_max

integer, sets field n_max

@@ -172,94 +116,50 @@

Arguments
-

Method add() -

+


+

Method add()

Records that n more iterations have been completed this will add that number to the current step count (step_current) and will print a progress message to the log if the step limit (step) has been reached. -This function will do nothing if quiet has been set to TRUE

-
-

Usage -

-

-
-
progressLogger$add(n)
-

-
+This function will do nothing if quiet has been set to TRUE

+

Usage

+

progressLogger$add(n)

-

Arguments -

-

-
-
-
n
+

Arguments

+

n

the number of successfully complete iterations since add() was last called

-
-

-
+

-
-

-
-
-

Method print_progress() -

-

method to print the current state of progress

-
-

Usage -

-

-
-
progressLogger$print_progress()
-

-
+


+

Method print_progress()

+

method to print the current state of progress

+

Usage

+

progressLogger$print_progress()

-
-

-
-
-

Method clone() -

-

The objects of this class are cloneable with this method.

-
-

Usage -

-

-
-
progressLogger$clone(deep = FALSE)
-

-
+


+

Method clone()

+

The objects of this class are cloneable with this method.

+

Usage

+

progressLogger$clone(deep = FALSE)

-

Arguments -

-

-
-
-
deep
+

Arguments

+

deep

Whether to make a deep clone.

-
-

-
+

@@ -267,8 +167,7 @@

Arguments -

+
-
+

+ + - - diff --git a/main/reference/pval_percentile.html b/main/reference/pval_percentile.html index 38dcf04d..281911b4 100644 --- a/main/reference/pval_percentile.html +++ b/main/reference/pval_percentile.html @@ -1,22 +1,7 @@ - - - - - - -P-value of percentile bootstrap — pval_percentile • rbmi - - - - - - - - - - + +P-value of percentile bootstrap — pval_percentile • rbmi Skip to contents @@ -32,38 +17,21 @@ +
@@ -81,32 +49,25 @@
-

Usage -

+

Usage

pval_percentile(est)
-

Arguments -

+

Arguments

-
-
est -
+
est

a numeric vector of point estimates from each bootstrap sample.

-
-
+
-

Value -

+

Value

A named numeric vector of length 2 containing the p-value for H_0: theta=0 vs H_A: theta>0 ("pval_greater") and the p-value for H_0: theta=0 vs H_A: theta<0 ("pval_less").

-

Details -

+

Details

The p-value for H_0: theta=0 vs H_A: theta>0 is the value alpha for which q_alpha = 0. If there is at least one estimate equal to zero it returns the largest alpha such that q_alpha = 0. If all bootstrap estimates are > 0 it returns 0; if all bootstrap estimates are < 0 it returns 1. Analogous @@ -114,8 +75,7 @@

Details -

+
-
+ + + - - diff --git a/main/reference/random_effects_expr.html b/main/reference/random_effects_expr.html index e312dbb2..615089c1 100644 --- a/main/reference/random_effects_expr.html +++ b/main/reference/random_effects_expr.html @@ -1,22 +1,7 @@ - - - - - - -Construct random effects formula — random_effects_expr • rbmi - - - - - - - - - - + +Construct random effects formula — random_effects_expr • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

random_effects_expr(
   cov_struct = c("us", "ad", "adh", "ar1", "ar1h", "cs", "csh", "toep", "toeph"),
   cov_by_group = FALSE
@@ -90,45 +57,30 @@ 

Usage

-

Arguments -

+

Arguments

-
-
cov_struct -
+
cov_struct

Character - The covariance structure to be used, must be one of "us" (default), "ad", "adh", "ar1", "ar1h", "cs", "csh", "toep", or "toeph")

-
cov_by_group -
+
cov_by_group

Boolean - Whenever or not to use separate covariances per each group level

-
-
+
-

Details -

+

Details

For example assuming the user specified a covariance structure of "us" and that no groups were provided this will return

-

-
-
us(visit | subjid)
-

-
+

us(visit | subjid)

If cov_by_group is set to FALSE then this indicates that separate covariance matrices are required per group and as such the following will be returned:

-

-
-
us( visit | group / subjid )
-

-
+

us( visit | group / subjid )

- + - + + + - - diff --git a/main/reference/rbmi-package.html b/main/reference/rbmi-package.html index 58000cf5..f704218b 100644 --- a/main/reference/rbmi-package.html +++ b/main/reference/rbmi-package.html @@ -1,16 +1,5 @@ - - - - - - -rbmi: Reference Based Multiple Imputation — rbmi-package • rbmi - - - - - - - - - - +vignette(topic= "quickstart", package = "rbmi")'> Skip to contents @@ -52,38 +37,21 @@ + @@ -99,49 +67,33 @@

The rbmi package is used to perform reference based multiple imputation. The package provides implementations for common, patient-specific imputation strategies whilst allowing the user to select between various standard Bayesian and frequentist approaches.

-

The package is designed around 4 core functions:

-
    -
  • draws() - Fits multiple imputation models

  • +

    The package is designed around 4 core functions:

    • draws() - Fits multiple imputation models

    • impute() - Imputes multiple datasets

    • analyse() - Analyses multiple datasets

    • pool() - Pools multiple results into a single statistic

    • -
    -

    To learn more about rbmi, please see the quickstart vignette:

    +

To learn more about rbmi, please see the quickstart vignette:

vignette(topic= "quickstart", package = "rbmi")

-

Author -

+

Author

Maintainer: Craig Gower-Page craig.gower-page@roche.com

-

Authors:

-
+ - +
-
- + + + - - diff --git a/main/reference/rbmi-settings.html b/main/reference/rbmi-settings.html index da5b66f7..470e8cde 100644 --- a/main/reference/rbmi-settings.html +++ b/main/reference/rbmi-settings.html @@ -1,16 +1,5 @@ - - - - - - -rbmi settings — rbmi-settings • rbmi - - - - - - - - - - +'> Skip to contents @@ -52,38 +37,21 @@ + @@ -98,15 +66,8 @@

Define settings that modify the behaviour of the rbmi package

Each of the following are the name of options that can be set via:

-

-
-
options(<option_name> = <value>)
-

-
-
-

-rbmi.cache_dir -

+

options(<option_name> = <value>)

+

rbmi.cache_dir

Default = tools::R_user_dir("rbmi", which = "cache")

@@ -117,23 +78,20 @@

-

Usage -

+

Usage

set_options()
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 options(rbmi.cache_dir = "some/directory/path")
 } # }
 
-
+
-
- + + + - - diff --git a/main/reference/record.html b/main/reference/record.html index f5951913..3d125eef 100644 --- a/main/reference/record.html +++ b/main/reference/record.html @@ -1,24 +1,9 @@ - - - - - - -Capture all Output — record • rbmi - - - - - -Capture all Output — record • rbmi - - - - +character vectors."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,38 +52,28 @@
-

Usage -

+

Usage

record(expr)
-

Arguments -

+

Arguments

-
-
expr -
+
expr

An expression to be executed

-
-
+
-

Value -

-

A list containing

-
    -
  • results - The object returned by expr or list() if an error was thrown

  • +

    Value

    +

    A list containing

    • results - The object returned by expr or list() if an error was thrown

    • warnings - NULL or a character vector if warnings were thrown

    • errors - NULL or a string if an error was thrown

    • messages - NULL or a character vector if messages were produced

    • -
    -
+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 record({
   x <- 1
@@ -128,8 +86,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/recursive_reduce.html b/main/reference/recursive_reduce.html index 4e6dc5f7..560c9898 100644 --- a/main/reference/recursive_reduce.html +++ b/main/reference/recursive_reduce.html @@ -1,22 +1,7 @@ - - - - - - -recursive_reduce — recursive_reduce • rbmi - - - - - - - - - - + +recursive_reduce — recursive_reduce • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,33 +49,26 @@
-

Usage -

+

Usage

recursive_reduce(.l, .f)
-

Arguments -

+

Arguments

-
-
.l -
+
.l

list of values to apply a function to

-
.f -
+
.f

function to apply to each each element of the list in turn i.e. .l[[1]] <- .f( .l[[1]] , .l[[2]]) ; .l[[1]] <- .f( .l[[1]] , .l[[3]])

-
-
+ - +
-
- + + + - - diff --git a/main/reference/remove_if_all_missing.html b/main/reference/remove_if_all_missing.html index 643a8fcc..00d92c87 100644 --- a/main/reference/remove_if_all_missing.html +++ b/main/reference/remove_if_all_missing.html @@ -1,24 +1,9 @@ - - - - - - -Remove subjects from dataset if they have no observed values — remove_if_all_missing • rbmi - - - - - -Remove subjects from dataset if they have no observed values — remove_if_all_missing • rbmi - - - - +values for outcome."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,27 +52,21 @@
-

Usage -

+

Usage

remove_if_all_missing(dat)
-

Arguments -

+

Arguments

-
-
dat -
+
dat

a data.frame

-
-
+ - +
-
- + + + - - diff --git a/main/reference/rubin_df.html b/main/reference/rubin_df.html index 1188cabc..a0973a51 100644 --- a/main/reference/rubin_df.html +++ b/main/reference/rubin_df.html @@ -1,20 +1,5 @@ - - - - - - -Barnard and Rubin degrees of freedom adjustment — rubin_df • rbmi - - - - - - - - - - + +Barnard and Rubin degrees of freedom adjustment — rubin_df • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,60 +46,48 @@
-

Usage -

+

Usage

rubin_df(v_com, var_b, var_t, M)
-

Arguments -

+

Arguments

-
-
v_com -
+
v_com

Positive number representing the degrees of freedom in the complete-data analysis.

-
var_b -
+
var_b

Between-variance of point estimate across multiply imputed datasets.

-
var_t -
+
var_t

Total-variance of point estimate according to Rubin's rules.

-
M -
+
M

Number of imputations.

-
-
+
-

Value -

+

Value

Degrees of freedom according to Barnard-Rubin formula. See Barnard-Rubin (1999).

-

Details -

+

Details

The computation takes into account limit cases where there is no missing data (i.e. the between-variance var_b is zero) or where the complete-data degrees of freedom is set to Inf. Moreover, if v_com is given as NA, the function returns Inf.

-

References -

+

References

Barnard, J. and Rubin, D.B. (1999). Small sample degrees of freedom with multiple imputation. Biometrika, 86, 948-955.

- +
-
- + + + - - diff --git a/main/reference/rubin_rules.html b/main/reference/rubin_rules.html index 11f82136..6d992687 100644 --- a/main/reference/rubin_rules.html +++ b/main/reference/rubin_rules.html @@ -1,20 +1,5 @@ - - - - - - -Combine estimates using Rubin's rules — rubin_rules • rbmi - - - - - - - - - - + +Combine estimates using Rubin's rules — rubin_rules • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,46 +46,34 @@
-

Usage -

+

Usage

rubin_rules(ests, ses, v_com)
-

Arguments -

+

Arguments

-
-
ests -
+
ests

Numeric vector containing the point estimates from the complete-data analyses.

-
ses -
+
ses

Numeric vector containing the standard errors from the complete-data analyses.

-
v_com -
+
v_com

Positive number representing the degrees of freedom in the complete-data analysis.

-
-
+
-

Value -

-

A list containing:

-
    -
  • est_point: the pooled point estimate according to Little-Rubin (2002).

  • +

    Value

    +

    A list containing:

    • est_point: the pooled point estimate according to Little-Rubin (2002).

    • var_t: total variance according to Little-Rubin (2002).

    • df: degrees of freedom according to Barnard-Rubin (1999).

    • -
    -
+
-

Details -

+

Details

rubin_rules applies Rubin's rules (Rubin, 1987) for pooling together the results from a multiple imputation procedure. The pooled point estimate est_point is is the average across the point estimates from the complete-data analyses (given by the input argument ests). @@ -127,22 +83,19 @@

Details

-

References -

+

References

Barnard, J. and Rubin, D.B. (1999). Small sample degrees of freedom with multiple imputation. Biometrika, 86, 948-955

Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing Data, Second Edition. John Wiley & Sons, Hoboken, New Jersey, 2002. [Section 5.4]

-

See also -

+

See also

rubin_df() for the degrees of freedom estimation.

- + - + + + - - diff --git a/main/reference/sample_ids.html b/main/reference/sample_ids.html index 6d0ba43d..f4a1f788 100644 --- a/main/reference/sample_ids.html +++ b/main/reference/sample_ids.html @@ -1,22 +1,7 @@ - - - - - - -Sample Patient Ids — sample_ids • rbmi - - - - - - - - - - + +Sample Patient Ids — sample_ids • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,41 +49,33 @@
-

Usage -

+

Usage

sample_ids(ids, strata = rep(1, length(ids)))
-

Arguments -

+

Arguments

-
-
ids -
+
ids

vector to sample from

-
strata -
+
strata

strata indicator, ids are sampled within each strata ensuring the that the numbers of each strata are maintained

-
-
+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 sample_ids( c("a", "b", "c", "d"), strata = c(1,1,2,2))
 } # }
 
- +
-
- + + + - - diff --git a/main/reference/sample_list.html b/main/reference/sample_list.html index 78268102..cabe7388 100644 --- a/main/reference/sample_list.html +++ b/main/reference/sample_list.html @@ -1,22 +1,7 @@ - - - - - - -Create and validate a sample_list object — sample_list • rbmi - - - - - - - - - - + +Create and validate a sample_list object — sample_list • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,27 +49,21 @@
-

Usage -

+

Usage

sample_list(...)
-

Arguments -

+

Arguments

-
-
... -
+
...

A list of sample_single objects.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/sample_mvnorm.html b/main/reference/sample_mvnorm.html index d15b8c60..f2217769 100644 --- a/main/reference/sample_mvnorm.html +++ b/main/reference/sample_mvnorm.html @@ -1,20 +1,5 @@ - - - - - - -Sample random values from the multivariate normal distribution — sample_mvnorm • rbmi - - - - - - - - - - + +Sample random values from the multivariate normal distribution — sample_mvnorm • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,38 +46,29 @@
-

Usage -

+

Usage

sample_mvnorm(mu, sigma)
-

Arguments -

+

Arguments

-
-
mu -
+
mu

mean vector

-
sigma -
-
-

covariance matrix

+
sigma
+

covariance matrix

Samples multivariate normal variables by multiplying univariate random normal variables by the cholesky decomposition of the covariance matrix.

-

If mu is length 1 then just uses rnorm instead.

-
+

If mu is length 1 then just uses rnorm instead.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/sample_single.html b/main/reference/sample_single.html index 7b8df36f..1e601376 100644 --- a/main/reference/sample_single.html +++ b/main/reference/sample_single.html @@ -1,22 +1,7 @@ - - - - - - -Create object of sample_single class — sample_single • rbmi - - - - - - - - - - + +Create object of sample_single class — sample_single • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

sample_single(
   ids,
   beta = NA,
@@ -94,59 +61,45 @@ 

Usage

-

Arguments -

+

Arguments

-
-
ids -
+
ids

Vector of characters containing the ids of the subjects included in the original dataset.

-
beta -
+
beta

Numeric vector of estimated regression coefficients.

-
sigma -
+
sigma

List of estimated covariance matrices (one for each level of vars$group).

-
theta -
+
theta

Numeric vector of transformed covariances.

-
failed -
+
failed

Logical. TRUE if the model fit failed.

-
ids_samp -
+
ids_samp

Vector of characters containing the ids of the subjects included in the given sample.

-
-
+
-

Value -

-

A named list of class sample_single. It contains the following:

-
    -
  • ids vector of characters containing the ids of the subjects included in the original dataset.

  • +

    Value

    +

    A named list of class sample_single. It contains the following:

    • ids vector of characters containing the ids of the subjects included in the original dataset.

    • beta numeric vector of estimated regression coefficients.

    • sigma list of estimated covariance matrices (one for each level of vars$group).

    • theta numeric vector of transformed covariances.

    • failed logical. TRUE if the model fit failed.

    • ids_samp vector of characters containing the ids of the subjects included in the given sample.

    • -
    -
+ - +
-
- + + + - - diff --git a/main/reference/scalerConstructor.html b/main/reference/scalerConstructor.html index e24d5c04..33ccc548 100644 --- a/main/reference/scalerConstructor.html +++ b/main/reference/scalerConstructor.html @@ -1,22 +1,7 @@ - - - - - - -R6 Class for scaling (and un-scaling) design matrices — scalerConstructor • rbmi - - - - - - - - - - + +R6 Class for scaling (and un-scaling) design matrices — scalerConstructor • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -82,8 +50,7 @@
-

Details -

+

Details

The object initialisation is used to determine the relevant mean and SD's to scale by and then the scaling (and un-scaling) itself is performed by the relevant object @@ -94,12 +61,8 @@

Details

-

Public fields -

-

-
-
-
centre
+

Public fields

+

centre

Vector of column means. The first value is the outcome variable, all other variables are the predictors.

@@ -109,213 +72,124 @@

Public fields -

Methods -

+

Methods

-

-
-
-

Method new() -

+


+

Method new()

Uses dat to determine the relevant column means and standard deviations to use when scaling and un-scaling future datasets. Implicitly assumes that new datasets -have the same column order as dat

-
-

Usage -

-

-
- -

-
+have the same column order as dat

+

Usage

+

-

Arguments -

-

-
-
-
dat
+

Arguments

+

dat

A data.frame or matrix. All columns must be numeric (i.e dummy variables, must have already been expanded out).

-
-

-
+

-

Details -

+

Details

Categorical columns (as determined by those who's values are entirely 1 or 0) will not be scaled. This is achieved by setting the corresponding values of centre to 0 and scale to 1.

-
-

-
-
-

Method scale() -

+


+

Method scale()

Scales a dataset so that all continuous variables have a mean of 0 and a -standard deviation of 1.

-
-

Usage -

-

-
-
scalerConstructor$scale(dat)
-

-
+standard deviation of 1.

+

Usage

+

scalerConstructor$scale(dat)

-

Arguments -

-

-
-
-
dat
+

Arguments

+

dat

A data.frame or matrix whose columns are all numeric (i.e. dummy variables have all been expanded out) and whose columns are in the same order as the dataset used in the initialization function.

-
-

-
+

-
-

-
-
-

Method unscale_sigma() -

+


+

Method unscale_sigma()

Unscales a sigma value (or matrix) as estimated by a linear model using a design matrix scaled by this object. This function only works if the first column of the initialisation data.frame was the outcome -variable.

-
-

Usage -

-

-
-
scalerConstructor$unscale_sigma(sigma)
-

-
+variable.

+

Usage

+

scalerConstructor$unscale_sigma(sigma)

-

Arguments -

-

-
-
-
sigma
+

Arguments

+

sigma

A numeric value or matrix.

-
-

-
+

-

Returns -

+

Returns

A numeric value or matrix

-
-

-
-
-

Method unscale_beta() -

+


+

Method unscale_beta()

Unscales a beta value (or vector) as estimated by a linear model using a design matrix scaled by this object. This function only works if the first column of the initialization data.frame was the outcome -variable.

-
-

Usage -

-

-
-
scalerConstructor$unscale_beta(beta)
-

-
+variable.

+

Usage

+

scalerConstructor$unscale_beta(beta)

-

Arguments -

-

-
-
-
beta
+

Arguments

+

beta

A numeric vector of beta coefficients as estimated from a linear model.

-
-

-
+

-

Returns -

+

Returns

A numeric vector.

-
-

-
-
-

Method clone() -

-

The objects of this class are cloneable with this method.

-
-

Usage -

-

-
-
scalerConstructor$clone(deep = FALSE)
-

-
+


+

Method clone()

+

The objects of this class are cloneable with this method.

+

Usage

+

scalerConstructor$clone(deep = FALSE)

-

Arguments -

-

-
-
-
deep
+

Arguments

+

deep

Whether to make a deep clone.

-
-

-
+

@@ -323,8 +197,7 @@

Arguments -

+
-
+
+ + - - diff --git a/main/reference/set_simul_pars.html b/main/reference/set_simul_pars.html index c2991db8..43892d1d 100644 --- a/main/reference/set_simul_pars.html +++ b/main/reference/set_simul_pars.html @@ -1,32 +1,17 @@ - - - - - - -Set simulation parameters of a study group. — set_simul_pars • rbmi - - - - - -Set simulation parameters of a study group. — set_simul_pars • rbmi - - - - +condition related (NSDRC) reasons and outcome data after ICE2 is always missing."> Skip to contents @@ -42,38 +27,21 @@ +
@@ -96,8 +64,7 @@
-

Usage -

+

Usage

set_simul_pars(
   mu,
   sigma,
@@ -111,50 +78,41 @@ 

Usage

-

Arguments -

+

Arguments

-
-
mu -
+
mu

Numeric vector describing the mean outcome trajectory at each visit (including baseline) assuming no ICEs.

-
sigma -
+
sigma

Covariance matrix of the outcome trajectory assuming no ICEs.

-
n -
+
n

Number of subjects belonging to the group.

-
prob_ice1 -
+
prob_ice1

Numeric vector that specifies the probability of experiencing ICE1 (discontinuation from study treatment due to SDCR reasons) after each visit for a subject with observed outcome at that visit equal to the mean at baseline (mu[1]). If a single numeric is provided, then the same probability is applied to each visit.

-
or_outcome_ice1 -
+
or_outcome_ice1

Numeric value that specifies the odds ratio of experiencing ICE1 after each visit corresponding to a +1 higher value of the observed outcome at that visit.

-
prob_post_ice1_dropout -
+
prob_post_ice1_dropout

Numeric value that specifies the probability of study drop-out following ICE1. If a subject is simulated to drop-out after ICE1, all outcomes after ICE1 are set to missing.

-
prob_ice2 -
+
prob_ice2

Numeric that specifies an additional probability that a post-baseline visit is affected by study drop-out. Outcome data at the subject's first simulated visit affected by study drop-out and all subsequent visits are set to missing. This generates @@ -166,33 +124,27 @@

Argumentsprob_miss - +
prob_miss

Numeric value that specifies an additional probability for a given post-baseline observation to be missing. This can be used to produce "intermittent" missing values which are not associated with any ICE.

-

-
+
-

Value -

+

Value

A simul_pars object which is a named list containing the simulation parameters.

-

Details -

+

Details

For the details, please see simulate_data().

-

See also -

+

See also

-
+ - + + + - - diff --git a/main/reference/set_vars.html b/main/reference/set_vars.html index 99b7dfab..b95ee7eb 100644 --- a/main/reference/set_vars.html +++ b/main/reference/set_vars.html @@ -1,22 +1,7 @@ - - - - - - -Set key variables — set_vars • rbmi - - - - - - - - - - + +Set key variables — set_vars • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

set_vars(
   subjid = "subjid",
   visit = "visit",
@@ -95,52 +62,41 @@ 

Usage

-

Arguments -

+

Arguments

-
-
subjid -
+
subjid

The name of the "Subject ID" variable. A length 1 character vector.

-
visit -
+
visit

The name of the "Visit" variable. A length 1 character vector.

-
outcome -
+
outcome

The name of the "Outcome" variable. A length 1 character vector.

-
group -
+
group

The name of the "Group" variable. A length 1 character vector.

-
covariates -
+
covariates

The name of any covariates to be used in the context of modeling. See details.

-
strata -
+
strata

The name of the any stratification variable to be used in the context of bootstrap sampling. See details.

-
strategy -
+
strategy

The name of the "strategy" variable. A length 1 character vector.

-
-
+
-

Details -

+

Details

In both draws() and ancova() the covariates argument can be specified to indicate which variables should be included in the imputation and analysis models respectively. If you wish to include interaction terms these need to be manually specified i.e. @@ -154,17 +110,13 @@

Detailsdata_ice data.frame. See draws() for more details.

-

See also -

- +

See also

+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 
 # Using CDISC variable names as an example
@@ -182,8 +134,7 @@ 

Examples -

+
- + + + - - diff --git a/main/reference/simulate_data.html b/main/reference/simulate_data.html index 16408178..bbf00dcc 100644 --- a/main/reference/simulate_data.html +++ b/main/reference/simulate_data.html @@ -1,32 +1,17 @@ - - - - - - -Generate data — simulate_data • rbmi - - - - - -Generate data — simulate_data • rbmi - - - - +condition related (NSDRC) reasons and outcome data after ICE2 is always missing."> Skip to contents @@ -42,38 +27,21 @@ + @@ -96,31 +64,25 @@
-

Usage -

+

Usage

simulate_data(pars_c, pars_t, post_ice1_traj, strategies = getStrategies())
-

Arguments -

+

Arguments

-
-
pars_c -
+
pars_c

A simul_pars object as generated by set_simul_pars(). It specifies the simulation parameters of the control arm.

-
pars_t -
+
pars_t

A simul_pars object as generated by set_simul_pars(). It specifies the simulation parameters of the treatment arm.

-
post_ice1_traj -
+
post_ice1_traj

A string which specifies how observed outcomes occurring after ICE1 are simulated. Must target a function included in strategies. Possible choices are: Missing At @@ -130,19 +92,14 @@

ArgumentsgetStrategies() for details.

-
strategies -
+
strategies

A named list of functions. Default equal to getStrategies(). See getStrategies() for details.

-
-
+
-

Value -

-

A data.frame containing the simulated data. It includes the following variables:

-
    -
  • id: Factor variable that specifies the id of each subject.

  • +

    Value

    +

    A data.frame containing the simulated data. It includes the following variables:

    • id: Factor variable that specifies the id of each subject.

    • visit: Factor variable that specifies the visit of each assessment. Visit 0 denotes the baseline visit.

    • group: Factor variable that specifies which treatment group each subject belongs to.

    • @@ -157,14 +114,10 @@

      Value by ICE2.

    • outcome: Numeric variable that specifies the longitudinal outcome including ICE1, ICE2 and the intermittent missing values.

    • -

    -
+
-

Details -

-

The data generation works as follows:

-

The probability of the ICE after each visit is modeled according to the following logistic regression model: -~ 1 + I(visit == 0) + ... + I(visit == n_visits-1) + I((x-alpha)) where:

-

Please note that the baseline outcome cannot be missing nor be affected by any ICEs.

- + - + + + - - diff --git a/main/reference/simulate_dropout.html b/main/reference/simulate_dropout.html index 9165eaac..190b1330 100644 --- a/main/reference/simulate_dropout.html +++ b/main/reference/simulate_dropout.html @@ -1,20 +1,5 @@ - - - - - - -Simulate drop-out — simulate_dropout • rbmi - - - - - - - - - - + +Simulate drop-out — simulate_dropout • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,47 +46,38 @@
-

Usage -

+

Usage

simulate_dropout(prob_dropout, ids, subset = rep(1, length(ids)))
-

Arguments -

+

Arguments

-
-
prob_dropout -
+
prob_dropout

Numeric that specifies the probability that a post-baseline visit is affected by study drop-out.

-
ids -
+
ids

Factor variable that specifies the id of each subject.

-
subset -
+
subset

Binary variable that specifies the subset that could be affected by drop-out. I.e. subset is a binary vector of length equal to the length of ids that takes value 1 if the corresponding visit could be affected by drop-out and 0 otherwise.

-
-
+
-

Value -

+

Value

A binary vector of length equal to the length of ids that takes value 1 if the corresponding outcome is affected by study drop-out.

-

Details -

+

Details

subset can be used to specify outcome values that cannot be affected by the drop-out. By default subset will be set to 1 for all the values except the values corresponding to the @@ -128,8 +87,7 @@

Details -

+ - + + + - - diff --git a/main/reference/simulate_ice.html b/main/reference/simulate_ice.html index 1df14839..aace4482 100644 --- a/main/reference/simulate_ice.html +++ b/main/reference/simulate_ice.html @@ -1,20 +1,5 @@ - - - - - - -Simulate intercurrent event — simulate_ice • rbmi - - - - - - - - - - + +Simulate intercurrent event — simulate_ice • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,65 +46,51 @@
-

Usage -

+

Usage

simulate_ice(outcome, visits, ids, prob_ice, or_outcome_ice, baseline_mean)
-

Arguments -

+

Arguments

-
-
outcome -
+
outcome

Numeric variable that specifies the longitudinal outcome for a single group.

-
visits -
+
visits

Factor variable that specifies the visit of each assessment.

-
ids -
+
ids

Factor variable that specifies the id of each subject.

-
prob_ice -
+
prob_ice

Numeric vector that specifies for each visit the probability of experiencing the ICE after the current visit for a subject with outcome equal to the mean at baseline. If a single numeric is provided, then the same probability is applied to each visit.

-
or_outcome_ice -
+
or_outcome_ice

Numeric value that specifies the odds ratio of the ICE corresponding to a +1 higher value of the outcome at the visit.

-
baseline_mean -
+
baseline_mean

Mean outcome value at baseline.

-
-
+
-

Value -

+

Value

A binary variable that takes value 1 if the corresponding outcome is affected by the ICE and 0 otherwise.

-

Details -

+

Details

The probability of the ICE after each visit is modeled according to the following logistic regression model: -~ 1 + I(visit == 0) + ... + I(visit == n_visits-1) + I((x-alpha)) where:

-
+ - + - + + + - - diff --git a/main/reference/simulate_test_data.html b/main/reference/simulate_test_data.html index 280b4830..4c10951c 100644 --- a/main/reference/simulate_test_data.html +++ b/main/reference/simulate_test_data.html @@ -1,22 +1,7 @@ - - - - - - -Create simulated datasets — simulate_test_data • rbmi - - - - - - - - - - + +Create simulated datasets — simulate_test_data • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,8 +49,7 @@
-

Usage -

+

Usage

simulate_test_data(
   n = 200,
   sd = c(3, 5, 7),
@@ -94,82 +61,58 @@ 

Usage

-

Arguments -

+

Arguments

-
-
n -
+
n

the number of subjects to sample. Total number of observations returned is thus n * length(sd)

-
sd -
+
sd

the standard deviations for the outcome at each visit. i.e. the square root of the diagonal of the covariance matrix for the outcome

-
cor -
+
cor

the correlation coefficients between the outcome values at each visit. See details.

-
mu -
+
mu

the coefficients to use to construct the mean outcome value at each visit. Must be a named list with elements int, age, sex, trt & visit. See details.

-
-
+
-

Details -

+

Details

The number of visits is determined by the size of the variance covariance matrix. i.e. if 3 standard deviation values are provided then 3 visits per patient will be created.

-

The covariates in the simulated dataset are produced as follows:

-
    -
  • Patients age is sampled at random from a N(0,1) distribution

  • +

    The covariates in the simulated dataset are produced as follows:

    • Patients age is sampled at random from a N(0,1) distribution

    • Patients sex is sampled at random with a 50/50 split

    • Patients group is sampled at random but fixed so that each group has n/2 patients

    • The outcome variable is sampled from a multivariate normal distribution, see below for details

    • -
    -

    The mean for the outcome variable is derived as:

    -

    -
    -
    outcome = Intercept + age + sex + visit + treatment
    -

    -
    +

The mean for the outcome variable is derived as:

+

outcome = Intercept + age + sex + visit + treatment

The coefficients for the intercept, age and sex are taken from mu$int, mu$age and mu$sex respectively, all of which must be a length 1 numeric.

Treatment and visit coefficients are taken from mu$trt and mu$visit respectively and must either be of length 1 (i.e. a constant affect across all visits) or equal to the number of visits (as determined by the length of sd). I.e. if you wanted a treatment slope of 5 and a visit slope of 1 you could specify:

-

-
-
mu = list(..., "trt" = c(0,5,10), "visit" = c(0,1,2))
-

-
+

mu = list(..., "trt" = c(0,5,10), "visit" = c(0,1,2))

The correlation matrix is constructed from cor as follows. Let cor = c(a, b, c, d, e, f) then the correlation matrix would be:

-

-
-
1  a  b  d
+

1  a  b  d
 a  1  c  e
 b  c  1  f
-d  e  f  1
-

-
+d e f 1

- + - + + + - - diff --git a/main/reference/sort_by.html b/main/reference/sort_by.html index 78c3dd27..73113f9f 100644 --- a/main/reference/sort_by.html +++ b/main/reference/sort_by.html @@ -1,20 +1,5 @@ - - - - - - -Sort data.frame — sort_by • rbmi - - - - - - - - - - + +Sort data.frame — sort_by • rbmi Skip to contents @@ -30,46 +15,28 @@ +
@@ -79,47 +46,38 @@
-

Usage -

+

Usage

sort_by(df, vars = NULL, decreasing = FALSE)
-

Arguments -

+

Arguments

-
-
df -
+
df

data.frame

-
vars -
+
vars

character vector of variables

-
decreasing -
+
decreasing

logical whether sort order should be in descending or ascending (default) order. Can be either a single logical value (in which case it is applied to all variables) or a vector which is the same length as vars

-
-
+
-

Examples -

+

Examples

if (FALSE) { # \dontrun{
 sort_by(iris, c("Sepal.Length", "Sepal.Width"), decreasing = c(TRUE, FALSE))
 } # }
 
- +
-
- + + + - - diff --git a/main/reference/split_dim.html b/main/reference/split_dim.html index d50c0bd2..8c5fbef7 100644 --- a/main/reference/split_dim.html +++ b/main/reference/split_dim.html @@ -1,22 +1,7 @@ - - - - - - -Transform array into list of arrays — split_dim • rbmi - - - - - - - - - - + +Transform array into list of arrays — split_dim • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,37 +49,29 @@
-

Usage -

+

Usage

split_dim(a, n)
-

Arguments -

+

Arguments

-
-
a -
+
a

Array with number of dimensions at least 2.

-
n -
+
n

Positive integer. Dimension of a to be listed.

-
-
+
-

Value -

+

Value

A list of length n of arrays with number of dimensions equal to the number of dimensions of a minus 1.

-

Details -

+

Details

For example, if a is a 3 dimensional array and n = 1, split_dim(a,n) returns a list of 2 dimensional arrays (i.e. a list of matrices) where each element of the list is a[i, , ], where @@ -120,33 +80,24 @@

Detailsinputs: a <- array( c(1,2,3,4,5,6,7,8,9,10,11,12), dim = c(3,2,2)), which means that:

-

-
-
a[1,,]     a[2,,]     a[3,,]
+

a[1,,]     a[2,,]     a[3,,]
 
 [,1] [,2]  [,1] [,2]  [,1] [,2]
 ---------  ---------  ---------
  1    7     2    8     3    9
- 4    10    5    11    6    12
-

-
+ 4 10 5 11 6 12

n <- 1

output of res <- split_dim(a,n) is a list of 3 elements:

-

-
-
res[[1]]   res[[2]]   res[[3]]
+

res[[1]]   res[[2]]   res[[3]]
 
 [,1] [,2]  [,1] [,2]  [,1] [,2]
 ---------  ---------  ---------
  1    7     2    8     3    9
- 4    10    5    11    6    12
-

-
+ 4 10 5 11 6 12

- + - + + + - - diff --git a/main/reference/split_imputations.html b/main/reference/split_imputations.html index 37596aae..2c456733 100644 --- a/main/reference/split_imputations.html +++ b/main/reference/split_imputations.html @@ -1,20 +1,5 @@ - - - - - - -Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations • rbmi - - - - - - - - - - + +Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,39 +46,30 @@
-

Usage -

+

Usage

split_imputations(list_of_singles, split_ids)
-

Arguments -

+

Arguments

-
-
list_of_singles -
+
list_of_singles

A list of imputation_single()'s

-
split_ids -
+
split_ids

A list with 1 element per required split. Each element must contain a vector of "ID"'s which correspond to the imputation_single() ID's that are required within that sample. The total number of ID's must by equal to the length of list_of_singles

-
-
+
-

Details -

+

Details

This function converts a list of imputations from being structured per patient to being structured per sample i.e. it converts

-

-
-
obj <- list(
+

obj <- list(
     imputation_single("Ben", numeric(0)),
     imputation_single("Ben", numeric(0)),
     imputation_single("Ben", numeric(0)),
@@ -123,13 +82,9 @@ 

Detailsindex <- list( c("Ben", "Harry", "Phil", "Tom"), c("Ben", "Ben", "Phil") -)

-

-
+)

Into:

-

-
-
output <- list(
+

output <- list(
     imputation_df(
         imputation_single(id = "Ben", values = numeric(0)),
         imputation_single(id = "Harry", values = c(1, 2)),
@@ -141,14 +96,11 @@ 

Details imputation_single(id = "Ben", values = numeric(0)), imputation_single(id = "Phil", values = c(5, 6)) ) -)

-

-
+)

- + - + + + - - diff --git a/main/reference/str_contains.html b/main/reference/str_contains.html index bec533dd..42bfe6d3 100644 --- a/main/reference/str_contains.html +++ b/main/reference/str_contains.html @@ -1,30 +1,15 @@ - - - - - - -Does a string contain a substring — str_contains • rbmi - - - - - - - - - - +'> Skip to contents @@ -40,38 +25,21 @@ + @@ -87,41 +55,30 @@

Returns a vector of TRUE/FALSE for each element of x if it contains any element in subs

i.e.

-

-
-
str_contains( c("ben", "tom", "harry"), c("e", "y"))
-[1] TRUE FALSE TRUE
-

-
+

str_contains( c("ben", "tom", "harry"), c("e", "y"))
+[1] TRUE FALSE TRUE

-

Usage -

+

Usage

str_contains(x, subs)
-

Arguments -

+

Arguments

-
-
x -
+
x

character vector

-
subs -
+
subs

a character vector of substrings to look for

-
-
+ - +
-
- + + + - - diff --git a/main/reference/strategies.html b/main/reference/strategies.html index 078f92dc..1690bd3b 100644 --- a/main/reference/strategies.html +++ b/main/reference/strategies.html @@ -1,26 +1,11 @@ - - - - - - -Strategies — strategies • rbmi - - - - - -Strategies — strategies • rbmi - - - - +the Missing-at-Random (MAR) assumption."> Skip to contents @@ -36,38 +21,21 @@ + @@ -87,8 +55,7 @@
-

Usage -

+

Usage

strategy_MAR(pars_group, pars_ref, index_mar)
 
 strategy_JR(pars_group, pars_ref, index_mar)
@@ -101,58 +68,43 @@ 

Usage

-

Arguments -

+

Arguments

-
-
pars_group -
+
pars_group

A list of parameters for the subject's group. See details.

-
pars_ref -
+
pars_ref

A list of parameters for the subject's reference group. See details.

-
index_mar -
+
index_mar

A logical vector indicating which visits meet the MAR assumption for the subject. I.e. this identifies the observations after a non-MAR intercurrent event (ICE).

-
-
+
-

Details -

+

Details

pars_group and pars_ref both must be a list containing elements mu and sigma. mu must be a numeric vector and sigma must be a square matrix symmetric covariance matrix with dimensions equal to the length of mu and index_mar. e.g.

-

-
-
list(
+

list(
     mu = c(1,2,3),
     sigma = matrix(c(4,3,2,3,5,4,2,4,6), nrow = 3, ncol = 3)
-)
-

-
+)

Users can define their own strategy functions and include them via the strategies argument to impute() using getStrategies(). That being said the following -strategies are available "out the box":

-
    -
  • Missing at Random (MAR)

  • +strategies are available "out the box":

    • Missing at Random (MAR)

    • Jump to Reference (JR)

    • Copy Reference (CR)

    • Copy Increments in Reference (CIR)

    • Last Mean Carried Forward (LMCF)

    • -
    -
+ - + - + + + - - diff --git a/main/reference/string_pad.html b/main/reference/string_pad.html index c0780c41..a3cfe2a5 100644 --- a/main/reference/string_pad.html +++ b/main/reference/string_pad.html @@ -1,22 +1,7 @@ - - - - - - -string_pad — string_pad • rbmi - - - - - - - - - - + +string_pad — string_pad • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,32 +49,25 @@
-

Usage -

+

Usage

string_pad(x, width)
-

Arguments -

+

Arguments

-
-
x -
+
x

string

-
width -
+
width

desired length

-
-
+ - +
-
- + + + - - diff --git a/main/reference/transpose_imputations.html b/main/reference/transpose_imputations.html index 6cfd6baf..8e5d7040 100644 --- a/main/reference/transpose_imputations.html +++ b/main/reference/transpose_imputations.html @@ -1,32 +1,17 @@ - - - - - - -Transpose imputations — transpose_imputations • rbmi - - - - - - - - - - +'> Skip to contents @@ -42,38 +27,21 @@ + @@ -87,52 +55,37 @@

Takes an imputation_df object and transposes it e.g.

-

-
-
list(
+

list(
     list(id = "a", values = c(1,2,3)),
     list(id = "b", values = c(4,5,6)
     )
-)
-

-
+)

-

Usage -

+

Usage

transpose_imputations(imputations)
-

Arguments -

+

Arguments

-
-
imputations -
+
imputations

An imputation_df object created by imputation_df()

-
-
+
-

Details -

+

Details

becomes

-

-
-
list(
+

list(
     ids = c("a", "b"),
     values = c(1,2,3,4,5,6)
-)
-

-
+)

- + - + + + - - diff --git a/main/reference/transpose_results.html b/main/reference/transpose_results.html index 92b7ee0e..8ce682e0 100644 --- a/main/reference/transpose_results.html +++ b/main/reference/transpose_results.html @@ -1,22 +1,7 @@ - - - - - - -Transpose results object — transpose_results • rbmi - - - - - - - - - - + +Transpose results object — transpose_results • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,36 +49,27 @@
-

Usage -

+

Usage

transpose_results(results, components)
-

Arguments -

+

Arguments

-
-
results -
+
results

A list of results.

-
components -
+
components

a character vector of components to extract (i.e. "est", "se").

-
-
+
-

Details -

+

Details

Essentially this function takes an object of the format:

-

-
-
x <- list(
+

x <- list(
     list(
         "trt1" = list(
             est = 1,
@@ -131,13 +90,9 @@ 

Details se = 8 ) ) -)

-

-
+)

and produces:

-

-
-
list(
+

list(
     trt1 = list(
         est = c(1,5),
         se = c(2,6)
@@ -146,14 +101,11 @@ 

Details est = c(3,7), se = c(4,8) ) -)

-

-
+)

- + - + + + - - diff --git a/main/reference/transpose_samples.html b/main/reference/transpose_samples.html index ca0144bd..489912bf 100644 --- a/main/reference/transpose_samples.html +++ b/main/reference/transpose_samples.html @@ -1,22 +1,7 @@ - - - - - - -Transpose samples — transpose_samples • rbmi - - - - - - - - - - + +Transpose samples — transpose_samples • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,27 +49,21 @@
-

Usage -

+

Usage

transpose_samples(samples)
-

Arguments -

+

Arguments

-
-
samples -
+
samples

A list of samples generated by draws().

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.analysis.html b/main/reference/validate.analysis.html index 2ac355ad..22e6f75d 100644 --- a/main/reference/validate.analysis.html +++ b/main/reference/validate.analysis.html @@ -1,20 +1,5 @@ - - - - - - -Validate analysis objects — validate.analysis • rbmi - - - - - - - - - - + +Validate analysis objects — validate.analysis • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'analysis'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

An analysis results object (of class "jackknife", "bootstrap", "rubin").

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.draws.html b/main/reference/validate.draws.html index 797989c2..2186ca1c 100644 --- a/main/reference/validate.draws.html +++ b/main/reference/validate.draws.html @@ -1,20 +1,5 @@ - - - - - - -Validate draws object — validate.draws • rbmi - - - - - - - - - - + +Validate draws object — validate.draws • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'draws'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

A draws object generated by as_draws().

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.html b/main/reference/validate.html index 45161bdd..022adb64 100644 --- a/main/reference/validate.html +++ b/main/reference/validate.html @@ -1,24 +1,9 @@ - - - - - - -Generic validation method — validate • rbmi - - - - - -Generic validation method — validate • rbmi - - - - +have been violated. Will throw an error if checks do not pass."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,32 +52,25 @@
-

Usage -

+

Usage

validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

object to be validated.

-
... -
+
...

additional arguments to pass to the specific validation method.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.is_mar.html b/main/reference/validate.is_mar.html index 18681d4a..23872505 100644 --- a/main/reference/validate.is_mar.html +++ b/main/reference/validate.is_mar.html @@ -1,24 +1,9 @@ - - - - - - -Validate is_mar for a given subject — validate.is_mar • rbmi - - - - - -Validate is_mar for a given subject — validate.is_mar • rbmi - - - - +observation is not allowed."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,38 +52,30 @@
-

Usage -

+

Usage

# S3 method for class 'is_mar'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

Object of class is_mar. Logical vector indicating whether observations are MAR.

-
... -
+
...

Not used.

-
-
+
-

Value -

+

Value

Will error if there is an issue otherwise will return TRUE.

- +
-
- + + + - - diff --git a/main/reference/validate.ivars.html b/main/reference/validate.ivars.html index 7bb808e2..7a32bf79 100644 --- a/main/reference/validate.ivars.html +++ b/main/reference/validate.ivars.html @@ -1,22 +1,7 @@ - - - - - - -Validate inputs for vars — validate.ivars • rbmi - - - - - - - - - - + +Validate inputs for vars — validate.ivars • rbmi Skip to contents @@ -32,46 +17,28 @@ +
@@ -82,33 +49,26 @@
-

Usage -

+

Usage

# S3 method for class 'ivars'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

named list indicating the names of key variables in the source dataset

-
... -
+
...

not used

-
-
+
- +
-
- + + + - - diff --git a/main/reference/validate.references.html b/main/reference/validate.references.html index 0292bf0a..b2562dc5 100644 --- a/main/reference/validate.references.html +++ b/main/reference/validate.references.html @@ -1,22 +1,7 @@ - - - - - - -Validate user supplied references — validate.references • rbmi - - - - - - - - - - + +Validate user supplied references — validate.references • rbmi Skip to contents @@ -32,38 +17,21 @@ + @@ -81,43 +49,34 @@
-

Usage -

+

Usage

# S3 method for class 'references'
 validate(x, control, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

named character vector.

-
control -
+
control

factor variable (should be the group variable from the source dataset).

-
... -
+
...

Not used.

-
-
+
-

Value -

+

Value

Will error if there is an issue otherwise will return TRUE.

- +
-
- + + + - - diff --git a/main/reference/validate.sample_list.html b/main/reference/validate.sample_list.html index e0647776..fc6cbc39 100644 --- a/main/reference/validate.sample_list.html +++ b/main/reference/validate.sample_list.html @@ -1,20 +1,5 @@ - - - - - - -Validate sample_list object — validate.sample_list • rbmi - - - - - - - - - - + +Validate sample_list object — validate.sample_list • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'sample_list'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

A sample_list object generated by sample_list().

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.sample_single.html b/main/reference/validate.sample_single.html index 244bd26c..14be0f20 100644 --- a/main/reference/validate.sample_single.html +++ b/main/reference/validate.sample_single.html @@ -1,20 +1,5 @@ - - - - - - -Validate sample_single object — validate.sample_single • rbmi - - - - - - - - - - + +Validate sample_single object — validate.sample_single • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'sample_single'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

A sample_single object generated by sample_single().

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.simul_pars.html b/main/reference/validate.simul_pars.html index e2906666..cb398975 100644 --- a/main/reference/validate.simul_pars.html +++ b/main/reference/validate.simul_pars.html @@ -1,20 +1,5 @@ - - - - - - -Validate a simul_pars object — validate.simul_pars • rbmi - - - - - - - - - - + +Validate a simul_pars object — validate.simul_pars • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'simul_pars'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

An simul_pars object as generated by set_simul_pars().

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate.stan_data.html b/main/reference/validate.stan_data.html index 522a1432..6daf9ca9 100644 --- a/main/reference/validate.stan_data.html +++ b/main/reference/validate.stan_data.html @@ -1,20 +1,5 @@ - - - - - - -Validate a stan_data object — validate.stan_data • rbmi - - - - - - - - - - + +Validate a stan_data object — validate.stan_data • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,33 +46,26 @@
-

Usage -

+

Usage

# S3 method for class 'stan_data'
 validate(x, ...)
-

Arguments -

+

Arguments

-
-
x -
+
x

A stan_data object.

-
... -
+
...

Not used.

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate_analyse_pars.html b/main/reference/validate_analyse_pars.html index b0cc9c63..67e8ff66 100644 --- a/main/reference/validate_analyse_pars.html +++ b/main/reference/validate_analyse_pars.html @@ -1,20 +1,5 @@ - - - - - - -Validate analysis results — validate_analyse_pars • rbmi - - - - - - - - - - + +Validate analysis results — validate_analyse_pars • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,34 +46,27 @@
-

Usage -

+

Usage

validate_analyse_pars(results, pars)
-

Arguments -

+

Arguments

-
-
results -
+
results

A list of results generated by the analysis fun used in analyse().

-
pars -
+
pars

A list of expected parameters in each of the analysis. lists i.e. c("est", "se", "df").

-
-
+ - +
-
- + + + - - diff --git a/main/reference/validate_datalong.html b/main/reference/validate_datalong.html index 25648003..5dbe9a0a 100644 --- a/main/reference/validate_datalong.html +++ b/main/reference/validate_datalong.html @@ -1,20 +1,5 @@ - - - - - - -Validate a longdata object — validate_datalong • rbmi - - - - - - - - - - + +Validate a longdata object — validate_datalong • rbmi Skip to contents @@ -30,38 +15,21 @@ + @@ -78,8 +46,7 @@
-

Usage -

+

Usage

validate_datalong(data, vars)
 
 validate_datalong_varExists(data, vars)
@@ -96,41 +63,31 @@ 

Usage

-

Arguments -

+

Arguments

-
-
data -
+
data

a data.frame containing the longitudinal outcome data + covariates for multiple subjects

-
vars -
+
vars

a vars object as created by set_vars()

-
data_ice -
+
data_ice

a data.frame containing the subjects ICE data. See draws() for details.

-
update -
+
update

logical, indicates if the ICE data is being set for the first time or if an update is being applied

-
-
+
-

Details -

+

Details

These functions are used to validate various different parts of the longdata object -to be used in draws(), impute(), analyse() and pool(). In particular:

-
+ - + - + + + - - diff --git a/main/reference/validate_strategies.html b/main/reference/validate_strategies.html index 04fcd45b..83200460 100644 --- a/main/reference/validate_strategies.html +++ b/main/reference/validate_strategies.html @@ -1,24 +1,9 @@ - - - - - - -Validate user specified strategies — validate_strategies • rbmi - - - - - -Validate user specified strategies — validate_strategies • rbmi - - - - +of reference have been defined."> Skip to contents @@ -34,38 +19,21 @@ + @@ -84,37 +52,29 @@
-

Usage -

+

Usage

validate_strategies(strategies, reference)
-

Arguments -

+

Arguments

-
-
strategies -
+
strategies

named list of strategies.

-
reference -
+
reference

list or character vector of strategies that need to be defined.

-
-
+
-

Value -

+

Value

Will throw an error if there is an issue otherwise will return TRUE.

- +
-
- + + + - - diff --git a/main/search.json b/main/search.json index e4bbf7de..906b1bbd 100644 --- a/main/search.json +++ b/main/search.json @@ -1 +1 @@ -[{"path":"/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contributing to rbmi","title":"Contributing to rbmi","text":"file outlines propose make changes rbmi well providing details obscure aspects package’s development process.","code":""},{"path":"/CONTRIBUTING.html","id":"setup","dir":"","previous_headings":"","what":"Setup","title":"Contributing to rbmi","text":"order develop contribute rbmi need access C/C++ compiler. Windows install rtools macOS install Xcode. Likewise, also need install package’s development dependencies. can done launching R within project root executing:","code":"devtools::install_dev_deps()"},{"path":"/CONTRIBUTING.html","id":"code-changes","dir":"","previous_headings":"","what":"Code changes","title":"Contributing to rbmi","text":"want make code contribution, ’s good idea first file issue make sure someone team agrees ’s needed. ’ve found bug, please file issue illustrates bug minimal reprex (also help write unit test, needed).","code":""},{"path":"/CONTRIBUTING.html","id":"pull-request-process","dir":"","previous_headings":"Code changes","what":"Pull request process","title":"Contributing to rbmi","text":"project uses simple GitHub flow model development. , code changes done feature branch based main branch merged back main branch complete. Pull Requests accepted unless CI/CD checks passed. (See CI/CD section information). Pull Requests relating package’s core R code must accompanied corresponding unit test. pull requests containing changes core R code contain unit test demonstrate working intended accepted. (See Unit Testing section information). 
Pull Requests add lines changed NEWS.md file.","code":""},{"path":"/CONTRIBUTING.html","id":"coding-considerations","dir":"","previous_headings":"Code changes","what":"Coding Considerations","title":"Contributing to rbmi","text":"use roxygen2, Markdown syntax, documentation. Please ensure code conforms lintr. can check running lintr::lint(\"FILE NAME\") files modified ensuring findings kept possible. hard requirements following lintr’s conventions encourage developers follow guidance closely possible. project uses 4 space indents, contributions following accepted. project makes use S3 R6 OOP. Usage S4 OOP systems avoided unless absolutely necessary ensure consistency. said recommended stick S3 unless modification place R6 specific features required. current desire package keep dependency tree small possible. end discouraged adding additional packages “Depends” / “Imports” section unless absolutely essential. importing package just use single function consider just copying source code function instead, though please check licence include proper attribution/notices. expectations “Suggests” free use package vignettes / unit tests, though please mindful unnecessarily excessive .","code":""},{"path":"/CONTRIBUTING.html","id":"unit-testing--cicd","dir":"","previous_headings":"","what":"Unit Testing & CI/CD","title":"Contributing to rbmi","text":"project uses testthat perform unit testing combination GitHub Actions CI/CD.","code":""},{"path":"/CONTRIBUTING.html","id":"scheduled-testing","dir":"","previous_headings":"Unit Testing & CI/CD","what":"Scheduled Testing","title":"Contributing to rbmi","text":"Due stochastic nature package unit tests take considerable amount time execute. avoid issues usability, unit tests take couple seconds run deferred scheduled testing. tests run occasionally periodic basis (currently twice month) every pull request / push event. defer test scheduled build simply include skip_if_not(is_full_test()) top test_that() block .e. 
scheduled tests can also manually activated going “https://github.com/insightsengineering/rbmi” -> “Actions” -> “Bi-Weekly” -> “Run Workflow”. advisable releasing CRAN.","code":"test_that(\"some unit test\", { skip_if_not(is_full_test()) expect_equal(1,1) })"},{"path":"/CONTRIBUTING.html","id":"docker-images","dir":"","previous_headings":"Unit Testing & CI/CD","what":"Docker Images","title":"Contributing to rbmi","text":"support CI/CD, terms reducing setup time, Docker images created contains packages system dependencies required project. image can found : ghcr.io/insightsengineering/rbmi:latest image automatically re-built month contain latest version R packages. code create images can found misc/docker. build image locally run following project root directory:","code":"docker build -f misc/docker/Dockerfile -t rbmi:latest ."},{"path":"/CONTRIBUTING.html","id":"reproducibility-print-tests--snaps","dir":"","previous_headings":"Unit Testing & CI/CD","what":"Reproducibility, Print Tests & Snaps","title":"Contributing to rbmi","text":"particular issue testing package reproducibility. part handled well via set.seed() however stan/rstan guarantee reproducibility even seed run different hardware. issue surfaces testing print messages pool object displays treatment estimates thus identical run different machines. address issue pre-made pool objects generated stored R/sysdata.rda (generated data-raw/create_print_test_data.R). generated print messages compared expected values stored tests/testthat/_snaps/ (automatically created testthat::expect_snapshot())","code":""},{"path":"/CONTRIBUTING.html","id":"fitting-mmrms","dir":"","previous_headings":"","what":"Fitting MMRM’s","title":"Contributing to rbmi","text":"package currently uses mmrm package fit MMRM models. package still fairly new far proven stable, fast reliable. 
spot issues MMRM package please raise corresponding GitHub Repository - link mmrm package uses TMB uncommon see warnings either inconsistent versions TMB Matrix package compiled . order resolve may wish re-compile packages source using: Note need rtools installed Windows machine Xcode running macOS (somehow else access C/C++ compiler).","code":"install.packages(c(\"TMB\", \"mmrm\"), type = \"source\")"},{"path":"/CONTRIBUTING.html","id":"rstan","dir":"","previous_headings":"","what":"rstan","title":"Contributing to rbmi","text":"Bayesian models fitted package implemented via stan/rstan. code can found inst/stan/MMRM.stan. Note package automatically take care compiling code install run devtools::load_all(). Please note package won’t recompile code unless changed source code delete src directory.","code":""},{"path":"/CONTRIBUTING.html","id":"vignettes","dir":"","previous_headings":"","what":"Vignettes","title":"Contributing to rbmi","text":"CRAN imposes 10-minute run limit building, compiling testing package. keep limit vignettes pre-built; say simply changing source code automatically update vignettes, need manually re-build . need run: re-built need commit updated *.html files git repository. reference static vignette process works using “asis” vignette engine provided R.rsp. works getting R recognise vignettes files ending *.html.asis; builds simply copying corresponding files ending *.html relevent docs/ folder built package.","code":"Rscript vignettes/build.R"},{"path":"/CONTRIBUTING.html","id":"misc--local-folders","dir":"","previous_headings":"","what":"Misc & Local Folders","title":"Contributing to rbmi","text":"misc/ folder project used hold useful scripts, analyses, simulations & infrastructure code wish keep isn’t essential build deployment package. Feel free store additional stuff feel worth keeping. Likewise, local/ added .gitignore file meaning anything stored folder won’t committed repository. 
For example, you may find this useful for storing personal scripts that you use for testing or for generally exploring the package during development.","code":""},{"path":"/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"Apache License","title":"Apache License","text":"Version 2.0, January 2004 ","code":""},{"path":[]},{"path":"/LICENSE.html","id":"id_1-definitions","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"1. Definitions","title":"Apache License","text":"“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. “Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. “Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. “You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License. “Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. “Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. “Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). “Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.” “Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.","code":""},{"path":"/LICENSE.html","id":"id_2-grant-of-copyright-license","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"2. Grant of Copyright License","title":"Apache License","text":"Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.","code":""},{"path":"/LICENSE.html","id":"id_3-grant-of-patent-license","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"3. Grant of Patent License","title":"Apache License","text":"Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted.
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.","code":""},{"path":"/LICENSE.html","id":"id_4-redistribution","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"4. Redistribution","title":"Apache License","text":"You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.","code":""},{"path":"/LICENSE.html","id":"id_5-submission-of-contributions","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"5. Submission of Contributions","title":"Apache License","text":"Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.","code":""},{"path":"/LICENSE.html","id":"id_6-trademarks","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"6. Trademarks","title":"Apache License","text":"This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.","code":""},{"path":"/LICENSE.html","id":"id_7-disclaimer-of-warranty","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"7. Disclaimer of Warranty","title":"Apache License","text":"Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.","code":""},{"path":"/LICENSE.html","id":"id_8-limitation-of-liability","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"8. Limitation of Liability","title":"Apache License","text":"In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.","code":""},{"path":"/LICENSE.html","id":"id_9-accepting-warranty-or-additional-liability","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"9. Accepting Warranty or Additional Liability","title":"Apache License","text":"While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS","code":""},{"path":"/LICENSE.html","id":"appendix-how-to-apply-the-apache-license-to-your-work","dir":"","previous_headings":"","what":"APPENDIX: How to apply the Apache License to your work","title":"Apache License","text":"To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets [] replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format.
We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.","code":"Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."},{"path":"/articles/CondMean_Inference.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Inference with Conditional Mean Imputation","text":"As described in section 3.10.2 of the statistical specifications of the package (vignette(topic = \"stat_specs\", package = \"rbmi\")), two different types of variance estimators have been proposed for reference-based imputation methods in the statistical literature (Bartlett (2023)). The first is the frequentist variance, which describes the actual repeated sampling variability of the estimator and results in inference that is correct in the frequentist sense, i.e. hypothesis tests have accurate type I error control and confidence intervals have correct coverage probabilities under repeated sampling if the reference-based assumption is correctly specified (Bartlett (2023), Wolbers et al. (2022)). Reference-based missing data assumptions are strong and borrow information from the control arm for imputation in the active arm. As a consequence, the size of frequentist standard errors for treatment effects may decrease with increasing amounts of missing data. The second is the so-called “information-anchored” variance, which was originally proposed in the context of sensitivity analyses (Cro, Carpenter, and Kenward (2019)). This variance estimator is based on disentangling point estimation and variance estimation altogether.
The resulting information-anchored variance is typically similar to the variance under missing-at-random (MAR) imputation and increases with increasing amounts of missing data at approximately the same rate as for MAR imputation. However, the information-anchored variance does not reflect the actual variability of the reference-based estimator, and the resulting frequentist inference is highly conservative, resulting in a substantial power loss. Reference-based conditional mean imputation combined with a resampling method such as the jackknife or the bootstrap was first introduced in Wolbers et al. (2022). This approach naturally targets the frequentist variance. The information-anchored variance is typically estimated using Rubin’s rules for Bayesian multiple imputation, which are not applicable within the conditional mean imputation framework. However, an alternative information-anchored variance proposed by Lu (2021) can easily be obtained, as we show here. The basic idea of Lu (2021) is to obtain the information-anchored variance via MAR imputation combined with a delta-adjustment, where delta is selected in a data-driven way to match the reference-based estimator. For conditional mean imputation, the proposal by Lu (2021) can be implemented by choosing the delta-adjustment as the difference between the conditional mean imputation under the chosen reference-based assumption and under MAR on the original dataset. The variance can then be obtained via the jackknife or the bootstrap while keeping the delta-adjustment fixed. The resulting variance estimate is similar to Rubin’s variance. Moreover, as shown in Cro, Carpenter, and Kenward (2019), the variance of MAR-imputation combined with a delta-adjustment achieves even better information-anchoring properties than Rubin’s variance for reference-based imputation. Reference-based missing data assumptions are strong and borrow information from the control arm for imputation in the active arm.
This vignette demonstrates first how to obtain frequentist inference using reference-based conditional mean imputation with rbmi, and then shows that information-anchored inference can also easily be implemented using the package.","code":""},{"path":"/articles/CondMean_Inference.html","id":"data-and-model-specification","dir":"Articles","previous_headings":"","what":"Data and model specification","title":"rbmi: Inference with Conditional Mean Imputation","text":"We use a publicly available example dataset from an antidepressant clinical trial of an active drug versus placebo. The relevant endpoint is the Hamilton 17-item depression rating scale (HAMD17), which was assessed at baseline and at weeks 1, 2, 4, and 6. Study drug discontinuation occurred in 24% of subjects from the active drug and 26% of subjects from placebo. All data after study drug discontinuation are missing and there is a single additional intermittent missing observation. We consider an imputation model with the mean change from baseline in the HAMD17 score as the outcome (variable CHANGE in the dataset). The following covariates are included in the imputation model: the treatment group (THERAPY), the (categorical) visit (VISIT), treatment-by-visit interactions, the baseline HAMD17 score (BASVAL), and baseline HAMD17 score-by-visit interactions. A common unstructured covariance matrix structure is assumed for both groups. The analysis model is an ANCOVA model with the treatment group as the primary factor and adjustment for the baseline HAMD17 score. For this example, we assume that the imputation strategy after the ICE “study-drug discontinuation” is Jump To Reference (JR) for all subjects and that the imputation is based on conditional mean imputation combined with jackknife resampling (the bootstrap could also have been selected).","code":""},{"path":"/articles/CondMean_Inference.html","id":"reference-based-conditional-mean-imputation---frequentist-inference","dir":"Articles","previous_headings":"","what":"Reference-based conditional mean imputation - frequentist inference","title":"rbmi: Inference with Conditional Mean Imputation","text":"Conditional mean imputation combined with a resampling method such as the jackknife or the bootstrap naturally targets a frequentist estimation of the standard error of the treatment effect, thus providing valid frequentist inference.
Below, we provide the code to obtain frequentist inference for reference-based conditional mean imputation using rbmi. The code used in this section is almost identical to the code in the quickstart vignette (vignette(topic = \"quickstart\", package = \"rbmi\")) except that we use conditional mean imputation combined with the jackknife (method_condmean(type = \"jackknife\")) rather than Bayesian multiple imputation (method_bayes()). We therefore refer to that vignette and to the help files of the individual functions for explanations and details.","code":""},{"path":"/articles/CondMean_Inference.html","id":"draws","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Draws","title":"rbmi: Inference with Conditional Mean Imputation","text":"We first make use of rbmi::expand_locf() to expand the dataset in order to have one row per subject per visit, with missing outcomes denoted by NA. We then construct the data_ice, vars and method input arguments for the first core rbmi function, draws(). Finally, we call the function draws() to derive the parameter estimates of the base imputation model for the full dataset and all leave-one-subject-out samples.","code":"library(rbmi) library(dplyr) #> #> Attaching package: 'dplyr' #> The following objects are masked from 'package:stats': #> #> filter, lag #> The following objects are masked from 'package:base': #> #> intersect, setdiff, setequal, union dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to # the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") ) # create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing value which # does not 
correspond to a study drug discontinuation. We therefore remove # this subject from `dat_ice`. (In the later imputation step, it will # automatically be imputed under the default MAR assumption.) dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] # Define the names of key variables in our dataset and # the covariates included in the imputation model using `set_vars()` vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: conditional mean imputation # with jackknife as resampling) method <- method_condmean(type = \"jackknife\") # Create samples for the imputation parameters by running the draws() function drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) drawObj #> #> Draws Object #> ------------ #> Number of Samples: 1 + 172 #> Number of Failed Samples: 0 #> Model Formula: CHANGE ~ 1 + THERAPY + VISIT + BASVAL * VISIT + THERAPY * VISIT #> Imputation Type: condmean #> Method: #> name: Conditional Mean #> covariance: us #> threshold: 0.01 #> same_cov: TRUE #> REML: TRUE #> type: jackknife"},{"path":"/articles/CondMean_Inference.html","id":"impute","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Impute","title":"rbmi: Inference with Conditional Mean Imputation","text":"We can now use the function impute() to perform the imputation of the original dataset and of each leave-one-out sample using the results obtained in the previous step.","code":"references <- c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") imputeObj <- impute(drawObj, references) imputeObj #> #> Imputation Object #> ----------------- #> Number of Imputed Datasets: 1 + 172 #> Fraction of Missing Data (Original Dataset): #> 4: 0% #> 5: 8% #> 6: 13% #> 7: 25% #> References: #> DRUG -> PLACEBO #> PLACEBO -> 
PLACEBO"},{"path":"/articles/CondMean_Inference.html","id":"analyse","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Analyse","title":"rbmi: Inference with Conditional Mean Imputation","text":"datasets imputed, can call analyse() function apply complete-data analysis model (ANCOVA) imputed dataset.","code":"# Set analysis variables using `rbmi` function \"set_vars\" vars_an <- set_vars( group = vars$group, visit = vars$visit, outcome = vars$outcome, covariates = \"BASVAL\" ) # Analyse MAR imputation with derived delta adjustment anaObj <- analyse( imputeObj, rbmi::ancova, vars = vars_an ) anaObj #> #> Analysis Object #> --------------- #> Number of Results: 1 + 172 #> Analysis Function: rbmi::ancova #> Delta Applied: FALSE #> Analysis Estimates: #> trt_4 #> lsm_ref_4 #> lsm_alt_4 #> trt_5 #> lsm_ref_5 #> lsm_alt_5 #> trt_6 #> lsm_ref_6 #> lsm_alt_6 #> trt_7 #> lsm_ref_7 #> lsm_alt_7"},{"path":"/articles/CondMean_Inference.html","id":"pool","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Pool","title":"rbmi: Inference with Conditional Mean Imputation","text":"Finally, can extract treatment effect estimates perform inference using jackknife variance estimator. done calling pool() function. 
This gives an estimated treatment effect of 2.13 (95% CI 0.44 to 3.81) at the last visit with an associated p-value of 0.013.","code":"poolObj <- pool(anaObj) poolObj #> #> Pool Object #> ----------- #> Number of Results Combined: 1 + 172 #> Method: jackknife #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_4 -0.092 0.695 -1.453 1.27 0.895 #> lsm_ref_4 -1.616 0.588 -2.767 -0.464 0.006 #> lsm_alt_4 -1.708 0.396 -2.484 -0.931 <0.001 #> trt_5 1.305 0.878 -0.416 3.027 0.137 #> lsm_ref_5 -4.133 0.688 -5.481 -2.785 <0.001 #> lsm_alt_5 -2.828 0.604 -4.011 -1.645 <0.001 #> trt_6 1.929 0.862 0.239 3.619 0.025 #> lsm_ref_6 -6.088 0.671 -7.402 -4.773 <0.001 #> lsm_alt_6 -4.159 0.686 -5.503 -2.815 <0.001 #> trt_7 2.126 0.858 0.444 3.807 0.013 #> lsm_ref_7 -6.965 0.685 -8.307 -5.622 <0.001 #> lsm_alt_7 -4.839 0.762 -6.333 -3.346 <0.001 #> --------------------------------------------------"},{"path":"/articles/CondMean_Inference.html","id":"reference-based-conditional-mean-imputation---information-anchored-inference","dir":"Articles","previous_headings":"","what":"Reference-based conditional mean imputation - information-anchored inference","title":"rbmi: Inference with Conditional Mean Imputation","text":"In this section, we present how the estimation process based on conditional mean imputation combined with the jackknife can be adapted to obtain an information-anchored variance following the proposal by Lu (2021).","code":""},{"path":"/articles/CondMean_Inference.html","id":"draws-1","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Draws","title":"rbmi: Inference with Conditional Mean Imputation","text":"The code for the pre-processing of the dataset and for the “draws” step is equivalent to the code provided for the frequentist inference. 
Please refer to that section for details about this step.","code":"library(rbmi) library(dplyr) dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to # the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") ) # create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing value which # does not correspond to a study drug discontinuation. We therefore remove # this subject from `dat_ice`. (In the later imputation step, it will # automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] # Define the names of key variables in our dataset and # the covariates included in the imputation model using `set_vars()` vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: conditional mean imputation # with jackknife as resampling) method <- method_condmean(type = \"jackknife\") # Create samples for the imputation parameters by running the draws() function drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) drawObj"},{"path":"/articles/CondMean_Inference.html","id":"imputation-step-including-calculation-of-delta-adjustment","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Imputation step including calculation of delta-adjustment","title":"rbmi: Inference with Conditional Mean Imputation","text":"The proposal by Lu (2021) is to replace the reference-based imputation by a MAR imputation combined with a delta-adjustment, where delta is selected in a data-driven way to match the reference-based estimator. In rbmi, this is implemented by first performing the imputation under both the defined reference-based imputation strategy (here JR) as well as under MAR separately. Second, the delta-adjustment is defined as the difference between the conditional mean imputation under the reference-based and the MAR imputation, respectively, on the original dataset. To simplify the implementation, we have written a function get_delta_match_refBased that performs this step. The function takes as input arguments the draws object, data_ice (i.e. the data.frame containing the information about the intercurrent events and the imputation strategies), and references, a named vector that identifies the references to be used for the reference-based imputation methods. 
The function returns a list containing the imputation objects under both reference-based and MAR imputation, plus a data.frame which contains the delta-adjustment.","code":"#' Get delta adjustment that matches reference-based imputation #' #' @param draws: A `draws` object created by `draws()`. #' @param data_ice: `data.frame` containing the information about the intercurrent #' events and the imputation strategies. Must represent the desired imputation #' strategy and not the MAR-variant. #' @param references: A named vector. Identifies the references to be used #' for reference-based imputation methods. #' #' @return #' The function returns a list containing the imputation objects under both #' reference-based and MAR imputation, plus a `data.frame` which contains the #' delta-adjustment. #' #' @seealso `draws()`, `impute()`. get_delta_match_refBased <- function(draws, data_ice, references) { # Impute according to `data_ice` imputeObj <- impute( draws = drawObj, update_strategy = data_ice, references = references ) vars <- imputeObj$data$vars # Access imputed dataset (index=1 for method_condmean(type = \"jackknife\")) cmi <- extract_imputed_dfs(imputeObj, index = 1, idmap = TRUE)[[1]] idmap <- attributes(cmi)$idmap cmi <- cmi[, c(vars$subjid, vars$visit, vars$outcome)] colnames(cmi)[colnames(cmi) == vars$outcome] <- \"y_imp\" # Map back original patients id since `rbmi` re-code ids to ensure id uniqueness cmi[[vars$subjid]] <- idmap[match(cmi[[vars$subjid]], names(idmap))] # Derive conditional mean imputations under MAR dat_ice_MAR <- data_ice dat_ice_MAR[[vars$strategy]] <- \"MAR\" # Impute under MAR # Note that in this specific context, it is desirable that an update # from a reference-based strategy to MAR uses the exact same data for # fitting the imputation models, i.e. that available post-ICE data are # omitted from the imputation model for both. This is the case when # using argument update_strategy in function impute(). # However, for other settings (i.e. 
if one is interested in switching to # a standard MAR imputation strategy altogether), this behavior is # undesirable and, consequently, the function throws a warning which # we suppress here. suppressWarnings( imputeObj_MAR <- impute( draws, update_strategy = dat_ice_MAR ) ) # Access imputed dataset (index=1 for method_condmean(type = \"jackknife\")) cmi_MAR <- extract_imputed_dfs(imputeObj_MAR, index = 1, idmap = TRUE)[[1]] idmap <- attributes(cmi_MAR)$idmap cmi_MAR <- cmi_MAR[, c(vars$subjid, vars$visit, vars$outcome)] colnames(cmi_MAR)[colnames(cmi_MAR) == vars$outcome] <- \"y_MAR\" # Map back original patients id since `rbmi` re-code ids to ensure id uniqueness cmi_MAR[[vars$subjid]] <- idmap[match(cmi_MAR[[vars$subjid]], names(idmap))] # Derive delta adjustment \"aligned with ref-based imputation\", # i.e. difference between ref-based imputation and MAR imputation delta_adjust <- merge(cmi, cmi_MAR, by = c(vars$subjid, vars$visit), all = TRUE) delta_adjust$delta <- delta_adjust$y_imp - delta_adjust$y_MAR ret_obj <- list( imputeObj = imputeObj, imputeObj_MAR = imputeObj_MAR, delta_adjust = delta_adjust ) return(ret_obj) } references <- c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") res_delta_adjust <- get_delta_match_refBased(drawObj, dat_ice, references)"},{"path":"/articles/CondMean_Inference.html","id":"analyse-1","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Analyse","title":"rbmi: Inference with Conditional Mean Imputation","text":"We use the function analyse() to add the delta-adjustment and to perform the analysis of the imputed datasets under MAR. analyse() takes as an input argument imputations = res_delta_adjust$imputeObj_MAR, i.e. the imputation object corresponding to the MAR imputation (not the JR imputation). 
The argument delta can be used to add a delta-adjustment prior to the analysis, and we set it to the delta-adjustment obtained in the previous step: delta = res_delta_adjust$delta_adjust.","code":"# Set analysis variables using `rbmi` function \"set_vars\" vars_an <- set_vars( group = vars$group, visit = vars$visit, outcome = vars$outcome, covariates = \"BASVAL\" ) # Analyse MAR imputation with derived delta adjustment anaObj_MAR_delta <- analyse( res_delta_adjust$imputeObj_MAR, rbmi::ancova, delta = res_delta_adjust$delta_adjust, vars = vars_an )"},{"path":"/articles/CondMean_Inference.html","id":"pool-1","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Pool","title":"rbmi: Inference with Conditional Mean Imputation","text":"We can finally use the pool() function to extract the treatment effect estimate (as well as the estimated marginal means) at each visit and apply the jackknife variance estimator to the analysis estimates from the imputed leave-one-out samples. This gives an estimated treatment effect of 2.13 (95% CI -0.08 to 4.33) at the last visit with an associated p-value of 0.058. Per construction of the delta-adjustment, the point estimate is identical to the one from the frequentist analysis. However, the standard error is much larger (1.12 vs. 0.86). Indeed, this information-anchored standard error (and the resulting inference) is very similar to the results for Bayesian multiple imputation using Rubin’s rules, for which a standard error of 1.13 was reported in the quickstart vignette (vignette(topic = \"quickstart\", package = \"rbmi\")). Of note, as shown e.g. in Wolbers et al. (2022), hypothesis testing based on information-anchored inference is conservative, i.e. the actual type I error is much lower than the nominal value. 
Hence, confidence intervals and \(p\)-values based on information-anchored inference should be interpreted with caution.","code":"poolObj_MAR_delta <- pool(anaObj_MAR_delta) poolObj_MAR_delta #> #> Pool Object #> ----------- #> Number of Results Combined: 1 + 172 #> Method: jackknife #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_4 -0.092 0.695 -1.453 1.27 0.895 #> lsm_ref_4 -1.616 0.588 -2.767 -0.464 0.006 #> lsm_alt_4 -1.708 0.396 -2.484 -0.931 <0.001 #> trt_5 1.305 0.944 -0.545 3.156 0.167 #> lsm_ref_5 -4.133 0.738 -5.579 -2.687 <0.001 #> lsm_alt_5 -2.828 0.603 -4.01 -1.646 <0.001 #> trt_6 1.929 0.993 -0.018 3.876 0.052 #> lsm_ref_6 -6.088 0.758 -7.574 -4.602 <0.001 #> lsm_alt_6 -4.159 0.686 -5.504 -2.813 <0.001 #> trt_7 2.126 1.123 -0.076 4.327 0.058 #> lsm_ref_7 -6.965 0.85 -8.63 -5.299 <0.001 #> lsm_alt_7 -4.839 0.763 -6.335 -3.343 <0.001 #> --------------------------------------------------"},{"path":[]},{"path":"/articles/FAQ.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Frequently Asked Questions","text":"This document provides answers to common questions about the rbmi package. It is intended to be read after the rbmi: Quickstart vignette.","code":""},{"path":"/articles/FAQ.html","id":"is-rbmi-validated","dir":"Articles","previous_headings":"1 Introduction","what":"Is rbmi validated?","title":"rbmi: Frequently Asked Questions","text":"As regards software in the pharmaceutical industry, validation is the act of ensuring that the software meets the needs and requirements of its users given the conditions of its actual use. The FDA provides general principles and guidance for validation but leaves it to individual sponsors to define their specific validation processes. Therefore, no individual R package can claim to be ‘validated’ independently, as validation depends on the entire software stack and the specific processes of each company.
That being said, the core components of any validation process are a design specification (what the software is supposed to do) as well as testing / test results that demonstrate that the design specification has been met. For rbmi, the design specification is documented extensively, at the macro level in the vignettes and literature publications, and at the micro level in the detailed function manuals. This is supported by an extensive suite of unit and integration tests, which ensure that the software consistently produces correct output across a wide range of input scenarios. This documentation and test coverage enable rbmi to be easily installed and integrated into any R system, in alignment with that system’s broader validation process.","code":""},{"path":"/articles/FAQ.html","id":"how-do-the-methods-in-rbmi-compare-to-the-mixed-model-for-repeated-measures-mmrm-implemented-in-the-mmrm-package","dir":"Articles","previous_headings":"1 Introduction","what":"How do the methods in rbmi compare to the mixed model for repeated measures (MMRM) implemented in the mmrm package?","title":"rbmi: Frequently Asked Questions","text":"rbmi is designed to complement and, occasionally, replace standard MMRM analyses for clinical trials with longitudinal endpoints. Strengths of rbmi compared to a standard MMRM model: rbmi is designed to allow analyses that are fully aligned with the estimand definition. To facilitate this, it implements methods for a range of different missing data assumptions including standard missing-at-random (MAR), extended MAR (via the inclusion of time-varying covariates), reference-based missingness, and missing-not-at-random (NMAR; via \(\delta\)-adjustments). In contrast, a standard MMRM model is only valid under the standard MAR assumption, which is not always plausible. For example, the standard MAR assumption is rather implausible when implementing a treatment policy strategy for the intercurrent event “treatment discontinuation” if a substantial proportion of subjects are lost to follow-up after discontinuation. The \(\delta\)-adjustment methods implemented in rbmi can be used for sensitivity analyses of a primary MMRM- or rbmi-type analysis. Weaknesses of rbmi compared to a standard MMRM model: MMRM models have been the de-facto standard analysis method for more than a decade, whereas rbmi is currently less established.
rbmi is more computationally intensive and using it requires careful planning.","code":""},{"path":"/articles/FAQ.html","id":"how-does-rbmi-compare-to-general-purpose-software-for-multiple-imputation-mi-such-as-mice","dir":"Articles","previous_headings":"1 Introduction","what":"How does rbmi compare to general-purpose software for multiple imputation (MI) such as mice?","title":"rbmi: Frequently Asked Questions","text":"rbmi covers only “MMRM-type” settings, i.e. settings with a single longitudinal continuous outcome which may be missing at some visits and hence require imputation. In these settings, it has several advantages over general-purpose MI software: rbmi supports imputation under a range of different missing data assumptions whereas general-purpose MI software is mostly focused on MAR-based imputation. In particular, it is unclear how to implement the jump to reference (JR) and copy increments in reference (CIR) methods in such software. The rbmi interface is fully streamlined for this setting, which arguably makes the implementation more straightforward than with general-purpose MI software. The MICE algorithm is stochastic and its inference is always based on Rubin’s rules. In contrast, the method “conditional mean imputation plus jackknifing” (method = method_condmean(type = \"jackknife\")) in rbmi requires no tuning parameters, is fully deterministic, and provides frequentist-consistent inference also for reference-based imputations (for which Rubin’s rule is conservative, leading to actual type I error rates which can be far below their nominal values). However, rbmi has much more limited functionality than general-purpose MI software.","code":""},{"path":"/articles/FAQ.html","id":"how-to-handle-missing-data-in-baseline-covariates-in-rbmi","dir":"Articles","previous_headings":"1 Introduction","what":"How to handle missing data in baseline covariates in rbmi?","title":"rbmi: Frequently Asked Questions","text":"rbmi does not support imputation of missing baseline covariates. Therefore, missing baseline covariates need to be handled outside of rbmi.
The best approach to handling missing baseline covariates needs to be decided on a case-by-case basis; in the context of randomized trials, a relatively simple approach is often sufficient (White and Thompson (2005)).","code":""},{"path":"/articles/FAQ.html","id":"why-does-rbmi-by-default-use-an-ancova-analysis-model-and-not-an-mmrm-analysis-model","dir":"Articles","previous_headings":"1 Introduction","what":"Why does rbmi by default use an ANCOVA analysis model and not an MMRM analysis model?","title":"rbmi: Frequently Asked Questions","text":"The theoretical justification of the conditional mean imputation method requires that the analysis model leads to a point estimator which is a linear function of the outcome vector (Wolbers et al. (2022)). This is the case for ANCOVA but not for general MMRM models. For the other imputation methods, both ANCOVA and MMRM are valid analysis methods. An MMRM analysis model can be implemented by providing a custom analysis function to the analyse() function. For further explanation, we also cite the end of section 2.4 of the conditional mean imputation paper (Wolbers et al. (2022)): “The proof relies on the fact that the ANCOVA estimator is a linear function of the outcome vector. For complete data, the ANCOVA estimator leads to identical parameter estimates as an MMRM model of the longitudinal outcomes with an arbitrary common covariance structure across treatment groups if treatment-by-visit interactions as well as covariate-by-visit-interactions for all covariates are included in the analysis model” (p. 197). Hence, the proof also applies to such MMRM models. We expect that conditional mean imputation is also valid if a more general MMRM model is used for the analysis, but a more involved argument would be required to formally justify this.","code":""},{"path":"/articles/FAQ.html","id":"how-can-i-analyse-the-change-from-baseline-in-the-analysis-model-when-imputation-was-done-on-the-original-outcomes","dir":"Articles","previous_headings":"1 Introduction","what":"How can I analyse the change-from-baseline in the analysis model when imputation was done on the original outcomes?","title":"rbmi: Frequently Asked Questions","text":"This can be achieved by using custom analysis functions as outlined in Section 7 of the Advanced Vignette, e.g.","code":"ancova_modified <- function(data, ...)
{ data2 <- data %>% mutate(ENDPOINT = ENDPOINT - BASELINE) rbmi::ancova(data2, ...) } anaObj <- rbmi::analyse( imputeObj, ancova_modified, vars = vars )"},{"path":"/articles/advanced.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Advanced Functionality","text":"The purpose of this vignette is to provide an overview of the advanced features of the rbmi package. The sections of this vignette are relatively self-contained, i.e. readers should be able to jump directly to the section which covers the functionality that they are interested in.","code":""},{"path":"/articles/advanced.html","id":"sec:dataSimul","dir":"Articles","previous_headings":"","what":"Data simulation using function simulate_data()","title":"rbmi: Advanced Functionality","text":"In order to demonstrate the advanced functions, we first create a simulated dataset with the rbmi function simulate_data(). The simulate_data() function generates data from a randomized clinical trial with longitudinal continuous outcomes and up to two different types of intercurrent events (ICEs). One intercurrent event (ICE1) may be thought of as a discontinuation from study treatment due to study drug or condition related (SDCR) reasons. The other event (ICE2) may be thought of as a discontinuation from study treatment due to not study drug or condition related (NSDCR) reasons. For the purpose of this vignette, we simulate data similarly to the simulation study reported in Wolbers et al. (2022) (though with some changes to the simulation parameters) and include only one ICE type (ICE1). Specifically, we simulate a 1:1 randomized trial of an active drug (intervention) versus placebo (control) with 100 subjects per group and 6 post-baseline assessments (bi-monthly visits until 12 months) under the following assumptions: The mean outcome trajectory in the placebo group increases linearly from 50 at baseline (visit 0) to 60 at visit 6, i.e. the slope is 10 points/year. The mean outcome trajectory in the intervention group is identical to that in the placebo group up to visit 2. From visit 2 onward, the slope decreases by 50% to 5 points/year. The covariance structure of the baseline and follow-up values in both groups is implied by a random intercept and slope model with a standard deviation of 5 for both the intercept and the slope, and a correlation of 0.25.
addition, independent residual error standard deviation 2.5 added assessment. probability study drug discontinuation visit calculated according logistic model depends observed outcome visit. Specifically, visit-wise discontinuation probability 2% 3% control intervention group, respectively, specified case observed outcome equal 50 (mean value baseline). odds discontinuation simulated increase +10% +1 point increase observed outcome. Study drug discontinuation simulated effect mean trajectory placebo group. intervention group, subjects discontinue follow slope mean trajectory placebo group time point onward. compatible copy increments reference (CIR) assumption. Study drop-study drug discontinuation visit occurs probability 50% leading missing outcome data time point onward. function simulate_data() requires 3 arguments (see function documentation help(simulate_data) details): pars_c: simulation parameters control group pars_t: simulation parameters intervention group post_ice1_traj: Specifies observed outcomes ICE1 simulated , report data according specifications can simulated function simulate_data():","code":"library(rbmi) library(dplyr) library(ggplot2) library(purrr) set.seed(122) n <- 100 time <- c(0, 2, 4, 6, 8, 10, 12) # Mean trajectory control muC <- c(50.0, 51.66667, 53.33333, 55.0, 56.66667, 58.33333, 60.0) # Mean trajectory intervention muT <- c(50.0, 51.66667, 53.33333, 54.16667, 55.0, 55.83333, 56.66667) # Create Sigma sd_error <- 2.5 covRE <- rbind( c(25.0, 6.25), c(6.25, 25.0) ) Sigma <- cbind(1, time / 12) %*% covRE %*% rbind(1, time / 12) + diag(sd_error^2, nrow = length(time)) # Set probability of discontinuation probDisc_C <- 0.02 probDisc_T <- 0.03 or_outcome <- 1.10 # +1 point increase => +10% odds of discontinuation # Set drop-out rate following discontinuation prob_dropout <- 0.5 # Set simulation parameters of the control group parsC <- set_simul_pars( mu = muC, sigma = Sigma, n = n, prob_ice1 = probDisc_C, or_outcome_ice1 = or_outcome, 
prob_post_ice1_dropout = prob_dropout ) # Set simulation parameters of the intervention group parsT <- parsC parsT$mu <- muT parsT$prob_ice1 <- probDisc_T # Set assumption about post-ice trajectory post_ice_traj <- \"CIR\" # Simulate data data <- simulate_data( pars_c = parsC, pars_t = parsT, post_ice1_traj = post_ice_traj ) head(data) #> id visit group outcome_bl outcome_noICE ind_ice1 ind_ice2 dropout_ice1 #> 1 id_1 0 Control 57.32704 57.32704 0 0 0 #> 2 id_1 1 Control 57.32704 54.69751 1 0 1 #> 3 id_1 2 Control 57.32704 58.60702 1 0 1 #> 4 id_1 3 Control 57.32704 61.50119 1 0 1 #> 5 id_1 4 Control 57.32704 56.68363 1 0 1 #> 6 id_1 5 Control 57.32704 66.14799 1 0 1 #> outcome #> 1 57.32704 #> 2 NA #> 3 NA #> 4 NA #> 5 NA #> 6 NA # As a simple descriptive of the simulated data, summarize the number of subjects with ICEs and missing data data %>% group_by(id) %>% summarise( group = group[1], any_ICE = (any(ind_ice1 == 1)), any_NA = any(is.na(outcome))) %>% group_by(group) %>% summarise( subjects_with_ICE = sum(any_ICE), subjects_with_missings = sum(any_NA) ) #> # A tibble: 2 × 3 #> group subjects_with_ICE subjects_with_missings #> #> 1 Control 18 8 #> 2 Intervention 25 14"},{"path":"/articles/advanced.html","id":"sec:postICEobs","dir":"Articles","previous_headings":"","what":"Handling of observed post-ICE data in rbmi under reference-based imputation","title":"rbmi: Advanced Functionality","text":"rbmi always uses non-missing outcome data input data set, .e. data never overwritten imputation step removed analysis step. implies data considered irrelevant treatment effect estimation (e.g. data ICE estimand specified hypothetical strategy), data need removed input data set user prior calling rbmi functions. imputation missing random (MAR) strategy, observed outcome data also included fitting base imputation model. However, ICEs handled using reference-based imputation methods (CIR, CR, JR), rbmi excludes observed post-ICE data base imputation model. 
If these data were not excluded, the base imputation model would mistakenly estimate mean trajectories based on a mixture of observed pre- and post-ICE data, which is not relevant for reference-based imputations. However, the observed post-ICE data are added back into the data set after the fitting of the base imputation model and are included in the subsequent imputation and analysis steps. Post-ICE data in the control or reference group are also excluded from the base imputation model if the user specifies a reference-based imputation strategy for these ICEs. This ensures that an ICE has the same impact on the data included in the base imputation model regardless of whether the ICE occurred in the control or the intervention group. On the other hand, imputation in the reference group is based on the MAR assumption even for the reference-based imputation methods, and it may be preferable in some settings to include post-ICE data from the control group in the base imputation model. This can be implemented by specifying a MAR strategy for the ICE in the control group and a reference-based strategy for the ICE in the intervention group. We use the latter approach in our example below. For the simulated trial data from section 2, we assumed that outcomes in the intervention group observed after the ICE “treatment discontinuation” follow the increments observed in the control group. Thus, imputation of missing data in the intervention group after treatment discontinuation might also be performed under a reference-based copy increments in reference (CIR) assumption. Specifically, we implement an estimator under the following assumptions: The endpoint of interest is the change in the outcome from baseline at each visit. The imputation model includes the treatment group, the (categorical) visit, treatment-by-visit interactions, the baseline outcome, and baseline outcome-by-visit interactions as covariates. The imputation model assumes a common unstructured covariance matrix in both treatment groups. In the control group, missing data are imputed under MAR whereas in the intervention group, missing post-ICE data are imputed under a CIR assumption. The analysis model of the endpoint in the imputed datasets is a separate ANCOVA model for each visit with the treatment group as the primary covariate and adjustment for the baseline outcome value. For illustration purposes, we chose MI based on approximate Bayesian posterior draws with 20 random imputations, which is not very demanding from a computational perspective. For practical applications, the number of random imputations may need to be increased. Moreover, other imputation approaches are also supported by rbmi.
guidance regarding choice imputation approach, refer user comparison implemented approaches Section 3.9 “Statistical Specifications” vignette (vignette(\"stat_specs\", package = \"rbmi\")). first report code set variables imputation analysis models. yet familiar syntax, recommend first check “quickstart” vignette (vignette(\"quickstart\", package = \"rbmi\")). chosen imputation method can set function method_approxbayes() follows: can now sequentially call 4 key functions rbmi perform multiple imputation. Please note management observed post-ICE data performed without additional complexity user. draws() automatically excludes post-ICE data handled reference-based method (keeps post-ICE data handled using MAR) using information provided argument data_ice. impute() impute truly missing data data[[vars$outcome]]. last output gives estimated difference -4.537 (95% CI -6.420 -2.655) two groups last visit associated p-value lower 0.001.","code":"# Create data_ice including the subject's first visit affected by the ICE and the imputation strategy # Imputation strategy for post-ICE data is CIR in the intervention group and MAR for the control group # (note that ICEs which are handled using MAR are optional and do not impact the analysis # because imputation of missing data under MAR is the default) data_ice_CIR <- data %>% group_by(id) %>% filter(ind_ice1 == 1) %>% # select visits with ICEs mutate(strategy = ifelse(group == \"Intervention\", \"CIR\", \"MAR\")) %>% summarise( visit = visit[1], # Select first visit affected by the ICE strategy = strategy[1] ) # Compute endpoint of interest: change from baseline and # remove rows corresponding to baseline visits data <- data %>% filter(visit != 0) %>% mutate( change = outcome - outcome_bl, visit = factor(visit, levels = unique(visit)) ) # Define key variables for the imputation and analysis models vars <- set_vars( subjid = \"id\", visit = \"visit\", outcome = \"change\", group = \"group\", covariates = 
c(\"visit*outcome_bl\", \"visit*group\"), strategy = \"strategy\" ) vars_an <- vars vars_an$covariates <- \"outcome_bl\" method <- method_approxbayes(n_sample = 20) draw_obj <- draws( data = data, data_ice = data_ice_CIR, vars = vars, method = method, quiet = TRUE, ncores = 2 ) impute_obj_CIR <- impute( draw_obj, references = c(\"Control\" = \"Control\", \"Intervention\" = \"Control\") ) ana_obj_CIR <- analyse( impute_obj_CIR, vars = vars_an ) pool_obj_CIR <- pool(ana_obj_CIR) pool_obj_CIR #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.486 0.512 -1.496 0.524 0.343 #> lsm_ref_1 2.62 0.362 1.907 3.333 <0.001 #> lsm_alt_1 2.133 0.362 1.42 2.847 <0.001 #> trt_2 -0.066 0.542 -1.135 1.004 0.904 #> lsm_ref_2 3.707 0.384 2.95 4.464 <0.001 #> lsm_alt_2 3.641 0.383 2.885 4.397 <0.001 #> trt_3 -1.782 0.607 -2.979 -0.585 0.004 #> lsm_ref_3 5.841 0.428 4.997 6.685 <0.001 #> lsm_alt_3 4.059 0.428 3.214 4.904 <0.001 #> trt_4 -2.518 0.692 -3.884 -1.152 <0.001 #> lsm_ref_4 7.656 0.492 6.685 8.627 <0.001 #> lsm_alt_4 5.138 0.488 4.176 6.1 <0.001 #> trt_5 -3.658 0.856 -5.346 -1.97 <0.001 #> lsm_ref_5 9.558 0.598 8.379 10.737 <0.001 #> lsm_alt_5 5.9 0.608 4.699 7.101 <0.001 #> trt_6 -4.537 0.954 -6.42 -2.655 <0.001 #> lsm_ref_6 11.048 0.666 9.735 12.362 <0.001 #> lsm_alt_6 6.511 0.674 5.181 7.841 <0.001 #> --------------------------------------------------"},{"path":"/articles/advanced.html","id":"efficiently-changing-reference-based-imputation-strategies","dir":"Articles","previous_headings":"","what":"Efficiently changing reference-based imputation strategies","title":"rbmi: Advanced Functionality","text":"draws() function far computationally intensive function rbmi. 
In some settings, it may be important to explore the impact of a change in the reference-based imputation strategy on the results. Such a change affects the imputation model but not the subsequent imputation step. In order to allow changes in the imputation strategy without having to re-run the draws() function, the function impute() has an additional argument update_strategy. However, please note that this functionality comes with important limitations: As described at the beginning of Section 3, observed post-ICE outcomes are included in the input dataset for the base imputation model if the imputation method is MAR but are excluded for the reference-based imputation methods (CIR, CR, JR). Therefore, update_strategy cannot be applied if the imputation strategy is changed from a MAR to a non-MAR strategy in the presence of observed post-ICE outcomes. Similarly, a change from a non-MAR strategy to MAR triggers a warning in the presence of observed post-ICE outcomes because the base imputation model has not been fitted to all relevant data under MAR. Finally, update_strategy cannot be applied if the timing of any of the ICEs is changed (in the argument data_ice) in addition to the imputation strategy. As an example, we described an analysis under the copy increments in reference (CIR) assumption in the previous section. Let’s assume that we want to change this strategy to a jump to reference imputation strategy for a sensitivity analysis.
can efficiently implemented using update_strategies follows: imputations jump reference assumption, get estimated difference -4.360 (95% CI -6.238 -2.482) two groups last visit associated p-value <0.001.","code":"# Change ICE strategy from CIR to JR data_ice_JR <- data_ice_CIR %>% mutate(strategy = ifelse(strategy == \"CIR\", \"JR\", strategy)) impute_obj_JR <- impute( draw_obj, references = c(\"Control\" = \"Control\", \"Intervention\" = \"Control\"), update_strategy = data_ice_JR ) ana_obj_JR <- analyse( impute_obj_JR, vars = vars_an ) pool_obj_JR <- pool(ana_obj_JR) pool_obj_JR #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.485 0.513 -1.496 0.526 0.346 #> lsm_ref_1 2.609 0.363 1.892 3.325 <0.001 #> lsm_alt_1 2.124 0.361 1.412 2.836 <0.001 #> trt_2 -0.06 0.535 -1.115 0.995 0.911 #> lsm_ref_2 3.694 0.378 2.948 4.441 <0.001 #> lsm_alt_2 3.634 0.381 2.882 4.387 <0.001 #> trt_3 -1.767 0.598 -2.948 -0.587 0.004 #> lsm_ref_3 5.845 0.422 5.012 6.677 <0.001 #> lsm_alt_3 4.077 0.432 3.225 4.93 <0.001 #> trt_4 -2.529 0.686 -3.883 -1.175 <0.001 #> lsm_ref_4 7.637 0.495 6.659 8.614 <0.001 #> lsm_alt_4 5.108 0.492 4.138 6.078 <0.001 #> trt_5 -3.523 0.856 -5.212 -1.833 <0.001 #> lsm_ref_5 9.554 0.61 8.351 10.758 <0.001 #> lsm_alt_5 6.032 0.611 4.827 7.237 <0.001 #> trt_6 -4.36 0.952 -6.238 -2.482 <0.001 #> lsm_ref_6 11.003 0.676 9.669 12.337 <0.001 #> lsm_alt_6 6.643 0.687 5.287 8 <0.001 #> --------------------------------------------------"},{"path":"/articles/advanced.html","id":"imputation-under-mar-with-time-varying-covariates","dir":"Articles","previous_headings":"","what":"Imputation under MAR with time-varying covariates","title":"rbmi: Advanced Functionality","text":"rbmi package supports inclusion time-varying 
covariates in the imputation model. This is particularly useful for implementing so-called retrieved dropout models. The vignette “Implementation of retrieved-dropout models using rbmi” (vignette(topic = \"retrieved_dropout\", package = \"rbmi\")) contains examples of such models.","code":""},{"path":"/articles/advanced.html","id":"custom-imputation-strategies","dir":"Articles","previous_headings":"","what":"Custom imputation strategies","title":"rbmi: Advanced Functionality","text":"The following imputation strategies are implemented in rbmi: Missing at Random (MAR), Jump to Reference (JR), Copy Reference (CR), Copy Increments in Reference (CIR), Last Mean Carried Forward (LMCF). In addition, rbmi allows the user to implement their own imputation strategy. To do this, the user needs to do three things: Define a function implementing the new imputation strategy. Specify which patients use this strategy in the data_ice dataset provided to draws(). Provide the imputation strategy function to impute(). The imputation strategy function must take 3 arguments (pars_group, pars_ref, and index_mar) and calculate the mean and covariance matrix of the subject’s marginal imputation distribution, which will then be applied to all subjects to whom the strategy applies. Here, pars_group contains the predicted mean trajectory (pars_group$mu, a numeric vector) and covariance matrix (pars_group$sigma) for a subject conditional on their assigned treatment group and covariates. pars_ref contains the corresponding mean trajectory and covariance matrix conditional on the reference group and the subject’s covariates. index_mar is a logical vector which specifies for each visit whether the visit is unaffected by an ICE handled using a non-MAR method or not. As an example, the user can check how the CIR strategy is implemented by looking at the function strategy_CIR(). To illustrate this with a simple example, assume that a new strategy is to be implemented as follows: - Up to the ICE, the marginal mean of the imputation distribution is equal to the marginal mean trajectory for the subject according to their assigned group and covariates. - After the ICE, the marginal mean of the imputation distribution is equal to the average of the visit-wise marginal means based on the subject's covariates and the assigned group or the reference group, respectively. - For the covariance matrix of the marginal imputation distribution, the covariance matrix from the assigned group is taken.
, first need define imputation function example coded follows: example showing use: incorporate rbmi, data_ice needs updated strategy AVG specified visits affected ICE. Additionally, function needs provided impute() via getStrategies() function shown : , analysis proceed calling analyse() pool() .","code":"strategy_CIR #> function (pars_group, pars_ref, index_mar) #> { #> if (all(index_mar)) { #> return(pars_group) #> } #> else if (all(!index_mar)) { #> return(pars_ref) #> } #> mu <- pars_group$mu #> last_mar <- which(!index_mar)[1] - 1 #> increments_from_last_mar_ref <- pars_ref$mu[!index_mar] - #> pars_ref$mu[last_mar] #> mu[!index_mar] <- mu[last_mar] + increments_from_last_mar_ref #> sigma <- compute_sigma(sigma_group = pars_group$sigma, sigma_ref = pars_ref$sigma, #> index_mar = index_mar) #> pars <- list(mu = mu, sigma = sigma) #> return(pars) #> } #> #> strategy_AVG <- function(pars_group, pars_ref, index_mar) { mu_mean <- (pars_group$mu + pars_ref$mu) / 2 x <- pars_group x$mu[!index_mar] <- mu_mean[!index_mar] return(x) } pars_group <- list( mu = c(1, 2, 3), sigma = as_vcov(c(1, 3, 2), c(0.4, 0.5, 0.45)) ) pars_ref <- list( mu = c(5, 6, 7), sigma = as_vcov(c(2, 1, 1), c(0.7, 0.8, 0.5)) ) index_mar <- c(TRUE, TRUE, FALSE) strategy_AVG(pars_group, pars_ref, index_mar) #> $mu #> [1] 1 2 5 #> #> $sigma #> [,1] [,2] [,3] #> [1,] 1.0 1.2 1.0 #> [2,] 1.2 9.0 2.7 #> [3,] 1.0 2.7 4.0 data_ice_AVG <- data_ice_CIR %>% mutate(strategy = ifelse(strategy == \"CIR\", \"AVG\", strategy)) draw_obj <- draws( data = data, data_ice = data_ice_AVG, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj, references = c(\"Control\" = \"Control\", \"Intervention\" = \"Control\"), strategies = getStrategies(AVG = strategy_AVG) )"},{"path":"/articles/advanced.html","id":"custom-analysis-functions","dir":"Articles","previous_headings":"","what":"Custom analysis functions","title":"rbmi: Advanced Functionality","text":"default rbmi analyse data using ancova() 
function. This analysis function fits an ANCOVA model to the outcomes of each visit separately, and returns the “treatment effect” estimate as well as the corresponding least square means for each group. If the user wants to perform a different analysis, or return different statistics from the analysis, this can be done by using a custom analysis function. Beware that the validity of the conditional mean imputation method has only been formally established for analysis functions corresponding to linear models (such as ANCOVA), so caution is required when applying alternative analysis functions to this method. The custom analysis function must take a data.frame as its first argument and return a named list, with each element itself being a list containing at a minimum a point estimate, called est. For the methods method_bayes() and method_approxbayes(), the list must additionally contain a standard error (element se) and, if available, the degrees of freedom of the complete-data analysis model (element df). As a first simple example, we replicate the ANCOVA analysis at the last visit for the CIR-based imputations with a user-defined analysis function below: As a second example, assume that for a supplementary analysis the user wants to compare the proportion of subjects with a change from baseline of >10 points at the last visit between the treatment groups, with the baseline outcome as an additional covariate. This leads to the following basic analysis function: Note that if the user wants rbmi to use a normal approximation for the pooled test statistics, the degrees of freedom need to be set to df = NA (as per the example). If the degrees of freedom of the complete data test statistics are known and the degrees of freedom are set to df = Inf, then rbmi pools the degrees of freedom across imputed datasets according to the rule by Barnard and Rubin (see the “Statistical Specifications” vignette (vignette(\"stat_specs\", package = \"rbmi\")) for details). According to this rule, infinite degrees of freedom for the complete data analysis do not imply that the pooled degrees of freedom are also infinite. Rather, in this case the pooled degrees of freedom are (M-1)/lambda^2, where M is the number of imputations and lambda is the fraction of missing information (see Barnard and Rubin (1999) for details).","code":"compare_change_lastvisit <- function(data, ...)
{ fit <- lm(change ~ group + outcome_bl, data = data, subset = (visit == 6) ) res <- list( trt = list( est = coef(fit)[\"groupIntervention\"], se = sqrt(vcov(fit)[\"groupIntervention\", \"groupIntervention\"]), df = df.residual(fit) ) ) return(res) } ana_obj_CIR6 <- analyse( impute_obj_CIR, fun = compare_change_lastvisit, vars = vars_an ) pool(ana_obj_CIR6) #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================= #> parameter est se lci uci pval #> ------------------------------------------------- #> trt -4.537 0.954 -6.42 -2.655 <0.001 #> ------------------------------------------------- compare_prop_lastvisit <- function(data, ...) { fit <- glm( I(change > 10) ~ group + outcome_bl, family = binomial(), data = data, subset = (visit == 6) ) res <- list( trt = list( est = coef(fit)[\"groupIntervention\"], se = sqrt(vcov(fit)[\"groupIntervention\", \"groupIntervention\"]), df = NA ) ) return(res) } ana_obj_prop <- analyse( impute_obj_CIR, fun = compare_prop_lastvisit, vars = vars_an ) pool_obj_prop <- pool(ana_obj_prop) pool_obj_prop #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================= #> parameter est se lci uci pval #> ------------------------------------------------- #> trt -1.052 0.314 -1.667 -0.438 0.001 #> ------------------------------------------------- tmp <- as.data.frame(pool_obj_prop) %>% mutate( OR = exp(est), OR.lci = exp(lci), OR.uci = exp(uci) ) %>% select(parameter, OR, OR.lci, OR.uci) tmp #> parameter OR OR.lci OR.uci #> 1 trt 0.3491078 0.188807 0.6455073"},{"path":"/articles/advanced.html","id":"sensitivity-analyses-delta-adjustments-and-tipping-point-analyses","dir":"Articles","previous_headings":"","what":"Sensitivity analyses: Delta adjustments 
and tipping point analyses","title":"rbmi: Advanced Functionality","text":"Delta-adjustments are used to impute missing data under a not missing at random (NMAR) assumption. This reflects the belief that unobserved outcomes are systematically “worse” (or “better”) than “comparable” observed outcomes. For an extensive discussion of delta-adjustment methods, we refer to Cro et al. (2020). In rbmi, a marginal delta-adjustment approach is implemented. This means that the delta-adjustment is applied to the dataset after data imputation under MAR or reference-based missing data assumptions and prior to the analysis of the imputed data. Sensitivity analyses using delta-adjustments can therefore be performed without having to re-fit the imputation model. In rbmi, they are implemented via the delta argument of the analyse() function.","code":""},{"path":"/articles/advanced.html","id":"simple-delta-adjustments-and-tipping-point-analyses","dir":"Articles","previous_headings":"8 Sensitivity analyses: Delta adjustments and tipping point analyses","what":"Simple delta adjustments and tipping point analyses","title":"rbmi: Advanced Functionality","text":"The delta argument of analyse() allows users to modify the outcome variable prior to the analysis. To do this, the user needs to provide a data.frame which contains columns for the subject and visit (to identify the observation to be adjusted) plus an additional column called delta which specifies the value to be added to the outcomes prior to the analysis. The delta_template() function supports the user in creating this data.frame: it creates a skeleton data.frame containing one row per subject and visit with the value of delta set to 0 for all observations: Note that the output of delta_template() contains additional information which can be used to properly re-set the variable delta. For example, assume that the user wants to implement a delta-adjustment to the imputed values under CIR as described in section 3. Specifically, assume that a fixed “worsening adjustment” of +5 points is applied to all imputed values regardless of the treatment group. This could be programmed as follows: This approach can also be used to implement a tipping point analysis. Here, we apply different delta-adjustments to the imputed data of the control and the intervention group, respectively. Assume that delta-adjustments of less than -5 points or more than +15 points are considered implausible from a clinical perspective.
Therefore, vary delta-values group -5 +15 points investigate delta combinations lead “tipping” primary analysis result, defined analysis p-value \\(\\geq 0.05\\). According analysis, significant test result primary analysis CIR tipped non-significant result rather extreme delta-adjustments. Please note real analysis recommended use smaller step size grid used .","code":"dat_delta <- delta_template(imputations = impute_obj_CIR) head(dat_delta) #> id visit group is_mar is_missing is_post_ice strategy delta #> 1 id_1 1 Control TRUE TRUE TRUE MAR 0 #> 2 id_1 2 Control TRUE TRUE TRUE MAR 0 #> 3 id_1 3 Control TRUE TRUE TRUE MAR 0 #> 4 id_1 4 Control TRUE TRUE TRUE MAR 0 #> 5 id_1 5 Control TRUE TRUE TRUE MAR 0 #> 6 id_1 6 Control TRUE TRUE TRUE MAR 0 # Set delta-value to 5 for all imputed (previously missing) outcomes and 0 for all other outcomes dat_delta <- delta_template(imputations = impute_obj_CIR) %>% mutate(delta = is_missing * 5) # Repeat the analyses with the delta-adjusted values and pool results ana_delta <- analyse( impute_obj_CIR, delta = dat_delta, vars = vars_an ) pool(ana_delta) #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.482 0.524 -1.516 0.552 0.359 #> lsm_ref_1 2.718 0.37 1.987 3.448 <0.001 #> lsm_alt_1 2.235 0.37 1.505 2.966 <0.001 #> trt_2 -0.016 0.56 -1.12 1.089 0.978 #> lsm_ref_2 3.907 0.396 3.125 4.688 <0.001 #> lsm_alt_2 3.891 0.395 3.111 4.671 <0.001 #> trt_3 -1.684 0.641 -2.948 -0.42 0.009 #> lsm_ref_3 6.092 0.452 5.201 6.983 <0.001 #> lsm_alt_3 4.408 0.452 3.515 5.3 <0.001 #> trt_4 -2.359 0.741 -3.821 -0.897 0.002 #> lsm_ref_4 7.951 0.526 6.913 8.99 <0.001 #> lsm_alt_4 5.593 0.522 4.563 6.623 <0.001 #> trt_5 -3.34 0.919 -5.153 -1.526 <0.001 #> lsm_ref_5 9.899 0.643 8.631 11.168 <0.001 
#> lsm_alt_5 6.559 0.653 5.271 7.848 <0.001 #> trt_6 -4.21 1.026 -6.236 -2.184 <0.001 #> lsm_ref_6 11.435 0.718 10.019 12.851 <0.001 #> lsm_alt_6 7.225 0.725 5.793 8.656 <0.001 #> -------------------------------------------------- perform_tipp_analysis <- function(delta_control, delta_intervention, cl) { # Derive delta offset based on control and intervention specific deltas delta_df <- delta_df_init %>% mutate( delta_ctl = (group == \"Control\") * is_missing * delta_control, delta_int = (group == \"Intervention\") * is_missing * delta_intervention, delta = delta_ctl + delta_int ) ana_delta <- analyse( impute_obj_CIR, fun = compare_change_lastvisit, vars = vars_an, delta = delta_df, ncores = cl ) pool_delta <- as.data.frame(pool(ana_delta)) list( trt_effect_6 = pool_delta[[\"est\"]], pval_6 = pool_delta[[\"pval\"]] ) } # Get initial delta template delta_df_init <- delta_template(impute_obj_CIR) tipp_frame_grid <- expand.grid( delta_control = seq(-5, 15, by = 2), delta_intervention = seq(-5, 15, by = 2) ) %>% as_tibble() # parallelise to speed up computation cl <- make_rbmi_cluster(2) tipp_frame <- tipp_frame_grid %>% mutate( results_list = map2(delta_control, delta_intervention, perform_tipp_analysis, cl = cl), trt_effect_6 = map_dbl(results_list, \"trt_effect_6\"), pval_6 = map_dbl(results_list, \"pval_6\") ) %>% select(-results_list) %>% mutate( pval = cut( pval_6, c(0, 0.001, 0.01, 0.05, 0.2, 1), right = FALSE, labels = c(\"<0.001\", \"0.001 - <0.01\", \"0.01- <0.05\", \"0.05 - <0.20\", \">= 0.20\") ) ) # Close cluster when done with it parallel::stopCluster(cl) # Show delta values which lead to non-significant analysis results tipp_frame %>% filter(pval_6 >= 0.05) #> # A tibble: 3 × 5 #> delta_control delta_intervention trt_effect_6 pval_6 pval #> #> 1 -5 15 -1.99 0.0935 0.05 - <0.20 #> 2 -3 15 -2.15 0.0704 0.05 - <0.20 #> 3 -1 15 -2.31 0.0527 0.05 - <0.20 ggplot(tipp_frame, aes(delta_control, delta_intervention, fill = pval)) + geom_raster() + 
scale_fill_manual(values = c(\"darkgreen\", \"lightgreen\", \"lightyellow\", \"orange\", \"red\"))"},{"path":"/articles/advanced.html","id":"more-flexible-delta-adjustments-using-the-dlag-and-delta-arguments-of-delta_template","dir":"Articles","previous_headings":"8 Sensitivity analyses: Delta adjustments and tipping point analyses","what":"More flexible delta-adjustments using the dlag and delta arguments of delta_template()","title":"rbmi: Advanced Functionality","text":"So far, we have discussed simple delta arguments that add the same value to all imputed values. However, a user may want to apply more flexible delta-adjustments to missing values after an intercurrent event (ICE) and vary the magnitude of the delta adjustment depending on how far away the visit in question is from the ICE visit. To facilitate the creation of such flexible delta-adjustments, the delta_template() function has two optional additional arguments, delta and dlag. The delta argument specifies the default amount of delta that should be applied to each post-ICE visit, whilst dlag specifies the scaling coefficient to be applied based upon the visit's proximity to the first visit affected by the ICE. By default, delta is only added to unobserved (i.e. imputed) post-ICE outcomes, but this can be changed by setting the optional argument missing_only = FALSE. The usage of the delta and dlag arguments is best illustrated with a few examples: Assume a setting with 4 visits where the user has specified delta = c(5,6,7,8) and dlag = c(1,2,3,4). For a subject whose first visit affected by the ICE is visit 2, these values of delta and dlag imply the following delta offset: Here, the subject has a delta offset of 0 applied for visit v1, 6 for visit v2, 20 for visit v3 and 44 for visit v4. Assume instead that the subject's first visit affected by the ICE is visit 3. Then, the values of delta and dlag imply the following delta offset: To apply a constant delta value of +5 to all visits affected by the ICE regardless of their proximity to the first ICE visit, one could set delta = c(5,5,5,5) and dlag = c(1,0,0,0). Alternatively, for this setting it may be more straightforward to call the delta_template() function without the delta and dlag arguments and then overwrite the delta column of the resulting data.frame as described in the previous section (additionally relying on the is_post_ice variable). Another way of using these arguments is to set delta to the difference in time between visits and dlag to the amount of delta per unit of time.
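The delta/dlag arithmetic described above (per-visit delta multiplied by a lagged scaling vector, then cumulatively summed) can be reproduced outside of R. The following Python function is an illustrative re-implementation of that arithmetic under the assumptions stated in the worked examples; it is not rbmi's own code:

```python
def delta_offsets(delta, dlag, first_ice_visit):
    """Cumulative delta offsets per visit, mirroring the delta/dlag logic
    of delta_template(): visits before the first ICE-affected visit get a
    scaling of 0; from that visit onward the dlag values are applied in
    order, multiplied with the per-visit delta, and cumulatively summed.
    `first_ice_visit` is a 1-based visit index."""
    n = len(delta)
    scaling = [0] * n
    for j in range(first_ice_visit - 1, n):
        scaling[j] = dlag[j - (first_ice_visit - 1)]
    offsets, total = [], 0
    for d, s in zip(delta, scaling):
        total += d * s
        offsets.append(total)
    return offsets

# Worked examples from the text:
print(delta_offsets([5, 6, 7, 8], [1, 2, 3, 4], 2))  # [0, 6, 20, 44]
print(delta_offsets([5, 6, 7, 8], [1, 2, 3, 4], 3))  # [0, 0, 7, 23]
```

The same function reproduces the constant-delta trick: `delta_offsets([5, 5, 5, 5], [1, 0, 0, 0], 2)` yields `[0, 5, 5, 5]`.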
example, let’s say visits occur weeks 1, 5, 6 9 want delta 3 applied week ICE. simplicity, assume ICE occurs immediately subject’s last visit affected ICE. achieved setting delta = c(1,4,1,3) (difference weeks visit) dlag = c(3, 3, 3, 3). Assume subject’s first visit affected ICE visit v2, values delta dlag imply following delta offsets: wrap , show action simulated dataset section 2 imputed datasets based CIR assumption section 3. simulation setting specified follow-visits months 2, 4, 6, 8, 10, 12. Assume want apply delta-adjustment 1 every month ICE unobserved post-ICE visits intervention group . (E.g. ICE occurred immediately month 4 visit, total delta applied missing value month 10 visit 6.) program , first use delta dlag arguments delta_template() set corresponding template data.frame: Next, can use additional metadata variables provided delta_template() manually reset delta values control group back 0: Finally, can use delta data.frame apply desired delta offset analysis:","code":"v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 1 2 3 # scaling starting from the first visit after the subjects ICE -------------- 0 6 14 24 # delta * scaling -------------- 0 6 20 44 # cumulative sum (i.e. delta) to be applied to each visit v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 0 1 2 # scaling starting from the first visit after the subjects ICE -------------- 0 0 7 16 # delta * scaling -------------- 0 0 7 23 # cumulative sum (i.e. delta) to be applied to each visit v1 v2 v3 v4 -------------- 1 4 1 3 # delta assigned to each visit 0 3 3 3 # scaling starting from the first visit after the subjects ICE -------------- 0 12 3 9 # delta * scaling -------------- 0 12 15 24 # cumulative sum (i.e. 
delta) to be applied to each visit delta_df <- delta_template( impute_obj_CIR, delta = c(2, 2, 2, 2, 2, 2), dlag = c(1, 1, 1, 1, 1, 1) ) head(delta_df) #> id visit group is_mar is_missing is_post_ice strategy delta #> 1 id_1 1 Control TRUE TRUE TRUE MAR 2 #> 2 id_1 2 Control TRUE TRUE TRUE MAR 4 #> 3 id_1 3 Control TRUE TRUE TRUE MAR 6 #> 4 id_1 4 Control TRUE TRUE TRUE MAR 8 #> 5 id_1 5 Control TRUE TRUE TRUE MAR 10 #> 6 id_1 6 Control TRUE TRUE TRUE MAR 12 delta_df2 <- delta_df %>% mutate(delta = if_else(group == \"Control\", 0, delta)) head(delta_df2) #> id visit group is_mar is_missing is_post_ice strategy delta #> 1 id_1 1 Control TRUE TRUE TRUE MAR 0 #> 2 id_1 2 Control TRUE TRUE TRUE MAR 0 #> 3 id_1 3 Control TRUE TRUE TRUE MAR 0 #> 4 id_1 4 Control TRUE TRUE TRUE MAR 0 #> 5 id_1 5 Control TRUE TRUE TRUE MAR 0 #> 6 id_1 6 Control TRUE TRUE TRUE MAR 0 ana_delta <- analyse(impute_obj_CIR, delta = delta_df2, vars = vars_an) pool(ana_delta) #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.446 0.514 -1.459 0.567 0.386 #> lsm_ref_1 2.62 0.363 1.904 3.335 <0.001 #> lsm_alt_1 2.173 0.363 1.458 2.889 <0.001 #> trt_2 0.072 0.546 -1.006 1.15 0.895 #> lsm_ref_2 3.708 0.387 2.945 4.471 <0.001 #> lsm_alt_2 3.78 0.386 3.018 4.542 <0.001 #> trt_3 -1.507 0.626 -2.743 -0.272 0.017 #> lsm_ref_3 5.844 0.441 4.973 6.714 <0.001 #> lsm_alt_3 4.336 0.442 3.464 5.209 <0.001 #> trt_4 -2.062 0.731 -3.504 -0.621 0.005 #> lsm_ref_4 7.658 0.519 6.634 8.682 <0.001 #> lsm_alt_4 5.596 0.515 4.58 6.612 <0.001 #> trt_5 -2.938 0.916 -4.746 -1.13 0.002 #> lsm_ref_5 9.558 0.641 8.293 10.823 <0.001 #> lsm_alt_5 6.62 0.651 5.335 7.905 <0.001 #> trt_6 -3.53 1.045 -5.591 -1.469 0.001 #> lsm_ref_6 11.045 0.73 9.604 12.486 <0.001 #> lsm_alt_6 7.515 
0.738 6.058 8.971 <0.001 #> --------------------------------------------------"},{"path":[]},{"path":"/articles/quickstart.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Quickstart","text":"The purpose of this vignette is to provide a 15-minute quickstart guide to the core functions of the rbmi package. The rbmi package consists of 4 core functions (plus several helper functions) which are typically called in sequence: draws() - fits the imputation models and stores their parameters, impute() - creates multiple imputed datasets, analyse() - analyses the multiple imputed datasets, and pool() - combines the analysis results across imputed datasets into a single statistic. The example in this vignette makes use of Bayesian multiple imputation; this functionality requires the installation of the suggested package rstan.","code":"install.packages(\"rstan\")"},{"path":"/articles/quickstart.html","id":"the-data","dir":"Articles","previous_headings":"","what":"The Data","title":"rbmi: Quickstart","text":"We use a publicly available example dataset from an antidepressant clinical trial of an active drug versus placebo. The relevant endpoint is the Hamilton 17-item depression rating scale (HAMD17), which was assessed at baseline and at weeks 1, 2, 4, and 6. Study drug discontinuation occurred in 24% of subjects from the active drug group and 26% of subjects from the placebo group. All data after study drug discontinuation are missing, and there is a single additional intermittent missing observation. We consider an imputation model with the mean change from baseline in the HAMD17 score as the outcome (variable CHANGE in the dataset). The following covariates are included in the imputation model: the treatment group (THERAPY), the (categorical) visit (VISIT), treatment-by-visit interactions, the baseline HAMD17 score (BASVAL), and baseline HAMD17 score-by-visit interactions. A common unstructured covariance matrix structure is assumed for both groups. The analysis model is an ANCOVA model with the treatment group as the primary factor and adjustment for the baseline HAMD17 score. rbmi expects its input dataset to be complete; that is, there must be one row per subject for each visit. Missing outcome values should be coded as NA, but missing covariate values are not allowed.
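The completeness requirement just described (one row per subject per visit, NA for missing outcomes, covariates carried forward) can be illustrated with a small sketch. The following Python function is a simplified, hypothetical analogue of rbmi's expand_locf() helper, written for illustration only; the real function operates on data.frames and has a richer interface:

```python
def expand_locf(rows, subjects, visits, locf_vars):
    """Expand records to a complete subject-by-visit grid. Missing rows
    get a None (NA) outcome and covariates carried forward from the last
    observed visit (LOCF). `rows` maps (subject, visit) -> dict."""
    complete = {}
    for subj in subjects:
        carried = {}
        for visit in visits:
            row = rows.get((subj, visit))
            if row is not None:
                # Remember the latest observed covariate values
                carried = {k: row[k] for k in locf_vars if k in row}
            else:
                # Fabricate the missing row: NA outcome, LOCF covariates
                row = {"outcome": None, **carried}
            complete[(subj, visit)] = row
    return complete

# A subject observed at week 1 but missing at week 2 (hypothetical data):
rows = {("1513", "w1"): {"outcome": 3, "BASVAL": 20, "THERAPY": "DRUG"}}
grid = expand_locf(rows, ["1513"], ["w1", "w2"], ["BASVAL", "THERAPY"])
print(grid[("1513", "w2")])  # {'outcome': None, 'BASVAL': 20, 'THERAPY': 'DRUG'}
```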
dataset incomplete, expand_locf() helper function can used add missing rows, using LOCF imputation carry forward observed baseline covariate values visits missing outcomes. Rows corresponding missing outcomes present antidepressant trial dataset. address therefore use expand_locf() function follows:","code":"library(rbmi) library(dplyr) #> #> Attaching package: 'dplyr' #> The following objects are masked from 'package:stats': #> #> filter, lag #> The following objects are masked from 'package:base': #> #> intersect, setdiff, setequal, union data(\"antidepressant_data\") dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") )"},{"path":"/articles/quickstart.html","id":"draws","dir":"Articles","previous_headings":"","what":"Draws","title":"rbmi: Quickstart","text":"draws() function fits imputation models stores corresponding parameter estimates Bayesian posterior parameter draws. three main inputs draws() function : data - primary longitudinal data.frame containing outcome variable covariates. data_ice - data.frame specifies first visit affected intercurrent event (ICE) imputation strategy handling missing outcome data ICE. one ICE imputed non-MAR strategy allowed per subject. method - statistical method used fit imputation models create imputed datasets. antidepressant trial data, dataset data_ice provided. However, can derived , dataset, subject’s first visit affected ICE “study drug discontinuation” corresponds first terminal missing observation. first derive dateset data_ice create 150 Bayesian posterior draws imputation model parameters. 
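The data_ice derivation described here treats the first visit affected by study drug discontinuation as the first visit of the terminal run of missing observations, while intermittent missing values (such as subject 3618's) are left to default MAR imputation. That logic can be sketched as follows; this is an illustrative Python helper, not part of rbmi:

```python
def first_terminal_missing(outcomes):
    """Return the 1-based index of the first visit in the trailing block
    of missing (None) outcomes, or None if the last visit is observed.
    Intermittent missing values followed by observed data do not count."""
    start = None
    for i, value in enumerate(outcomes, start=1):
        if value is None:
            if start is None:
                start = i
        else:
            start = None  # an observed value resets any earlier run
    return start

print(first_terminal_missing([1, 2, None, None]))  # 3
print(first_terminal_missing([1, None, 3, None]))  # 4 (visit 2 is intermittent)
print(first_terminal_missing([1, 2, 3]))           # None (fully observed)
```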
example, assume imputation strategy ICE Jump Reference (JR) subjects 150 multiple imputed datasets using Bayesian posterior draws imputation model created. Note use set_vars() specifies names key variables within dataset imputation model. Additionally, note whilst vars$group vars$visit added terms imputation model default, interaction , thus inclusion group * visit list covariates. Available imputation methods include: Bayesian multiple imputation - method_bayes() Approximate Bayesian multiple imputation - method_approxbayes() Conditional mean imputation (bootstrap) - method_condmean(type = \"bootstrap\") Conditional mean imputation (jackknife) - method_condmean(type = \"jackknife\") Bootstrapped multiple imputation - method = method_bmlmi() comparison methods, refer stat_specs vignette (Section 3.10). “statistical specifications” vignette (Section 3.10): vignette(\"stat_specs\",package=\"rbmi\"). Available imputation strategies include: Missing Random - \"MAR\" Jump Reference - \"JR\" Copy Reference - \"CR\" Copy Increments Reference - \"CIR\" Last Mean Carried Forward - \"LMCF\"","code":"# create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing values which does not correspond # to a study drug discontinuation. We therefore remove this subject from `dat_ice`. # (In the later imputation step, it will automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] dat_ice #> # A tibble: 43 × 3 #> PATIENT VISIT strategy #> #> 1 1513 5 JR #> 2 1514 5 JR #> 3 1517 5 JR #> 4 1804 7 JR #> 5 2104 7 JR #> 6 2118 5 JR #> 7 2218 6 JR #> 8 2230 6 JR #> 9 2721 5 JR #> 10 2729 5 JR #> # ℹ 33 more rows # Define the names of key variables in our dataset and # the covariates included in the imputation model using `set_vars()` # Note that the covariates argument can also include interaction terms vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: Bayesian multiple imputation with 150 imputed datsets) method <- method_bayes( burn_in = 200, burn_between = 5, n_samples = 150 ) # Create samples for the imputation parameters by running the draws() function set.seed(987) drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) drawObj #> #> Draws Object #> ------------ #> Number of Samples: 150 #> Number of Failed Samples: 0 #> Model Formula: CHANGE ~ 1 + THERAPY + VISIT + BASVAL * VISIT + THERAPY * VISIT #> Imputation Type: random #> Method: #> name: Bayes #> burn_in: 200 #> burn_between: 5 #> same_cov: TRUE #> n_samples: 150"},{"path":"/articles/quickstart.html","id":"impute","dir":"Articles","previous_headings":"","what":"Impute","title":"rbmi: Quickstart","text":"next step use parameters imputation model generate imputed datasets. done via impute() function. function two key inputs: imputation model output draws() reference groups relevant reference-based imputation methods. ’s usage thus: instance, specifying PLACEBO group reference group well DRUG group (standard imputation using reference-based methods). Generally speaking, need see directly interact imputed datasets. 
However, wish inspect , can extracted imputation object using extract_imputed_dfs() helper function, .e.: Note case method_bayes() method_approxbayes(), imputed datasets correspond random imputations original dataset. method_condmean(), first imputed dataset always correspond completed original dataset containing subjects. method_condmean(type=\"jackknife\"), remaining datasets correspond conditional mean imputations leave-one-subject-datasets, whereas method_condmean(type=\"bootstrap\"), subsequent dataset corresponds conditional mean imputation bootstrapped datasets. method_bmlmi(), imputed datasets correspond sets random imputations bootstrapped datasets.","code":"imputeObj <- impute( drawObj, references = c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") ) imputeObj #> #> Imputation Object #> ----------------- #> Number of Imputed Datasets: 150 #> Fraction of Missing Data (Original Dataset): #> 4: 0% #> 5: 8% #> 6: 13% #> 7: 25% #> References: #> DRUG -> PLACEBO #> PLACEBO -> PLACEBO imputed_dfs <- extract_imputed_dfs(imputeObj) head(imputed_dfs[[10]], 12) # first 12 rows of 10th imputed dataset #> PATIENT HAMATOTL PGIIMP RELDAYS VISIT THERAPY GENDER POOLINV BASVAL #> 1 new_pt_1 21 2 7 4 DRUG F 006 32 #> 2 new_pt_1 19 2 14 5 DRUG F 006 32 #> 3 new_pt_1 21 3 28 6 DRUG F 006 32 #> 4 new_pt_1 17 4 42 7 DRUG F 006 32 #> 5 new_pt_2 18 3 7 4 PLACEBO F 006 14 #> 6 new_pt_2 18 2 15 5 PLACEBO F 006 14 #> 7 new_pt_2 14 3 29 6 PLACEBO F 006 14 #> 8 new_pt_2 8 2 42 7 PLACEBO F 006 14 #> 9 new_pt_3 18 3 7 4 DRUG F 006 21 #> 10 new_pt_3 17 3 14 5 DRUG F 006 21 #> 11 new_pt_3 12 3 28 6 DRUG F 006 21 #> 12 new_pt_3 9 3 44 7 DRUG F 006 21 #> HAMDTL17 CHANGE #> 1 21 -11 #> 2 20 -12 #> 3 19 -13 #> 4 17 -15 #> 5 11 -3 #> 6 14 0 #> 7 9 -5 #> 8 5 -9 #> 9 20 -1 #> 10 18 -3 #> 11 16 -5 #> 12 13 -8"},{"path":"/articles/quickstart.html","id":"analyse","dir":"Articles","previous_headings":"","what":"Analyse","title":"rbmi: Quickstart","text":"next step run analysis model imputed 
dataset. done defining analysis function calling analyse() apply function imputed dataset. vignette use ancova() function provided rbmi package fits separate ANCOVA model outcomes visit returns treatment effect estimate corresponding least square means group per visit. Note , similar draws(), ancova() function uses set_vars() function determines names key variables within data covariates (addition treatment group) analysis model adjusted. Please also note names analysis estimates contain “ref” “alt” refer two treatment arms. particular “ref” refers first factor level vars$group necessarily coincide control arm. example, since levels(dat[[vars$group]]) = c(\"DRUG\", PLACEBO), results associated “ref” correspond intervention arm, associated “alt” correspond control arm. Additionally, can use delta argument analyse() perform delta adjustments imputed datasets prior analysis. brief, implemented specifying data.frame contains amount adjustment added longitudinal outcome subject visit, .e.  data.frame must contain columns subjid, visit, delta. appreciated carrying procedure potentially tedious, therefore delta_template() helper function provided simplify . particular, delta_template() returns shell data.frame delta-adjustment set 0 patients. Additionally delta_template() adds several meta-variables onto shell data.frame can used manual derivation manipulation delta-adjustment. example lets say want add delta-value 5 imputed values (.e. values missing original dataset) drug arm. 
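The delta rule described above (add 5 to all imputed values in the drug arm, 0 everywhere else) reduces to a simple per-row condition on the metadata columns that delta_template() provides. A hypothetical Python rendering of that rule, for illustration only:

```python
def assign_delta(rows, group_col="THERAPY", target_group="DRUG", delta=5):
    """Given delta_template()-style rows (dicts with a group column and an
    is_missing flag), return per-row delta values: `delta` for imputed
    outcomes in the target group, 0 otherwise."""
    return [
        delta if (r[group_col] == target_group and r["is_missing"]) else 0
        for r in rows
    ]

rows = [
    {"THERAPY": "DRUG", "is_missing": True},     # imputed, drug arm -> 5
    {"THERAPY": "DRUG", "is_missing": False},    # observed -> 0
    {"THERAPY": "PLACEBO", "is_missing": True},  # imputed, placebo -> 0
]
print(assign_delta(rows))  # [5, 0, 0]
```

This mirrors the `mutate(delta = if_else(THERAPY == "DRUG" & is_missing, 5, 0))` step in the R code below.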
implemented follows:","code":"anaObj <- analyse( imputeObj, ancova, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) ) anaObj #> #> Analysis Object #> --------------- #> Number of Results: 150 #> Analysis Function: ancova #> Delta Applied: FALSE #> Analysis Estimates: #> trt_4 #> lsm_ref_4 #> lsm_alt_4 #> trt_5 #> lsm_ref_5 #> lsm_alt_5 #> trt_6 #> lsm_ref_6 #> lsm_alt_6 #> trt_7 #> lsm_ref_7 #> lsm_alt_7 # For reference show the additional meta variables provided delta_template(imputeObj) %>% as_tibble() #> # A tibble: 688 × 8 #> PATIENT VISIT THERAPY is_mar is_missing is_post_ice strategy delta #> #> 1 1503 4 DRUG TRUE FALSE FALSE NA 0 #> 2 1503 5 DRUG TRUE FALSE FALSE NA 0 #> 3 1503 6 DRUG TRUE FALSE FALSE NA 0 #> 4 1503 7 DRUG TRUE FALSE FALSE NA 0 #> 5 1507 4 PLACEBO TRUE FALSE FALSE NA 0 #> 6 1507 5 PLACEBO TRUE FALSE FALSE NA 0 #> 7 1507 6 PLACEBO TRUE FALSE FALSE NA 0 #> 8 1507 7 PLACEBO TRUE FALSE FALSE NA 0 #> 9 1509 4 DRUG TRUE FALSE FALSE NA 0 #> 10 1509 5 DRUG TRUE FALSE FALSE NA 0 #> # ℹ 678 more rows delta_df <- delta_template(imputeObj) %>% as_tibble() %>% mutate(delta = if_else(THERAPY == \"DRUG\" & is_missing , 5, 0)) %>% select(PATIENT, VISIT, delta) delta_df #> # A tibble: 688 × 3 #> PATIENT VISIT delta #> #> 1 1503 4 0 #> 2 1503 5 0 #> 3 1503 6 0 #> 4 1503 7 0 #> 5 1507 4 0 #> 6 1507 5 0 #> 7 1507 6 0 #> 8 1507 7 0 #> 9 1509 4 0 #> 10 1509 5 0 #> # ℹ 678 more rows anaObj_delta <- analyse( imputeObj, ancova, delta = delta_df, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) )"},{"path":"/articles/quickstart.html","id":"pool","dir":"Articles","previous_headings":"","what":"Pool","title":"rbmi: Quickstart","text":"Finally, pool() function can used summarise analysis results across multiple imputed datasets provide overall statistic standard error, confidence intervals p-value hypothesis 
test null hypothesis effect equal 0. Note pooling method automatically derived based method specified original call draws(): method_bayes() method_approxbayes() pooling inference based Rubin’s rules. method_condmean(type = \"bootstrap\") inference either based normal approximation using bootstrap standard error (pool(..., type = \"normal\")) bootstrap percentiles (pool(..., type = \"percentile\")). method_condmean(type = \"jackknife\") inference based normal approximation using jackknife estimate standard error. method = method_bmlmi() inference according methods described von Hippel Bartlett (see stat_specs vignette details) Since used Bayesian multiple imputation vignette, pool() function automatically use Rubin’s rules. table values shown print message poolObj can also extracted using .data.frame() function: outputs gives estimated difference 2.180 (95% CI -0.080 4.439) two groups last visit associated p-value 0.059.","code":"poolObj <- pool( anaObj, conf.level = 0.95, alternative = \"two.sided\" ) poolObj #> #> Pool Object #> ----------- #> Number of Results Combined: 150 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_4 -0.092 0.683 -1.439 1.256 0.893 #> lsm_ref_4 -1.616 0.486 -2.576 -0.656 0.001 #> lsm_alt_4 -1.708 0.475 -2.645 -0.77 <0.001 #> trt_5 1.332 0.925 -0.495 3.159 0.152 #> lsm_ref_5 -4.157 0.661 -5.462 -2.852 <0.001 #> lsm_alt_5 -2.825 0.646 -4.1 -1.55 <0.001 #> trt_6 1.927 1.005 -0.059 3.913 0.057 #> lsm_ref_6 -6.097 0.721 -7.522 -4.671 <0.001 #> lsm_alt_6 -4.17 0.7 -5.553 -2.786 <0.001 #> trt_7 2.18 1.143 -0.08 4.439 0.059 #> lsm_ref_7 -6.994 0.826 -8.628 -5.36 <0.001 #> lsm_alt_7 -4.815 0.791 -6.379 -3.25 <0.001 #> -------------------------------------------------- as.data.frame(poolObj) #> parameter est se lci uci pval #> 1 trt_4 -0.09180645 0.6826279 
-1.43949684 1.2558839 8.931772e-01 #> 2 lsm_ref_4 -1.61581996 0.4862316 -2.57577141 -0.6558685 1.093708e-03 #> 3 lsm_alt_4 -1.70762640 0.4749573 -2.64531931 -0.7699335 4.262148e-04 #> 4 trt_5 1.33217342 0.9248889 -0.49452471 3.1588715 1.517381e-01 #> 5 lsm_ref_5 -4.15685743 0.6607638 -5.46196249 -2.8517524 2.982856e-09 #> 6 lsm_alt_5 -2.82468402 0.6455730 -4.09978956 -1.5495785 2.197441e-05 #> 7 trt_6 1.92723926 1.0050687 -0.05860912 3.9130876 5.706399e-02 #> 8 lsm_ref_6 -6.09679600 0.7213490 -7.52226719 -4.6713248 2.489617e-14 #> 9 lsm_alt_6 -4.16955674 0.7003707 -5.55341225 -2.7857012 1.784937e-08 #> 10 trt_7 2.17964370 1.1426199 -0.07965819 4.4389456 5.852211e-02 #> 11 lsm_ref_7 -6.99418014 0.8260358 -8.62803604 -5.3603242 4.048404e-14 #> 12 lsm_alt_7 -4.81453644 0.7913711 -6.37916058 -3.2499123 1.067031e-08"},{"path":"/articles/quickstart.html","id":"code","dir":"Articles","previous_headings":"","what":"Code","title":"rbmi: Quickstart","text":"report code presented vignette.","code":"library(rbmi) library(dplyr) data(\"antidepressant_data\") dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") ) # Create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing values which does not correspond # to a study drug discontinuation. We therefore remove this subject from `dat_ice`. # (In the later imputation step, it will automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] # Define the names of key variables in our dataset using `set_vars()` # and the covariates included in the imputation model # Note that the covariates argument can also include interaction terms vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: Bayesian multiple imputation with 150 imputed datsets) method <- method_bayes( burn_in = 200, burn_between = 5, n_samples = 150 ) # Create samples for the imputation parameters by running the draws() function set.seed(987) drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) # Impute the data imputeObj <- impute( drawObj, references = c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") ) # Fit the analysis model on each imputed dataset anaObj <- analyse( imputeObj, ancova, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) ) # Apply a delta adjustment # Add a delta-value of 5 to all imputed values (i.e. those values # which were missing in the original dataset) in the drug arm. 
delta_df <- delta_template(imputeObj) %>% as_tibble() %>% mutate(delta = if_else(THERAPY == \"DRUG\" & is_missing , 5, 0)) %>% select(PATIENT, VISIT, delta) # Repeat the analyses with the adjusted values anaObj_delta <- analyse( imputeObj, ancova, delta = delta_df, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) ) # Pool the results poolObj <- pool( anaObj, conf.level = 0.95, alternative = \"two.sided\" )"},{"path":"/articles/retrieved_dropout.html","id":"retrieved-dropout-models-in-a-nutshell","dir":"Articles","previous_headings":"","what":"Retrieved dropout models in a nutshell","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"Retrieved dropout models proposed analysis estimands using treatment policy strategy addressing ICE. models, missing outcomes multiply imputed conditional upon whether occur pre- post-ICE. Retrieved dropout models typically rely extended missing--random (MAR) assumption, .e., assume missing outcome data similar observed data subjects treatment group observed outcome history, ICE status. comprehensive description evaluation retrieved dropout models, refer Guizzaro et al. (2021), Polverejan Dragalin (2020), Noci et al. (2023), Drury et al. (2024), Bell et al. (2024). Broadly, publications find retrieved dropout models reduce bias compared alternative analysis approaches based imputation basic MAR assumption reference-based missing data assumption. However, several issues retrieved dropout models also highlighted. Retrieved dropout models require enough post-ICE data collected inform imputation model. Even relatively small amounts missingness, complex retrieved dropout models may face identifiability issues. Another drawback models general loss power relative reference-based imputation methods, becomes meaningful post-ICE observation percentages 50% increases accelerating rate percentage decreases (Bell et al. 
2024).","code":""},{"path":"/articles/retrieved_dropout.html","id":"sec:dataSimul","dir":"Articles","previous_headings":"","what":"Data simulation using function simulate_data()","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"purposes vignette first create simulated dataset rbmi function simulate_data(). simulate_data() function generates data randomized clinical trial longitudinal continuous outcomes two different types ICEs. Specifically, simulate 1:1 randomized trial active drug (intervention) versus placebo (control) 100 subjects per group 4 post-baseline assessments (3-monthly visits 12 months): mean outcome trajectory placebo group increases linearly 50 baseline (visit 0) 60 visit 4, .e. slope 10 points/year (2.5 points every 3 months). mean outcome trajectory intervention group identical placebo group month 6. month 6 onward, slope decreases 50% 5 points/year (.e. 1.25 points every 3 months). covariance structure baseline follow-values groups implied random intercept slope model standard deviation 5 intercept slope, correlation 0.25. addition, independent residual error standard deviation 2.5 added assessment. probability intercurrent event study drug discontinuation visit calculated according logistic model depends observed outcome visit. Specifically, visit-wise discontinuation probability 3% 4% control intervention group, respectively, specified case observed outcome equal 50 (mean value baseline). odds discontinuation simulated increase +10% +1 point increase observed outcome. Study drug discontinuation simulated effect mean trajectory placebo group. intervention group, subjects discontinue follow slope mean trajectory placebo group time point onward. compatible copy increments reference (CIR) assumption. Study dropout study drug discontinuation visit occurs probability 50% leading missing outcome data time point onward. 
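The discontinuation mechanism just described (a visit-wise probability of 3% or 4% when the observed outcome equals 50, with the odds increasing by +10% per +1 point) is a logistic model on the odds scale. The following Python sketch reproduces that arithmetic under the stated assumptions; it is an illustration of the simulation rule, not the simulate_data() implementation:

```python
def disc_probability(outcome, base_prob, or_per_point=1.10, ref_outcome=50):
    """Visit-wise discontinuation probability: the odds at the reference
    outcome value are scaled by `or_per_point` for each 1-point increase
    in the observed outcome, then converted back to a probability."""
    base_odds = base_prob / (1 - base_prob)
    odds = base_odds * or_per_point ** (outcome - ref_outcome)
    return odds / (1 + odds)

# Control arm: 3% at the reference outcome, higher for worse outcomes
print(round(disc_probability(50, 0.03), 4))  # 0.03
print(round(disc_probability(60, 0.03), 4))  # larger, since odds grew by 1.1**10
```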
function simulate_data() requires 3 arguments (see function documentation help(simulate_data) details): pars_c: simulation parameters control group. pars_t: simulation parameters intervention group. post_ice1_traj: Specifies observed outcomes ICE1 simulated. , report data according specifications can simulated function simulate_data(): frequency ICE proportion data collected ICE impacts variance treatment effect retrieved dropout models. example, large proportion ICE combined small proportion data collected ICE might result substantial variance inflation, especially complex retrieved dropout models. proportion subjects ICE proportion subjects withdrew simulated study summarized : study 23% study participants discontinued study treatment control arm 24% intervention arm. Approximately half participants discontinued treatment dropped-study discontinuation visit leading missing outcomes subsequent visits.","code":"library(rbmi) library(dplyr) ## ## Attaching package: 'dplyr' ## The following objects are masked from 'package:stats': ## ## filter, lag ## The following objects are masked from 'package:base': ## ## intersect, setdiff, setequal, union set.seed(1392) time <- c(0, 3, 6, 9, 12) # Mean trajectory control muC <- c(50.0, 52.5, 55.0, 57.5, 60.0) # Mean trajectory intervention muT <- c(50.0, 52.5, 55.0, 56.25, 57.50) # Create Sigma sd_error <- 2.5 covRE <- rbind( c(25.0, 6.25), c(6.25, 25.0) ) Sigma <- cbind(1, time / 12) %*% covRE %*% rbind(1, time / 12) + diag(sd_error^2, nrow = length(time)) # Set simulation parameters of the control group parsC <- set_simul_pars( mu = muC, sigma = Sigma, n = 100, # sample size prob_ice1 = 0.03, # prob of discontinuation for outcome equal to 50 or_outcome_ice1 = 1.10, # +1 point increase => +10% odds of discontinuation prob_post_ice1_dropout = 0.5 # dropout rate following discontinuation ) # Set simulation parameters of the intervention group parsT <- parsC parsT$mu <- muT parsT$prob_ice1 <- 0.04 # Simulate data data <- 
simulate_data( pars_c = parsC, pars_t = parsT, post_ice1_traj = \"CIR\" # Assumption about post-ice trajectory ) %>% select(-c(outcome_noICE, ind_ice2)) # remove unncessary columns head(data) ## id visit group outcome_bl ind_ice1 dropout_ice1 outcome ## 1 id_1 0 Control 53.35397 0 0 53.35397 ## 2 id_1 1 Control 53.35397 0 0 55.15100 ## 3 id_1 2 Control 53.35397 0 0 59.81038 ## 4 id_1 3 Control 53.35397 0 0 61.59709 ## 5 id_1 4 Control 53.35397 0 0 67.08044 ## 6 id_2 0 Control 53.31025 0 0 53.31025 # Compute endpoint of interest: change from baseline data <- data %>% filter(visit != \"0\") %>% mutate( change = outcome - outcome_bl, visit = factor(visit, levels = unique(visit)) ) data %>% group_by(visit) %>% summarise( freq_disc_ctrl = mean(ind_ice1[group == \"Control\"] == 1), freq_dropout_ctrl = mean(dropout_ice1[group == \"Control\"] == 1), freq_disc_interv = mean(ind_ice1[group == \"Intervention\"] == 1), freq_dropout_interv = mean(dropout_ice1[group == \"Intervention\"] == 1) ) ## # A tibble: 4 × 5 ## visit freq_disc_ctrl freq_dropout_ctrl freq_disc_interv freq_dropout_interv ## ## 1 1 0.03 0.01 0.06 0.03 ## 2 2 0.1 0.03 0.1 0.04 ## 3 3 0.19 0.09 0.17 0.06 ## 4 4 0.23 0.12 0.24 0.1"},{"path":"/articles/retrieved_dropout.html","id":"estimators-based-on-retrieved-dropout-models","dir":"Articles","previous_headings":"","what":"Estimators based on retrieved dropout models","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"consider retrieved dropout methods model pre- post-ICE outcomes jointly including time-varying ICE indicators imputation model, .e. allow occurrence ICE impact mean structure covariance matrix. Imputation missing outcomes performed MAR assumption including observed data. analysis completed data, use standard ANCOVA model outcome follow-visit, respectively, treatment assignment main covariate adjustment baseline outcome. 
Specifically, we consider the following imputation models: Imputation under a basic MAR assumption (basic MAR): This model ignores whether an outcome is observed pre- or post-ICE, i.e. it is not a retrieved dropout model. Rather, it is asymptotically equivalent to a standard MMRM model and analogous to the “MI1” model in Bell et al. (2024). A difference to the “MI1” model is that rbmi is not based on sequential imputation; rather, all missing outcomes are imputed simultaneously based on an MMRM-type imputation model. We include baseline outcome-by-visit and treatment group-by-visit interaction terms, so the imputation model is of the form: change ~ outcome_bl*visit + group*visit. Retrieved dropout model 1 (RD1): This model uses the following imputation model: change ~ outcome_bl*visit + group*visit + time_since_ice1*group, where time_since_ice1 is set to 0 up to treatment discontinuation and to the time since treatment discontinuation (in months) at subsequent visits. This implies a change in the slope of the outcome trajectories after the ICE, modeled separately for each treatment arm. This model is similar to the “TV2-MAR” estimator in Noci et al. (2023). Compared to the basic MAR model, it requires the estimation of 2 additional parameters. Retrieved dropout model 2 (RD2): This model uses the following imputation model: change ~ outcome_bl*visit + group*visit + ind_ice1*group*visit. It assumes a constant shift in the outcomes after the ICE, modeled separately for each treatment arm and visit. This model is analogous to the “MI2” model in Bell et al. (2024). Compared to the basic MAR model, it requires the estimation of 2 times the “number of visits” additional parameters. This makes it more flexible, as it makes rather weaker assumptions than the RD1 model, but it might also be harder to fit when post-ICE data collection is sparse at some visits.","code":""},{"path":"/articles/retrieved_dropout.html","id":"implementation-of-the-defined-retrieved-dropout-models-in-rbmi","dir":"Articles","previous_headings":"","what":"Implementation of the defined retrieved dropout models in rbmi","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"rbmi supports the inclusion of time-varying covariates in the imputation model. The only requirement is that the time-varying covariate is non-missing at all visits, including those at which the outcome might be missing.
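The time-varying covariate needed for the RD1 model can be derived as a running sum of the per-visit ICE indicator scaled by the spacing between visits; the R code later in this vignette computes it as cumsum(ind_ice1) * 3 for 3-monthly visits. An equivalent Python sketch of that derivation, for illustration:

```python
def time_since_ice(ind_ice, months_per_visit=3):
    """Cumulative months in the post-ICE state at each visit, computed as
    a running sum of the per-visit ICE indicator (0/1, equal to 1 from
    the discontinuation visit onward) scaled by the visit spacing."""
    out, count = [], 0
    for flag in ind_ice:
        count += flag
        out.append(count * months_per_visit)
    return out

# Discontinuation at the second post-baseline visit:
print(time_since_ice([0, 1, 1, 1]))  # [0, 3, 6, 9]
# No ICE -> covariate stays 0, and the RD1 terms drop out:
print(time_since_ice([0, 0, 0, 0]))  # [0, 0, 0, 0]
```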
Imputation performed (extended) MAR assumption. Therefore, imputation approaches implemented rbmi valid yield comparable estimators standard errors. vignette, used conditional mean imputation approach combined jackknife.","code":""},{"path":"/articles/retrieved_dropout.html","id":"basic-mar-model","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Basic MAR model","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"","code":"# Define key variables for the imputation and analysis models vars <- set_vars( subjid = \"id\", visit = \"visit\", outcome = \"change\", group = \"group\", covariates = c(\"outcome_bl*visit\", \"group*visit\") ) vars_an <- vars vars_an$covariates <- \"outcome_bl\" # Define imputation method method <- method_condmean(type = \"jackknife\") draw_obj <- draws( data = data, data_ice = NULL, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj ) ana_obj <- analyse( impute_obj, vars = vars_an ) pool_obj_basicMAR <- pool(ana_obj) pool_obj_basicMAR ## ## Pool Object ## ----------- ## Number of Results Combined: 1 + 200 ## Method: jackknife ## Confidence Level: 0.95 ## Alternative: two.sided ## ## Results: ## ## ================================================== ## parameter est se lci uci pval ## -------------------------------------------------- ## trt_1 -0.991 0.557 -2.083 0.101 0.075 ## lsm_ref_1 3.117 0.401 2.331 3.902 <0.001 ## lsm_alt_1 2.126 0.391 1.36 2.892 <0.001 ## trt_2 -0.937 0.611 -2.134 0.26 0.125 ## lsm_ref_2 5.814 0.447 4.938 6.69 <0.001 ## lsm_alt_2 4.877 0.414 4.066 5.688 <0.001 ## trt_3 -1.491 0.743 -2.948 -0.034 0.045 ## lsm_ref_3 7.725 0.526 6.694 8.757 <0.001 ## lsm_alt_3 6.234 0.522 5.211 7.258 <0.001 ## trt_4 -2.872 0.945 -4.723 -1.02 0.002 ## lsm_ref_4 10.787 0.661 9.491 12.083 <0.001 ## lsm_alt_4 7.915 0.67 6.603 9.228 <0.001 ## 
--------------------------------------------------"},{"path":"/articles/retrieved_dropout.html","id":"retrieved-dropout-model-1-rd1","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Retrieved dropout model 1 (RD1)","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"","code":"# derive variable \"time_since_ice1\" (time since ICE in months) data <- data %>% group_by(id) %>% mutate(time_since_ice1 = cumsum(ind_ice1)*3) vars$covariates <- c(\"outcome_bl*visit\", \"group*visit\", \"time_since_ice1*group\") draw_obj <- draws( data = data, data_ice = NULL, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj ) ana_obj <- analyse( impute_obj, vars = vars_an ) pool_obj_RD1 <- pool(ana_obj) pool_obj_RD1 ## ## Pool Object ## ----------- ## Number of Results Combined: 1 + 200 ## Method: jackknife ## Confidence Level: 0.95 ## Alternative: two.sided ## ## Results: ## ## ================================================== ## parameter est se lci uci pval ## -------------------------------------------------- ## trt_1 -0.931 0.558 -2.025 0.163 0.095 ## lsm_ref_1 3.119 0.4 2.334 3.903 <0.001 ## lsm_alt_1 2.188 0.393 1.419 2.957 <0.001 ## trt_2 -0.805 0.616 -2.013 0.403 0.192 ## lsm_ref_2 5.822 0.445 4.949 6.695 <0.001 ## lsm_alt_2 5.017 0.424 4.186 5.849 <0.001 ## trt_3 -1.263 0.758 -2.748 0.222 0.096 ## lsm_ref_3 7.749 0.52 6.729 8.768 <0.001 ## lsm_alt_3 6.486 0.549 5.41 7.562 <0.001 ## trt_4 -2.506 0.969 -4.406 -0.606 0.01 ## lsm_ref_4 10.837 0.653 9.558 12.116 <0.001 ## lsm_alt_4 8.331 0.718 6.924 9.737 <0.001 ## --------------------------------------------------"},{"path":"/articles/retrieved_dropout.html","id":"retrieved-dropout-model-2-rd2","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Retrieved dropout model 2 (RD2)","title":"rbmi: Implementation of retrieved-dropout models using 
rbmi","text":"","code":"vars$covariates <- c(\"outcome_bl*visit\", \"group*visit\", \"ind_ice1*group*visit\") draw_obj <- draws( data = data, data_ice = NULL, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj ) ana_obj <- analyse( impute_obj, vars = vars_an ) pool_obj_RD2 <- pool(ana_obj) pool_obj_RD2 ## ## Pool Object ## ----------- ## Number of Results Combined: 1 + 200 ## Method: jackknife ## Confidence Level: 0.95 ## Alternative: two.sided ## ## Results: ## ## ================================================== ## parameter est se lci uci pval ## -------------------------------------------------- ## trt_1 -0.927 0.558 -2.021 0.167 0.097 ## lsm_ref_1 3.125 0.4 2.341 3.908 <0.001 ## lsm_alt_1 2.198 0.395 1.424 2.972 <0.001 ## trt_2 -0.889 0.612 -2.089 0.311 0.146 ## lsm_ref_2 5.837 0.443 4.97 6.705 <0.001 ## lsm_alt_2 4.948 0.421 4.124 5.772 <0.001 ## trt_3 -1.305 0.757 -2.788 0.178 0.085 ## lsm_ref_3 7.648 0.54 6.59 8.707 <0.001 ## lsm_alt_3 6.343 0.528 5.308 7.378 <0.001 ## trt_4 -2.617 0.975 -4.528 -0.706 0.007 ## lsm_ref_4 10.883 0.665 9.58 12.186 <0.001 ## lsm_alt_4 8.267 0.715 6.866 9.667 <0.001 ## --------------------------------------------------"},{"path":"/articles/retrieved_dropout.html","id":"brief-summary-of-results","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Brief summary of results","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"point estimators treatment effect last visit -2.872, -2.506, -2.617 basic MAR, RD1, RD2 estimators, respectively, .e. slightly smaller retrieved dropout models compared basic MAR model. corresponding standard errors 3 estimators 0.945, 0.969, 0.975, .e. 
slightly larger retrieved dropout models compared basic MAR model.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"scope-of-this-document","dir":"Articles","previous_headings":"","what":"Scope of this document","title":"rbmi: Statistical Specifications","text":"document describes statistical methods implemented rbmi R package standard reference-based multiple imputation continuous longitudinal outcomes. package implements three classes multiple imputation (MI) approaches: Conventional MI methods based Bayesian (approximate Bayesian) posterior draws model parameters combined Rubin’s rules make inferences described Carpenter, Roger, Kenward (2013) Cro et al. (2020). Conditional mean imputation methods combined re-sampling techniques described Wolbers et al. (2022). Bootstrapped MI methods described von Hippel Bartlett (2021). document structured follows: first provide informal introduction estimands corresponding treatment effect estimation based MI (section 2). core document consists section 3 describes statistical methodology detail also contains comparison implemented approaches (section 3.10). link theory functions included package rbmi described section 4. conclude comparison package alternative software implementations reference-based imputation methods (section 5).","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"estimands","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods","what":"Estimands","title":"rbmi: Statistical Specifications","text":"ICH E9(R1) addendum estimands sensitivity analyses describes systematic approach ensure alignment among clinical trial objectives, trial execution/conduct, statistical analyses, interpretation results (ICH E9 working group (2019)). per addendum, estimand precise description treatment effect reflecting clinical question posed trial objective summarizes population-level outcomes patients different treatment conditions compared. 
One important attribute estimand list possible intercurrent events (ICEs), i.e. events occurring treatment initiation affect either interpretation existence measurements associated clinical question interest, definition appropriate strategies deal ICEs. three relevant strategies purpose document hypothetical strategy, treatment policy strategy, composite strategy. hypothetical strategy, scenario envisaged in which the ICE would not occur. scenario, endpoint values after the ICE not directly observable treated using models missing data. treatment policy strategy, treatment effect presence ICEs targeted analyses based observed outcomes regardless whether subject ICE. composite strategy, ICE included component endpoint.","code":""},{"path":"/articles/stat_specs.html","id":"alignment-between-the-estimand-and-the-estimation-method","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods","what":"Alignment between the estimand and the estimation method","title":"rbmi: Statistical Specifications","text":"ICH E9(R1) addendum distinguishes ICEs missing data (ICH E9 working group (2019)). Whereas ICEs treatment discontinuations reflect clinical practice, amount missing data can minimized conduct clinical trial. However, many connections missing data ICEs. example, often difficult retain subjects clinical trial treatment discontinuation subject’s dropout trial leads missing data. another example, outcome values ICEs addressed using hypothetical strategy not directly observable in the hypothetical scenario. Consequently, observed outcome values ICEs typically discarded treated missing data. addendum proposes estimation methods address problem presented missing data selected align estimand. recent overview methods align estimator estimand Mallinckrodt et al. (2020). short introduction estimation methods studies longitudinal endpoints can also found Wolbers et al. (2022). 
One prominent statistical method purpose multiple imputation (MI), target rbmi package.","code":""},{"path":"/articles/stat_specs.html","id":"missing-data-prior-to-ices","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Missing data prior to ICEs","title":"rbmi: Statistical Specifications","text":"Missing data may occur subjects without ICE prior occurrence ICE. missing outcomes associated ICE, often plausible impute missing-at-random (MAR) assumption using standard MMRM imputation model longitudinal outcomes. Informally, MAR occurs missing data can fully accounted baseline variables included model observed longitudinal outcomes, model correctly specified.","code":""},{"path":"/articles/stat_specs.html","id":"implementation-of-the-hypothetical-strategy","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Implementation of the hypothetical strategy","title":"rbmi: Statistical Specifications","text":"MAR imputation model described often also good starting point imputing data ICE handled using hypothetical strategy (Mallinckrodt et al. (2020)). Informally, assumes unobserved values ICE similar observed data subjects ICE remained in follow-up. However, situations, may reasonable assume missingness “informative” indicates systematically better worse outcome observed subjects. situations, MNAR imputation \\(\\delta\\)-adjustment explored sensitivity analysis. \\(\\delta\\)-adjustments add fixed random quantity imputations order make imputed outcomes systematically worse better observed described Cro et al. (2020). 
rbmi fixed \\(\\delta\\)-adjustments implemented.","code":""},{"path":"/articles/stat_specs.html","id":"implementation-of-the-treatment-policy-strategy","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Implementation of the treatment policy strategy","title":"rbmi: Statistical Specifications","text":"Ideally, data collection continues ICE handled treatment policy strategy missing data arises. Indeed, post-ICE data increasingly systematically collected RCTs. However, despite best efforts, missing data ICE study treatment discontinuation may still occur subject drops study discontinuation. difficult give definite recommendations regarding implementation treatment policy strategy presence missing data stage optimal method highly context dependent topic ongoing statistical research. ICEs thought negligible effect efficacy outcomes, standard MAR-based imputation ignores whether outcome observed pre- post-ICE may appropriate. contrast, ICE treatment discontinuation may expected substantial impact efficacy outcomes. settings, MAR assumption may still plausible conditioning subject’s time-varying treatment status (Guizzaro et al. (2021)). case, one option impute missing post-discontinuation data based subjects also discontinued treatment continued followed . Another option may require somewhat less post-discontinuation data include subjects imputation procedure model post-discontinuation data using time-varying treatment status indicators (Guizzaro et al. (2021), Polverejan Dragalin (2020), Noci et al. (2023), Drury et al. (2024), Bell et al. (2024)). approach, post-ICE outcomes included every step analysis, including fitting imputation model. assumes ICEs may impact post-ICE outcomes otherwise missingness non-informative. 
approach also assumes time-varying covariates do not contain missing values, deviations outcomes ICE correctly modeled time-varying covariates, sufficient post-ICE data available inform regression coefficients time-varying covariates. resulting imputation models called “retrieved dropout models” statistical literature. models tend less bias alternative analysis approaches based imputation basic MAR assumption reference-based missing data assumption. However, retrieved dropout models associated inflated standard errors associated treatment effect estimators detrimental effect study power. particular, observed post-ICE observation percentage falls below 50%, power loss can quite dramatic (Bell et al. 2024). illustrate implementation retrieved dropout models vignette “Implementation retrieved-dropout models using rbmi” (vignette(topic = \"retrieved_dropout\", package = \"rbmi\")). trial settings, subjects discontinue randomized treatment. settings, treatment discontinuation rates higher difficult retain subjects trial treatment discontinuation leading sparse data collection treatment discontinuation. settings, amount available data treatment discontinuation may insufficient inform imputation model explicitly models post-discontinuation data. Depending disease area anticipated mechanism action intervention, may plausible assume subjects intervention group behave similarly subjects control group ICE treatment discontinuation. case, reference-based imputation methods option (Mallinckrodt et al. (2020)). Reference-based imputation methods formalize idea impute missing data intervention group based data control reference group. general description review reference-based imputation methods, refer Carpenter, Roger, Kenward (2013), Cro et al. (2020), White, Royes, Best (2020) Wolbers et al. (2022). 
technical description implemented statistical methodology reference-based imputation, refer section 3 (particular section 3.4).","code":""},{"path":"/articles/stat_specs.html","id":"implementation-of-the-composite-strategy","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Implementation of the composite strategy","title":"rbmi: Statistical Specifications","text":"composite strategy typically applied binary time-to-event outcomes can also used continuous outcomes ascribing suitably unfavorable value patients experience ICEs composite strategy defined. One possibility implement use MI \\(\\delta\\)-adjustment post-ICE data described Darken et al. (2020).","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"sec:methodsOverview","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Overview of the imputation procedure","title":"rbmi: Statistical Specifications","text":"Analyses datasets missing data always rely missing data assumptions. methods described can used produce valid imputations MAR assumption reference-based imputation assumptions. MNAR imputation based fixed \\(\\delta\\)-adjustments typically used sensitivity analyses tipping-point analyses also supported. Three general imputation approaches implemented rbmi: Conventional MI based Bayesian (approximate Bayesian) posterior draws imputation model combined Rubin’s rules inference described Carpenter, Roger, Kenward (2013) Cro et al. (2020). Conditional mean imputation based REML estimate imputation model combined resampling techniques (jackknife bootstrap) inference described Wolbers et al. (2022). 
Bootstrapped MI methods based REML estimates imputation model described von Hippel Bartlett (2021).","code":""},{"path":"/articles/stat_specs.html","id":"conventional-mi","dir":"Articles","previous_headings":"3 Statistical methodology > 3.1 Overview of the imputation procedure","what":"Conventional MI","title":"rbmi: Statistical Specifications","text":"Conventional MI approaches include following steps: Base imputation model fitting step (Section 3.3) Fit Bayesian multivariate normal mixed model repeated measures (MMRM) observed longitudinal outcomes exclusion data ICEs reference-based missing data imputation desired (Section 3.3.3). Draw \\(M\\) posterior samples estimated parameters (regression coefficients covariance matrices) model. Alternatively, \\(M\\) approximate posterior draws posterior distribution can sampled repeatedly applying conventional restricted maximum-likelihood (REML) parameter estimation MMRM model nonparametric bootstrap samples original dataset (Section 3.3.4). Imputation step (Section 3.4) Take single sample \\(m\\) (\\(m \\in 1,\\ldots, M)\\) posterior distribution imputation model parameters. subject, use sampled parameters defined imputation strategy determine mean covariance matrix describing subject’s marginal outcome distribution longitudinal outcome assessments (i.e. observed missing outcomes). subjects, construct conditional multivariate normal distribution missing outcomes given observed outcomes (including observed outcomes ICEs reference-based assumption desired). subject, draw single sample conditional distribution impute missing outcomes leading complete imputed dataset. sensitivity analyses, pre-defined \\(\\delta\\)-adjustment may applied imputed data prior analysis step. (Section 3.5). Analysis step (Section 3.6) Analyze imputed dataset using analysis model (e.g. ANCOVA) resulting point estimate standard error (corresponding degrees freedom) treatment effect. Pooling step inference (Section 3.7) Repeat steps 2. 3. 
posterior sample \\(m\\), resulting \\(M\\) complete datasets, \\(M\\) point estimates treatment effect, \\(M\\) standard errors (corresponding degrees freedom). Pool \\(M\\) treatment effect estimates, standard errors, degrees freedom using rules Barnard Rubin obtain final pooled treatment effect estimator, standard error, degrees freedom.","code":""},{"path":"/articles/stat_specs.html","id":"conditional-mean-imputation","dir":"Articles","previous_headings":"3 Statistical methodology > 3.1 Overview of the imputation procedure","what":"Conditional mean imputation","title":"rbmi: Statistical Specifications","text":"conditional mean imputation approach includes following steps: Base imputation model fitting step (Section 3.3) Fit conventional multivariate normal/MMRM model using restricted maximum likelihood (REML) observed longitudinal outcomes exclusion data ICEs reference-based missing data imputation desired (Section 3.3.2). Imputation step (Section 3.4) subject, use fitted parameters step 1. construct conditional distribution missing outcomes given observed outcomes (including observed outcomes ICEs reference-based missing data imputation desired) described . subject, impute missing data deterministically mean conditional distribution leading complete imputed dataset. sensitivity analyses, pre-defined \\(\\delta\\)-adjustment may applied imputed data prior analysis step. (Section 3.5). Analysis step (Section 3.6) Apply analysis model (e.g. ANCOVA) completed dataset resulting point estimate treatment effect. Jackknife bootstrap inference step (Section 3.8) Inference treatment effect estimate 3. based re-sampling techniques. jackknife bootstrap supported. Importantly, methods require repeating steps imputation procedure (.e. 
imputation, conditional mean imputation, analysis steps) resampled datasets.","code":""},{"path":"/articles/stat_specs.html","id":"bootstrapped-mi","dir":"Articles","previous_headings":"3 Statistical methodology > 3.1 Overview of the imputation procedure","what":"Bootstrapped MI","title":"rbmi: Statistical Specifications","text":"bootstrapped MI approach includes following steps: Base imputation model fitting step (Section 3.3) Apply conventional restricted maximum-likelihood (REML) parameter estimation MMRM model \\(B\\) nonparametric bootstrap samples original dataset using observed longitudinal outcomes exclusion data ICEs reference-based missing data imputation desired. Imputation step (Section 3.4) Take bootstrapped dataset \\(b\\) (\\(b \\in 1,\\ldots, B)\\) corresponding imputation model parameter estimates. subject (bootstrapped dataset), use parameter estimates defined strategy dealing ICEs determine mean covariance matrix describing subject’s marginal outcome distribution longitudinal outcome assessments (i.e. observed missing outcomes). subjects (bootstrapped dataset), construct conditional multivariate normal distribution missing outcomes given observed outcomes (including observed outcomes ICEs reference-based missing data imputation desired). subject (bootstrapped dataset), draw \\(D\\) samples conditional distributions impute missing outcomes leading \\(D\\) complete imputed dataset bootstrap sample \\(b\\). sensitivity analyses, pre-defined \\(\\delta\\)-adjustment may applied imputed data prior analysis step. (Section 3.5). Analysis step (Section 3.6) Analyze \\(B\\times D\\) imputed datasets using analysis model (e.g. ANCOVA) resulting \\(B\\times D\\) point estimates treatment effect. 
Pooling step inference (Section 3.9) Pool \\(B\\times D\\) treatment effect estimates described von Hippel Bartlett (2021) obtain final pooled treatment effect estimate, standard error, degrees freedom.","code":""},{"path":"/articles/stat_specs.html","id":"setting-notation-and-missing-data-assumptions","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Setting, notation, and missing data assumptions","title":"rbmi: Statistical Specifications","text":"Assume data study \\(n\\) subjects total subject \\(i\\) (\\(i=1,\\ldots,n\\)) \\(J\\) scheduled follow-up visits outcome interest assessed. applications, data randomized trial intervention vs control group treatment effect interest comparison outcomes specific visit randomized groups. However, single-arm trials multi-arm trials principle also supported rbmi implementation. Denote observed outcome vector length \\(J\\) subject \\(i\\) \\(Y_i\\) (missing assessments coded NA (not available)) non-missing missing components \\(Y_{!}\\) \\(Y_{?}\\), respectively. default, imputation missing outcomes \\(Y_{?}\\) performed MAR assumption rbmi. Therefore, missing data following ICE handled using MAR imputation, compatible default assumption. discussed Section 2, MAR assumption often good starting point implementing hypothetical strategy. also note observed outcome data ICE handled using hypothetical strategy not compatible strategy. Therefore, assume post-ICE data ICEs handled using hypothetical strategy already set NA \\(Y_i\\) prior calling rbmi functions. However, observed outcomes ICEs handled using treatment policy strategy included \\(Y_i\\) compatible strategy. Subjects may also experience one ICE missing data imputation according reference-based imputation method foreseen. subject \\(i\\) ICE, denote first visit affected ICE \\(\\tilde{t}_i \\in \\{1,\\ldots,J\\}\\). subjects, set \\(\\tilde{t}_i=\\infty\\). subject’s outcome vector setting observed outcomes visit \\(\\tilde{t}_i\\) onwards missing (i.e. 
NA) denoted \\(Y'_i\\) corresponding data vector removal NA elements \\(Y'_{!}\\). MNAR \\(\\delta\\)-adjustments added imputed datasets formal imputation steps. covered separate section (Section 3.5).","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"sec:imputationModelSpecs","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Included data and model specification","title":"rbmi: Statistical Specifications","text":"purpose imputation model estimate (covariate-dependent) mean trajectories covariance matrices group absence ICEs handled using reference-based imputation methods. Conventionally, publications reference-based imputation methods implicitly assumed corresponding post-ICE data missing subjects (Carpenter, Roger, Kenward (2013)). also allow situation post-ICE data available subjects needs imputed using reference-based methods others. However, observed data ICEs reference-based imputation methods specified not compatible imputation model described therefore removed considered missing purpose estimating imputation model only. example, patient ICE addressed reference-based method outcomes ICE collected, post-ICE outcomes excluded fitting base imputation model (included following steps). Thus, base imputation model fitted to \\(Y'_{!}\\) and not \\(Y_{!}\\). Without excluding these data, the imputation model would mistakenly estimate mean trajectories based on a mixture of observed pre- and post-ICE data which is not relevant for reference-based imputations. Observed post-ICE outcomes control reference group also excluded base imputation model user specifies reference-based imputation strategy ICEs. ensures the ICE has no impact on the data included in the imputation model regardless whether ICE occurred control intervention group. hand, imputation reference group based MAR assumption even reference-based imputation methods may preferable settings include post-ICE data control group base imputation model. 
can implemented specifying MAR strategy ICE control group reference-based strategy ICE intervention group. base imputation model longitudinal outcomes \\(Y'_i\\) assumes mean structure linear function covariates. Full flexibility specification linear predictor model supported. minimum covariates include treatment group, (categorical) visit, treatment-by-visit interactions. Typically, covariates including baseline outcome also included. External time-varying covariates (e.g. calendar time visit) well internal time-varying (e.g. time-varying indicators treatment discontinuation initiation rescue treatment) may principle also included indicated (Guizzaro et al. (2021)). Missing covariate values are not allowed. means values time-varying covariates must non-missing every visit regardless whether outcome measured missing. Denote \\(J\\times p\\) design matrix subject \\(i\\) corresponding mean structure model \\(X_i\\) matrix removal rows corresponding missing outcomes \\(Y'_{!}\\) \\(X'_{!}\\). \\(p\\) number parameters mean structure model elements \\(Y'_{!}\\). base imputation model observed outcomes defined as: \\[ Y'_{!} = X'_{!}\\beta + \\epsilon_{!} \\mbox{ } \\epsilon_{!}\\sim N(0,\\Sigma_{!!})\\] \\(\\beta\\) vector regression coefficients \\(\\Sigma_{!!}\\) covariance matrix obtained complete-data \\(J\\times J\\)-covariance matrix \\(\\Sigma\\) omitting rows columns corresponding missing outcome assessments subject \\(i\\). Typically, common unstructured covariance matrix subjects assumed \\(\\Sigma\\) separate covariance matrices per treatment group also supported. Indeed, implementation also supports specification separate covariance matrices according arbitrarily defined categorical variable groups subjects disjoint subsets. example, useful different covariance matrices suspected different subject strata. Finally, imputation methods that do not rely on Bayesian model fitting via MCMC, flexibility choice covariance structure, i.e. 
unstructured (default), heterogeneous Toeplitz, heterogeneous compound symmetry, AR(1) covariance structures supported.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationModelREML","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Restricted maximum likelihood estimation (REML)","title":"rbmi: Statistical Specifications","text":"Frequentist parameter estimation base imputation model based REML. use REML improved alternative maximum likelihood (ML) covariance parameter estimation originally proposed Patterson Thompson (1971). Since then, become default method parameter estimation linear mixed effects models. rbmi allows choose ML REML methods estimate model parameters, REML default option.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationModelBayes","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Bayesian model fitting","title":"rbmi: Statistical Specifications","text":"Bayesian imputation model fitted R package rstan (Stan Development Team (2020)). rstan R interface Stan. Stan powerful flexible statistical software developed dedicated team implements Bayesian inference state-of-the-art MCMC sampling procedures. multivariate normal model missing data specified section 3.3.1 can considered generalization models described Stan user’s guide (see Stan Development Team (2020, sec. 3.5)). prior distributions SAS implementation “five macros” used (Roger (2021)), i.e. improper flat priors regression coefficients weakly informative inverse Wishart prior covariance matrix (matrices). Specifically, let \\(S \\in \\mathbb{R}^{J \\times J}\\) symmetric positive definite matrix \\(\\nu \\in (J-1, \\infty)\\). 
symmetric positive definite matrix \\(x \\in \\mathbb{R}^{J \\times J}\\) density: \\[ \\text{InvWish}(x \\vert \\nu, S) = \\frac{1}{2^{\\nu J/2}} \\frac{1}{\\Gamma_J(\\frac{\\nu}{2})} \\vert S \\vert^{\\nu/2} \\vert x \\vert ^{-(\\nu + J + 1)/2} \\text{exp}(-\\frac{1}{2} \\text{tr}(Sx^{-1})). \\] \\(\\nu > J+1\\) mean given by: \\[ E[x] = \\frac{S}{\\nu - J - 1}. \\] choose \\(S\\) equal estimated covariance matrix frequentist REML fit \\(\\nu = J+2\\) lowest degrees freedom guarantee finite mean. Setting degrees freedom low \\(\\nu\\) ensures prior little impact posterior. Moreover, choice allows interpret parameter \\(S\\) mean prior distribution. “five macros”, MCMC algorithm initialized parameters frequentist REML fit (see section 3.3.2). described, using weakly informative priors parameters. Therefore, Markov chain essentially starting targeted stationary posterior distribution minimal amount of burn-in of the chain required.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationModelBoot","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Approximate Bayesian posterior draws via the bootstrap","title":"rbmi: Statistical Specifications","text":"Several authors suggested stabler way get Bayesian posterior draws imputation model bootstrap incomplete data calculate REML estimates bootstrap sample (Little Rubin (2002), Efron (1994), Honaker King (2010), von Hippel Bartlett (2021)). method not proper, REML estimates bootstrap samples asymptotically equivalent sample posterior distribution may provide additional robustness model misspecification (Little Rubin (2002, sec. 10.2.3, part 6), Honaker King (2010)). 
order retain balance treatment groups stratification factors across bootstrap samples, user able provide stratification variables bootstrap rbmi implementation.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"sec:imputatioMNAR","dir":"Articles","previous_headings":"3 Statistical methodology > 3.4 Imputation step","what":"Marginal imputation distribution for a subject - MAR case","title":"rbmi: Statistical Specifications","text":"subject \\(i\\), marginal distribution complete \\(J\\)-dimensional outcome vector assessment visits according imputation model multivariate normal distribution. mean \\(\\tilde{\\mu}_i\\) given predicted mean imputation model conditional subject’s baseline characteristics, group, and, optionally, time-varying covariates. covariance matrix \\(\\tilde{\\Sigma}_i\\) given overall estimated covariance matrix or, if different covariance matrices assumed different groups, covariance matrix corresponding subject \\(i\\)’s group.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationRefBased","dir":"Articles","previous_headings":"3 Statistical methodology > 3.4 Imputation step","what":"Marginal imputation distribution for a subject - reference-based imputation methods","title":"rbmi: Statistical Specifications","text":"subject \\(i\\), calculate mean covariance matrix complete \\(J\\)-dimensional outcome vector assessment visits MAR case denote \\(\\mu_i\\) \\(\\Sigma_i\\). reference-based imputation methods, corresponding reference group also required group. Typically, reference group for the intervention group is the control group. reference mean \\(\\mu_{ref,i}\\) defined predicted mean imputation model conditional reference group (rather than the actual group subject \\(i\\) belongs to) subject’s baseline characteristics. reference covariance matrix \\(\\Sigma_{ref,i}\\) overall estimated covariance matrix or, if different covariance matrices assumed different groups, estimated covariance matrix corresponding reference group. 
principle, time-varying covariates also included reference-based imputation methods. However, sensible external time-varying covariates (e.g. calendar time visit) internal time-varying covariates (e.g. treatment discontinuation) latter likely depend actual treatment group typically sensible assume trajectory time-varying covariate reference group. Based means covariance matrices, subject’s marginal imputation distribution reference-based imputation methods calculated detailed Carpenter, Roger, Kenward (2013, sec. 4.3). Denote mean covariance matrix marginal imputation distribution \\(\\tilde{\\mu}_i\\) \\(\\tilde{\\Sigma}_i\\). Recall subject’s first visit affected ICE denoted \\(\\tilde{t}_i \\\\{1,\\ldots,J\\}\\) (visit \\(\\tilde{t}_i-1\\) last visit unaffected ICE). marginal distribution patient \\(\\) built according specific assumption data post ICE follows: Jump reference (JR): patient’s outcome distribution normally distributed following mean: \\[\\tilde{\\mu}_i = (\\mu_i[1], \\dots, \\mu_i[\\tilde{t}_i-1], \\mu_{ref,}[\\tilde{t}_i], \\dots, \\mu_{ref,}[J])^T.\\] covariance matrix constructed follows. First, partition covariance matrices \\(\\Sigma_i\\) \\(\\Sigma_{ref,}\\) blocks according time ICE \\(\\tilde{t}_i\\): \\[ \\Sigma_{} = \\begin{bmatrix} \\Sigma_{, 11} & \\Sigma_{, 12} \\\\ \\Sigma_{, 21} & \\Sigma_{,22} \\\\ \\end{bmatrix} \\] \\[ \\Sigma_{ref,} = \\begin{bmatrix} \\Sigma_{ref, , 11} & \\Sigma_{ref, , 12} \\\\ \\Sigma_{ref, , 21} & \\Sigma_{ref, ,22} \\\\ \\end{bmatrix}. \\] want covariance matrix \\(\\tilde{\\Sigma}_i\\) match \\(\\Sigma_i\\) pre-deviation measurements, \\(\\Sigma_{ref,}\\) conditional components post-deviation given pre-deviation measurements. solution derived Carpenter, Roger, Kenward (2013, sec. 
4.3) given : \\[ \\begin{matrix} \\tilde{\\Sigma}_{,11} = \\Sigma_{, 11} \\\\ \\tilde{\\Sigma}_{, 21} = \\Sigma_{ref,, 21} \\Sigma^{-1}_{ref,, 11} \\Sigma_{, 11} \\\\ \\tilde{\\Sigma}_{, 22} = \\Sigma_{ref, , 22} - \\Sigma_{ref,, 21} \\Sigma^{-1}_{ref,, 11} (\\Sigma_{ref,, 11} - \\Sigma_{,11}) \\Sigma^{-1}_{ref,, 11} \\Sigma_{ref,, 12}. \\end{matrix} \\] Copy increments reference (CIR): patient’s outcome distribution normally distributed following mean: \\[ \\begin{split} \\tilde{\\mu}_i =& (\\mu_i[1], \\dots, \\mu_i[\\tilde{t}_i-1], \\mu_i[\\tilde{t}_i-1] + (\\mu_{ref,}[\\tilde{t}_i] - \\mu_{ref,}[\\tilde{t}_i-1]), \\dots,\\\\ & \\mu_i[\\tilde{t}_i-1]+(\\mu_{ref,}[J] - \\mu_{ref,}[\\tilde{t}_i-1]))^T. \\end{split} \\] covariance matrix derived JR method. Copy reference (CR): patient’s outcome distribution normally distributed mean covariance matrix taken reference group: \\[ \\tilde{\\mu}_i = \\mu_{ref,} \\] \\[ \\tilde{\\Sigma}_i = \\Sigma_{ref,}. \\] Last mean carried forward (LMCF): patient’s outcome distribution normally distributed following mean: \\[ \\tilde{\\mu}_i = (\\mu_i[1], \\dots, \\mu_i[\\tilde{t}_i-1], \\mu_i[\\tilde{t}_i-1], \\dots, \\mu_i[\\tilde{t}_i-1])'\\] covariance matrix: \\[ \\tilde{\\Sigma}_i = \\Sigma_i.\\]","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationRandomConditionalMean","dir":"Articles","previous_headings":"3 Statistical methodology > 3.4 Imputation step","what":"Imputation of missing outcome data","title":"rbmi: Statistical Specifications","text":"joint marginal multivariate normal imputation distribution subject \\(\\)’s observed missing outcome data mean \\(\\tilde{\\mu}_i\\) covariance matrix \\(\\tilde{\\Sigma}_i\\) defined . actual imputation missing outcome data obtained conditioning marginal distribution subject’s observed outcome data. note, approach valid regardless whether subject intermittent terminal missing data. 
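The jump-to-reference covariance construction above amounts to a few lines of linear algebra. The following is an illustrative numpy translation of the Carpenter, Roger, and Kenward (2013, sec. 4.3) formulas, not the rbmi implementation, and the function name is ours:

```python
import numpy as np

def jr_covariance(sigma_i, sigma_ref, t):
    """Jump-to-reference (JR) covariance matrix.

    Rows/columns 0..t-1 correspond to pre-ICE visits, t.. to post-ICE
    visits; sigma_i is the subject's MAR covariance matrix and sigma_ref
    the reference group's covariance matrix.
    """
    A11 = sigma_i[:t, :t]                      # pre-ICE block, kept as-is
    R11, R12 = sigma_ref[:t, :t], sigma_ref[:t, t:]
    R21, R22 = sigma_ref[t:, :t], sigma_ref[t:, t:]
    R11_inv = np.linalg.inv(R11)
    S21 = R21 @ R11_inv @ A11                  # cross block
    S22 = R22 - R21 @ R11_inv @ (R11 - A11) @ R11_inv @ R12
    return np.block([[A11, S21.T], [S21, S22]])
```

A quick sanity check of the formulas: when `sigma_i` equals `sigma_ref` the result reduces to `sigma_ref` itself, and the pre-ICE block always matches `sigma_i`, as required.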
conditional distribution used imputation multivariate normal distribution explicit formulas conditional mean covariance readily available. completeness, report notation terminology setting. marginal distribution outcome patient \\(\\) \\(Y_i \\sim N(\\tilde{\\mu}_i, \\tilde{\\Sigma}_i)\\) outcome \\(Y_i\\) can decomposed observed (\\(Y_{,!}\\)) unobserved (\\(Y_{,?}\\)) components. Analogously mean \\(\\tilde{\\mu}_i\\) can decomposed \\((\\tilde{\\mu}_{,!},\\tilde{\\mu}_{,?})\\) covariance \\(\\tilde{\\Sigma}_i\\) : \\[ \\tilde{\\Sigma}_i = \\begin{bmatrix} \\tilde{\\Sigma}_{, !!} & \\tilde{\\Sigma}_{,!?} \\\\ \\tilde{\\Sigma}_{, ?!} & \\tilde{\\Sigma}_{, ??} \\end{bmatrix}. \\] conditional distribution \\(Y_{,?}\\) conditional \\(Y_{,!}\\) multivariate normal distribution expectation \\[ E(Y_{,?} \\vert Y_{,!})= \\tilde{\\mu}_{,?} + \\tilde{\\Sigma}_{, ?!} \\tilde{\\Sigma}_{,!!}^{-1} (Y_{,!} - \\tilde{\\mu}_{,!}) \\] covariance matrix \\[ Cov(Y_{,?} \\vert Y_{,!}) = \\tilde{\\Sigma}_{,??} - \\tilde{\\Sigma}_{,?!} \\tilde{\\Sigma}_{,!!}^{-1} \\tilde{\\Sigma}_{,!?}. \\] Conventional random imputation consists sampling conditional multivariate normal distribution. Conditional mean imputation imputes missing values deterministic conditional expectation \\(E(Y_{,?} \\vert Y_{,!})\\).","code":""},{"path":"/articles/stat_specs.html","id":"sec:deltaAdjustment","dir":"Articles","previous_headings":"3 Statistical methodology","what":"\\(\\delta\\)-adjustment","title":"rbmi: Statistical Specifications","text":"marginal \\(\\delta\\)-adjustment approach similar “five macros” SAS implemented (Roger (2021)), .e. fixed non-stochastic values added multivariate normal imputation step prior analysis. relevant sensitivity analyses order make imputed data systematically worse better, respectively, observed data. addition, authors suggested \\(\\delta\\)-type adjustments implement composite strategy continuous outcomes (Darken et al. (2020)). 
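The multivariate normal conditioning used in the imputation step above can be sketched as follows (plain numpy, illustrative only; the names are ours, not rbmi's API):

```python
import numpy as np

def condition_mvn(mu, sigma, obs_idx, mis_idx, y_obs):
    """Conditional mean and covariance of the missing components of a
    multivariate normal given the observed components."""
    mu, y_obs = np.asarray(mu, float), np.asarray(y_obs, float)
    s_oo = sigma[np.ix_(obs_idx, obs_idx)]
    s_mo = sigma[np.ix_(mis_idx, obs_idx)]
    s_mm = sigma[np.ix_(mis_idx, mis_idx)]
    w = s_mo @ np.linalg.inv(s_oo)             # regression weights
    cond_mean = mu[mis_idx] + w @ (y_obs - mu[obs_idx])
    cond_cov = s_mm - w @ s_mo.T
    return cond_mean, cond_cov
```

Random imputation then samples from \\(N(\\text{cond\\_mean}, \\text{cond\\_cov})\\), while conditional mean imputation deterministically fills in `cond_mean`.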
Our implementation provides full flexibility regarding the specific implementation of the \\(\\delta\\)-adjustment, i.e. the value added may depend on the randomized treatment group, the timing of the subject’s ICE, and other factors. For suggestions and case studies regarding this topic, we refer to Cro et al. (2020).","code":""},{"path":"/articles/stat_specs.html","id":"sec:analysis","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Analysis step","title":"rbmi: Statistical Specifications","text":"After data imputation, the standard analysis model can be applied to the completed data resulting in a treatment effect estimate. As the imputed data no longer contains missing values, the analysis model is often simple. For example, it can be an analysis of covariance (ANCOVA) model with the outcome (or the change in outcome from baseline) at a specific visit j as the dependent variable, the randomized treatment group as the primary covariate and, typically, adjustment for the same baseline covariates as in the imputation model.","code":""},{"path":"/articles/stat_specs.html","id":"sec:pooling","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Pooling step for inference of (approximate) Bayesian MI and Rubin’s rules","title":"rbmi: Statistical Specifications","text":"Assume that the analysis model applied to the \\(M\\) multiply imputed random datasets resulted in \\(M\\) treatment effect estimates \\(\\hat{\\theta}_m\\) (\\(m=1,\\ldots,M\\)) with corresponding standard errors \\(SE_m\\) and (if available) degrees of freedom \\(\\nu_{com}\\). If no degrees of freedom are available from the analysis model, we set \\(\\nu_{com}=\\infty\\) and inference is based on the normal distribution. Rubin’s rules are used for pooling the treatment effect estimates and the corresponding variance estimates from the analysis steps across the \\(M\\) multiply imputed datasets. According to Rubin’s rules, the final estimate of the treatment effect is calculated as the sample mean of the \\(M\\) treatment effect estimates: \\[ \\hat{\\theta} = \\frac{1}{M} \\sum_{m = 1}^M \\hat{\\theta}_m. \\] The pooled variance is based on two components which reflect the within and between variance of the treatment effects across the multiply imputed datasets: \\[ V(\\hat{\\theta}) = V_W(\\hat{\\theta}) + (1 + \\frac{1}{M}) V_B(\\hat{\\theta}) \\] where \\(V_W(\\hat{\\theta}) = \\frac{1}{M}\\sum_{m = 1}^M SE^2_m\\) is the within-variance and \\(V_B(\\hat{\\theta}) = \\frac{1}{M-1} \\sum_{m = 1}^M (\\hat{\\theta}_m - \\hat{\\theta})^2\\) is the between-variance. Confidence intervals and tests of the null hypothesis \\(H_0: \\theta=\\theta_0\\) are based on the \\(t\\)-statistic \\(T\\): \\[ T= (\\hat{\\theta}-\\theta_0)/\\sqrt{V(\\hat{\\theta})}. \\] Under the null hypothesis, \\(T\\) has an approximate \\(t\\)-distribution with \\(\\nu\\) degrees of freedom. \\(\\nu\\) is calculated according to the Barnard and Rubin approximation, see Barnard and Rubin (1999) (formula 3) or Little and Rubin (2002) (formula (5.24), page 87): \\[ \\nu = \\frac{\\nu_{old}\\cdot \\nu_{obs}}{\\nu_{old} + \\nu_{obs}} \\] with \\[ \\nu_{old} = \\frac{M-1}{\\lambda^2} \\quad\\mbox{and}\\quad \\nu_{obs} = \\frac{\\nu_{com} + 1}{\\nu_{com} + 3} \\nu_{com} (1 - \\lambda) \\] where \\(\\lambda = \\frac{(1 + \\frac{1}{M})V_B(\\hat{\\theta})}{V(\\hat{\\theta})}\\) is the fraction of missing information.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"point-estimate-of-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.8 Bootstrap and jackknife inference for conditional mean imputation","what":"Point estimate of the treatment effect","title":"rbmi: Statistical Specifications","text":"The point estimator is obtained by applying the analysis model (Section 3.6) to a single conditional mean imputation of the missing data (see Section 3.4.3) based on the REML estimator of the parameters of the imputation model (see Section 3.3.2). Denote this treatment effect estimator by \\(\\hat{\\theta}\\). As demonstrated in Wolbers et al. (2022) (Section 2.4), this treatment effect estimator is valid if the analysis model is the ANCOVA model or, more generally, if the treatment effect estimator is a linear function of the imputed outcome vector.
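Rubin's rules and the Barnard-Rubin degrees-of-freedom approximation described above amount to only a few lines of arithmetic. The following is a minimal numpy sketch (illustrative only, not rbmi's `pool()` implementation):

```python
import numpy as np

def rubin_pool(theta, se, nu_com=np.inf):
    """Pool M point estimates and standard errors with Rubin's rules;
    degrees of freedom via the Barnard-Rubin (1999) approximation."""
    theta, se = np.asarray(theta, float), np.asarray(se, float)
    M = len(theta)
    theta_hat = theta.mean()
    v_w = np.mean(se ** 2)                 # within-imputation variance
    v_b = theta.var(ddof=1)                # between-imputation variance
    v = v_w + (1 + 1 / M) * v_b            # total (pooled) variance
    lam = (1 + 1 / M) * v_b / v            # fraction of missing information
    nu_old = (M - 1) / lam ** 2
    if np.isinf(nu_com):                   # no df from the analysis model
        return theta_hat, v, nu_old
    nu_obs = (nu_com + 1) / (nu_com + 3) * nu_com * (1 - lam)
    return theta_hat, v, nu_old * nu_obs / (nu_old + nu_obs)
```

Note that a finite complete-data \\(\\nu_{com}\\) always shrinks the degrees of freedom relative to the \\(\\nu_{com}=\\infty\\) case.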
Indeed, in this case, the estimator is identical to the pooled treatment effect across multiple random REML imputations for an infinite number of imputations and corresponds to a computationally efficient implementation of the proposal of von Hippel and Bartlett (2021). We expect that the conditional mean imputation method is also applicable to other analysis models (e.g. general MMRM analysis models) but this has not been formally justified.","code":""},{"path":"/articles/stat_specs.html","id":"jackknife-standard-errors-confidence-intervals-ci-and-tests-for-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.8 Bootstrap and jackknife inference for conditional mean imputation","what":"Jackknife standard errors, confidence intervals (CI) and tests for the treatment effect","title":"rbmi: Statistical Specifications","text":"For a dataset containing \\(n\\) subjects, the jackknife standard error depends on the treatment effect estimates \\(\\hat{\\theta}_{(-b)}\\) (\\(b=1,\\ldots,n\\)) from samples of the original dataset which leave out the observations from subject \\(b\\). As described previously, to obtain the treatment effect estimates for the leave-one-subject-out datasets, all steps of the imputation procedure (i.e. the imputation, conditional mean imputation, and analysis steps) need to be repeated on each new dataset. Then, the jackknife standard error is defined as \\[\\hat{se}_{jack}=[\\frac{(n-1)}{n}\\cdot\\sum_{b=1}^{n} (\\hat{\\theta}_{(-b)}-\\bar{\\theta}_{(.)})^2]^{1/2}\\] where \\(\\bar{\\theta}_{(.)}\\) denotes the mean of the jackknife estimates (Efron and Tibshirani (1994), chapter 10). The corresponding two-sided normal approximation \\(1-\\alpha\\) CI is defined as \\(\\hat{\\theta}\\pm z^{1-\\alpha/2}\\cdot \\hat{se}_{jack}\\) where \\(\\hat{\\theta}\\) is the treatment effect estimate from the original dataset. Tests of the null hypothesis \\(H_0: \\theta=\\theta_0\\) are based on the \\(Z\\)-score \\(Z=(\\hat{\\theta}-\\theta_0)/\\hat{se}_{jack}\\) using a standard normal approximation. The simulation study reported in Wolbers et al. (2022) demonstrated exact protection of the type I error for jackknife-based inference even for a relatively low sample size (n = 100 per group) and a substantial amount of missing data (>25% of subjects with an ICE).","code":""},{"path":"/articles/stat_specs.html","id":"bootstrap-standard-errors-confidence-intervals-ci-and-tests-for-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.8 Bootstrap and jackknife inference for conditional mean imputation","what":"Bootstrap standard errors, confidence intervals (CI) and tests for the treatment effect","title":"rbmi: Statistical Specifications","text":"As an alternative to the jackknife, the bootstrap is also implemented in rbmi (Efron and Tibshirani (1994), Davison and Hinkley (1997)). Two different bootstrap methods are implemented in rbmi: methods based on the bootstrap standard error and a normal approximation, and percentile bootstrap methods. Denote the treatment effect estimates from the \\(B\\) bootstrap samples by \\(\\hat{\\theta}^*_b\\) (\\(b=1,\\ldots,B\\)). The bootstrap standard error \\(\\hat{se}_{boot}\\) is defined as the empirical standard deviation of the bootstrapped treatment effect estimates. Confidence intervals and tests based on the bootstrap standard error can be constructed in the same way as for the jackknife. Confidence intervals using the percentile bootstrap are based on the empirical quantiles of the bootstrap distribution and the corresponding statistical tests are implemented in rbmi via inversion of the confidence interval. Explicit formulas for the bootstrap inference implemented in the rbmi package, as well as considerations regarding the required number of bootstrap samples, are included in the Appendix of Wolbers et al. (2022). The simulation study reported in Wolbers et al. (2022) demonstrated a small inflation of the type I error rate for inference based on the bootstrap standard error (\\(5.3\\%\\) for a nominal type I error rate of \\(5\\%\\)) for a sample size of n = 100 per group and a substantial amount of missing data (>25% of subjects with an ICE).
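Once the \(n\) leave-one-subject-out estimates are available, the jackknife standard error defined above reduces to a one-line computation (illustrative numpy sketch, not rbmi code):

```python
import numpy as np

def jackknife_se(theta_loo):
    """Jackknife standard error from the n leave-one-subject-out estimates,
    se = [ (n-1)/n * sum_b (theta_(-b) - mean)^2 ]^(1/2)."""
    theta_loo = np.asarray(theta_loo, float)
    n = len(theta_loo)
    return np.sqrt((n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2))
```

The two-sided normal-approximation CI is then \\(\\hat{\\theta}\\pm z^{1-\\alpha/2}\\cdot \\hat{se}_{jack}\\).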
Based on these simulations, we recommend the jackknife over the bootstrap because it performed better in the simulation study and is typically much faster to compute than the bootstrap.","code":""},{"path":"/articles/stat_specs.html","id":"sec:poolbmlmi","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Pooling step for inference of the bootstrapped MI methods","title":"rbmi: Statistical Specifications","text":"Assume that the analysis model applied to the \\(B\\times D\\) multiply imputed random datasets resulted in \\(B\\times D\\) treatment effect estimates \\(\\hat{\\theta}_{bd}\\) (\\(b=1,\\ldots,B\\); \\(d=1,\\ldots,D\\)). The final estimate of the treatment effect is calculated as the sample mean of the \\(B\\cdot D\\) treatment effect estimates: \\[ \\hat{\\theta} = \\frac{1}{BD} \\sum_{b = 1}^B \\sum_{d = 1}^D \\hat{\\theta}_{bd}. \\] The pooled variance is based on two components which reflect the variability between and within the imputed bootstrap samples (von Hippel and Bartlett (2021), formula 8.4): \\[ V(\\hat{\\theta}) = (1 + \\frac{1}{B})\\frac{MSB - MSW}{D} + \\frac{MSW}{BD} \\] where \\(MSB\\) is the mean square between the bootstrapped datasets and \\(MSW\\) is the mean square within the bootstrapped datasets and between the imputed datasets: \\[ \\begin{align*} MSB &= \\frac{D}{B-1} \\sum_{b = 1}^B (\\bar{\\theta_{b}} - \\hat{\\theta})^2 \\\\ MSW &= \\frac{1}{B(D-1)} \\sum_{b = 1}^B \\sum_{d = 1}^D (\\theta_{bd} - \\bar{\\theta_b})^2 \\end{align*} \\] where \\(\\bar{\\theta_{b}}\\) is the mean across the \\(D\\) estimates obtained from random imputation of the \\(b\\)-th bootstrap sample. The degrees of freedom are estimated with the following formula (von Hippel and Bartlett (2021), formula 8.6): \\[ \\nu = \\frac{(MSB\\cdot (B+1) - MSW\\cdot B)^2}{\\frac{MSB^2\\cdot (B+1)^2}{B-1} + \\frac{MSW^2\\cdot B}{D-1}} \\] Confidence intervals and tests of the null hypothesis \\(H_0: \\theta=\\theta_0\\) are based on the \\(t\\)-statistic \\(T\\): \\[ T= (\\hat{\\theta}-\\theta_0)/\\sqrt{V(\\hat{\\theta})}. \\] Under the null hypothesis, \\(T\\) has an approximate \\(t\\)-distribution with \\(\\nu\\) degrees of freedom.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"treatment-effect-estimation","dir":"Articles","previous_headings":"3 Statistical methodology > 3.10 Comparison between the implemented approaches","what":"Treatment effect estimation","title":"rbmi: Statistical Specifications","text":"All approaches provide consistent treatment effect estimates for standard and reference-based imputation methods in case the analysis model of the completed datasets is a general linear model such as ANCOVA. Methods which do not rely on conditional mean imputation are also valid for other analysis models. The validity of conditional mean imputation has only been formally demonstrated for analyses using general linear models (Wolbers et al. (2022, sec. 2.4)) though it may also be applicable more widely (e.g. to general MMRM analysis models). Treatment effects based on conditional mean imputation are deterministic. All other methods are affected by Monte Carlo sampling error and the precision of their estimates depends on the number of imputations or bootstrap samples, respectively.","code":""},{"path":"/articles/stat_specs.html","id":"standard-errors-of-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.10 Comparison between the implemented approaches","what":"Standard errors of the treatment effect","title":"rbmi: Statistical Specifications","text":"All approaches for imputation under a MAR assumption provide consistent estimates of the frequentist standard error. For reference-based imputation methods, the situation is more complicated and two different types of variance estimators have been proposed in the statistical literature (Bartlett (2023)). The first is the frequentist variance which describes the actual repeated sampling variability of the estimator. If the reference-based missing data assumption is correctly specified, the resulting inference based on this variance is correct in the frequentist sense, i.e. hypothesis tests have asymptotically correct type I error control and confidence intervals have correct coverage probabilities under repeated sampling (Bartlett (2023), Wolbers et al. (2022)).
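The von Hippel and Bartlett pooling formulas above can be sketched for a \(B \times D\) matrix of estimates as follows (illustrative numpy, not rbmi's `pool()` implementation):

```python
import numpy as np

def bmlmi_pool(theta):
    """Pool a (B, D) array of estimates from bootstrapped MI using
    von Hippel & Bartlett (2021), formulas 8.4 and 8.6."""
    theta = np.asarray(theta, float)
    B, D = theta.shape
    theta_hat = theta.mean()
    bar_b = theta.mean(axis=1)                        # per-bootstrap means
    msb = D / (B - 1) * np.sum((bar_b - theta_hat) ** 2)
    msw = np.sum((theta - bar_b[:, None]) ** 2) / (B * (D - 1))
    v = (1 + 1 / B) * (msb - msw) / D + msw / (B * D)
    nu = (msb * (B + 1) - msw * B) ** 2 / (
        msb ** 2 * (B + 1) ** 2 / (B - 1) + msw ** 2 * B / (D - 1)
    )
    return theta_hat, v, nu
```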
Reference-based missing data assumptions strong borrow information reference arm imputation active arm. consequence, size frequentist standard errors treatment effects may decrease increasing amounts missing data. second proposal -called “information-anchored” variance originally proposed context sensitivity analyses (Cro, Carpenter, Kenward (2019)). variance estimator based disentangling point estimation variance estimation altogether. information-anchoring principle described Cro, Carpenter, Kenward (2019) states relative increase variance treatment effect estimator MAR imputation increasing amounts missing data preserved reference-based imputation methods. resulting information-anchored variance typically similar variance MAR imputation typically increases increasing amounts missing data. However, information-anchored variance reflect actual variability reference-based estimator repeated sampling resulting inference highly conservative resulting substantial power loss (Wolbers et al. (2022)). Moreover, date, Bayesian frequentist framework developed information-anchored variance provides correct inference reference-based missingness assumptions, clear whether framework can even developed. Reference-based conditional mean imputation (method_condmean()) bootstrapped likelihood-based multiple methods (method = method_bmlmi()) obtain standard errors via resampling hence target frequentist variance (Wolbers et al. (2022), von Hippel Bartlett (2021)). finite samples, simulations sample size \\(n=100\\) per group reported Wolbers et al. (2022) demonstrated conditional mean imputation combined jackknife (method_condmean(type = \"jackknife\")) provided exact protection type one error rate whereas bootstrap (method_condmean(type = \"bootstrap\")) associated small type error inflation (5.1% 5.3% nominal level 5%). reference-based conditional mean imputation, alternative information-anchored variance can obtained following proposal Lu (2021). 
basic idea Lu (2021) obtain information-anchored variance via MAR imputation combined delta-adjustment delta selected data-driven way match reference-based estimator. conditional mean imputation, proposal Lu (2021) can implemented choosing delta-adjustment difference conditional mean imputation chosen reference-based assumption MAR original dataset. illustration different variances can obtained conditional mean imputation rbmi provided vignette “Frequentist information-anchored inference reference-based conditional mean imputation” (vignette(topic = \"CondMean_Inference\", package = \"rbmi\")). Reference-based Bayesian (approximate Bayesian) multiple imputation methods combined Rubin’s rules (method_bayes() method_approxbayes()) target information-anchored variance (Cro, Carpenter, Kenward (2019)). frequentist variance methods principle obtained via bootstrap jackknife re-sampling treatment effect estimates computationally intensive directly supported rbmi. view primary analyses, accurate type error control (can obtained using frequentist variance) important adherence information anchoring principle , us, fully compatible strong reference-based assumptions. case, reference-based imputation used primary analysis, critical chosen reference-based assumption can clinically justified, suitable sensitivity analyses conducted stress-test assumptions. Conditional mean imputation combined jackknife method leads deterministic standard error estimates , consequently, confidence intervals \\(p\\)-values also deterministic. 
This is particularly important in a regulatory setting where it is important to ascertain whether a calculated \\(p\\)-value close to the critical boundary of 5% is truly above or below that threshold rather than being uncertain about this because of Monte Carlo error.","code":""},{"path":"/articles/stat_specs.html","id":"computational-complexity","dir":"Articles","previous_headings":"3 Statistical methodology > 3.10 Comparison between the implemented approaches","what":"Computational complexity","title":"rbmi: Statistical Specifications","text":"Bayesian MI methods rely on the specification of prior distributions and the usage of Markov chain Monte Carlo (MCMC) methods. The other methods, which are based on multiple imputation or bootstrapping, require no tuning parameters other than the specification of the number of imputations \\(M\\) or bootstrap samples \\(B\\), and rely only on numerical optimization for fitting the MMRM imputation models via REML. Conditional mean imputation combined with the jackknife has no tuning parameters at all. In the rbmi implementation, fitting the MMRM imputation model via REML is computationally expensive. MCMC sampling using rstan (Stan Development Team (2020)) is typically relatively fast in our setting and requires only a small amount of burn-in and burn-between of the chains. In addition, the number of random imputations required for reliable inference using Rubin’s rules is often smaller than the number of resamples required for the jackknife or the bootstrap (see e.g. the discussions in I. R. White, Royston, and Wood (2011, sec. 7) for Bayesian MI and the Appendix of Wolbers et al. (2022) for the bootstrap). Thus, for many applications, we expect that conventional MI based on Bayesian posterior draws will be the fastest, followed by conventional MI using approximate Bayesian posterior draws and conditional mean imputation combined with the jackknife. Conditional mean imputation combined with the bootstrap and the bootstrapped MI methods are typically the most computationally demanding.
Of note, all implemented methods are conceptually straightforward to parallelise and parallelisation support is provided in rbmi.","code":""},{"path":"/articles/stat_specs.html","id":"sec:rbmiFunctions","dir":"Articles","previous_headings":"","what":"Mapping of statistical methods to rbmi functions","title":"rbmi: Statistical Specifications","text":"For the full documentation of the rbmi package functionality, we refer to the help pages of all functions and the package vignettes. Here, we give a brief overview of how the different steps of the imputation procedure are mapped to rbmi functions: Bayesian posterior parameter draws from the imputation model are obtained via the argument method = method_bayes(). Approximate Bayesian posterior parameter draws from the imputation model are obtained via the argument method = method_approxbayes(). ML or REML parameter estimates of the imputation model parameters for the original dataset and all leave-one-subject-out datasets (required for the jackknife) are obtained via the argument method = method_condmean(type = \"jackknife\"). ML or REML parameter estimates of the imputation model parameters for the original dataset and all bootstrapped datasets are obtained via the argument method = method_condmean(type = \"bootstrap\"). Bootstrapped MI methods are obtained via the argument method = method_bmlmi(B=B, D=D) where \\(B\\) refers to the number of bootstrap samples and \\(D\\) to the number of random imputations for each bootstrap sample. The imputation step using random imputation or deterministic conditional mean imputation, respectively, is implemented in the function impute(). Imputation can be performed assuming the already implemented imputation strategies presented in section 3.4. Additionally, user-defined imputation strategies are also supported. The analysis step is implemented in the function analyse() which applies the analysis model to all imputed datasets. By default, the analysis model (argument fun) is the ancova() function but alternative analysis functions can also be provided by the user. The analyse() function also allows for \\(\\delta\\)-adjustments of the imputed datasets prior to the analysis via the argument delta. The inference step is implemented in the function pool() which pools the results across the imputed datasets. Rubin’s rules, with degrees of freedom according to the Barnard and Rubin approximation, are applied in the case of (approximate) Bayesian MI. For conditional mean imputation, jackknife and bootstrap (normal approximation or percentile) inference is supported. For BMLMI, the pooling and inference steps are also performed via pool() which in this case implements the method described in Section 3.9.","code":""},{"path":"/articles/stat_specs.html","id":"sec:otherSoftware","dir":"Articles","previous_headings":"","what":"Comparison to other software implementations","title":"rbmi: Statistical Specifications","text":"The most established software implementation of reference-based imputation is in SAS via the so-called “five macros” by James Roger (Roger (2021)). An alternative R implementation which is also currently in development is the R package RefBasedMI (McGrath and White (2021)). rbmi has several features which are not supported by these other implementations: In addition to the Bayesian MI approach which is implemented also in the other packages, our implementation provides three alternative MI approaches: approximate Bayesian MI, conditional mean imputation combined with resampling, and bootstrapped MI. rbmi allows the usage of data collected after the ICE. For example, suppose that we want to adopt a treatment policy strategy for the ICE “treatment discontinuation”. A possible implementation of this strategy is to use observed outcome data from subjects who remain in the study after the ICE and to use reference-based imputation in case the subject drops out. In our implementation, this can be implemented by excluding observed post-ICE data from the imputation model (which assumes MAR missingness) but including them in the analysis model. To our knowledge, this is not directly supported by the other implementations. RefBasedMI fits the imputation model to the data from each treatment group separately, which implies covariate-treatment group interactions for all covariates and no pooling of data across treatment groups. In contrast, Roger’s five macros assume a joint model including the data from all randomized groups where covariate-treatment interactions for the covariates are not allowed. We also chose to implement a joint model but use a flexible model for the linear predictor which may or may not include interaction terms between covariates and the treatment group. In addition, our imputation model also allows for the inclusion of time-varying covariates. In our implementation, the grouping of subjects for the purpose of the imputation model (and the definition of the reference group) need not correspond to the assigned treatment groups. This provides additional flexibility for the imputation procedure. It is not clear to us whether this feature is supported by Roger’s five macros or RefBasedMI. We believe that our R-based implementation is more modular than RefBasedMI which should facilitate package enhancements. In contrast, the general causal model introduced by I. White, Royes, and Best (2020), which is available in the other implementations, is currently not supported by rbmi.","code":""},{"path":[]},{"path":"/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Craig Gower-Page. Author, maintainer. Alessandro Noci. Author. Marcel Wolbers. Contributor. Isaac Gravestock. Author. F. Hoffmann-La Roche AG. Copyright holder, funder.","code":""},{"path":"/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Gower-Page C, Noci A, Gravestock I (2024). rbmi: Reference Based Multiple Imputation. R package version 1.3.1, https://github.com/insightsengineering/rbmi, https://insightsengineering.github.io/rbmi/main/. Gower-Page C, Noci A, Wolbers M (2022). “rbmi: A R package for standard and reference-based multiple imputation methods.” Journal of Open Source Software, 7(74), 4251.
doi:10.21105/joss.04251, https://doi.org/10.21105/joss.04251.","code":"@Manual{, title = {rbmi: Reference Based Multiple Imputation}, author = {Craig Gower-Page and Alessandro Noci and Isaac Gravestock}, year = {2024}, note = {R package version 1.3.1, https://github.com/insightsengineering/rbmi}, url = {https://insightsengineering.github.io/rbmi/main/}, } @Article{, title = {rbmi: A R package for standard and reference-based multiple imputation methods}, author = {Craig Gower-Page and Alessandro Noci and Marcel Wolbers}, year = {2022}, publisher = {The Open Journal}, doi = {10.21105/joss.04251}, url = {https://doi.org/10.21105/joss.04251}, volume = {7}, number = {74}, pages = {4251}, journal = {Journal of Open Source Software}, }"},{"path":[]},{"path":"/index.html","id":"overview","dir":"","previous_headings":"","what":"Overview","title":"Reference Based Multiple Imputation","text":"rbmi package used imputation missing data clinical trials continuous multivariate normal longitudinal outcomes. supports imputation missing random (MAR) assumption, reference-based imputation methods, delta adjustments (required sensitivity analysis tipping point analyses). 
package implements Bayesian approximate Bayesian multiple imputation combined Rubin’s rules inference, frequentist conditional mean imputation combined (jackknife bootstrap) resampling.","code":""},{"path":"/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Reference Based Multiple Imputation","text":"package can installed directly CRAN via: Note usage Bayesian multiple imputation requires installation suggested package rstan.","code":"install.packages(\"rbmi\") install.packages(\"rstan\")"},{"path":"/index.html","id":"usage","dir":"","previous_headings":"","what":"Usage","title":"Reference Based Multiple Imputation","text":"package designed around 4 core functions: draws() - Fits multiple imputation models impute() - Imputes multiple datasets analyse() - Analyses multiple datasets pool() - Pools multiple results single statistic basic usage core functions described quickstart vignette:","code":"vignette(topic = \"quickstart\", package = \"rbmi\")"},{"path":"/index.html","id":"validation","dir":"","previous_headings":"","what":"Validation","title":"Reference Based Multiple Imputation","text":"clarification current validation status rbmi please see FAQ vignette.","code":""},{"path":"/index.html","id":"support","dir":"","previous_headings":"","what":"Support","title":"Reference Based Multiple Imputation","text":"help regards using package find bug please create GitHub issue","code":""},{"path":"/reference/QR_decomp.html","id":null,"dir":"Reference","previous_headings":"","what":"QR decomposition — QR_decomp","title":"QR decomposition — QR_decomp","text":"QR decomposition defined Stan user's guide (section 1.2).","code":""},{"path":"/reference/QR_decomp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"QR decomposition — 
QR_decomp","text":"","code":"QR_decomp(mat)"},{"path":"/reference/QR_decomp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"QR decomposition — QR_decomp","text":"mat matrix perform QR decomposition .","code":""},{"path":"/reference/Stack.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for a FIFO stack — Stack","title":"R6 Class for a FIFO stack — Stack","text":"simple stack object offering add / pop functionality","code":""},{"path":"/reference/Stack.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for a FIFO stack — Stack","text":"stack list containing current stack","code":""},{"path":[]},{"path":"/reference/Stack.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for a FIFO stack — Stack","text":"Stack$add() Stack$pop() Stack$clone()","code":""},{"path":"/reference/Stack.html","id":"method-add-","dir":"Reference","previous_headings":"","what":"Method add()","title":"R6 Class for a FIFO stack — Stack","text":"Adds content end stack (must list)","code":""},{"path":"/reference/Stack.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for a FIFO stack — Stack","text":"","code":"Stack$add(x)"},{"path":"/reference/Stack.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for a FIFO stack — Stack","text":"x content add stack","code":""},{"path":"/reference/Stack.html","id":"method-pop-","dir":"Reference","previous_headings":"","what":"Method pop()","title":"R6 Class for a FIFO stack — Stack","text":"Retrieve content stack","code":""},{"path":"/reference/Stack.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for a FIFO stack — 
Stack","text":"","code":"Stack$pop(i)"},{"path":"/reference/Stack.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for a FIFO stack — Stack","text":"i The number of items to retrieve from the stack. If fewer items than this are left on the stack it will just return everything that is left.","code":""},{"path":"/reference/Stack.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for a FIFO stack — Stack","text":"The objects of this class are cloneable with this method.","code":""},{"path":"/reference/Stack.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for a FIFO stack — Stack","text":"","code":"Stack$clone(deep = FALSE)"},{"path":"/reference/Stack.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for a FIFO stack — Stack","text":"deep Whether to make a deep clone.","code":""},{"path":"/reference/add_class.html","id":null,"dir":"Reference","previous_headings":"","what":"Add a class — add_class","title":"Add a class — add_class","text":"Utility function to add a class to an object. Adds the new class to the existing classes.","code":""},{"path":"/reference/add_class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Add a class — add_class","text":"","code":"add_class(x, cls)"},{"path":"/reference/add_class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Add a class — add_class","text":"x an object to add a class to.
cls class added.","code":""},{"path":"/reference/adjust_trajectories.html","id":null,"dir":"Reference","previous_headings":"","what":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"Adjust trajectories due intercurrent event (ICE)","code":""},{"path":"/reference/adjust_trajectories.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"","code":"adjust_trajectories( distr_pars_group, outcome, ids, ind_ice, strategy_fun, distr_pars_ref = NULL )"},{"path":"/reference/adjust_trajectories.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"distr_pars_group Named list containing simulation parameters multivariate normal distribution assumed given treatment group. contains following elements: mu: Numeric vector indicating mean outcome trajectory. include outcome baseline. sigma Covariance matrix outcome trajectory. outcome Numeric variable specifies longitudinal outcome. ids Factor variable specifies id subject. ind_ice binary variable takes value 1 corresponding outcome affected ICE 0 otherwise. strategy_fun Function implementing trajectories intercurrent event (ICE). Must one getStrategies(). See getStrategies() details. distr_pars_ref Optional. Named list containing simulation parameters reference arm. contains following elements: mu: Numeric vector indicating mean outcome trajectory assuming ICEs. include outcome baseline. 
sigma Covariance matrix outcome trajectory assuming ICEs.","code":""},{"path":"/reference/adjust_trajectories.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"numeric vector containing adjusted trajectories.","code":""},{"path":[]},{"path":"/reference/adjust_trajectories_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"Adjust trajectory subject's outcome due intercurrent event (ICE)","code":""},{"path":"/reference/adjust_trajectories_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"","code":"adjust_trajectories_single( distr_pars_group, outcome, strategy_fun, distr_pars_ref = NULL )"},{"path":"/reference/adjust_trajectories_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"distr_pars_group Named list containing simulation parameters multivariate normal distribution assumed given treatment group. contains following elements: mu: Numeric vector indicating mean outcome trajectory. include outcome baseline. sigma Covariance matrix outcome trajectory. outcome Numeric variable specifies longitudinal outcome. strategy_fun Function implementing trajectories intercurrent event (ICE). Must one getStrategies(). See getStrategies() details. distr_pars_ref Optional. Named list containing simulation parameters reference arm. 
contains following elements: mu: Numeric vector indicating mean outcome trajectory assuming ICEs. include outcome baseline. sigma Covariance matrix outcome trajectory assuming ICEs.","code":""},{"path":"/reference/adjust_trajectories_single.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"numeric vector containing adjusted trajectory single subject.","code":""},{"path":"/reference/adjust_trajectories_single.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"outcome specified --post-ICE observations (.e. observations adjusted) set NA.","code":""},{"path":"/reference/analyse.html","id":null,"dir":"Reference","previous_headings":"","what":"Analyse Multiple Imputed Datasets — analyse","title":"Analyse Multiple Imputed Datasets — analyse","text":"function takes multiple imputed datasets (generated impute() function) runs analysis function .","code":""},{"path":"/reference/analyse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Analyse Multiple Imputed Datasets — analyse","text":"","code":"analyse( imputations, fun = ancova, delta = NULL, ..., ncores = 1, .validate = TRUE )"},{"path":"/reference/analyse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Analyse Multiple Imputed Datasets — analyse","text":"imputations imputations object created impute(). fun analysis function applied imputed dataset. See details. delta data.frame containing delta transformation applied imputed datasets prior running fun. See details. ... Additional arguments passed onto fun. ncores number parallel processes use running function. Can also cluster object created make_rbmi_cluster(). See parallisation section . 
.validate imputations checked ensure conforms required format (default = TRUE)? Can gain small performance increase set FALSE analysing large number samples.","code":""},{"path":"/reference/analyse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Analyse Multiple Imputed Datasets — analyse","text":"function works performing following steps: Extract dataset imputations object. Apply delta adjustments specified delta argument. Run analysis function fun dataset. Repeat steps 1-3 across datasets inside imputations object. Collect return analysis results. analysis function fun must take data.frame first argument. options analyse() passed onto fun via .... fun must return named list element list containing single numeric element called est (additionally se df originally specified method_bayes() method_approxbayes()) .e.: Please note vars$subjid column (defined original call draws()) scrambled data.frames provided fun. say contain original subject values hard coding subject ids strictly avoided. default fun ancova() function. Please note function requires vars object, created set_vars(), provided via vars argument e.g. analyse(imputeObj, vars = set_vars(...)). Please see documentation ancova() full details. Please also note theoretical justification conditional mean imputation method (method = method_condmean() draws()) relies fact ANCOVA linear transformation outcomes. Thus care required applying alternative analysis functions setting. delta argument can used specify offsets applied outcome variable imputed datasets prior analysis. typically used sensitivity tipping point analyses. delta dataset must contain columns vars$subjid, vars$visit (specified original call draws()) delta. Essentially data.frame merged onto imputed dataset vars$subjid vars$visit outcome variable modified : Please note order provide maximum flexibility, delta argument can used modify /outcome values including imputed. Care must taken defining offsets. 
recommend use helper function delta_template() define delta datasets provides utility variables is_missing can used identify exactly visits imputed.","code":"myfun <- function(dat, ...) { mod_1 <- lm(data = dat, outcome ~ group) mod_2 <- lm(data = dat, outcome ~ group + covar) x <- list( trt_1 = list( est = coef(mod_1)[[group]], se = sqrt(vcov(mod_1)[group, group]), df = df.residual(mod_1) ), trt_2 = list( est = coef(mod_2)[[group]], se = sqrt(vcov(mod_2)[group, group]), df = df.residual(mod_2) ) ) return(x) } imputed_data[[vars$outcome]] <- imputed_data[[vars$outcome]] + imputed_data[[\"delta\"]]"},{"path":"/reference/analyse.html","id":"parallelisation","dir":"Reference","previous_headings":"","what":"Parallelisation","title":"Analyse Multiple Imputed Datasets — analyse","text":"speed evaluation analyse() can use ncores argument enable parallelisation. Simply providing integer get rbmi automatically spawn many background processes parallelise across. using custom analysis function need ensure libraries global objects required function available sub-processes. need use make_rbmi_cluster() function example: Note significant overhead setting sub-processes transferring data back--forth main process sub-processes. parallelisation analyse() function tends worth > 2000 samples generated draws(). Conversely using parallelisation samples smaller may lead longer run times just running sequentially. important note implementation parallel processing within analyse() optimised around assumption parallel processes spawned machine remote cluster. One optimisation required data saved temporary file local disk read sub-process. done avoid overhead transferring data network. assumption stage need parallelising analysis remote cluster likely better parallelising across multiple rbmi runs rather within single rbmi run. Finally, tipping point analysis can get reasonable performance improvement re-using cluster call analyse() e.g.","code":"my_custom_fun <- function(...) 
cl <- make_rbmi_cluster( 4, objects = list(\"my_custom_fun\" = my_custom_fun), packages = c(\"dplyr\", \"nlme\") ) analyse( imputations = imputeObj, fun = my_custom_fun, ncores = cl ) parallel::stopCluster(cl) cl <- make_rbmi_cluster(4) ana_1 <- analyse( imputations = imputeObj, delta = delta_plan_1, ncores = cl ) ana_2 <- analyse( imputations = imputeObj, delta = delta_plan_2, ncores = cl ) ana_3 <- analyse( imputations = imputeObj, delta = delta_plan_3, ncores = cl ) parallel::stopCluster(cl)"},{"path":[]},{"path":"/reference/analyse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Analyse Multiple Imputed Datasets — analyse","text":"","code":"if (FALSE) { # \\dontrun{ vars <- set_vars( subjid = \"subjid\", visit = \"visit\", outcome = \"outcome\", group = \"group\", covariates = c(\"sex\", \"age\", \"sex*age\") ) analyse( imputations = imputeObj, vars = vars ) deltadf <- data.frame( subjid = c(\"Pt1\", \"Pt1\", \"Pt2\"), visit = c(\"Visit_1\", \"Visit_2\", \"Visit_2\"), delta = c( 5, 9, -10) ) analyse( imputations = imputeObj, delta = deltadf, vars = vars ) } # }"},{"path":"/reference/ancova.html","id":null,"dir":"Reference","previous_headings":"","what":"Analysis of Covariance — ancova","title":"Analysis of Covariance — ancova","text":"Performs analysis covariance two groups returning estimated \"treatment effect\" (.e. contrast two treatment groups) least square means estimates group.","code":""},{"path":"/reference/ancova.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Analysis of Covariance — ancova","text":"","code":"ancova( data, vars, visits = NULL, weights = c(\"counterfactual\", \"equal\", \"proportional_em\", \"proportional\") )"},{"path":"/reference/ancova.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Analysis of Covariance — ancova","text":"data data.frame containing data used model. vars vars object generated set_vars(). 
group, visit, outcome covariates elements required. See details. visits optional character vector specifying visits fit ancova model . NULL, separate ancova model fit outcomes visit (determined unique(data[[vars$visit]])). See details. weights Character, either \"counterfactual\" (default), \"equal\", \"proportional_em\" \"proportional\". Specifies weighting strategy used calculating lsmeans. See weighting section details.","code":""},{"path":"/reference/ancova.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Analysis of Covariance — ancova","text":"function works follows: Select first value visits. Subset data observations occurred visit. Fit linear model vars$outcome ~ vars$group + vars$covariates. Extract \"treatment effect\" & least square means treatment group. Repeat points 2-3 values visits. value visits provided set unique(data[[vars$visit]]). order meet formatting standards set analyse() results collapsed single list suffixed visit name, e.g.: Please note \"ref\" refers first factor level vars$group necessarily coincide control arm. Analogously, \"alt\" refers second factor level vars$group. \"trt\" refers model contrast translating mean difference second level first level. want include interaction terms model can done providing covariates argument set_vars() e.g. set_vars(covariates = c(\"sex*age\")).","code":"list( trt_visit_1 = list(est = ...), lsm_ref_visit_1 = list(est = ...), lsm_alt_visit_1 = list(est = ...), trt_visit_2 = list(est = ...), lsm_ref_visit_2 = list(est = ...), lsm_alt_visit_2 = list(est = ...), ... )"},{"path":[]},{"path":"/reference/ancova.html","id":"counterfactual","dir":"Reference","previous_headings":"","what":"Counterfactual","title":"Analysis of Covariance — ancova","text":"weights = \"counterfactual\" (default) lsmeans obtained taking average predicted values patient assigning patients arm turn. approach equivalent standardization g-computation. 
comparison emmeans approach equivalent : Note ensure backwards compatibility previous versions rbmi weights = \"proportional\" alias weights = \"counterfactual\". get results consistent emmeans's weights = \"proportional\" please use weights = \"proportional_em\".","code":"emmeans::emmeans(model, specs = \"\", counterfactual = \"\")"},{"path":"/reference/ancova.html","id":"equal","dir":"Reference","previous_headings":"","what":"Equal","title":"Analysis of Covariance — ancova","text":"weights = \"equal\" lsmeans obtained taking model fitted value hypothetical patient whose covariates defined follows: Continuous covariates set mean(X) Dummy categorical variables set 1/N N number levels Continuous * continuous interactions set mean(X) * mean(Y) Continuous * categorical interactions set mean(X) * 1/N Dummy categorical * categorical interactions set 1/N * 1/M comparison emmeans approach equivalent :","code":"emmeans::emmeans(model, specs = \"\", weights = \"equal\")"},{"path":"/reference/ancova.html","id":"proportional","dir":"Reference","previous_headings":"","what":"Proportional","title":"Analysis of Covariance — ancova","text":"weights = \"proportional_em\" lsmeans obtained per weights = \"equal\" except instead weighting observation equally weighted proportion given combination categorical values occurred data. comparison emmeans approach equivalent : Note confused weights = \"proportional\" alias weights = \"counterfactual\".","code":"emmeans::emmeans(model, specs = \"\", weights = \"proportional\")"},{"path":[]},{"path":"/reference/ancova_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"Performance analysis covariance. 
See ancova() full details.","code":""},{"path":"/reference/ancova_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"","code":"ancova_single( data, outcome, group, covariates, weights = c(\"counterfactual\", \"equal\", \"proportional_em\", \"proportional\") )"},{"path":"/reference/ancova_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"data data.frame containing data used model. outcome Character, name outcome variable data. group Character, name group variable data. covariates Character vector containing name additional covariates included model well interaction terms. weights Character, either \"counterfactual\" (default), \"equal\", \"proportional_em\" \"proportional\". Specifies weighting strategy used calculating lsmeans. See weighting section details.","code":""},{"path":"/reference/ancova_single.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"group must factor variable 2 levels. outcome must continuous numeric variable.","code":""},{"path":[]},{"path":"/reference/ancova_single.html","id":"counterfactual","dir":"Reference","previous_headings":"","what":"Counterfactual","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"weights = \"counterfactual\" (default) lsmeans obtained taking average predicted values patient assigning patients arm turn. approach equivalent standardization g-computation. comparison emmeans approach equivalent : Note ensure backwards compatibility previous versions rbmi weights = \"proportional\" alias weights = \"counterfactual\". 
get results consistent emmeans's weights = \"proportional\" please use weights = \"proportional_em\".","code":"emmeans::emmeans(model, specs = \"\", counterfactual = \"\")"},{"path":"/reference/ancova_single.html","id":"equal","dir":"Reference","previous_headings":"","what":"Equal","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"weights = \"equal\" lsmeans obtained taking model fitted value hypothetical patient whose covariates defined follows: Continuous covariates set mean(X) Dummy categorical variables set 1/N N number levels Continuous * continuous interactions set mean(X) * mean(Y) Continuous * categorical interactions set mean(X) * 1/N Dummy categorical * categorical interactions set 1/N * 1/M comparison emmeans approach equivalent :","code":"emmeans::emmeans(model, specs = \"\", weights = \"equal\")"},{"path":"/reference/ancova_single.html","id":"proportional","dir":"Reference","previous_headings":"","what":"Proportional","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"weights = \"proportional_em\" lsmeans obtained per weights = \"equal\" except instead weighting observation equally weighted proportion given combination categorical values occurred data. 
comparison emmeans approach equivalent : Note confused weights = \"proportional\" alias weights = \"counterfactual\".","code":"emmeans::emmeans(model, specs = \"\", weights = \"proportional\")"},{"path":[]},{"path":"/reference/ancova_single.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"","code":"if (FALSE) { # \\dontrun{ iris2 <- iris[ iris$Species %in% c(\"versicolor\", \"virginica\"), ] iris2$Species <- factor(iris2$Species) ancova_single(iris2, \"Sepal.Length\", \"Species\", c(\"Petal.Length * Petal.Width\")) } # }"},{"path":"/reference/antidepressant_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Antidepressant trial data — antidepressant_data","title":"Antidepressant trial data — antidepressant_data","text":"dataset containing data publicly available example data set antidepressant clinical trial. dataset available website Drug Information Association Scientific Working Group Estimands Missing Data. per website, original data antidepressant clinical trial four treatments; two doses experimental medication, positive control, placebo published Goldstein et al (2004). mask real data, week 8 observations removed two arms created: original placebo arm \"drug arm\" created randomly selecting patients three non-placebo arms.","code":""},{"path":"/reference/antidepressant_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Antidepressant trial data — antidepressant_data","text":"","code":"antidepressant_data"},{"path":"/reference/antidepressant_data.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Antidepressant trial data — antidepressant_data","text":"data.frame 608 rows 11 variables: PATIENT: patients IDs. HAMATOTL: total score Hamilton Anxiety Rating Scale. PGIIMP: patient's Global Impression Improvement Rating Scale. 
RELDAYS: number days visit baseline. VISIT: post-baseline visit. levels 4,5,6,7. THERAPY: treatment group variable. equal PLACEBO observations placebo arm, DRUG observations active arm. GENDER: patient's gender. POOLINV: pooled investigator. BASVAL: baseline outcome value. HAMDTL17: Hamilton 17-item rating scale value. CHANGE: change baseline Hamilton 17-item rating scale.","code":""},{"path":"/reference/antidepressant_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Antidepressant trial data — antidepressant_data","text":"relevant endpoint Hamilton 17-item rating scale depression (HAMD17) baseline weeks 1, 2, 4, 6 assessments included. Study drug discontinuation occurred 24% subjects active drug 26% placebo. data study drug discontinuation missing single additional intermittent missing observation.","code":""},{"path":"/reference/antidepressant_data.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Antidepressant trial data — antidepressant_data","text":"Goldstein, Lu, Detke, Wiltse, Mallinckrodt, Demitrack. Duloxetine treatment depression: double-blind placebo-controlled comparison paroxetine. J Clin Psychopharmacol 2004;24: 389-399.","code":""},{"path":"/reference/apply_delta.html","id":null,"dir":"Reference","previous_headings":"","what":"Applies delta adjustment — apply_delta","title":"Applies delta adjustment — apply_delta","text":"Takes delta dataset adjusts outcome variable adding corresponding delta.","code":""},{"path":"/reference/apply_delta.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Applies delta adjustment — apply_delta","text":"","code":"apply_delta(data, delta = NULL, group = NULL, outcome = NULL)"},{"path":"/reference/apply_delta.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Applies delta adjustment — apply_delta","text":"data data.frame outcome column adjusted. 
delta data.frame (must contain column called delta). group character vector variables data delta used merge 2 data.frames together . outcome character, name outcome variable data.","code":""},{"path":"/reference/as_analysis.html","id":null,"dir":"Reference","previous_headings":"","what":"Construct an analysis object — as_analysis","title":"Construct an analysis object — as_analysis","text":"Creates analysis object ensuring components correctly defined.","code":""},{"path":"/reference/as_analysis.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Construct an analysis object — as_analysis","text":"","code":"as_analysis(results, method, delta = NULL, fun = NULL, fun_name = NULL)"},{"path":"/reference/as_analysis.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Construct an analysis object — as_analysis","text":"results list lists contain analysis results imputation See analyse() details object look like. method method object specified draws(). delta delta dataset used. See analyse() details specified. fun analysis function used. 
fun_name character name analysis function (used printing) purposes.","code":""},{"path":"/reference/as_ascii_table.html","id":null,"dir":"Reference","previous_headings":"","what":"as_ascii_table — as_ascii_table","title":"as_ascii_table — as_ascii_table","text":"function takes data.frame attempts convert simple ascii format suitable printing screen assumed variable values .character() method order cast character.","code":""},{"path":"/reference/as_ascii_table.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"as_ascii_table — as_ascii_table","text":"","code":"as_ascii_table(dat, line_prefix = \" \", pcol = NULL)"},{"path":"/reference/as_ascii_table.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"as_ascii_table — as_ascii_table","text":"dat Input dataset convert ascii table line_prefix Symbols prefix infront every line table pcol name column handled p-value. Sets value <0.001 value 0 rounding","code":""},{"path":"/reference/as_class.html","id":null,"dir":"Reference","previous_headings":"","what":"Set Class — as_class","title":"Set Class — as_class","text":"Utility function set objects class.","code":""},{"path":"/reference/as_class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set Class — as_class","text":"","code":"as_class(x, cls)"},{"path":"/reference/as_class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set Class — as_class","text":"x object set class . 
cls class set.","code":""},{"path":"/reference/as_cropped_char.html","id":null,"dir":"Reference","previous_headings":"","what":"as_cropped_char — as_cropped_char","title":"as_cropped_char — as_cropped_char","text":"Makes character string x chars Reduce x char string ...","code":""},{"path":"/reference/as_cropped_char.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"as_cropped_char — as_cropped_char","text":"","code":"as_cropped_char(inval, crop_at = 30, ndp = 3)"},{"path":"/reference/as_cropped_char.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"as_cropped_char — as_cropped_char","text":"inval single element value crop_at character limit ndp Number decimal places display","code":""},{"path":"/reference/as_dataframe.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert object to dataframe — as_dataframe","title":"Convert object to dataframe — as_dataframe","text":"Convert object dataframe","code":""},{"path":"/reference/as_dataframe.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert object to dataframe — as_dataframe","text":"","code":"as_dataframe(x)"},{"path":"/reference/as_dataframe.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert object to dataframe — as_dataframe","text":"x data.frame like object Utility function convert \"data.frame-like\" object actual data.frame avoid issues inconsistency methods ( [() dplyr's grouped dataframes)","code":""},{"path":"/reference/as_draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a draws object — as_draws","title":"Creates a draws object — as_draws","text":"Creates draws object final output call draws().","code":""},{"path":"/reference/as_draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a draws object — as_draws","text":"","code":"as_draws(method, 
samples, data, formula, n_failures = NULL, fit = NULL)"},{"path":"/reference/as_draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a draws object — as_draws","text":"method method object generated either method_bayes(), method_approxbayes(), method_condmean() method_bmlmi(). samples list sample_single objects. See sample_single(). data R6 longdata object containing relevant input data information. formula Fixed effects formula object used model specification. n_failures Absolute number failures model fit. fit method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL.","code":""},{"path":"/reference/as_draws.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Creates a draws object — as_draws","text":"draws object named list containing following: data: R6 longdata object containing relevant input data information. method: method object generated either method_bayes(), method_approxbayes() method_condmean(). samples: list containing estimated parameters interest. element samples named list containing following: ids: vector characters containing ids subjects included original dataset. beta: numeric vector estimated regression coefficients. sigma: list estimated covariance matrices (one level vars$group). theta: numeric vector transformed covariances. failed: Logical. TRUE model fit failed. ids_samp: vector characters containing ids subjects included given sample. fit: method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL. n_failures: absolute number failures model fit. Relevant method_condmean(type = \"bootstrap\"), method_approxbayes() method_bmlmi(). 
formula: fixed effects formula object used model specification.","code":""},{"path":"/reference/as_imputation.html","id":null,"dir":"Reference","previous_headings":"","what":"Create an imputation object — as_imputation","title":"Create an imputation object — as_imputation","text":"function creates object returned impute(). Essentially glorified wrapper around list() ensuring required elements set class added expected.","code":""},{"path":"/reference/as_imputation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create an imputation object — as_imputation","text":"","code":"as_imputation(imputations, data, method, references)"},{"path":"/reference/as_imputation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create an imputation object — as_imputation","text":"imputations list imputations_list's created imputation_df() data longdata object created longDataConstructor() method method object created method_condmean(), method_bayes() method_approxbayes() references named vector. Identifies references used generating imputed values. form c(\"Group\" = \"Reference\", \"Group\" = \"Reference\").","code":""},{"path":"/reference/as_indices.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert indicator to index — as_indices","title":"Convert indicator to index — as_indices","text":"Converts string 0's 1's index positions 1's padding results 0's length","code":""},{"path":"/reference/as_indices.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert indicator to index — as_indices","text":"","code":"as_indices(x)"},{"path":"/reference/as_indices.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert indicator to index — as_indices","text":"x character vector whose values either \"0\" \"1\". 
elements vector must length","code":""},{"path":"/reference/as_indices.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Convert indicator to index — as_indices","text":".e.","code":"patmap(c(\"1101\", \"0001\")) -> list(c(1,2,4,999), c(4,999, 999, 999))"},{"path":"/reference/as_mmrm_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a ","title":"Creates a ","text":"Converts design matrix + key variables common format particular function following: Renames covariates V1, V2, etc avoid issues special characters variable names Ensures key variables right type Inserts outcome, visit subjid variables data.frame naming outcome, visit subjid provided also insert group variable data.frame named group","code":""},{"path":"/reference/as_mmrm_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a ","text":"","code":"as_mmrm_df(designmat, outcome, visit, subjid, group = NULL)"},{"path":"/reference/as_mmrm_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a ","text":"designmat data.frame matrix containing covariates use MMRM model. Dummy variables must already expanded , .e. via stats::model.matrix(). contain missing values outcome numeric vector. outcome value regressed MMRM model. visit character / factor vector. Indicates visit outcome value occurred . subjid character / factor vector. subject identifier used link separate visits belong subject. group character / factor vector. Indicates treatment group patient belongs .","code":""},{"path":"/reference/as_mmrm_formula.html","id":null,"dir":"Reference","previous_headings":"","what":"Create MMRM formula — as_mmrm_formula","title":"Create MMRM formula — as_mmrm_formula","text":"Derives MMRM model formula structure mmrm_df. 
returns formula object form:","code":""},{"path":"/reference/as_mmrm_formula.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create MMRM formula — as_mmrm_formula","text":"","code":"as_mmrm_formula(mmrm_df, cov_struct)"},{"path":"/reference/as_mmrm_formula.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create MMRM formula — as_mmrm_formula","text":"mmrm_df mmrm data.frame created as_mmrm_df() cov_struct Character - covariance structure used, must one \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\")","code":""},{"path":"/reference/as_mmrm_formula.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create MMRM formula — as_mmrm_formula","text":"","code":"outcome ~ 0 + V1 + V2 + V4 + ... + us(visit | group / subjid)"},{"path":"/reference/as_model_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Expand data.frame into a design matrix — as_model_df","title":"Expand data.frame into a design matrix — as_model_df","text":"Expands data.frame using formula create design matrix. 
Key details always place outcome variable first column return object.","code":""},{"path":"/reference/as_model_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Expand data.frame into a design matrix — as_model_df","text":"","code":"as_model_df(dat, frm)"},{"path":"/reference/as_model_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Expand data.frame into a design matrix — as_model_df","text":"dat data.frame frm formula","code":""},{"path":"/reference/as_model_df.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Expand data.frame into a design matrix — as_model_df","text":"outcome column may contain NA's none variables listed formula contain missing values","code":""},{"path":"/reference/as_simple_formula.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a simple formula object from a string — as_simple_formula","title":"Creates a simple formula object from a string — as_simple_formula","text":"Converts string list variables formula object","code":""},{"path":"/reference/as_simple_formula.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a simple formula object from a string — as_simple_formula","text":"","code":"as_simple_formula(outcome, covars)"},{"path":"/reference/as_simple_formula.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a simple formula object from a string — as_simple_formula","text":"outcome character (length 1 vector). Name outcome variable covars character (vector). 
Name covariates","code":""},{"path":"/reference/as_simple_formula.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Creates a simple formula object from a string — as_simple_formula","text":"formula","code":""},{"path":"/reference/as_stan_array.html","id":null,"dir":"Reference","previous_headings":"","what":"As array — as_stan_array","title":"As array — as_stan_array","text":"Converts numeric value length 1 1 dimension array. avoid type errors thrown stan length 1 numeric vectors provided R stan::vector inputs","code":""},{"path":"/reference/as_stan_array.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"As array — as_stan_array","text":"","code":"as_stan_array(x)"},{"path":"/reference/as_stan_array.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"As array — as_stan_array","text":"x numeric vector","code":""},{"path":"/reference/as_strata.html","id":null,"dir":"Reference","previous_headings":"","what":"Create vector of Stratas — as_strata","title":"Create vector of Stratas — as_strata","text":"Collapse multiple categorical variables distinct unique categories. e.g. return","code":"as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5)) c(1,2,3,3,4,1)"},{"path":"/reference/as_strata.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create vector of Stratas — as_strata","text":"","code":"as_strata(...)"},{"path":"/reference/as_strata.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create vector of Stratas — as_strata","text":"... 
numeric/character/factor vectors length","code":""},{"path":"/reference/as_strata.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create vector of Stratas — as_strata","text":"","code":"if (FALSE) { # \\dontrun{ as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5)) } # }"},{"path":"/reference/assert_variables_exist.html","id":null,"dir":"Reference","previous_headings":"","what":"Assert that all variables exist within a dataset — assert_variables_exist","title":"Assert that all variables exist within a dataset — assert_variables_exist","text":"Performs assertion check ensure vector variable exists within data.frame expected.","code":""},{"path":"/reference/assert_variables_exist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Assert that all variables exist within a dataset — assert_variables_exist","text":"","code":"assert_variables_exist(data, vars)"},{"path":"/reference/assert_variables_exist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Assert that all variables exist within a dataset — assert_variables_exist","text":"data data.frame vars character vector variable names","code":""},{"path":"/reference/char2fct.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert character variables to factor — char2fct","title":"Convert character variables to factor — char2fct","text":"Provided vector variable names function converts character variables factors. 
affect numeric existing factor variables","code":""},{"path":"/reference/char2fct.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert character variables to factor — char2fct","text":"","code":"char2fct(data, vars = NULL)"},{"path":"/reference/char2fct.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert character variables to factor — char2fct","text":"data data.frame vars character vector variables data","code":""},{"path":"/reference/check_ESS.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostics of the MCMC based on ESS — check_ESS","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"Check quality MCMC draws posterior distribution checking whether relative ESS sufficiently large.","code":""},{"path":"/reference/check_ESS.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"","code":"check_ESS(stan_fit, n_draws, threshold_lowESS = 0.4)"},{"path":"/reference/check_ESS.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"stan_fit stanfit object. n_draws Number MCMC draws. threshold_lowESS number [0,1] indicating minimum acceptable value relative ESS. See details.","code":""},{"path":"/reference/check_ESS.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"warning message case detected problems.","code":""},{"path":"/reference/check_ESS.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"check_ESS() works follows: Extract ESS stan_fit parameter model. Compute relative ESS (.e. ESS divided number draws). Check whether parameter ESS lower threshold. 
least one parameter relative ESS threshold, warning thrown.","code":""},{"path":"/reference/check_hmc_diagn.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"Check : divergent iterations. Bayesian Fraction Missing Information (BFMI) sufficiently low. number iterations saturated max treedepth zero. Please see rstan::check_hmc_diagnostics() details.","code":""},{"path":"/reference/check_hmc_diagn.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"","code":"check_hmc_diagn(stan_fit)"},{"path":"/reference/check_hmc_diagn.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"stan_fit stanfit object.","code":""},{"path":"/reference/check_hmc_diagn.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"warning message case detected problems.","code":""},{"path":"/reference/check_mcmc.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostics of the MCMC — check_mcmc","title":"Diagnostics of the MCMC — check_mcmc","text":"Diagnostics MCMC","code":""},{"path":"/reference/check_mcmc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostics of the MCMC — check_mcmc","text":"","code":"check_mcmc(stan_fit, n_draws, threshold_lowESS = 0.4)"},{"path":"/reference/check_mcmc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostics of the MCMC — check_mcmc","text":"stan_fit stanfit object. n_draws Number MCMC draws. 
threshold_lowESS number [0,1] indicating minimum acceptable value relative ESS. See details.","code":""},{"path":"/reference/check_mcmc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Diagnostics of the MCMC — check_mcmc","text":"warning message case detected problems.","code":""},{"path":"/reference/check_mcmc.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Diagnostics of the MCMC — check_mcmc","text":"Performs checks quality MCMC. See check_ESS() check_hmc_diagn() details.","code":""},{"path":"/reference/compute_sigma.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"Adapt covariance matrix reference-based methods. Used Copy Increments Reference (CIR) Jump Reference (JR) methods, adapt covariance matrix different pre-deviation post deviation covariance structures. See Carpenter et al. (2013)","code":""},{"path":"/reference/compute_sigma.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"","code":"compute_sigma(sigma_group, sigma_ref, index_mar)"},{"path":"/reference/compute_sigma.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"sigma_group covariance matrix dimensions equal index_mar subjects original group sigma_ref covariance matrix dimensions equal index_mar subjects reference group index_mar logical vector indicating visits meet MAR assumption subject. .e. 
identifies observations non-MAR intercurrent event (ICE).","code":""},{"path":"/reference/compute_sigma.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"Carpenter, James R., James H. Roger, Michael G. Kenward. \"Analysis longitudinal trials protocol deviation: framework relevant, accessible assumptions, inference via multiple imputation.\" Journal Biopharmaceutical statistics 23.6 (2013): 1352-1371.","code":""},{"path":"/reference/convert_to_imputation_list_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","title":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","text":"Convert list imputation_list_single() objects imputation_list_df() object (.e. list imputation_df() objects's)","code":""},{"path":"/reference/convert_to_imputation_list_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","text":"","code":"convert_to_imputation_list_df(imputes, sample_ids)"},{"path":"/reference/convert_to_imputation_list_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","text":"imputes list imputation_list_single() objects sample_ids list 1 element per required imputation_df. element must contain vector \"ID\"'s correspond imputation_single() ID's required dataset. 
total number ID's must equal total number rows within imputes$imputations accommodate method_bmlmi() impute_data_individual() function returns list imputation_list_single() objects 1 object per subject. imputation_list_single() stores subjects imputations matrix columns matrix correspond D method_bmlmi(). Note methods (.e. methods_*()) special case D = 1. number rows matrix varies subject equal number times patient selected imputation (non-conditional mean methods 1 per subject per imputed dataset). function best illustrated example: convert_to_imputation_df(imputes, sample_ids) result : Note different repetitions (.e. value set D) grouped together sequentially.","code":"imputes = list( imputation_list_single( id = \"Tom\", imputations = matrix( imputation_single_t_1_1, imputation_single_t_1_2, imputation_single_t_2_1, imputation_single_t_2_2, imputation_single_t_3_1, imputation_single_t_3_2 ) ), imputation_list_single( id = \"Tom\", imputations = matrix( imputation_single_h_1_1, imputation_single_h_1_2, ) ) ) sample_ids <- list( c(\"Tom\", \"Harry\", \"Tom\"), c(\"Tom\") ) imputation_list_df( imputation_df( imputation_single_t_1_1, imputation_single_h_1_1, imputation_single_t_2_1 ), imputation_df( imputation_single_t_1_2, imputation_single_h_1_2, imputation_single_t_2_2 ), imputation_df( imputation_single_t_3_1 ), imputation_df( imputation_single_t_3_2 ) )"},{"path":"/reference/d_lagscale.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate delta from a lagged scale coefficient — d_lagscale","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"Calculates delta value based upon baseline delta value post ICE scaling coefficient.","code":""},{"path":"/reference/d_lagscale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"","code":"d_lagscale(delta, dlag, 
is_post_ice)"},{"path":"/reference/d_lagscale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"delta numeric vector. Determines baseline amount delta applied visit. dlag numeric vector. Determines scaling applied delta based upon visit ICE occurred . Must length delta. is_post_ice logical vector. Indicates whether visit \"post-ICE\" .","code":""},{"path":"/reference/d_lagscale.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"See delta_template() full details calculation performed.","code":""},{"path":"/reference/delta_template.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a delta data.frame template — delta_template","title":"Create a delta data.frame template — delta_template","text":"Creates data.frame format required analyse() use applying delta adjustment.","code":""},{"path":"/reference/delta_template.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a delta data.frame template — delta_template","text":"","code":"delta_template(imputations, delta = NULL, dlag = NULL, missing_only = TRUE)"},{"path":"/reference/delta_template.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a delta data.frame template — delta_template","text":"imputations imputation object created impute(). delta NULL numeric vector. Determines baseline amount delta applied visit. See details. numeric vector must length number unique visits original dataset. dlag NULL numeric vector. Determines scaling applied delta based upon visit ICE occurred . See details. numeric vector must length number unique visits original dataset. missing_only Logical, TRUE non-missing post-ICE data delta value 0 assigned. 
Note calculation (described details section) performed first overwritten 0's end (.e. delta values missing post-ICE visits stay regardless option).","code":""},{"path":"/reference/delta_template.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create a delta data.frame template — delta_template","text":"apply delta adjustment analyse() function expects delta data.frame 3 variables: vars$subjid, vars$visit delta (vars object supplied original call draws() created set_vars() function). function return data.frame aforementioned variables one row per subject per visit. delta argument function NULL delta column returned data.frame 0 observations. delta argument NULL delta calculated separately subject accumulative sum delta multiplied scaling coefficient dlag based upon many visits subject's intercurrent event (ICE) visit question . best illustrated example: Let delta = c(5,6,7,8) dlag=c(1,2,3,4) (.e. assuming 4 visits) lets say subject ICE visit 2. calculation follows: say subject delta offset 0 applied visit-1, 6 visit-2, 20 visit-3 44 visit-4. comparison, lets say subject instead ICE visit 3, calculation follows: terms practical usage, lets say wanted delta 5 used post ICE visits regardless proximity ICE visit. can achieved setting delta = c(5,5,5,5) dlag = c(1,0,0,0). example lets say subject ICE visit-1, calculation follows: Another way using arguments set delta difference time visits dlag amount delta per unit time. example lets say visit weeks 1, 5, 6 & 9 want delta 3 applied week ICE. can achieved setting delta = c(0,4,1,3) (difference weeks visit) dlag = c(3, 3, 3, 3). example lets say subject ICE week-5 (.e. visit-2) calculation : .e. week-6 (1 week ICE) delta 3 week-9 (4 weeks ICE) delta 12. Please note function also returns several utility variables user can create custom logic defining delta set . additional variables include: is_mar - observation missing regarded MAR? 
variable set FALSE observations occurred non-MAR ICE, otherwise set TRUE. is_missing - outcome variable observation missing. is_post_ice - observation occur patient's ICE defined data_ice dataset supplied draws(). strategy - imputation strategy assigned subject. design implementation function largely based upon functionality implemented called \"five macros\" James Roger. See Roger (2021).","code":"v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 1 2 3 # lagged scaling starting from the first visit after the subjects ICE -------------- 0 6 14 24 # delta * lagged scaling -------------- 0 6 20 44 # accumulative sum of delta to be applied to each visit v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 0 1 2 # lagged scaling starting from the first visit after the subjects ICE -------------- 0 0 7 16 # delta * lagged scaling -------------- 0 0 7 23 # accumulative sum of delta to be applied to each visit v1 v2 v3 v4 -------------- 5 5 5 5 # delta assigned to each visit 1 0 0 0 # lagged scaling starting from the first visit after the subjects ICE -------------- 5 0 0 0 # delta * lagged scaling -------------- 5 5 5 5 # accumulative sum of delta to be applied to each visit v1 v2 v3 v4 -------------- 0 4 1 3 # delta assigned to each visit 0 0 3 3 # lagged scaling starting from the first visit after the subjects ICE -------------- 0 0 3 9 # delta * lagged scaling -------------- 0 0 3 12 # accumulative sum of delta to be applied to each visit"},{"path":"/reference/delta_template.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Create a delta data.frame template — delta_template","text":"Roger, James. Reference-based mi via multivariate normal rm (“five macros” miwithd), 2021. 
URL https://www.lshtm.ac.uk/research/centres-projects-groups/missing-data#dia-missing-data.","code":""},{"path":[]},{"path":"/reference/delta_template.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a delta data.frame template — delta_template","text":"","code":"if (FALSE) { # \\dontrun{ delta_template(imputeObj) delta_template(imputeObj, delta = c(5,6,7,8), dlag = c(1,2,3,4)) } # }"},{"path":"/reference/draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit the base imputation model and get parameter estimates — draws","title":"Fit the base imputation model and get parameter estimates — draws","text":"draws fits base imputation model observed outcome data according given multiple imputation methodology. According user's method specification, returns either draws posterior distribution model parameters required Bayesian multiple imputation frequentist parameter estimates original data bootstrapped leave-one-datasets required conditional mean imputation. purpose imputation model estimate model parameters absence intercurrent events (ICEs) handled using reference-based imputation methods. reason, observed outcome data ICEs, reference-based imputation methods specified, removed considered missing purpose estimating imputation model, purpose . imputation model mixed model repeated measures (MMRM) valid missing--random (MAR) assumption. can fit using maximum likelihood (ML) restricted ML (REML) estimation, Bayesian approach, approximate Bayesian approach according user's method specification. ML/REML approaches approximate Bayesian approach support several possible covariance structures, Bayesian approach based MCMC sampling supports unstructured covariance structure. 
case covariance matrix can assumed different across group.","code":""},{"path":"/reference/draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit the base imputation model and get parameter estimates — draws","text":"","code":"draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'approxbayes' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'condmean' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'bmlmi' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'bayes' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE)"},{"path":"/reference/draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit the base imputation model and get parameter estimates — draws","text":"data data.frame containing data used model. See details. data_ice data.frame specifies information related ICEs imputation strategies. See details. vars vars object generated set_vars(). See details. method method object generated either method_bayes(), method_approxbayes(), method_condmean() method_bmlmi(). specifies multiple imputation methodology used. See details. ncores single numeric specifying number cores use creating draws object. Note parameter ignored method_bayes() (Default = 1). Can also cluster object generated make_rbmi_cluster() quiet Logical, TRUE suppress printing progress information printed console.","code":""},{"path":"/reference/draws.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit the base imputation model and get parameter estimates — draws","text":"draws object named list containing following: data: R6 longdata object containing relevant input data information. method: method object generated either method_bayes(), method_approxbayes() method_condmean(). 
samples: list containing estimated parameters interest. element samples named list containing following: ids: vector characters containing ids subjects included original dataset. beta: numeric vector estimated regression coefficients. sigma: list estimated covariance matrices (one level vars$group). theta: numeric vector transformed covariances. failed: Logical. TRUE model fit failed. ids_samp: vector characters containing ids subjects included given sample. fit: method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL. n_failures: absolute number failures model fit. Relevant method_condmean(type = \"bootstrap\"), method_approxbayes() method_bmlmi(). formula: fixed effects formula object used model specification.","code":""},{"path":"/reference/draws.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Fit the base imputation model and get parameter estimates — draws","text":"draws performs first step multiple imputation (MI) procedure: fitting base imputation model. goal estimate parameters interest needed imputation phase (.e. regression coefficients covariance matrices MMRM model). function distinguishes following methods: Bayesian MI based MCMC sampling: draws returns draws posterior distribution parameters using Bayesian approach based MCMC sampling. method can specified using method = method_bayes(). Approximate Bayesian MI based bootstrapping: draws returns draws posterior distribution parameters using approximate Bayesian approach, sampling posterior distribution simulated fitting MMRM model bootstrap samples original dataset. method can specified using method = method_approxbayes(). Conditional mean imputation bootstrap re-sampling: draws returns MMRM parameter estimates original dataset n_samples bootstrap samples. method can specified using method = method_condmean() argument type = \"bootstrap\". 
Conditional mean imputation jackknife re-sampling: draws returns MMRM parameter estimates original dataset leave-one-subject-sample. method can specified using method = method_condmean() argument type = \"jackknife\". Bootstrapped Maximum Likelihood MI: draws returns MMRM parameter estimates given number bootstrap samples needed perform random imputations bootstrapped samples. method can specified using method = method_bmlmi(). Bayesian MI based MCMC sampling proposed Carpenter, Roger, Kenward (2013) first introduced reference-based imputation methods. Approximate Bayesian MI discussed Little Rubin (2002). Conditional mean imputation methods discussed Wolbers et al (2022). Bootstrapped Maximum Likelihood MI described Von Hippel & Bartlett (2021). argument data contains longitudinal data. must least following variables: subjid: factor vector containing subject ids. visit: factor vector containing visit outcome observed . group: factor vector containing group subject belongs . outcome: numeric vector containing outcome variable. might contain missing values. Additional baseline time-varying covariates must included data. data must one row per visit per subject. means incomplete outcome data must set NA instead related row missing. Missing values covariates allowed. data incomplete expand_locf() helper function can used insert missing rows using Last Observation Carried Forward (LOCF) imputation impute covariates values. Note LOCF generally principled imputation method used appropriate specific covariate. Please note special provisioning baseline outcome values. want baseline observations included model part response variable removed advance outcome variable data. time want include baseline outcome covariate model, included separate column data (covariate). Character covariates explicitly cast factors. 
use custom analysis function requires specific reference levels character covariates (example computation least square means computation) advised manually cast character covariates factor advance running draws(). argument data_ice contains information occurrence ICEs. data.frame 3 columns: Subject ID: character vector containing ids subjects experienced ICE. column must named specified vars$subjid. Visit: character vector containing first visit occurrence ICE (.e. first visit affected ICE). visits must equal one levels data[[vars$visit]]. multiple ICEs happen subject, first non-MAR visit used. column must named specified vars$visit. Strategy: character vector specifying imputation strategy address ICE subject. column must named specified vars$strategy. Possible imputation strategies : \"MAR\": Missing Random. \"CIR\": Copy Increments Reference. \"CR\": Copy Reference. \"JR\": Jump Reference. \"LMCF\": Last Mean Carried Forward. explanations imputation strategies, see Carpenter, Roger, Kenward (2013), Cro et al (2021), Wolbers et al (2022). Please note user-defined imputation strategies can also set. data_ice argument necessary stage since (explained Wolbers et al (2022)), model fitted removing observations incompatible imputation model, .e. observed data data_ice[[vars$visit]] addressed imputation strategy different MAR excluded model fit. However observations discarded data imputation phase (performed function (impute()). summarize, stage pre-ICE data post-ICE data ICEs MAR imputation specified used. data_ice argument omitted, subject record within data_ice, assumed relevant subject's data pre-ICE missing visits imputed MAR assumption observed data used fit base imputation model. Please note ICE visit updated via update_strategy argument impute(); means subjects record data_ice always missing data imputed MAR assumption even strategy updated. vars argument named list specifies names key variables within data data_ice. 
list created set_vars() contains following named elements: subjid: name column data data_ice contains subject ids variable. visit: name column data data_ice contains visit variable. group: name column data contains group variable. outcome: name column data contains outcome variable. covariates: vector characters contains covariates included model (including interactions specified \"covariateName1*covariateName2\"). covariates provided default model specification outcome ~ 1 + visit + group used. Please note group*visit interaction included model default. strata: covariates used stratification variables bootstrap sampling. default vars$group set stratification variable. Needed method_condmean(type = \"bootstrap\") method_approxbayes(). strategy: name column data_ice contains subject-specific imputation strategy. experience, Bayesian MI (method = method_bayes()) relatively low number samples (e.g. n_samples 100) frequently triggers STAN warnings R-hat \"largest R-hat X.XX, indicating chains mixed\". many instances, warning might spurious, .e. standard diagnostics analysis MCMC samples indicate issues results look reasonable. Increasing number samples e.g. 150 usually gets rid warning.","code":""},{"path":"/reference/draws.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Fit the base imputation model and get parameter estimates — draws","text":"James R Carpenter, James H Roger, Michael G Kenward. Analysis longitudinal trials protocol deviation: framework relevant, accessible assumptions, inference via multiple imputation. Journal Biopharmaceutical Statistics, 23(6):1352–1371, 2013. Suzie Cro, Tim P Morris, Michael G Kenward, James R Carpenter. Sensitivity analysis clinical trials missing continuous outcome data using controlled multiple imputation: practical guide. Statistics Medicine, 39(21):2815–2842, 2020. Roderick J. . Little Donald B. Rubin. Statistical Analysis Missing Data, Second Edition. 
John Wiley & Sons, Hoboken, New Jersey, 2002. [Section 10.2.3] Marcel Wolbers, Alessandro Noci, Paul Delmar, Craig Gower-Page, Sean Yiu, and Jonathan W. Bartlett. Standard and reference-based conditional mean imputation. https://arxiv.org/abs/2109.11162, 2022. Von Hippel, Paul T and Bartlett, Jonathan W. Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws. 2021.","code":""},{"path":[]},{"path":"/reference/ensure_rstan.html","id":null,"dir":"Reference","previous_headings":"","what":"Ensure rstan exists — ensure_rstan","title":"Ensure rstan exists — ensure_rstan","text":"Checks to see if rstan exists and if not throws a helpful error message","code":""},{"path":"/reference/ensure_rstan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Ensure rstan exists — ensure_rstan","text":"","code":"ensure_rstan()"},{"path":"/reference/eval_mmrm.html","id":null,"dir":"Reference","previous_headings":"","what":"Evaluate a call to mmrm — eval_mmrm","title":"Evaluate a call to mmrm — eval_mmrm","text":"This is a utility function that attempts to evaluate a call to mmrm while managing any warnings or errors that are thrown. In particular this function attempts to catch any warnings or errors and instead of surfacing them it simply adds an additional element failed with a value of TRUE. This allows multiple calls to be made without the program exiting.","code":""},{"path":"/reference/eval_mmrm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Evaluate a call to mmrm — eval_mmrm","text":"","code":"eval_mmrm(expr)"},{"path":"/reference/eval_mmrm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Evaluate a call to mmrm — eval_mmrm","text":"expr An expression to be evaluated. Should be a call to mmrm::mmrm().","code":""},{"path":"/reference/eval_mmrm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Evaluate a call to mmrm — eval_mmrm","text":"This function was originally developed for use with glmmTMB, which needed more hand-holding and dropping of false-positive warnings.
It is not as important now but is kept around in case we need to catch false-positive warnings in the future.","code":""},{"path":[]},{"path":"/reference/eval_mmrm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Evaluate a call to mmrm — eval_mmrm","text":"","code":"if (FALSE) { # \\dontrun{ eval_mmrm({ mmrm::mmrm(formula, data) }) } # }"},{"path":"/reference/expand.html","id":null,"dir":"Reference","previous_headings":"","what":"Expand and fill in missing data.frame rows — expand","title":"Expand and fill in missing data.frame rows — expand","text":"These functions are essentially wrappers around base::expand.grid() to ensure that missing combinations of data are inserted into a data.frame, with imputation/fill methods for updating the covariate values of the newly created rows.","code":""},{"path":"/reference/expand.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Expand and fill in missing data.frame rows — expand","text":"","code":"expand(data, ...) fill_locf(data, vars, group = NULL, order = NULL) expand_locf(data, ..., vars, group, order)"},{"path":"/reference/expand.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Expand and fill in missing data.frame rows — expand","text":"data dataset to expand or fill in. ... variables and their levels to expand out (note that duplicate entries of levels will result in multiple rows for that level). vars character vector containing the names of variables that need to be filled in. group character vector containing the names of variables to group by when performing the LOCF imputation of var.
order character vector containing the names of additional variables to sort the data.frame by before performing LOCF.","code":""},{"path":"/reference/expand.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Expand and fill in missing data.frame rows — expand","text":"The draws() function makes the assumption that all subjects and visits are present in the data.frame and that all covariate values are non missing; expand(), fill_locf() and expand_locf() are utility functions to support users in ensuring that their data.frame's conform to these assumptions. expand() takes vectors for the expected levels in a data.frame and expands out all combinations, inserting any missing rows into the data.frame. Note that all \"expanded\" variables are cast as factors. fill_locf() applies LOCF imputation to the named covariates to fill in any NAs created by the insertion of new rows by expand() (though note that no distinction is made between existing NAs and newly created NAs). Note that the data.frame is sorted by c(group, order) before performing the LOCF imputation; the data.frame will be returned in its original sort order however. expand_locf() is a simple composition function of fill_locf() and expand(), i.e. fill_locf(expand(...)).","code":""},{"path":"/reference/expand.html","id":"missing-first-values","dir":"Reference","previous_headings":"","what":"Missing First Values","title":"Expand and fill in missing data.frame rows — expand","text":"The fill_locf() function performs last observation carried forward imputation. A natural consequence of this is that it is unable to impute missing observations if the observation is the first value for a given subject / grouping. These values are deliberately not imputed as doing so risks silent errors in the case of time varying covariates.
One solution is to first use expand_locf() on just the visit variable and time varying covariates and then merge on the baseline covariates afterwards, i.e.","code":"library(dplyr) dat_expanded <- expand( data = dat, subject = c(\"pt1\", \"pt2\", \"pt3\", \"pt4\"), visit = c(\"vis1\", \"vis2\", \"vis3\") ) dat_filled <- dat_expanded %>% left_join(baseline_covariates, by = \"subject\")"},{"path":"/reference/expand.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Expand and fill in missing data.frame rows — expand","text":"","code":"if (FALSE) { # \\dontrun{ dat_expanded <- expand( data = dat, subject = c(\"pt1\", \"pt2\", \"pt3\", \"pt4\"), visit = c(\"vis1\", \"vis2\", \"vis3\") ) dat_filled <- fill_locf( data = dat_expanded, vars = c(\"Sex\", \"Age\"), group = \"subject\", order = \"visit\" ) ## Or dat_filled <- expand_locf( data = dat, subject = c(\"pt1\", \"pt2\", \"pt3\", \"pt4\"), visit = c(\"vis1\", \"vis2\", \"vis3\"), vars = c(\"Sex\", \"Age\"), group = \"subject\", order = \"visit\" ) } # }"},{"path":"/reference/extract_covariates.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract Variables from string vector — extract_covariates","title":"Extract Variables from string vector — extract_covariates","text":"Takes a string, potentially including model terms like * and :, and extracts out the individual variables","code":""},{"path":"/reference/extract_covariates.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract Variables from string vector — extract_covariates","text":"","code":"extract_covariates(x)"},{"path":"/reference/extract_covariates.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract Variables from string vector — extract_covariates","text":"x string of variable names potentially including interaction terms","code":""},{"path":"/reference/extract_covariates.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Extract
Variables from string vector — extract_covariates","text":"i.e. c(\"v1\", \"v2\", \"v2*v3\", \"v1:v2\") becomes c(\"v1\", \"v2\", \"v3\")","code":""},{"path":"/reference/extract_data_nmar_as_na.html","id":null,"dir":"Reference","previous_headings":"","what":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"Set to NA those outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy)","code":""},{"path":"/reference/extract_data_nmar_as_na.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"","code":"extract_data_nmar_as_na(longdata)"},{"path":"/reference/extract_data_nmar_as_na.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"longdata R6 longdata object containing all relevant input data information.","code":""},{"path":"/reference/extract_data_nmar_as_na.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Set to NA outcome values that would be MNAR if they were missing (i.e.
which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"A data.frame containing longdata$get_data(longdata$ids), but with the MNAR outcome values set to NA.","code":""},{"path":"/reference/extract_draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract draws from a stanfit object — extract_draws","title":"Extract draws from a stanfit object — extract_draws","text":"Extract draws from a stanfit object and convert them into lists. The function rstan::extract() returns the draws for a given parameter as an array. This function calls rstan::extract() to extract the draws from a stanfit object and then converts the arrays into lists.","code":""},{"path":"/reference/extract_draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract draws from a stanfit object — extract_draws","text":"","code":"extract_draws(stan_fit)"},{"path":"/reference/extract_draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract draws from a stanfit object — extract_draws","text":"stan_fit A stanfit object.","code":""},{"path":"/reference/extract_draws.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract draws from a stanfit object — extract_draws","text":"A named list of length 2 containing: beta: a list of length equal to the number of draws, containing the draws from the posterior distribution of the regression coefficients. sigma: a list of length equal to the number of draws, containing the draws from the posterior distribution of the covariance matrices. Each element of this list is itself a list, of length equal to 1 if same_cov = TRUE or equal to the number of groups if same_cov = FALSE.","code":""},{"path":"/reference/extract_imputed_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract imputed dataset — extract_imputed_df","title":"Extract imputed dataset — extract_imputed_df","text":"Takes an imputation object generated by imputation_df() and uses it to extract a completed dataset from a longdata object created by longDataConstructor(). Also applies a delta transformation if a data.frame is provided to the delta argument.
See analyse() for details on the structure of this data.frame. Subject IDs in the returned data.frame are scrambled, i.e. they are not the original values.","code":""},{"path":"/reference/extract_imputed_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract imputed dataset — extract_imputed_df","text":"","code":"extract_imputed_df(imputation, ld, delta = NULL, idmap = FALSE)"},{"path":"/reference/extract_imputed_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract imputed dataset — extract_imputed_df","text":"imputation An imputation object generated by imputation_df(). ld A longdata object generated by longDataConstructor(). delta Either NULL or a data.frame. Is used to offset outcome values in the imputed dataset. idmap Logical. If TRUE an attribute called \"idmap\" is attached to the return object which contains a list mapping the old subject ids to the new subject ids.","code":""},{"path":"/reference/extract_imputed_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract imputed dataset — extract_imputed_df","text":"A data.frame.","code":""},{"path":"/reference/extract_imputed_dfs.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract imputed datasets — extract_imputed_dfs","title":"Extract imputed datasets — extract_imputed_dfs","text":"Extracts the imputed datasets contained within an imputations object generated by impute().","code":""},{"path":"/reference/extract_imputed_dfs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract imputed datasets — extract_imputed_dfs","text":"","code":"extract_imputed_dfs( imputations, index = seq_along(imputations$imputations), delta = NULL, idmap = FALSE )"},{"path":"/reference/extract_imputed_dfs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract imputed datasets — extract_imputed_dfs","text":"imputations An imputations object created by impute(). index The indexes of the imputed datasets to return.
By default, all of the datasets within the imputations object will be returned. delta A data.frame containing the delta transformation to be applied to the imputed datasets. See analyse() for details on the format and specification of this data.frame. idmap Logical. The subject IDs in the imputed data.frame's are replaced with new IDs to ensure that they are unique. Setting this argument to TRUE attaches an attribute, called idmap, to the returned data.frame's providing a map from the new subject IDs to the old subject IDs.","code":""},{"path":"/reference/extract_imputed_dfs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract imputed datasets — extract_imputed_dfs","text":"A list of data.frames equal in length to the index argument.","code":""},{"path":[]},{"path":"/reference/extract_imputed_dfs.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Extract imputed datasets — extract_imputed_dfs","text":"","code":"if (FALSE) { # \\dontrun{ extract_imputed_dfs(imputeObj) extract_imputed_dfs(imputeObj, c(1:3)) } # }"},{"path":"/reference/extract_params.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract parameters from a MMRM model — extract_params","title":"Extract parameters from a MMRM model — extract_params","text":"Extracts the beta and sigma coefficients from an MMRM model created by mmrm::mmrm().","code":""},{"path":"/reference/extract_params.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract parameters from a MMRM model — extract_params","text":"","code":"extract_params(fit)"},{"path":"/reference/extract_params.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract parameters from a MMRM model — extract_params","text":"fit An object created by mmrm::mmrm()","code":""},{"path":"/reference/fit_mcmc.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit the base imputation model using a Bayesian approach — fit_mcmc","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"fit_mcmc() fits the base imputation model using
a Bayesian approach. This is done through an MCMC method that is implemented in Stan and run using the function rstan::sampling(). The function returns the draws from the posterior distribution of the model parameters and the stanfit object. Additionally it performs multiple diagnostics checks of the chain and returns warnings in case of any detected issues.","code":""},{"path":"/reference/fit_mcmc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"","code":"fit_mcmc(designmat, outcome, group, subjid, visit, method, quiet = FALSE)"},{"path":"/reference/fit_mcmc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"designmat The design matrix of the fixed effects. outcome The response variable. Must be numeric. group Character vector containing the group variable. subjid Character vector containing the subject IDs. visit Character vector containing the visit variable. method A method object generated by method_bayes(). quiet Specify whether the Stan sampling log should be printed to the console.","code":""},{"path":"/reference/fit_mcmc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"A named list composed of the following: samples: a named list containing the draws for each parameter. It corresponds to the output of extract_draws(). fit: the stanfit object.","code":""},{"path":"/reference/fit_mcmc.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"The Bayesian model assumes a multivariate normal likelihood function and weakly-informative priors for the model parameters: in particular, uniform priors are assumed for the regression coefficients and inverse-Wishart priors for the covariance matrices. The chain is initialized using the REML parameter estimates from MMRM as starting values. The function performs the following steps: Fit MMRM using a REML approach.
Prepare the input data for the MCMC fit as described in the data{} block of the Stan file. See prepare_stan_data() for details. Run the MCMC according to the input arguments, using as starting values the REML parameter estimates obtained at point 1. Perform diagnostics checks of the MCMC. See check_mcmc() for details. Extract the draws from the model fit. The chains perform method$n_samples draws by keeping one every method$burn_between iterations. Additionally the first method$burn_in iterations are discarded. The total number of iterations is therefore method$burn_in + method$burn_between*method$n_samples. The purpose of method$burn_in is to ensure that the samples are drawn from the stationary distribution of the Markov Chain. method$burn_between aims to keep the draws uncorrelated from each other.","code":""},{"path":"/reference/fit_mmrm.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit a MMRM model — fit_mmrm","title":"Fit a MMRM model — fit_mmrm","text":"Fits an MMRM model allowing for different covariance structures using mmrm::mmrm(). Returns a list of key model parameters beta, sigma and an additional element failed indicating whether or not the fit failed to converge. If the fit did fail to converge beta and sigma will not be present.","code":""},{"path":"/reference/fit_mmrm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit a MMRM model — fit_mmrm","text":"","code":"fit_mmrm( designmat, outcome, subjid, visit, group, cov_struct = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), REML = TRUE, same_cov = TRUE )"},{"path":"/reference/fit_mmrm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit a MMRM model — fit_mmrm","text":"designmat a data.frame or matrix containing the covariates to use in the MMRM model. Dummy variables must already be expanded out, i.e. via stats::model.matrix(). Cannot contain any missing values. outcome a numeric vector. The outcome value to be regressed on in the MMRM model. subjid a character / factor vector. The subject identifier used to link separate visits that belong to the same subject. visit a character / factor vector. Indicates which visit the outcome value occurred on. group a character / factor vector.
Indicates which treatment group the patient belongs to. cov_struct a character value. Specifies which covariance structure to use. Must be one of \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\" or \"toeph\". REML logical. Specifies whether restricted maximum likelihood should be used. same_cov logical. Used to specify if a shared or individual covariance matrix should be used per group.","code":""},{"path":"/reference/generate_data_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate data for a single group — generate_data_single","title":"Generate data for a single group — generate_data_single","text":"Generate data for a single group","code":""},{"path":"/reference/generate_data_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate data for a single group — generate_data_single","text":"","code":"generate_data_single(pars_group, strategy_fun = NULL, distr_pars_ref = NULL)"},{"path":"/reference/generate_data_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate data for a single group — generate_data_single","text":"pars_group A simul_pars object as generated by set_simul_pars(). It specifies the simulation parameters of the given group. strategy_fun Function implementing the trajectories after the intercurrent event (ICE). Must be one of getStrategies(). See getStrategies() for details. If NULL then post-ICE outcomes are untouched. distr_pars_ref Optional. Named list containing the simulation parameters of the reference arm. It contains the following elements: mu: Numeric vector indicating the mean outcome trajectory assuming no ICEs. It should include the outcome at baseline. sigma: Covariance matrix of the outcome trajectory assuming no ICEs. If NULL, then these parameters are inherited from pars_group.","code":""},{"path":"/reference/generate_data_single.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate data for a single group — generate_data_single","text":"A data.frame containing the simulated data.
It includes the following variables: id: Factor variable that specifies the id of each subject. visit: Factor variable that specifies the visit of each assessment. Visit 0 denotes the baseline visit. group: Factor variable that specifies which treatment group each subject belongs to. outcome_bl: Numeric variable that specifies the baseline outcome. outcome_noICE: Numeric variable that specifies the longitudinal outcome assuming no ICEs. ind_ice1: Binary variable that takes value 1 if the corresponding visit is affected by ICE1 and 0 otherwise. dropout_ice1: Binary variable that takes value 1 if the corresponding visit is affected by the drop-out following ICE1 and 0 otherwise. ind_ice2: Binary variable that takes value 1 if the corresponding visit is affected by ICE2. outcome: Numeric variable that specifies the longitudinal outcome including ICE1, ICE2 and the intermittent missing values.","code":""},{"path":[]},{"path":"/reference/getStrategies.html","id":null,"dir":"Reference","previous_headings":"","what":"Get imputation strategies — getStrategies","title":"Get imputation strategies — getStrategies","text":"Returns a list defining the imputation strategies to be used to create the multivariate normal distribution parameters by merging those of the source group and reference group per patient.","code":""},{"path":"/reference/getStrategies.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get imputation strategies — getStrategies","text":"","code":"getStrategies(...)"},{"path":"/reference/getStrategies.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get imputation strategies — getStrategies","text":"... User defined methods to be added to the return list. Input must be a function.","code":""},{"path":"/reference/getStrategies.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Get imputation strategies — getStrategies","text":"By default Jump to Reference (JR), Copy Reference (CR), Copy Increments in Reference (CIR), Last Mean Carried Forward (LMCF) and Missing as Random (MAR) are defined. The user can define their own strategy functions (or overwrite the pre-defined ones) by specifying a named input to the function, i.e.
NEW = function(...) .... The only exception to this is MAR, which cannot be overwritten. All user defined functions must take 3 inputs: pars_group, pars_ref and index_mar. pars_group and pars_ref are both lists with elements mu and sigma representing the multivariate normal distribution parameters for the subject's current group and reference group respectively. index_mar is a logical vector specifying which visits the subject met the MAR assumption at. The function must return a list with elements mu and sigma. See the implementation of strategy_JR() for an example.","code":""},{"path":"/reference/getStrategies.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get imputation strategies — getStrategies","text":"","code":"if (FALSE) { # \\dontrun{ getStrategies() getStrategies( NEW = function(pars_group, pars_ref, index_mar) code , JR = function(pars_group, pars_ref, index_mar) more_code ) } # }"},{"path":"/reference/get_ESS.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"Extract the Effective Sample Size (ESS) from a stanfit object","code":""},{"path":"/reference/get_ESS.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"","code":"get_ESS(stan_fit)"},{"path":"/reference/get_ESS.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"stan_fit A stanfit object.","code":""},{"path":"/reference/get_ESS.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"A named vector containing the ESS for each parameter of the model.","code":""},{"path":"/reference/get_bootstrap_stack.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a stack object populated with
bootstrapped samples — get_bootstrap_stack","title":"Creates a stack object populated with bootstrapped samples — get_bootstrap_stack","text":"Function creates a Stack() object and populates the stack with bootstrap samples based upon method$n_samples","code":""},{"path":"/reference/get_bootstrap_stack.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a stack object populated with bootstrapped samples — get_bootstrap_stack","text":"","code":"get_bootstrap_stack(longdata, method, stack = Stack$new())"},{"path":"/reference/get_bootstrap_stack.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a stack object populated with bootstrapped samples — get_bootstrap_stack","text":"longdata A longDataConstructor() object. method A method object. stack A Stack() object (exposed for unit testing purposes)","code":""},{"path":"/reference/get_conditional_parameters.html","id":null,"dir":"Reference","previous_headings":"","what":"Derive conditional multivariate normal parameters — get_conditional_parameters","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"Takes the parameters of a multivariate normal distribution and some observed values to calculate the conditional distribution of the unobserved values.","code":""},{"path":"/reference/get_conditional_parameters.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"","code":"get_conditional_parameters(pars, values)"},{"path":"/reference/get_conditional_parameters.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"pars A list with elements mu and sigma defining the mean vector and covariance matrix respectively. values A vector of observed values to condition on, must be the same length as pars$mu.
Missing values must be represented by an NA.","code":""},{"path":"/reference/get_conditional_parameters.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"A list of the conditional distribution parameters: mu - the conditional mean vector. sigma - the conditional covariance matrix.","code":""},{"path":"/reference/get_delta_template.html","id":null,"dir":"Reference","previous_headings":"","what":"Get delta utility variables — get_delta_template","title":"Get delta utility variables — get_delta_template","text":"This function creates the default delta template (1 row per subject per visit) and extracts the utility information that users need to define their own logic for defining delta. See delta_template() for full details.","code":""},{"path":"/reference/get_delta_template.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get delta utility variables — get_delta_template","text":"","code":"get_delta_template(imputations)"},{"path":"/reference/get_delta_template.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get delta utility variables — get_delta_template","text":"imputations An imputations object created by impute().","code":""},{"path":"/reference/get_draws_mle.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit the base imputation model on bootstrap samples — get_draws_mle","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"Fit the base imputation model using a ML/REML approach on a given number of bootstrap samples as specified by method$n_samples.
Returns the parameter estimates from each model fit.","code":""},{"path":"/reference/get_draws_mle.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"","code":"get_draws_mle( longdata, method, sample_stack, n_target_samples, first_sample_orig, use_samp_ids, failure_limit = 0, ncores = 1, quiet = FALSE )"},{"path":"/reference/get_draws_mle.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"longdata R6 longdata object containing all relevant input data information. method A method object generated by either method_approxbayes() or method_condmean() with argument type = \"bootstrap\". sample_stack A stack object containing the subject ids to be used in each mmrm iteration. n_target_samples Number of samples needed to be created. first_sample_orig Logical. If TRUE the function returns method$n_samples + 1 samples where the first sample contains the parameter estimates from the original dataset and method$n_samples samples contain the parameter estimates from bootstrap samples. If FALSE the function returns method$n_samples samples containing the parameter estimates from bootstrap samples. use_samp_ids Logical. If TRUE, the sampled subject ids are returned. Otherwise the subject ids from the original dataset are returned. These values are used to tell impute() which subjects should be used to derive the imputed dataset. failure_limit Number of failed samples that are allowed before throwing an error. ncores Number of processes to parallelise the job over. quiet Logical, if TRUE suppresses the printing of progress information to the console.","code":""},{"path":"/reference/get_draws_mle.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"A draws object which is a named list containing the following: data: R6 longdata object containing all relevant input data information. method: A method object generated by either method_bayes(), method_approxbayes() or method_condmean().
samples: a list containing the estimated parameters of interest. Each element of samples is a named list containing the following: ids: vector of characters containing the ids of the subjects included in the original dataset. beta: numeric vector of estimated regression coefficients. sigma: list of estimated covariance matrices (one for each level of vars$group). theta: numeric vector of transformed covariances. failed: Logical. TRUE if the model fit failed. ids_samp: vector of characters containing the ids of the subjects included in the given sample. fit: if method_bayes() is chosen, this returns the MCMC Stan fit object. Otherwise NULL. n_failures: absolute number of failures of the model fit. Relevant only for method_condmean(type = \"bootstrap\"), method_approxbayes() and method_bmlmi(). formula: fixed effects formula object used for the model specification.","code":""},{"path":"/reference/get_draws_mle.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"This function takes a Stack object which contains multiple lists of patient ids. The function takes the Stack and pulls a set of ids, then constructs a dataset consisting of just these patients (i.e. potentially a bootstrap or a jackknife sample). The function then fits an MMRM model to this dataset to create a sample object. The function repeats this process until n_target_samples have been reached. If more than failure_limit samples fail to converge then the function throws an error.
After reaching the desired number of samples the function generates and returns a draws object.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":null,"dir":"Reference","previous_headings":"","what":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"Compute the pooled point estimates, standard error and degrees of freedom according to the Von Hippel and Bartlett formula for Bootstrapped Maximum Likelihood Multiple Imputation (BMLMI).","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"","code":"get_ests_bmlmi(ests, D)"},{"path":"/reference/get_ests_bmlmi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"ests numeric vector containing the estimates from the analysis of the imputed datasets. D numeric representing the number of imputations between each bootstrap sample in the BMLMI method.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"A list containing the point estimate, standard error and degrees of freedom.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"ests must be provided in the following order: the first D elements are related to analyses from random imputation of one bootstrap sample. The second set of D elements (i.e. D+1 to 2*D) are related to the second bootstrap sample and so on.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"Von Hippel, Paul T and Bartlett, Jonathan W.
Maximum likelihood multiple imputation: Faster imputations consistent standard errors without posterior draws. 2021","code":""},{"path":"/reference/get_example_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate a realistic example dataset — get_example_data","title":"Simulate a realistic example dataset — get_example_data","text":"Simulate realistic example dataset using simulate_data() hard-coded values input arguments.","code":""},{"path":"/reference/get_example_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate a realistic example dataset — get_example_data","text":"","code":"get_example_data()"},{"path":"/reference/get_example_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate a realistic example dataset — get_example_data","text":"get_example_data() simulates 1:1 randomized trial active drug (intervention) versus placebo (control) 100 subjects per group 6 post-baseline assessments (bi-monthly visits 12 months). One intercurrent event corresponding treatment discontinuation also simulated. Specifically, data simulated following assumptions: mean outcome trajectory placebo group increases linearly 50 baseline (visit 0) 60 visit 6, .e. slope 10 points/year. mean outcome trajectory intervention group identical placebo group visit 2. visit 2 onward, slope decreases 50% 5 points/year. covariance structure baseline follow-values groups implied random intercept slope model standard deviation 5 intercept slope, correlation 0.25. addition, independent residual error standard deviation 2.5 added assessment. probability study drug discontinuation visit calculated according logistic model depends observed outcome visit. Specifically, visit-wise discontinuation probability 2% 3% control intervention group, respectively, specified case observed outcome equal 50 (mean value baseline). 
odds discontinuation simulated increase +10% +1 point increase observed outcome. Study drug discontinuation simulated effect mean trajectory placebo group. intervention group, subjects discontinue follow slope mean trajectory placebo group time point onward. compatible copy increments reference (CIR) assumption. Study drop-out study drug discontinuation visit occurs probability 50% leading missing outcome data time point onward.","code":""},{"path":[]},{"path":"/reference/get_jackknife_stack.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a stack object populated with jackknife samples — get_jackknife_stack","title":"Creates a stack object populated with jackknife samples — get_jackknife_stack","text":"Function creates Stack() object populated stack jackknife samples based upon","code":""},{"path":"/reference/get_jackknife_stack.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a stack object populated with jackknife samples — get_jackknife_stack","text":"","code":"get_jackknife_stack(longdata, method, stack = Stack$new())"},{"path":"/reference/get_jackknife_stack.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a stack object populated with jackknife samples — get_jackknife_stack","text":"longdata longDataConstructor() object method method object stack Stack() object (exposed unit testing purposes)","code":""},{"path":"/reference/get_mmrm_sample.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit MMRM and returns parameter estimates — get_mmrm_sample","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"get_mmrm_sample fits base imputation model using ML/REML approach. 
Returns parameter estimates fit.","code":""},{"path":"/reference/get_mmrm_sample.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"","code":"get_mmrm_sample(ids, longdata, method)"},{"path":"/reference/get_mmrm_sample.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"ids vector characters containing ids subjects. longdata R6 longdata object containing relevant input data information. method method object generated either method_approxbayes() method_condmean().","code":""},{"path":"/reference/get_mmrm_sample.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"named list class sample_single. contains following: ids vector characters containing ids subjects included original dataset. beta numeric vector estimated regression coefficients. sigma list estimated covariance matrices (one level vars$group). theta numeric vector transformed covariances. failed logical. TRUE model fit failed. 
ids_samp vector characters containing ids subjects included given sample.","code":""},{"path":"/reference/get_pattern_groups.html","id":null,"dir":"Reference","previous_headings":"","what":"Determine patients missingness group — get_pattern_groups","title":"Determine patients missingness group — get_pattern_groups","text":"Takes design matrix multiple rows per subject returns dataset 1 row per subject new column pgroup indicating group patient belongs (based upon missingness pattern treatment group)","code":""},{"path":"/reference/get_pattern_groups.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Determine patients missingness group — get_pattern_groups","text":"","code":"get_pattern_groups(ddat)"},{"path":"/reference/get_pattern_groups.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Determine patients missingness group — get_pattern_groups","text":"ddat data.frame columns subjid, visit, group, is_avail","code":""},{"path":"/reference/get_pattern_groups.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Determine patients missingness group — get_pattern_groups","text":"column is_avail must character numeric 0 1","code":""},{"path":"/reference/get_pattern_groups_unique.html","id":null,"dir":"Reference","previous_headings":"","what":"Get Pattern Summary — get_pattern_groups_unique","title":"Get Pattern Summary — get_pattern_groups_unique","text":"Takes dataset pattern information creates summary dataset just 1 row per pattern","code":""},{"path":"/reference/get_pattern_groups_unique.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get Pattern Summary — get_pattern_groups_unique","text":"","code":"get_pattern_groups_unique(patterns)"},{"path":"/reference/get_pattern_groups_unique.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get Pattern Summary — 
get_pattern_groups_unique","text":"patterns data.frame columns pgroup, pattern group","code":""},{"path":"/reference/get_pattern_groups_unique.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Get Pattern Summary — get_pattern_groups_unique","text":"column pgroup must numeric vector indicating pattern group patient belongs column pattern must character string 0's 1's. must identical rows within pgroup column group must character / numeric vector indicating covariance group observation belongs . must identical within pgroup","code":""},{"path":"/reference/get_pool_components.html","id":null,"dir":"Reference","previous_headings":"","what":"Expected Pool Components — get_pool_components","title":"Expected Pool Components — get_pool_components","text":"Returns elements expected contained analyse object depending analysis method specified.","code":""},{"path":"/reference/get_pool_components.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Expected Pool Components — get_pool_components","text":"","code":"get_pool_components(x)"},{"path":"/reference/get_pool_components.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Expected Pool Components — get_pool_components","text":"x Character name analysis method, must one either \"rubin\", \"jackknife\", \"bootstrap\" \"bmlmi\".","code":""},{"path":"/reference/get_session_hash.html","id":null,"dir":"Reference","previous_headings":"","what":"Get session hash — get_session_hash","title":"Get session hash — get_session_hash","text":"Gets unique string based current R version relevant packages.","code":""},{"path":"/reference/get_session_hash.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get session hash — get_session_hash","text":"","code":"get_session_hash()"},{"path":"/reference/get_stan_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Get Compiled 
Stan Object — get_stan_model","title":"Get Compiled Stan Object — get_stan_model","text":"Gets compiled Stan object can used rstan::sampling()","code":""},{"path":"/reference/get_stan_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get Compiled Stan Object — get_stan_model","text":"","code":"get_stan_model()"},{"path":"/reference/get_visit_distribution_parameters.html","id":null,"dir":"Reference","previous_headings":"","what":"Derive visit distribution parameters — get_visit_distribution_parameters","title":"Derive visit distribution parameters — get_visit_distribution_parameters","text":"Takes patient level data beta coefficients expands get patient specific estimate visit distribution parameters mu sigma. Returns values specific format expected downstream functions imputation process (namely list(list(mu = ..., sigma = ...), list(mu = ..., sigma = ...))).","code":""},{"path":"/reference/get_visit_distribution_parameters.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Derive visit distribution parameters — get_visit_distribution_parameters","text":"","code":"get_visit_distribution_parameters(dat, beta, sigma)"},{"path":"/reference/get_visit_distribution_parameters.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Derive visit distribution parameters — get_visit_distribution_parameters","text":"dat Patient level dataset, must 1 row per visit. Column order must order beta. number columns must match length beta beta List model beta coefficients. 1 element sample e.g. 3 samples models 4 beta coefficients argument form list( c(1,2,3,4) , c(5,6,7,8), c(9,10,11,12)). elements beta must length must length order dat. sigma List sigma. Must number entries beta.","code":""},{"path":"/reference/has_class.html","id":null,"dir":"Reference","previous_headings":"","what":"Does object have a class ? — has_class","title":"Does object have a class ? 
— has_class","text":"Utility function see object particular class. Useful know many classes object may .","code":""},{"path":"/reference/has_class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Does object have a class ? — has_class","text":"","code":"has_class(x, cls)"},{"path":"/reference/has_class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Does object have a class ? — has_class","text":"x object want check class . cls class want know .","code":""},{"path":"/reference/has_class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Does object have a class ? — has_class","text":"TRUE object class. FALSE object class.","code":""},{"path":"/reference/ife.html","id":null,"dir":"Reference","previous_headings":"","what":"if else — ife","title":"if else — ife","text":"wrapper around () else() prevent unexpected interactions ifelse() factor variables","code":""},{"path":"/reference/ife.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"if else — ife","text":"","code":"ife(x, a, b)"},{"path":"/reference/ife.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"if else — ife","text":"x True / False value return True b value return False","code":""},{"path":"/reference/ife.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"if else — ife","text":"default ifelse() convert factor variables numeric values often undesirable. 
convenience function avoids problem","code":""},{"path":"/reference/imputation_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a valid imputation_df object — imputation_df","title":"Create a valid imputation_df object — imputation_df","text":"Create valid imputation_df object","code":""},{"path":"/reference/imputation_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a valid imputation_df object — imputation_df","text":"","code":"imputation_df(...)"},{"path":"/reference/imputation_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a valid imputation_df object — imputation_df","text":"... list imputation_single.","code":""},{"path":"/reference/imputation_list_df.html","id":null,"dir":"Reference","previous_headings":"","what":"List of imputations_df — imputation_list_df","title":"List of imputations_df — imputation_list_df","text":"container multiple imputation_df's","code":""},{"path":"/reference/imputation_list_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"List of imputations_df — imputation_list_df","text":"","code":"imputation_list_df(...)"},{"path":"/reference/imputation_list_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"List of imputations_df — imputation_list_df","text":"... 
objects class imputation_df","code":""},{"path":"/reference/imputation_list_single.html","id":null,"dir":"Reference","previous_headings":"","what":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","title":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","text":"collection imputation_singles() grouped single subjid ID","code":""},{"path":"/reference/imputation_list_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","text":"","code":"imputation_list_single(imputations, D = 1)"},{"path":"/reference/imputation_list_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","text":"imputations list imputation_single() objects ordered repetitions grouped sequentially D number repetitions performed determines many columns imputation matrix constructor function create imputation_list_single object contains matrix imputation_single() objects grouped single id. matrix split D columns (.e. non-bmlmi methods always 1) id attribute determined extracting id attribute contributing imputation_single() objects. 
error throw multiple id detected","code":""},{"path":"/reference/imputation_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a valid imputation_single object — imputation_single","title":"Create a valid imputation_single object — imputation_single","text":"Create valid imputation_single object","code":""},{"path":"/reference/imputation_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a valid imputation_single object — imputation_single","text":"","code":"imputation_single(id, values)"},{"path":"/reference/imputation_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a valid imputation_single object — imputation_single","text":"id character string specifying subject id. values numeric vector indicating imputed values.","code":""},{"path":"/reference/impute.html","id":null,"dir":"Reference","previous_headings":"","what":"Create imputed datasets — impute","title":"Create imputed datasets — impute","text":"impute() creates imputed datasets based upon data options specified call draws(). One imputed dataset created per \"sample\" created draws().","code":""},{"path":"/reference/impute.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create imputed datasets — impute","text":"","code":"impute( draws, references = NULL, update_strategy = NULL, strategies = getStrategies() ) # S3 method for class 'random' impute( draws, references = NULL, update_strategy = NULL, strategies = getStrategies() ) # S3 method for class 'condmean' impute( draws, references = NULL, update_strategy = NULL, strategies = getStrategies() )"},{"path":"/reference/impute.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create imputed datasets — impute","text":"draws draws object created draws(). references named vector. Identifies references used reference-based imputation methods. 
form c(\"Group1\" = \"Reference1\", \"Group2\" = \"Reference2\"). NULL (default), references assumed form c(\"Group1\" = \"Group1\", \"Group2\" = \"Group2\"). argument NULL imputation strategy (defined data_ice[[vars$strategy]] call draws) MAR set. update_strategy optional data.frame. Updates imputation method originally set via data_ice option draws(). See details section information. strategies named list functions. Defines imputation functions used. names list mirror values specified strategy column data_ice. Default = getStrategies(). See getStrategies() details.","code":""},{"path":"/reference/impute.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create imputed datasets — impute","text":"impute() uses imputation model parameter estimates, generated draws(), first calculate marginal (multivariate normal) distribution subject's longitudinal outcome variable depending covariate values. subjects intercurrent events (ICEs) handled using non-MAR methods, marginal distribution updated depending time first visit affected ICE, chosen imputation strategy chosen reference group described Carpenter, Roger, Kenward (2013) . subject's imputation distribution used imputing missing values defined marginal distribution conditional observed outcome values. One dataset generated per set parameter estimates provided draws(). exact manner missing values imputed conditional imputation distribution depends method object provided draws(), particular: Bayes & Approximate Bayes: imputed dataset contains 1 row per subject & visit original dataset missing values imputed taking single random sample conditional imputation distribution. Conditional Mean: imputed dataset contains 1 row per subject & visit bootstrapped jackknife dataset used generate corresponding parameter estimates draws(). Missing values imputed using mean conditional imputation distribution. 
Please note first imputed dataset refers conditional mean imputation original dataset whereas subsequent imputed datasets refer conditional mean imputations bootstrap jackknife samples, respectively, original data. Bootstrapped Maximum Likelihood MI (BMLMI): performs D random imputations bootstrapped dataset used generate corresponding parameter estimates draws(). total number B*D imputed datasets provided, B number bootstrapped datasets. Missing values imputed taking random sample conditional imputation distribution. update_strategy argument can used update imputation strategy originally set via data_ice option draws(). avoids re-run draws() function changing imputation strategy certain circumstances (detailed ). data.frame provided update_strategy argument must contain two columns, one subject ID another imputation strategy, whose names defined vars argument specified call draws(). Please note argument allows update imputation strategy arguments time first visit affected ICE. key limitation functionality one can switch MAR non-MAR strategy (vice versa) subjects without observed post-ICE data. reason change affect whether post-ICE data included base imputation model (explained help draws()). example, subject ICE \"Visit 2\" observed/known values \"Visit 3\" function throw error one tries switch strategy MAR non-MAR strategy. contrast, switching non-MAR MAR strategy, whilst valid, raise warning usable data utilised imputation model.","code":""},{"path":"/reference/impute.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Create imputed datasets — impute","text":"James R Carpenter, James H Roger, Michael G Kenward. Analysis longitudinal trials protocol deviation: framework relevant, accessible assumptions, inference via multiple imputation. Journal Biopharmaceutical Statistics, 23(6):1352–1371, 2013. 
[Section 4.2 4.3]","code":""},{"path":"/reference/impute.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create imputed datasets — impute","text":"","code":"if (FALSE) { # \\dontrun{ impute( draws = drawobj, references = c(\"Trt\" = \"Placebo\", \"Placebo\" = \"Placebo\") ) new_strategy <- data.frame( subjid = c(\"Pt1\", \"Pt2\"), strategy = c(\"MAR\", \"JR\") ) impute( draws = drawobj, references = c(\"Trt\" = \"Placebo\", \"Placebo\" = \"Placebo\"), update_strategy = new_strategy ) } # }"},{"path":"/reference/impute_data_individual.html","id":null,"dir":"Reference","previous_headings":"","what":"Impute data for a single subject — impute_data_individual","title":"Impute data for a single subject — impute_data_individual","text":"function performs imputation single subject time implementing process detailed impute().","code":""},{"path":"/reference/impute_data_individual.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Impute data for a single subject — impute_data_individual","text":"","code":"impute_data_individual( id, index, beta, sigma, data, references, strategies, condmean, n_imputations = 1 )"},{"path":"/reference/impute_data_individual.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Impute data for a single subject — impute_data_individual","text":"id Character string identifying subject. index sample indexes subject belongs e.g c(1,1,1,2,2,4). beta list beta coefficients sample, .e. beta[[1]] set beta coefficients first sample. sigma list sigma coefficients sample split group .e. sigma[[1]][[\"\"]] give sigma coefficients group first sample. data longdata object created longDataConstructor() references named vector. Identifies references used generating imputed values. form c(\"Group\" = \"Reference\", \"Group\" = \"Reference\"). strategies named list functions. Defines imputation functions used. 
names list mirror values specified method column data_ice. Default = getStrategies(). See getStrategies() details. condmean Logical. TRUE impute using conditional mean values, FALSE impute taking random draw multivariate normal distribution. n_imputations condmean = FALSE numeric representing number random imputations performed sample. Default 1 (one random imputation per sample).","code":""},{"path":"/reference/impute_data_individual.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Impute data for a single subject — impute_data_individual","text":"Note function performs required imputations subject time. .e. subject included samples 1,3,5,9 imputations (using sample-dependent imputation model parameters) performed one step order avoid look subjects's covariates expanding design matrix multiple times (computationally expensive). function also supports subject belonging sample multiple times, .e. 1,1,2,3,5,5, typically occur bootstrapped datasets.","code":""},{"path":"/reference/impute_internal.html","id":null,"dir":"Reference","previous_headings":"","what":"Create imputed datasets — impute_internal","title":"Create imputed datasets — impute_internal","text":"work horse function implements functionality impute. See user level function impute() details.","code":""},{"path":"/reference/impute_internal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create imputed datasets — impute_internal","text":"","code":"impute_internal( draws, references = NULL, update_strategy, strategies, condmean )"},{"path":"/reference/impute_internal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create imputed datasets — impute_internal","text":"draws draws object created draws(). references named vector. Identifies references used reference-based imputation methods. form c(\"Group1\" = \"Reference1\", \"Group2\" = \"Reference2\"). 
NULL (default), references assumed form c(\"Group1\" = \"Group1\", \"Group2\" = \"Group2\"). argument NULL imputation strategy (defined data_ice[[vars$strategy]] call draws) MAR set. update_strategy optional data.frame. Updates imputation method originally set via data_ice option draws(). See details section information. strategies named list functions. Defines imputation functions used. names list mirror values specified strategy column data_ice. Default = getStrategies(). See getStrategies() details. condmean logical. TRUE impute using conditional mean values, FALSE impute taking random draw multivariate normal distribution.","code":""},{"path":"/reference/impute_outcome.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample outcome value — impute_outcome","title":"Sample outcome value — impute_outcome","text":"Draws random sample multivariate normal distribution.","code":""},{"path":"/reference/impute_outcome.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample outcome value — impute_outcome","text":"","code":"impute_outcome(conditional_parameters, n_imputations = 1, condmean = FALSE)"},{"path":"/reference/impute_outcome.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample outcome value — impute_outcome","text":"conditional_parameters list elements mu sigma contain mean vector covariance matrix sample . n_imputations numeric representing number random samples multivariate normal distribution performed. Default 1. condmean conditional mean imputation performed (opposed random sampling)","code":""},{"path":"/reference/invert.html","id":null,"dir":"Reference","previous_headings":"","what":"invert — invert","title":"invert — invert","text":"Utility function used replicate purrr::transpose. 
Turns list inside .","code":""},{"path":"/reference/invert.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"invert — invert","text":"","code":"invert(x)"},{"path":"/reference/invert.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"invert — invert","text":"x list","code":""},{"path":"/reference/invert_indexes.html","id":null,"dir":"Reference","previous_headings":"","what":"Invert and derive indexes — invert_indexes","title":"Invert and derive indexes — invert_indexes","text":"Takes list elements creates new list containing 1 entry per unique element value containing indexes original elements occurred .","code":""},{"path":"/reference/invert_indexes.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Invert and derive indexes — invert_indexes","text":"","code":"invert_indexes(x)"},{"path":"/reference/invert_indexes.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Invert and derive indexes — invert_indexes","text":"x list elements invert calculate index (see details).","code":""},{"path":"/reference/invert_indexes.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Invert and derive indexes — invert_indexes","text":"functions purpose best illustrated example: input: becomes:","code":"list( c(\"A\", \"B\", \"C\"), c(\"A\", \"A\", \"B\"))} list( \"A\" = c(1,2,2), \"B\" = c(1,2), \"C\" = 1 )"},{"path":"/reference/is_absent.html","id":null,"dir":"Reference","previous_headings":"","what":"Is value absent — is_absent","title":"Is value absent — is_absent","text":"Returns true value either NULL, NA \"\". 
case vector values must NULL/NA/\"\" x regarded absent.","code":""},{"path":"/reference/is_absent.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is value absent — is_absent","text":"","code":"is_absent(x, na = TRUE, blank = TRUE)"},{"path":"/reference/is_absent.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is value absent — is_absent","text":"x value check absent na NAs count absent blank blanks .e. \"\" count absent","code":""},{"path":"/reference/is_char_fact.html","id":null,"dir":"Reference","previous_headings":"","what":"Is character or factor — is_char_fact","title":"Is character or factor — is_char_fact","text":"returns true x character factor vector","code":""},{"path":"/reference/is_char_fact.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is character or factor — is_char_fact","text":"","code":"is_char_fact(x)"},{"path":"/reference/is_char_fact.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is character or factor — is_char_fact","text":"x character factor vector","code":""},{"path":"/reference/is_char_one.html","id":null,"dir":"Reference","previous_headings":"","what":"Is single character — is_char_one","title":"Is single character — is_char_one","text":"returns true x length 1 character vector","code":""},{"path":"/reference/is_char_one.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is single character — is_char_one","text":"","code":"is_char_one(x)"},{"path":"/reference/is_char_one.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is single character — is_char_one","text":"x character vector","code":""},{"path":"/reference/is_in_rbmi_development.html","id":null,"dir":"Reference","previous_headings":"","what":"Is package in development mode? 
— is_in_rbmi_development","title":"Is package in development mode? — is_in_rbmi_development","text":"Returns TRUE package developed .e. local copy source code actively editing Returns FALSE otherwise","code":""},{"path":"/reference/is_in_rbmi_development.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is package in development mode? — is_in_rbmi_development","text":"","code":"is_in_rbmi_development()"},{"path":"/reference/is_in_rbmi_development.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Is package in development mode? — is_in_rbmi_development","text":"Main use function parallel processing indicate whether sub-processes need load current development version code whether load main installed package system","code":""},{"path":"/reference/is_num_char_fact.html","id":null,"dir":"Reference","previous_headings":"","what":"Is character, factor or numeric — is_num_char_fact","title":"Is character, factor or numeric — is_num_char_fact","text":"returns true x character, numeric factor vector","code":""},{"path":"/reference/is_num_char_fact.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is character, factor or numeric — is_num_char_fact","text":"","code":"is_num_char_fact(x)"},{"path":"/reference/is_num_char_fact.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is character, factor or numeric — is_num_char_fact","text":"x character, numeric factor vector","code":""},{"path":"/reference/locf.html","id":null,"dir":"Reference","previous_headings":"","what":"Last Observation Carried Forward — locf","title":"Last Observation Carried Forward — locf","text":"Returns vector applied last observation carried forward imputation.","code":""},{"path":"/reference/locf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Last Observation Carried Forward — 
locf","text":"","code":"locf(x)"},{"path":"/reference/locf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Last Observation Carried Forward — locf","text":"x vector.","code":""},{"path":"/reference/locf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Last Observation Carried Forward — locf","text":"","code":"if (FALSE) { # \\dontrun{ locf(c(NA, 1, 2, 3, NA, 4)) # Returns c(NA, 1, 2, 3, 3, 4) } # }"},{"path":"/reference/longDataConstructor.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"longdata object allows efficient storage recall longitudinal datasets use bootstrap sampling. object works de-constructing data lists based upon subject id thus enabling efficient lookup.","code":""},{"path":"/reference/longDataConstructor.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"object also handles multiple operations specific rbmi defining whether outcome value MAR / Missing well tracking imputation strategy assigned subject. recognised objects functionality fairly overloaded hoped can split area specific objects / functions future. 
additions functionality object avoided possible.","code":""},{"path":"/reference/longDataConstructor.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"data original dataset passed constructor (sorted id visit) vars vars object (list key variables) passed constructor visits character vector containing distinct visit levels ids character vector containing unique ids subject self$data formula formula expressing design matrix data constructed strata numeric vector indicating strata corresponding value self$ids belongs . stratification variable defined default 1 subjects (.e. group). field used part self$sample_ids() function enable stratified bootstrap sampling ice_visit_index list indexed subject storing index number first visit affected ICE. ICE set equal number visits plus 1. values list indexed subject storing numeric vector original (unimputed) outcome values group list indexed subject storing single character indicating imputation group subject belongs defined self$data[id, self$ivars$group] used determine reference group used imputing subjects data. is_mar list indexed subject storing logical values indicating subjects outcome values MAR . list defaulted TRUE subjects & outcomes modified calls self$set_strategies(). Note indicate values missing, variable True outcome values either occurred ICE visit post ICE visit imputation strategy MAR strategies list indexed subject storing single character value indicating imputation strategy assigned subject. list defaulted \"MAR\" subjects modified calls either self$set_strategies() self$update_strategies() strategy_lock list indexed subject storing single logical value indicating whether patients imputation strategy locked . strategy locked means change MAR non-MAR. Strategies can changed non-MAR MAR though trigger warning. 
Strategies locked patient assigned MAR strategy non-missing ICE date. list populated call self$set_strategies(). indexes list indexed subject storing numeric vector indexes specify rows original dataset belong subject .e. recover full data subject \"pt3\" can use self$data[self$indexes[[\"pt3\"]],]. may seem redundant filtering data directly however enables efficient bootstrap sampling data .e. list populated object initialisation. is_missing list indexed subject storing logical vector indicating whether corresponding outcome subject missing. list populated object initialisation. is_post_ice list indexed subject storing logical vector indicating whether corresponding outcome subject post date ICE. ICE data provided defaults False observations. list populated call self$set_strategies().","code":"indexes <- unlist(self$indexes[c(\"pt3\", \"pt3\")]) self$data[indexes,]"},{"path":[]},{"path":"/reference/longDataConstructor.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"longDataConstructor$get_data() longDataConstructor$add_subject() longDataConstructor$validate_ids() longDataConstructor$sample_ids() longDataConstructor$extract_by_id() longDataConstructor$update_strategies() longDataConstructor$set_strategies() longDataConstructor$check_has_data_at_each_visit() longDataConstructor$set_strata() longDataConstructor$new() longDataConstructor$clone()","code":""},{"path":"/reference/longDataConstructor.html","id":"method-get-data-","dir":"Reference","previous_headings":"","what":"Method get_data()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Returns data.frame based upon required subject IDs. 
Replaces missing values new ones provided.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$get_data( obj = NULL, nmar.rm = FALSE, na.rm = FALSE, idmap = FALSE )"},{"path":"/reference/longDataConstructor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"obj Either NULL, character vector subjects IDs imputation list object. See details. nmar.rm Logical value. TRUE remove observations regarded MAR (determined self$is_mar). na.rm Logical value. TRUE remove outcome values missing (determined self$is_missing). idmap Logical value. TRUE add attribute idmap contains mapping new subject ids old subject ids. See details.","code":""},{"path":"/reference/longDataConstructor.html","id":"details-1","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"obj NULL full original dataset returned. obj character vector new dataset consisting just subjects returned; character vector contains duplicate entries subject returned multiple times. obj imputation_df object (created imputation_df()) subject ids specified object returned missing values filled specified imputation list object. .e. return data.frame consisting observations pt1 twice observations pt3 . first set observations pt1 missing values filled c(1,2,3) second set filled c(4,5,6). length values must equal sum(self$is_missing[[id]]). obj NULL subject IDs scrambled order ensure unique .e. pt2 requested twice process guarantees set observations unique subject ID number. 
idmap attribute (requested) can used map new ids back old ids.","code":"obj <- imputation_df( imputation_single( id = \"pt1\", values = c(1,2,3)), imputation_single( id = \"pt1\", values = c(4,5,6)), imputation_single( id = \"pt3\", values = c(7,8)) ) longdata$get_data(obj)"},{"path":"/reference/longDataConstructor.html","id":"returns","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"data.frame.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-add-subject-","dir":"Reference","previous_headings":"","what":"Method add_subject()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"function decomposes patient data self$data populates corresponding lists .e. self$is_missing, self$values, self$group, etc. function called upon objects initialization.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$add_subject(id)"},{"path":"/reference/longDataConstructor.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"id Character subject id exists within self$data.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-validate-ids-","dir":"Reference","previous_headings":"","what":"Method validate_ids()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Throws error element ids within source data self$data.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling 
Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$validate_ids(ids)"},{"path":"/reference/longDataConstructor.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"ids character vector ids.","code":""},{"path":"/reference/longDataConstructor.html","id":"returns-1","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"TRUE","code":""},{"path":"/reference/longDataConstructor.html","id":"method-sample-ids-","dir":"Reference","previous_headings":"","what":"Method sample_ids()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Performs random stratified sampling patient ids (replacement) patient equal weight picked within strata (.e dependent many non-missing visits ).","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-3","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$sample_ids()"},{"path":"/reference/longDataConstructor.html","id":"returns-2","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Character vector ids.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-extract-by-id-","dir":"Reference","previous_headings":"","what":"Method extract_by_id()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Returns list key information given subject. 
convenience wrapper save manually grab element.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-4","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$extract_by_id(id)"},{"path":"/reference/longDataConstructor.html","id":"arguments-3","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"id Character subject id exists within self$data.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-update-strategies-","dir":"Reference","previous_headings":"","what":"Method update_strategies()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Convenience function run self$set_strategies(dat_ice, update=TRUE) kept legacy reasons.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-5","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$update_strategies(dat_ice)"},{"path":"/reference/longDataConstructor.html","id":"arguments-4","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"dat_ice data.frame containing ICE information see impute() format dataframe.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-set-strategies-","dir":"Reference","previous_headings":"","what":"Method set_strategies()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Updates self$strategies, self$is_mar, self$is_post_ice variables based upon provided ICE 
information.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-6","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$set_strategies(dat_ice = NULL, update = FALSE)"},{"path":"/reference/longDataConstructor.html","id":"arguments-5","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"dat_ice data.frame containing ICE information. See details. update Logical, indicates ICE data used update. See details.","code":""},{"path":"/reference/longDataConstructor.html","id":"details-2","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"See draws() specification dat_ice update=FALSE. See impute() format dat_ice update=TRUE. update=TRUE function ensures MAR strategies changed non-MAR presence post-ICE observations.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-check-has-data-at-each-visit-","dir":"Reference","previous_headings":"","what":"Method check_has_data_at_each_visit()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Ensures visits least 1 observed \"MAR\" observation. Throws error criteria met. 
ensure initial MMRM can resolved.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-7","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$check_has_data_at_each_visit()"},{"path":"/reference/longDataConstructor.html","id":"method-set-strata-","dir":"Reference","previous_headings":"","what":"Method set_strata()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Populates self$strata variable. user specified stratification variables first visit used determine value variables. stratification variables specified everyone defined strata 1.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-8","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$set_strata()"},{"path":"/reference/longDataConstructor.html","id":"method-new-","dir":"Reference","previous_headings":"","what":"Method new()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Constructor function.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-9","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$new(data, vars)"},{"path":"/reference/longDataConstructor.html","id":"arguments-6","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"data longitudinal dataset. 
vars ivars object created set_vars().","code":""},{"path":"/reference/longDataConstructor.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"objects class cloneable method.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-10","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$clone(deep = FALSE)"},{"path":"/reference/longDataConstructor.html","id":"arguments-7","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"deep Whether make deep clone.","code":""},{"path":"/reference/ls_design.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate design vector for the lsmeans — ls_design","title":"Calculate design vector for the lsmeans — ls_design","text":"Calculates design vector required generate lsmean standard error. 
ls_design_equal calculates applying equal weight per covariate combination whilst ls_design_proportional applies weighting proportional frequency covariate combination occurred actual dataset.","code":""},{"path":"/reference/ls_design.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate design vector for the lsmeans — ls_design","text":"","code":"ls_design_equal(data, frm, fix) ls_design_counterfactual(data, frm, fix) ls_design_proportional(data, frm, fix)"},{"path":"/reference/ls_design.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate design vector for the lsmeans — ls_design","text":"data data.frame frm Formula used fit original model fix named list variables fixed values","code":""},{"path":"/reference/lsmeans.html","id":null,"dir":"Reference","previous_headings":"","what":"Least Square Means — lsmeans","title":"Least Square Means — lsmeans","text":"Estimates least square means linear model. exact implementation / interpretation depends weighting scheme; see weighting section information.","code":""},{"path":"/reference/lsmeans.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Least Square Means — lsmeans","text":"","code":"lsmeans( model, ..., .weights = c(\"counterfactual\", \"equal\", \"proportional_em\", \"proportional\") )"},{"path":"/reference/lsmeans.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Least Square Means — lsmeans","text":"model model created lm. ... Fixes specific variables specific values .e. trt = 1 age = 50. name argument must name variable within dataset. .weights Character, either \"counterfactual\" (default), \"equal\", \"proportional_em\" \"proportional\". Specifies weighting strategy used calculating lsmeans. 
See weighting section details.","code":""},{"path":[]},{"path":"/reference/lsmeans.html","id":"counterfactual","dir":"Reference","previous_headings":"","what":"Counterfactual","title":"Least Square Means — lsmeans","text":"weights = \"counterfactual\" (default) lsmeans obtained taking average predicted values patient assigning patients arm turn. approach equivalent standardization g-computation. comparison emmeans approach equivalent : Note ensure backwards compatibility previous versions rbmi weights = \"proportional\" alias weights = \"counterfactual\". get results consistent emmeans's weights = \"proportional\" please use weights = \"proportional_em\".","code":"emmeans::emmeans(model, specs = \"\", counterfactual = \"\")"},{"path":"/reference/lsmeans.html","id":"equal","dir":"Reference","previous_headings":"","what":"Equal","title":"Least Square Means — lsmeans","text":"weights = \"equal\" lsmeans obtained taking model fitted value hypothetical patient whose covariates defined follows: Continuous covariates set mean(X) Dummy categorical variables set 1/N N number levels Continuous * continuous interactions set mean(X) * mean(Y) Continuous * categorical interactions set mean(X) * 1/N Dummy categorical * categorical interactions set 1/N * 1/M comparison emmeans approach equivalent :","code":"emmeans::emmeans(model, specs = \"\", weights = \"equal\")"},{"path":"/reference/lsmeans.html","id":"proportional","dir":"Reference","previous_headings":"","what":"Proportional","title":"Least Square Means — lsmeans","text":"weights = \"proportional_em\" lsmeans obtained per weights = \"equal\" except instead weighting observation equally weighted proportion given combination categorical values occurred data. 
comparison emmeans approach equivalent : Note confused weights = \"proportional\" alias weights = \"counterfactual\".","code":"emmeans::emmeans(model, specs = \"\", weights = \"proportional\")"},{"path":"/reference/lsmeans.html","id":"fixing","dir":"Reference","previous_headings":"","what":"Fixing","title":"Least Square Means — lsmeans","text":"Regardless weighting scheme named arguments passed via ... fix value covariate specified value. example, lsmeans(model, trt = \"\") fix dummy variable trtA 1 patients (real hypothetical) calculating lsmeans. See references similar implementations done SAS R via emmeans package.","code":""},{"path":"/reference/lsmeans.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Least Square Means — lsmeans","text":"https://CRAN.R-project.org/package=emmeans https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.3/statug/statug_glm_details41.htm","code":""},{"path":"/reference/lsmeans.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Least Square Means — lsmeans","text":"","code":"if (FALSE) { # \\dontrun{ mod <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) lsmeans(mod) lsmeans(mod, Species = \"virginica\") lsmeans(mod, Species = \"versicolor\") lsmeans(mod, Species = \"versicolor\", Petal.Length = 1) } # }"},{"path":"/reference/make_rbmi_cluster.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a rbmi ready cluster — make_rbmi_cluster","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"Create rbmi ready cluster","code":""},{"path":"/reference/make_rbmi_cluster.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"","code":"make_rbmi_cluster(ncores = 1, objects = NULL, packages = 
NULL)"},{"path":"/reference/make_rbmi_cluster.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"ncores Number parallel processes use existing cluster make use objects named list objects export sub-processes packages character vector libraries load sub-processes function wrapper around parallel::makePSOCKcluster() takes care configuring rbmi used sub-processes well loading user defined objects libraries setting seed reproducibility. ncores 1 function return NULL. ncores cluster created via parallel::makeCluster() function just takes care inserting relevant rbmi objects existing cluster.","code":""},{"path":"/reference/make_rbmi_cluster.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"","code":"if (FALSE) { # \\dontrun{ # Basic usage make_rbmi_cluster(5) # User objects + libraries VALUE <- 5 myfun <- function(x) { x + day(VALUE) # From lubridate::day() } make_rbmi_cluster(5, list(VALUE = VALUE, myfun = myfun), c(\"lubridate\")) # Using an already created cluster cl <- parallel::makeCluster(5) make_rbmi_cluster(cl) } # }"},{"path":"/reference/method.html","id":null,"dir":"Reference","previous_headings":"","what":"Set the multiple imputation methodology — method","title":"Set the multiple imputation methodology — method","text":"functions determine methods rbmi use creating imputation models, generating imputed values pooling results.","code":""},{"path":"/reference/method.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set the multiple imputation methodology — method","text":"","code":"method_bayes( burn_in = 200, burn_between = 50, same_cov = TRUE, n_samples = 20, seed = NULL ) method_approxbayes( covariance = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), threshold = 0.01, same_cov = TRUE,
REML = TRUE, n_samples = 20 ) method_condmean( covariance = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), threshold = 0.01, same_cov = TRUE, REML = TRUE, n_samples = NULL, type = c(\"bootstrap\", \"jackknife\") ) method_bmlmi( covariance = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), threshold = 0.01, same_cov = TRUE, REML = TRUE, B = 20, D = 2 )"},{"path":"/reference/method.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set the multiple imputation methodology — method","text":"burn_in numeric specifies many observations discarded prior extracting actual samples. Note sampler initialized maximum likelihood estimates weakly informative prior used thus theory value need high. burn_between numeric specifies \"thinning\" rate .e. many observations discarded sample. used prevent issues associated autocorrelation samples. same_cov logical, TRUE imputation model fitted using single shared covariance matrix observations. FALSE separate covariance matrix fit group determined group argument set_vars(). n_samples numeric determines many imputed datasets generated. case method_condmean(type = \"jackknife\") argument must set NULL. See details. seed deprecated. Please use set.seed() instead. covariance character string specifies structure covariance matrix used imputation model. Must one \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"). See details. threshold numeric 0 1, specifies proportion bootstrap datasets can fail produce valid samples error thrown. See details. REML logical indicating whether use REML estimation rather maximum likelihood. type character string specifies resampling method used perform inference conditional mean imputation approach (set via method_condmean()) used. Must one \"bootstrap\" \"jackknife\". B numeric determines number bootstrap samples method_bmlmi. 
D numeric determines number random imputations bootstrap sample. Needed method_bmlmi().","code":""},{"path":"/reference/method.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Set the multiple imputation methodology — method","text":"case method_condmean(type = \"bootstrap\") n_samples + 1 imputation models datasets generated first sample based original dataset whilst n_samples samples bootstrapped datasets. Likewise, method_condmean(type = \"jackknife\") length(unique(data$subjid)) + 1 imputation models datasets generated. cases represented n + 1 displayed print message. user able specify different covariance structures using covariance argument. Currently supported structures include: Unstructured (\"us\") (default) Ante-dependence (\"ad\") Heterogeneous ante-dependence (\"adh\") First-order auto-regressive (\"ar1\") Heterogeneous first-order auto-regressive (\"ar1h\") Compound symmetry (\"cs\") Heterogeneous compound symmetry (\"csh\") Toeplitz (\"toep\") Heterogeneous Toeplitz (\"toeph\") full details please see mmrm::cov_types(). Note present Bayesian methods support unstructured. case method_condmean(type = \"bootstrap\"), method_approxbayes() method_bmlmi() repeated bootstrap samples original dataset taken MMRM fitted sample. Due randomness sampled datasets, well limitations optimisers used fit models, uncommon estimates particular dataset generated. instances rbmi designed throw bootstrapped dataset try another. However ensure errors due chance due underlying misspecification data /model tolerance limit set many samples can discarded. tolerance limit reached error thrown process aborted. tolerance limit defined ceiling(threshold * n_samples). Note jackknife method estimates need generated leave-one-datasets error thrown fail fit. Please note time writing (September 2021) Stan unable produce reproducible samples across different operating systems even seed used. care must taken using Stan across different machines. 
information limitation please consult Stan documentation https://mc-stan.org/docs/2_27/reference-manual/reproducibility-chapter.html","code":""},{"path":"/reference/par_lapply.html","id":null,"dir":"Reference","previous_headings":"","what":"Parallelise Lapply — par_lapply","title":"Parallelise Lapply — par_lapply","text":"Simple wrapper around lapply parallel::clusterApplyLB abstract away logic deciding one use","code":""},{"path":"/reference/par_lapply.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Parallelise Lapply — par_lapply","text":"","code":"par_lapply(cl, fun, x, ...)"},{"path":"/reference/par_lapply.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Parallelise Lapply — par_lapply","text":"cl Cluster created parallel::makeCluster() NULL fun Function run x object looped ... extra arguments passed fun","code":""},{"path":"/reference/parametric_ci.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate parametric confidence intervals — parametric_ci","title":"Calculate parametric confidence intervals — parametric_ci","text":"Calculates confidence intervals based upon parametric distribution.","code":""},{"path":"/reference/parametric_ci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate parametric confidence intervals — parametric_ci","text":"","code":"parametric_ci(point, se, alpha, alternative, qfun, pfun, ...)"},{"path":"/reference/parametric_ci.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate parametric confidence intervals — parametric_ci","text":"point point estimate. se standard error point estimate. using non-\"normal\" distribution set 1. alpha type 1 error rate, value 0 1. alternative character string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". qfun quantile function assumed distribution .e. qnorm.
pfun CDF function assumed distribution .e. pnorm. ... additional arguments passed qfun pfun .e. df = 102.","code":""},{"path":"/reference/pool.html","id":null,"dir":"Reference","previous_headings":"","what":"Pool analysis results obtained from the imputed datasets — pool","title":"Pool analysis results obtained from the imputed datasets — pool","text":"Pool analysis results obtained imputed datasets","code":""},{"path":"/reference/pool.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pool analysis results obtained from the imputed datasets — pool","text":"","code":"pool( results, conf.level = 0.95, alternative = c(\"two.sided\", \"less\", \"greater\"), type = c(\"percentile\", \"normal\") ) # S3 method for class 'pool' as.data.frame(x, ...) # S3 method for class 'pool' print(x, ...)"},{"path":"/reference/pool.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pool analysis results obtained from the imputed datasets — pool","text":"results analysis object created analyse(). conf.level confidence level returned confidence interval. Must single number 0 1. Default 0.95. alternative character string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". type character string either \"percentile\" (default) \"normal\". Determines method used calculate bootstrap confidence intervals. See details. used method_condmean(type = \"bootstrap\") specified original call draws(). x pool object generated pool(). ... 
used.","code":""},{"path":"/reference/pool.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Pool analysis results obtained from the imputed datasets — pool","text":"calculation used generate point estimate, standard errors confidence interval depends upon method specified original call draws(); particular: method_approxbayes() & method_bayes() use Rubin's rules pool estimates variances across multiple imputed datasets, Barnard-Rubin rule pool degrees freedom; see Little & Rubin (2002). method_condmean(type = \"bootstrap\") uses percentile normal approximation; see Efron & Tibshirani (1994). Note percentile bootstrap, standard error calculated, .e. standard errors NA object / data.frame. method_condmean(type = \"jackknife\") uses standard jackknife variance formula; see Efron & Tibshirani (1994). method_bmlmi uses pooling procedure Bootstrapped Maximum Likelihood MI (BMLMI). See Von Hippel & Bartlett (2021).","code":""},{"path":"/reference/pool.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Pool analysis results obtained from the imputed datasets — pool","text":"Bradley Efron Robert J Tibshirani. introduction bootstrap. CRC press, 1994. [Section 11] Roderick J. A. Little Donald B. Rubin. Statistical Analysis Missing Data, Second Edition. John Wiley & Sons, Hoboken, New Jersey, 2002. [Section 5.4] Von Hippel, Paul T Bartlett, Jonathan W. Maximum likelihood multiple imputation: Faster imputations consistent standard errors without posterior draws. 
2021.","code":""},{"path":"/reference/pool_bootstrap_normal.html","id":null,"dir":"Reference","previous_headings":"","what":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"Get point estimate, confidence interval p-value using normal approximation.","code":""},{"path":"/reference/pool_bootstrap_normal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"","code":"pool_bootstrap_normal(est, conf.level, alternative)"},{"path":"/reference/pool_bootstrap_normal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"est numeric vector point estimates bootstrap sample. conf.level confidence level returned confidence interval. Must single number 0 1. Default 0.95. alternative character string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"/reference/pool_bootstrap_normal.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"point estimate taken first element est. remaining n-1 values est used generate confidence intervals.","code":""},{"path":"/reference/pool_bootstrap_percentile.html","id":null,"dir":"Reference","previous_headings":"","what":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"Get point estimate, confidence interval p-value using percentiles. 
Note quantile \"type=6\" used, see stats::quantile() details.","code":""},{"path":"/reference/pool_bootstrap_percentile.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"","code":"pool_bootstrap_percentile(est, conf.level, alternative)"},{"path":"/reference/pool_bootstrap_percentile.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"est numeric vector point estimates bootstrap sample. conf.level confidence level returned confidence interval. Must single number 0 1. Default 0.95. alternative character string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"/reference/pool_bootstrap_percentile.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"point estimate taken first element est. remaining n-1 values est used generate confidence intervals.","code":""},{"path":"/reference/pool_internal.html","id":null,"dir":"Reference","previous_headings":"","what":"Internal Pool Methods — pool_internal","title":"Internal Pool Methods — pool_internal","text":"Dispatches pool methods based upon results object class. 
See pool() details.","code":""},{"path":"/reference/pool_internal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Internal Pool Methods — pool_internal","text":"","code":"pool_internal(results, conf.level, alternative, type, D) # S3 method for class 'jackknife' pool_internal(results, conf.level, alternative, type, D) # S3 method for class 'bootstrap' pool_internal( results, conf.level, alternative, type = c(\"percentile\", \"normal\"), D ) # S3 method for class 'bmlmi' pool_internal(results, conf.level, alternative, type, D) # S3 method for class 'rubin' pool_internal(results, conf.level, alternative, type, D)"},{"path":"/reference/pool_internal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Internal Pool Methods — pool_internal","text":"results list results .e. x$results element analyse object created analyse()). conf.level confidence level returned confidence interval. Must single number 0 1. Default 0.95. alternative character string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". type character string either \"percentile\" (default) \"normal\". Determines method used calculate bootstrap confidence intervals. See details. used method_condmean(type = \"bootstrap\") specified original call draws(). D numeric representing number imputations bootstrap sample BMLMI method.","code":""},{"path":"/reference/prepare_stan_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Prepare input data to run the Stan model — prepare_stan_data","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"Prepare input data run Stan model. 
Creates / calculates required inputs required data{} block MMRM Stan program.","code":""},{"path":"/reference/prepare_stan_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"","code":"prepare_stan_data(ddat, subjid, visit, outcome, group)"},{"path":"/reference/prepare_stan_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"ddat design matrix subjid Character vector containing subjects IDs. visit Vector containing visits. outcome Numeric vector containing outcome variable. group Vector containing group variable.","code":""},{"path":"/reference/prepare_stan_data.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"stan_data object. named list per data{} block related Stan file. particular returns: N - number rows design matrix P - number columns design matrix G - number distinct covariance matrix groups (.e. length(unique(group))) n_visit - number unique outcome visits n_pat - total number pattern groups (defined missingness patterns & covariance group) pat_G - Index Sigma pattern group use pat_n_pt - number patients within pattern group pat_n_visit - number non-missing visits pattern group pat_sigma_index - rows/cols Sigma subset pattern group (padded 0's) y - outcome variable Q - design matrix (QR decomposition) R - R matrix QR decomposition design matrix","code":""},{"path":"/reference/prepare_stan_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"group argument determines covariance matrix group subject belongs . 
want subjects use shared covariance matrix set group \"1\" everyone.","code":""},{"path":"/reference/print.analysis.html","id":null,"dir":"Reference","previous_headings":"","what":"Print analysis object — print.analysis","title":"Print analysis object — print.analysis","text":"Print analysis object","code":""},{"path":"/reference/print.analysis.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print analysis object — print.analysis","text":"","code":"# S3 method for class 'analysis' print(x, ...)"},{"path":"/reference/print.analysis.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print analysis object — print.analysis","text":"x analysis object generated analyse(). ... used.","code":""},{"path":"/reference/print.draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Print draws object — print.draws","title":"Print draws object — print.draws","text":"Print draws object","code":""},{"path":"/reference/print.draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print draws object — print.draws","text":"","code":"# S3 method for class 'draws' print(x, ...)"},{"path":"/reference/print.draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print draws object — print.draws","text":"x draws object generated draws(). ... 
used.","code":""},{"path":"/reference/print.imputation.html","id":null,"dir":"Reference","previous_headings":"","what":"Print imputation object — print.imputation","title":"Print imputation object — print.imputation","text":"Print imputation object","code":""},{"path":"/reference/print.imputation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print imputation object — print.imputation","text":"","code":"# S3 method for class 'imputation' print(x, ...)"},{"path":"/reference/print.imputation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print imputation object — print.imputation","text":"x imputation object generated impute(). ... used.","code":""},{"path":"/reference/progressLogger.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for printing current sampling progress — progressLogger","title":"R6 Class for printing current sampling progress — progressLogger","text":"Object initialised total number iterations expected occur. User can update object add method indicate many iterations just occurred. Every time step * 100 % iterations occurred message printed console. 
Use quiet argument prevent object printing anything ","code":""},{"path":"/reference/progressLogger.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for printing current sampling progress — progressLogger","text":"step real, percentage iterations allow printing progress console step_current integer, total number iterations completed since progress last printed console n integer, current number completed iterations n_max integer, total number expected iterations completed acts denominator calculating progress percentages quiet logical holds whether print anything","code":""},{"path":[]},{"path":"/reference/progressLogger.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for printing current sampling progress — progressLogger","text":"progressLogger$new() progressLogger$add() progressLogger$print_progress() progressLogger$clone()","code":""},{"path":"/reference/progressLogger.html","id":"method-new-","dir":"Reference","previous_headings":"","what":"Method new()","title":"R6 Class for printing current sampling progress — progressLogger","text":"Create progressLogger object","code":""},{"path":"/reference/progressLogger.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$new(n_max, quiet = FALSE, step = 0.1)"},{"path":"/reference/progressLogger.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for printing current sampling progress — progressLogger","text":"n_max integer, sets field n_max quiet logical, sets field quiet step real, sets field step","code":""},{"path":"/reference/progressLogger.html","id":"method-add-","dir":"Reference","previous_headings":"","what":"Method add()","title":"R6 Class for printing current sampling progress — progressLogger","text":"Records n 
iterations completed add number current step count (step_current) print progress message log step limit (step) reached. function nothing quiet set TRUE","code":""},{"path":"/reference/progressLogger.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$add(n)"},{"path":"/reference/progressLogger.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for printing current sampling progress — progressLogger","text":"n number successfully complete iterations since add() last called","code":""},{"path":"/reference/progressLogger.html","id":"method-print-progress-","dir":"Reference","previous_headings":"","what":"Method print_progress()","title":"R6 Class for printing current sampling progress — progressLogger","text":"method print current state progress","code":""},{"path":"/reference/progressLogger.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$print_progress()"},{"path":"/reference/progressLogger.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for printing current sampling progress — progressLogger","text":"objects class cloneable method.","code":""},{"path":"/reference/progressLogger.html","id":"usage-3","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$clone(deep = FALSE)"},{"path":"/reference/progressLogger.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for printing current sampling progress — progressLogger","text":"deep Whether make deep 
clone.","code":""},{"path":"/reference/pval_percentile.html","id":null,"dir":"Reference","previous_headings":"","what":"P-value of percentile bootstrap — pval_percentile","title":"P-value of percentile bootstrap — pval_percentile","text":"Determines (not necessarily unique) quantile (type=6) \"est\" gives value 0 , derive p-value corresponding percentile bootstrap via inversion.","code":""},{"path":"/reference/pval_percentile.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"P-value of percentile bootstrap — pval_percentile","text":"","code":"pval_percentile(est)"},{"path":"/reference/pval_percentile.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"P-value of percentile bootstrap — pval_percentile","text":"est numeric vector point estimates bootstrap sample.","code":""},{"path":"/reference/pval_percentile.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"P-value of percentile bootstrap — pval_percentile","text":"named numeric vector length 2 containing p-value H_0: theta=0 vs H_A: theta>0 (\"pval_greater\") p-value H_0: theta=0 vs H_A: theta<0 (\"pval_less\").","code":""},{"path":"/reference/pval_percentile.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"P-value of percentile bootstrap — pval_percentile","text":"p-value H_0: theta=0 vs H_A: theta>0 value alpha q_alpha = 0. least one estimate equal zero returns largest alpha q_alpha = 0. bootstrap estimates > 0 returns 0; bootstrap estimates < 0 returns 1. 
Analogous reasoning applied p-value H_0: theta=0 vs H_A: theta<0.","code":""},{"path":"/reference/random_effects_expr.html","id":null,"dir":"Reference","previous_headings":"","what":"Construct random effects formula — random_effects_expr","title":"Construct random effects formula — random_effects_expr","text":"Constructs character representation random effects formula fitting MMRM subject visit format required mmrm::mmrm().","code":""},{"path":"/reference/random_effects_expr.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Construct random effects formula — random_effects_expr","text":"","code":"random_effects_expr( cov_struct = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), cov_by_group = FALSE )"},{"path":"/reference/random_effects_expr.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Construct random effects formula — random_effects_expr","text":"cov_struct Character - covariance structure used, must one \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\" cov_by_group Boolean - Whether use separate covariances per group level","code":""},{"path":"/reference/random_effects_expr.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Construct random effects formula — random_effects_expr","text":"example assuming user specified covariance structure \"us\" groups provided return cov_by_group set FALSE indicates separate covariance matrices required per group following returned:","code":"us(visit | subjid) us( visit | group / subjid )"},{"path":"/reference/rbmi-package.html","id":null,"dir":"Reference","previous_headings":"","what":"rbmi: Reference Based Multiple Imputation — rbmi-package","title":"rbmi: Reference Based Multiple Imputation — rbmi-package","text":"rbmi package used perform reference based multiple imputation. 
package provides implementations common, patient-specific imputation strategies whilst allowing user select various standard Bayesian frequentist approaches. package designed around 4 core functions: draws() - Fits multiple imputation models impute() - Imputes multiple datasets analyse() - Analyses multiple datasets pool() - Pools multiple results single statistic learn rbmi, please see quickstart vignette: vignette(topic= \"quickstart\", package = \"rbmi\")","code":""},{"path":[]},{"path":"/reference/rbmi-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"rbmi: Reference Based Multiple Imputation — rbmi-package","text":"Maintainer: Craig Gower-Page craig.gower-page@roche.com Authors: Alessandro Noci alessandro.noci@roche.com Isaac Gravestock isaac.gravestock@roche.com contributors: Marcel Wolbers marcel.wolbers@roche.com [contributor] F. Hoffmann-La Roche AG [copyright holder, funder]","code":""},{"path":"/reference/rbmi-settings.html","id":null,"dir":"Reference","previous_headings":"","what":"rbmi settings — rbmi-settings","title":"rbmi settings — rbmi-settings","text":"Define settings modify behaviour rbmi package following name options can set via:","code":"options( = )"},{"path":"/reference/rbmi-settings.html","id":"rbmi-cache-dir","dir":"Reference","previous_headings":"","what":"rbmi.cache_dir","title":"rbmi settings — rbmi-settings","text":"Default = tools::R_user_dir(\"rbmi\", which = \"cache\") Directory store compiled Stan model . set, temporary directory used given R session. 
Can also set via environment variable RBMI_CACHE_DIR.","code":""},{"path":"/reference/rbmi-settings.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"rbmi settings — rbmi-settings","text":"","code":"set_options()"},{"path":"/reference/rbmi-settings.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"rbmi settings — rbmi-settings","text":"","code":"if (FALSE) { # \\dontrun{ options(rbmi.cache_dir = \"some/directory/path\") } # }"},{"path":"/reference/record.html","id":null,"dir":"Reference","previous_headings":"","what":"Capture all Output — record","title":"Capture all Output — record","text":"function silences warnings, errors & messages instead returns list containing results (error) + warning error messages character vectors.","code":""},{"path":"/reference/record.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Capture all Output — record","text":"","code":"record(expr)"},{"path":"/reference/record.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Capture all Output — record","text":"expr expression executed","code":""},{"path":"/reference/record.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Capture all Output — record","text":"list containing results - object returned expr list() error thrown warnings - NULL character vector warnings thrown errors - NULL string error thrown messages - NULL character vector messages produced","code":""},{"path":"/reference/record.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Capture all Output — record","text":"","code":"if (FALSE) { # \\dontrun{ record({ x <- 1 y <- 2 warning(\"something went wrong\") message(\"O nearly done\") x + y }) } # }"},{"path":"/reference/recursive_reduce.html","id":null,"dir":"Reference","previous_headings":"","what":"recursive_reduce — 
recursive_reduce","title":"recursive_reduce — recursive_reduce","text":"Utility function used replicated purrr::reduce. Recursively applies function list elements 1 element remains","code":""},{"path":"/reference/recursive_reduce.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"recursive_reduce — recursive_reduce","text":"","code":"recursive_reduce(.l, .f)"},{"path":"/reference/recursive_reduce.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"recursive_reduce — recursive_reduce","text":".l list values apply function .f function apply element list turn .e. .l[[1]] <- .f( .l[[1]] , .l[[2]]) ; .l[[1]] <- .f( .l[[1]] , .l[[3]])","code":""},{"path":"/reference/remove_if_all_missing.html","id":null,"dir":"Reference","previous_headings":"","what":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","title":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","text":"function takes data.frame variables visit, outcome & subjid. 
removes rows given subjid non-missing values outcome.","code":""},{"path":"/reference/remove_if_all_missing.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","text":"","code":"remove_if_all_missing(dat)"},{"path":"/reference/remove_if_all_missing.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","text":"dat data.frame","code":""},{"path":"/reference/rubin_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Barnard and Rubin degrees of freedom adjustment — rubin_df","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"Compute degrees freedom according Barnard-Rubin formula.","code":""},{"path":"/reference/rubin_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"","code":"rubin_df(v_com, var_b, var_t, M)"},{"path":"/reference/rubin_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"v_com Positive number representing degrees freedom complete-data analysis. var_b Between-variance point estimate across multiply imputed datasets. var_t Total-variance point estimate according Rubin's rules. M Number imputations.","code":""},{"path":"/reference/rubin_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"Degrees freedom according Barnard-Rubin formula. 
See Barnard-Rubin (1999).","code":""},{"path":"/reference/rubin_df.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"computation takes account limit cases missing data (.e. between-variance var_b zero) complete-data degrees freedom set Inf. Moreover, v_com given NA, function returns Inf.","code":""},{"path":"/reference/rubin_df.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"Barnard, J. Rubin, D.B. (1999). Small sample degrees freedom multiple imputation. Biometrika, 86, 948-955.","code":""},{"path":"/reference/rubin_rules.html","id":null,"dir":"Reference","previous_headings":"","what":"Combine estimates using Rubin's rules — rubin_rules","title":"Combine estimates using Rubin's rules — rubin_rules","text":"Pool together results M complete-data analyses according Rubin's rules. See details.","code":""},{"path":"/reference/rubin_rules.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Combine estimates using Rubin's rules — rubin_rules","text":"","code":"rubin_rules(ests, ses, v_com)"},{"path":"/reference/rubin_rules.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Combine estimates using Rubin's rules — rubin_rules","text":"ests Numeric vector containing point estimates complete-data analyses. ses Numeric vector containing standard errors complete-data analyses. v_com Positive number representing degrees freedom complete-data analysis.","code":""},{"path":"/reference/rubin_rules.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Combine estimates using Rubin's rules — rubin_rules","text":"list containing: est_point: pooled point estimate according Little-Rubin (2002). var_t: total variance according Little-Rubin (2002). 
df: degrees freedom according Barnard-Rubin (1999).","code":""},{"path":"/reference/rubin_rules.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Combine estimates using Rubin's rules — rubin_rules","text":"rubin_rules applies Rubin's rules (Rubin, 1987) pooling together results multiple imputation procedure. pooled point estimate est_point average across point estimates complete-data analyses (given input argument ests). total variance var_t sum two terms representing within-variance between-variance (see Little-Rubin (2002)). function also returns df, estimated pooled degrees freedom according Barnard-Rubin (1999) can used inference based t-distribution.","code":""},{"path":"/reference/rubin_rules.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Combine estimates using Rubin's rules — rubin_rules","text":"Barnard, J. Rubin, D.B. (1999). Small sample degrees freedom multiple imputation. Biometrika, 86, 948-955 Roderick J. A. Little Donald B. Rubin. Statistical Analysis Missing Data, Second Edition. John Wiley & Sons, Hoboken, New Jersey, 2002. 
[Section 5.4]","code":""},{"path":[]},{"path":"/reference/sample_ids.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Patient Ids — sample_ids","title":"Sample Patient Ids — sample_ids","text":"Performs stratified bootstrap sample IDS ensuring return vector length input vector","code":""},{"path":"/reference/sample_ids.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Patient Ids — sample_ids","text":"","code":"sample_ids(ids, strata = rep(1, length(ids)))"},{"path":"/reference/sample_ids.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Patient Ids — sample_ids","text":"ids vector sample strata strata indicator, ids sampled within strata ensuring numbers strata maintained","code":""},{"path":"/reference/sample_ids.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Patient Ids — sample_ids","text":"","code":"if (FALSE) { # \\dontrun{ sample_ids( c(\"a\", \"b\", \"c\", \"d\"), strata = c(1,1,2,2)) } # }"},{"path":"/reference/sample_list.html","id":null,"dir":"Reference","previous_headings":"","what":"Create and validate a sample_list object — sample_list","title":"Create and validate a sample_list object — sample_list","text":"Given list sample_single objects generate sample_single(), creates sample_list objects validate .","code":""},{"path":"/reference/sample_list.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create and validate a sample_list object — sample_list","text":"","code":"sample_list(...)"},{"path":"/reference/sample_list.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create and validate a sample_list object — sample_list","text":"... 
list sample_single objects.","code":""},{"path":"/reference/sample_mvnorm.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample random values from the multivariate normal distribution — sample_mvnorm","title":"Sample random values from the multivariate normal distribution — sample_mvnorm","text":"Sample random values multivariate normal distribution","code":""},{"path":"/reference/sample_mvnorm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample random values from the multivariate normal distribution — sample_mvnorm","text":"","code":"sample_mvnorm(mu, sigma)"},{"path":"/reference/sample_mvnorm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample random values from the multivariate normal distribution — sample_mvnorm","text":"mu mean vector sigma covariance matrix Samples multivariate normal variables multiplying univariate random normal variables cholesky decomposition covariance matrix. mu length 1 just uses rnorm instead.","code":""},{"path":"/reference/sample_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Create object of sample_single class — sample_single","title":"Create object of sample_single class — sample_single","text":"Creates object class sample_single named list containing input parameters validate .","code":""},{"path":"/reference/sample_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create object of sample_single class — sample_single","text":"","code":"sample_single( ids, beta = NA, sigma = NA, theta = NA, failed = any(is.na(beta)), ids_samp = ids )"},{"path":"/reference/sample_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create object of sample_single class — sample_single","text":"ids Vector characters containing ids subjects included original dataset. beta Numeric vector estimated regression coefficients. 
sigma List estimated covariance matrices (one level vars$group). theta Numeric vector transformed covariances. failed Logical. TRUE model fit failed. ids_samp Vector characters containing ids subjects included given sample.","code":""},{"path":"/reference/sample_single.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create object of sample_single class — sample_single","text":"named list class sample_single. contains following: ids vector characters containing ids subjects included original dataset. beta numeric vector estimated regression coefficients. sigma list estimated covariance matrices (one level vars$group). theta numeric vector transformed covariances. failed logical. TRUE model fit failed. ids_samp vector characters containing ids subjects included given sample.","code":""},{"path":"/reference/scalerConstructor.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Scales design matrix non-categorical columns mean 0 standard deviation 1.","code":""},{"path":"/reference/scalerConstructor.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"object initialisation used determine relevant mean SD's scale scaling (un-scaling) performed relevant object methods. Un-scaling done linear model Beta Sigma coefficients. purpose first column dataset scaled assumed outcome variable variables assumed post-transformation predictor variables (.e. dummy variables already expanded).","code":""},{"path":"/reference/scalerConstructor.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"centre Vector column means. 
first value outcome variable, variables predictors. scales Vector column standard deviations. first value outcome variable, variables predictors.","code":""},{"path":[]},{"path":"/reference/scalerConstructor.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"scalerConstructor$new() scalerConstructor$scale() scalerConstructor$unscale_sigma() scalerConstructor$unscale_beta() scalerConstructor$clone()","code":""},{"path":"/reference/scalerConstructor.html","id":"method-new-","dir":"Reference","previous_headings":"","what":"Method new()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Uses dat determine relevant column means standard deviations use scaling un-scaling future datasets. Implicitly assumes new datasets column order dat","code":""},{"path":"/reference/scalerConstructor.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$new(dat)"},{"path":"/reference/scalerConstructor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"dat data.frame matrix. columns must numeric (.e dummy variables, must already expanded ).","code":""},{"path":"/reference/scalerConstructor.html","id":"details-1","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Categorical columns (determined values entirely 1 0) scaled. 
achieved setting corresponding values centre 0 scale 1.","code":""},{"path":"/reference/scalerConstructor.html","id":"method-scale-","dir":"Reference","previous_headings":"","what":"Method scale()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Scales dataset continuous variables mean 0 standard deviation 1.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$scale(dat)"},{"path":"/reference/scalerConstructor.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"dat data.frame matrix whose columns numeric (.e. dummy variables expanded ) whose columns order dataset used initialization function.","code":""},{"path":"/reference/scalerConstructor.html","id":"method-unscale-sigma-","dir":"Reference","previous_headings":"","what":"Method unscale_sigma()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Unscales sigma value (matrix) estimated linear model using design matrix scaled object. 
function works first column initialisation data.frame outcome variable.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$unscale_sigma(sigma)"},{"path":"/reference/scalerConstructor.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"sigma numeric value matrix.","code":""},{"path":"/reference/scalerConstructor.html","id":"returns","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"numeric value matrix","code":""},{"path":"/reference/scalerConstructor.html","id":"method-unscale-beta-","dir":"Reference","previous_headings":"","what":"Method unscale_beta()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Unscales beta value (vector) estimated linear model using design matrix scaled object. 
function works first column initialization data.frame outcome variable.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-3","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$unscale_beta(beta)"},{"path":"/reference/scalerConstructor.html","id":"arguments-3","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"beta numeric vector beta coefficients estimated linear model.","code":""},{"path":"/reference/scalerConstructor.html","id":"returns-1","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"numeric vector.","code":""},{"path":"/reference/scalerConstructor.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"objects class cloneable method.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-4","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$clone(deep = FALSE)"},{"path":"/reference/scalerConstructor.html","id":"arguments-4","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"deep Whether make deep clone.","code":""},{"path":"/reference/set_simul_pars.html","id":null,"dir":"Reference","previous_headings":"","what":"Set simulation parameters of a study group. — set_simul_pars","title":"Set simulation parameters of a study group. — set_simul_pars","text":"function provides input arguments study group needed simulate data simulate_data(). 
simulate_data() generates data two-arms clinical trial longitudinal continuous outcomes two intercurrent events (ICEs). ICE1 may thought discontinuation study treatment due study drug condition related (SDCR) reasons. ICE2 may thought discontinuation study treatment due uninformative study drop-, .e. due study drug condition related (NSDRC) reasons outcome data ICE2 always missing.","code":""},{"path":"/reference/set_simul_pars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set simulation parameters of a study group. — set_simul_pars","text":"","code":"set_simul_pars( mu, sigma, n, prob_ice1 = 0, or_outcome_ice1 = 1, prob_post_ice1_dropout = 0, prob_ice2 = 0, prob_miss = 0 )"},{"path":"/reference/set_simul_pars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set simulation parameters of a study group. — set_simul_pars","text":"mu Numeric vector describing mean outcome trajectory visit (including baseline) assuming ICEs. sigma Covariance matrix outcome trajectory assuming ICEs. n Number subjects belonging group. prob_ice1 Numeric vector specifies probability experiencing ICE1 (discontinuation study treatment due SDCR reasons) visit subject observed outcome visit equal mean baseline (mu[1]). single numeric provided, probability applied visit. or_outcome_ice1 Numeric value specifies odds ratio experiencing ICE1 visit corresponding +1 higher value observed outcome visit. prob_post_ice1_dropout Numeric value specifies probability study drop-following ICE1. subject simulated drop-ICE1, outcomes ICE1 set missing. prob_ice2 Numeric specifies additional probability post-baseline visit affected study drop-. Outcome data subject's first simulated visit affected study drop-subsequent visits set missing. generates second intercurrent event ICE2, may thought treatment discontinuation due NSDRC reasons subsequent drop-. subject, ICE1 ICE2 simulated occur, assumed earlier counts. 
case ICEs simulated occur time, assumed ICE1 counts. means single subject can experience either ICE1 ICE2, . prob_miss Numeric value specifies additional probability given post-baseline observation missing. can used produce \"intermittent\" missing values associated ICE.","code":""},{"path":"/reference/set_simul_pars.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Set simulation parameters of a study group. — set_simul_pars","text":"simul_pars object named list containing simulation parameters.","code":""},{"path":"/reference/set_simul_pars.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Set simulation parameters of a study group. — set_simul_pars","text":"details, please see simulate_data().","code":""},{"path":[]},{"path":"/reference/set_vars.html","id":null,"dir":"Reference","previous_headings":"","what":"Set key variables — set_vars","title":"Set key variables — set_vars","text":"function used define names key variables within data.frame's provided input arguments draws() ancova().","code":""},{"path":"/reference/set_vars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set key variables — set_vars","text":"","code":"set_vars( subjid = \"subjid\", visit = \"visit\", outcome = \"outcome\", group = \"group\", covariates = character(0), strata = group, strategy = \"strategy\" )"},{"path":"/reference/set_vars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set key variables — set_vars","text":"subjid name \"Subject ID\" variable. length 1 character vector. visit name \"Visit\" variable. length 1 character vector. outcome name \"Outcome\" variable. length 1 character vector. group name \"Group\" variable. length 1 character vector. covariates name covariates used context modeling. See details. strata name stratification variable used context bootstrap sampling. See details. strategy name \"strategy\" variable. 
length 1 character vector.","code":""},{"path":"/reference/set_vars.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Set key variables — set_vars","text":"draws() ancova() covariates argument can specified indicate variables included imputation analysis models respectively. wish include interaction terms need manually specified .e. covariates = c(\"group*visit\", \"age*sex\"). Please note use () function inhibit interpretation/conversion objects supported. Currently strata used draws() combination method_condmean(type = \"bootstrap\") method_approxbayes() order allow specification stratified bootstrap sampling. default strata set equal value group assumed users want preserve group size samples. See draws() details. Likewise, currently strategy argument used draws() specify name strategy variable within data_ice data.frame. See draws() details.","code":""},{"path":[]},{"path":"/reference/set_vars.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Set key variables — set_vars","text":"","code":"if (FALSE) { # \\dontrun{ # Using CDISC variable names as an example set_vars( subjid = \"usubjid\", visit = \"avisit\", outcome = \"aval\", group = \"arm\", covariates = c(\"bwt\", \"bht\", \"arm * avisit\"), strategy = \"strat\" ) } # }"},{"path":"/reference/simulate_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate data — simulate_data","title":"Generate data — simulate_data","text":"Generate data two-arms clinical trial longitudinal continuous outcome two intercurrent events (ICEs). ICE1 may thought discontinuation study treatment due study drug condition related (SDCR) reasons. ICE2 may thought discontinuation study treatment due uninformative study drop-, .e. 
due study drug condition related (NSDRC) reasons outcome data ICE2 always missing.","code":""},{"path":"/reference/simulate_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate data — simulate_data","text":"","code":"simulate_data(pars_c, pars_t, post_ice1_traj, strategies = getStrategies())"},{"path":"/reference/simulate_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate data — simulate_data","text":"pars_c simul_pars object generated set_simul_pars(). specifies simulation parameters control arm. pars_t simul_pars object generated set_simul_pars(). specifies simulation parameters treatment arm. post_ice1_traj string specifies observed outcomes occurring ICE1 simulated. Must target function included strategies. Possible choices : Missing Random \"MAR\", Jump Reference \"JR\", Copy Reference \"CR\", Copy Increments Reference \"CIR\", Last Mean Carried Forward \"LMCF\". User-defined strategies also added. See getStrategies() details. strategies named list functions. Default equal getStrategies(). See getStrategies() details.","code":""},{"path":"/reference/simulate_data.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate data — simulate_data","text":"data.frame containing simulated data. includes following variables: id: Factor variable specifies id subject. visit: Factor variable specifies visit assessment. Visit 0 denotes baseline visit. group: Factor variable specifies treatment group subject belongs . outcome_bl: Numeric variable specifies baseline outcome. outcome_noICE: Numeric variable specifies longitudinal outcome assuming ICEs. ind_ice1: Binary variable takes value 1 corresponding visit affected ICE1 0 otherwise. dropout_ice1: Binary variable takes value 1 corresponding visit affected drop-following ICE1 0 otherwise. ind_ice2: Binary variable takes value 1 corresponding visit affected ICE2. 
outcome: Numeric variable specifies longitudinal outcome including ICE1, ICE2 intermittent missing values.","code":""},{"path":"/reference/simulate_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Generate data — simulate_data","text":"data generation works follows: Generate outcome data visits (including baseline) multivariate normal distribution parameters pars_c$mu pars_c$sigma control arm parameters pars_t$mu pars_t$sigma treatment arm, respectively. Note randomized trial, outcomes distribution baseline treatment groups, .e. one set pars_c$mu[1]=pars_t$mu[1] pars_c$sigma[1,1]=pars_t$sigma[1,1]. Simulate whether ICE1 (study treatment discontinuation due SDCR reasons) occurs visit according parameters pars_c$prob_ice1 pars_c$or_outcome_ice1 control arm pars_t$prob_ice1 pars_t$or_outcome_ice1 treatment arm, respectively. Simulate drop-following ICE1 according pars_c$prob_post_ice1_dropout pars_t$prob_post_ice1_dropout. Simulate additional uninformative study drop-probabilities pars_c$prob_ice2 pars_t$prob_ice2 visit. generates second intercurrent event ICE2, may thought treatment discontinuation due NSDRC reasons subsequent drop-. simulated time drop-subject's first visit affected drop-data visit subsequent visits consequently set missing. subject, ICE1 ICE2 simulated occur, assumed earlier counts. case ICEs simulated occur time, assumed ICE1 counts. means single subject can experience either ICE1 ICE2, . Adjust trajectories ICE1 according given assumption expressed post_ice1_traj argument. Note post-ICE1 outcomes intervention arm can adjusted. Post-ICE1 outcomes control arm adjusted. Simulate additional intermittent missing outcome data per arguments pars_c$prob_miss pars_t$prob_miss. probability ICE visit modeled according following logistic regression model: ~ 1 + (visit == 0) + ... + (visit == n_visits-1) + ((x-alpha)) : n_visits number visits (including baseline). alpha baseline outcome mean. 
term ((x-alpha)) specifies dependency probability ICE current outcome value. corresponding regression coefficients logistic model defined follows: intercept set 0, coefficients corresponding discontinuation visit subject outcome equal mean baseline set according parameters pars_c$prob_ice1 (pars_t$prob_ice1), regression coefficient associated covariate ((x-alpha)) set log(pars_c$or_outcome_ice1) (log(pars_t$or_outcome_ice1)). Please note baseline outcome missing affected ICEs.","code":""},{"path":"/reference/simulate_dropout.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate drop-out — simulate_dropout","title":"Simulate drop-out — simulate_dropout","text":"Simulate drop-","code":""},{"path":"/reference/simulate_dropout.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate drop-out — simulate_dropout","text":"","code":"simulate_dropout(prob_dropout, ids, subset = rep(1, length(ids)))"},{"path":"/reference/simulate_dropout.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate drop-out — simulate_dropout","text":"prob_dropout Numeric specifies probability post-baseline visit affected study drop-. ids Factor variable specifies id subject. subset Binary variable specifies subset affected drop-. .e. subset binary vector length equal length ids takes value 1 corresponding visit affected drop-0 otherwise.","code":""},{"path":"/reference/simulate_dropout.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate drop-out — simulate_dropout","text":"binary vector length equal length ids takes value 1 corresponding outcome affected study drop-.","code":""},{"path":"/reference/simulate_dropout.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate drop-out — simulate_dropout","text":"subset can used specify outcome values affected drop-. 
default subset set 1 values except values corresponding baseline outcome, since baseline supposed affected drop-. Even subset specified user, values corresponding baseline outcome still hard-coded 0.","code":""},{"path":"/reference/simulate_ice.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate intercurrent event — simulate_ice","title":"Simulate intercurrent event — simulate_ice","text":"Simulate intercurrent event","code":""},{"path":"/reference/simulate_ice.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate intercurrent event — simulate_ice","text":"","code":"simulate_ice(outcome, visits, ids, prob_ice, or_outcome_ice, baseline_mean)"},{"path":"/reference/simulate_ice.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate intercurrent event — simulate_ice","text":"outcome Numeric variable specifies longitudinal outcome single group. visits Factor variable specifies visit assessment. ids Factor variable specifies id subject. prob_ice Numeric vector specifies visit probability experiencing ICE current visit subject outcome equal mean baseline. single numeric provided, probability applied visit. or_outcome_ice Numeric value specifies odds ratio ICE corresponding +1 higher value outcome visit. baseline_mean Mean outcome value baseline.","code":""},{"path":"/reference/simulate_ice.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate intercurrent event — simulate_ice","text":"binary variable takes value 1 corresponding outcome affected ICE 0 otherwise.","code":""},{"path":"/reference/simulate_ice.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate intercurrent event — simulate_ice","text":"probability ICE visit modeled according following logistic regression model: ~ 1 + (visit == 0) + ... 
+ (visit == n_visits-1) + ((x-alpha)) : n_visits number visits (including baseline). alpha baseline outcome mean set via argument baseline_mean. term ((x-alpha)) specifies dependency probability ICE current outcome value. corresponding regression coefficients logistic model defined follows: intercept set 0, coefficients corresponding discontinuation visit subject outcome equal mean baseline set according parameter or_outcome_ice, regression coefficient associated covariate ((x-alpha)) set log(or_outcome_ice).","code":""},{"path":"/reference/simulate_test_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Create simulated datasets — simulate_test_data","title":"Create simulated datasets — simulate_test_data","text":"Creates longitudinal dataset format rbmi designed analyse.","code":""},{"path":"/reference/simulate_test_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create simulated datasets — simulate_test_data","text":"","code":"simulate_test_data( n = 200, sd = c(3, 5, 7), cor = c(0.1, 0.7, 0.4), mu = list(int = 10, age = 3, sex = 2, trt = c(0, 4, 8), visit = c(0, 1, 2)) ) as_vcov(sd, cor)"},{"path":"/reference/simulate_test_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create simulated datasets — simulate_test_data","text":"n number subjects sample. Total number observations returned thus n * length(sd) sd standard deviations outcome visit. .e. square root diagonal covariance matrix outcome cor correlation coefficients outcome values visit. See details. mu coefficients use construct mean outcome value visit. Must named list elements int, age, sex, trt & visit. See details.","code":""},{"path":"/reference/simulate_test_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create simulated datasets — simulate_test_data","text":"number visits determined size variance covariance matrix. .e. 
3 standard deviation values provided 3 visits per patient created. covariates simulated dataset produced follows: Patients age sampled random N(0,1) distribution Patients sex sampled random 50/50 split Patients group sampled random fixed group n/2 patients outcome variable sampled multivariate normal distribution, see details mean outcome variable derived : coefficients intercept, age sex taken mu$int, mu$age mu$sex respectively, must length 1 numeric. Treatment visit coefficients taken mu$trt mu$visit respectively must either length 1 (.e. constant affect across visits) equal number visits (determined length sd). .e. wanted treatment slope 5 visit slope 1 specify: correlation matrix constructed cor follows. Let cor = c(, b, c, d, e, f) correlation matrix :","code":"outcome = Intercept + age + sex + visit + treatment mu = list(..., \"trt\" = c(0,5,10), \"visit\" = c(0,1,2)) 1 a b d a 1 c e b c 1 f d e f 1"},{"path":"/reference/sort_by.html","id":null,"dir":"Reference","previous_headings":"","what":"Sort data.frame — sort_by","title":"Sort data.frame — sort_by","text":"Sorts data.frame (ascending default) based upon variables within dataset","code":""},{"path":"/reference/sort_by.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sort data.frame — sort_by","text":"","code":"sort_by(df, vars = NULL, decreasing = FALSE)"},{"path":"/reference/sort_by.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sort data.frame — sort_by","text":"df data.frame vars character vector variables decreasing logical whether sort order descending ascending (default) order. 
Can either single logical value (case applied variables) vector length vars","code":""},{"path":"/reference/sort_by.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sort data.frame — sort_by","text":"","code":"if (FALSE) { # \\dontrun{ sort_by(iris, c(\"Sepal.Length\", \"Sepal.Width\"), decreasing = c(TRUE, FALSE)) } # }"},{"path":"/reference/split_dim.html","id":null,"dir":"Reference","previous_headings":"","what":"Transform array into list of arrays — split_dim","title":"Transform array into list of arrays — split_dim","text":"Transform array list arrays listing performed given dimension.","code":""},{"path":"/reference/split_dim.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transform array into list of arrays — split_dim","text":"","code":"split_dim(a, n)"},{"path":"/reference/split_dim.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transform array into list of arrays — split_dim","text":"Array number dimensions least 2. n Positive integer. Dimension listed.","code":""},{"path":"/reference/split_dim.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Transform array into list of arrays — split_dim","text":"list length n arrays number dimensions equal number dimensions minus 1.","code":""},{"path":"/reference/split_dim.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Transform array into list of arrays — split_dim","text":"example, 3 dimensional array n = 1, split_dim(,n) returns list 2 dimensional arrays (.e. list matrices) element list [, , ], takes values 1 length first dimension array. 
Example: inputs: <- array( c(1,2,3,4,5,6,7,8,9,10,11,12), dim = c(3,2,2)), means : n <- 1 output res <- split_dim(,n) list 3 elements:","code":"a[1,,] a[2,,] a[3,,] [,1] [,2] [,1] [,2] [,1] [,2] --------- --------- --------- 1 7 2 8 3 9 4 10 5 11 6 12 res[[1]] res[[2]] res[[3]] [,1] [,2] [,1] [,2] [,1] [,2] --------- --------- --------- 1 7 2 8 3 9 4 10 5 11 6 12"},{"path":"/reference/split_imputations.html","id":null,"dir":"Reference","previous_headings":"","what":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"Split flat list imputation_single() multiple imputation_df()'s ID","code":""},{"path":"/reference/split_imputations.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"","code":"split_imputations(list_of_singles, split_ids)"},{"path":"/reference/split_imputations.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"list_of_singles list imputation_single()'s split_ids list 1 element per required split. element must contain vector \"ID\"'s correspond imputation_single() ID's required within sample. total number ID's must equal length list_of_singles","code":""},{"path":"/reference/split_imputations.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"function converts list imputations structured per patient structured per sample .e. 
converts :","code":"obj <- list( imputation_single(\"Ben\", numeric(0)), imputation_single(\"Ben\", numeric(0)), imputation_single(\"Ben\", numeric(0)), imputation_single(\"Harry\", c(1, 2)), imputation_single(\"Phil\", c(3, 4)), imputation_single(\"Phil\", c(5, 6)), imputation_single(\"Tom\", c(7, 8, 9)) ) index <- list( c(\"Ben\", \"Harry\", \"Phil\", \"Tom\"), c(\"Ben\", \"Ben\", \"Phil\") ) output <- list( imputation_df( imputation_single(id = \"Ben\", values = numeric(0)), imputation_single(id = \"Harry\", values = c(1, 2)), imputation_single(id = \"Phil\", values = c(3, 4)), imputation_single(id = \"Tom\", values = c(7, 8, 9)) ), imputation_df( imputation_single(id = \"Ben\", values = numeric(0)), imputation_single(id = \"Ben\", values = numeric(0)), imputation_single(id = \"Phil\", values = c(5, 6)) ) )"},{"path":"/reference/str_contains.html","id":null,"dir":"Reference","previous_headings":"","what":"Does a string contain a substring — str_contains","title":"Does a string contain a substring — str_contains","text":"Returns vector TRUE/FALSE element x contains element subs .e.","code":"str_contains( c(\"ben\", \"tom\", \"harry\"), c(\"e\", \"y\")) [1] TRUE FALSE TRUE"},{"path":"/reference/str_contains.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Does a string contain a substring — str_contains","text":"","code":"str_contains(x, subs)"},{"path":"/reference/str_contains.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Does a string contain a substring — str_contains","text":"x character vector subs character vector substrings look ","code":""},{"path":"/reference/strategies.html","id":null,"dir":"Reference","previous_headings":"","what":"Strategies — strategies","title":"Strategies — strategies","text":"functions used implement various reference based imputation strategies combining subjects distribution reference distribution based upon visits failed meet Missing--Random 
(MAR) assumption.","code":""},{"path":"/reference/strategies.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Strategies — strategies","text":"","code":"strategy_MAR(pars_group, pars_ref, index_mar) strategy_JR(pars_group, pars_ref, index_mar) strategy_CR(pars_group, pars_ref, index_mar) strategy_CIR(pars_group, pars_ref, index_mar) strategy_LMCF(pars_group, pars_ref, index_mar)"},{"path":"/reference/strategies.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Strategies — strategies","text":"pars_group list parameters subject's group. See details. pars_ref list parameters subject's reference group. See details. index_mar logical vector indicating visits meet MAR assumption subject. .e. identifies observations non-MAR intercurrent event (ICE).","code":""},{"path":"/reference/strategies.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Strategies — strategies","text":"pars_group pars_ref must list containing elements mu sigma. mu must numeric vector sigma must square matrix symmetric covariance matrix dimensions equal length mu index_mar. e.g. Users can define strategy functions include via strategies argument impute() using getStrategies(). said following strategies available \"box\": Missing Random (MAR) Jump Reference (JR) Copy Reference (CR) Copy Increments Reference (CIR) Last Mean Carried Forward (LMCF)","code":"list( mu = c(1,2,3), sigma = matrix(c(4,3,2,3,5,4,2,4,6), nrow = 3, ncol = 3) )"},{"path":"/reference/string_pad.html","id":null,"dir":"Reference","previous_headings":"","what":"string_pad — string_pad","title":"string_pad — string_pad","text":"Utility function used replicate str_pad. 
Adds white space either end string get equal desired length","code":""},{"path":"/reference/string_pad.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"string_pad — string_pad","text":"","code":"string_pad(x, width)"},{"path":"/reference/string_pad.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"string_pad — string_pad","text":"x string width desired length","code":""},{"path":"/reference/transpose_imputations.html","id":null,"dir":"Reference","previous_headings":"","what":"Transpose imputations — transpose_imputations","title":"Transpose imputations — transpose_imputations","text":"Takes imputation_df object transposes e.g.","code":"list( list(id = \"a\", values = c(1,2,3)), list(id = \"b\", values = c(4,5,6) ) )"},{"path":"/reference/transpose_imputations.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transpose imputations — transpose_imputations","text":"","code":"transpose_imputations(imputations)"},{"path":"/reference/transpose_imputations.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transpose imputations — transpose_imputations","text":"imputations imputation_df object created imputation_df()","code":""},{"path":"/reference/transpose_imputations.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Transpose imputations — transpose_imputations","text":"becomes","code":"list( ids = c(\"a\", \"b\"), values = c(1,2,3,4,5,6) )"},{"path":"/reference/transpose_results.html","id":null,"dir":"Reference","previous_headings":"","what":"Transpose results object — transpose_results","title":"Transpose results object — transpose_results","text":"Transposes Results object (created analyse()) order group estimates together 
vectors.","code":""},{"path":"/reference/transpose_results.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transpose results object — transpose_results","text":"","code":"transpose_results(results, components)"},{"path":"/reference/transpose_results.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transpose results object — transpose_results","text":"results list results. components character vector components extract (.e. \"est\", \"se\").","code":""},{"path":"/reference/transpose_results.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Transpose results object — transpose_results","text":"Essentially function takes object format: produces:","code":"x <- list( list( \"trt1\" = list( est = 1, se = 2 ), \"trt2\" = list( est = 3, se = 4 ) ), list( \"trt1\" = list( est = 5, se = 6 ), \"trt2\" = list( est = 7, se = 8 ) ) ) list( trt1 = list( est = c(1,5), se = c(2,6) ), trt2 = list( est = c(3,7), se = c(4,8) ) )"},{"path":"/reference/transpose_samples.html","id":null,"dir":"Reference","previous_headings":"","what":"Transpose samples — transpose_samples","title":"Transpose samples — transpose_samples","text":"Transposes samples generated draws() grouped subjid instead sample number.","code":""},{"path":"/reference/transpose_samples.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transpose samples — transpose_samples","text":"","code":"transpose_samples(samples)"},{"path":"/reference/transpose_samples.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transpose samples — transpose_samples","text":"samples list samples generated draws().","code":""},{"path":"/reference/validate.analysis.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate analysis objects — validate.analysis","title":"Validate analysis objects — validate.analysis","text":"Validates 
return object analyse() function.","code":""},{"path":"/reference/validate.analysis.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate analysis objects — validate.analysis","text":"","code":"# S3 method for class 'analysis' validate(x, ...)"},{"path":"/reference/validate.analysis.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate analysis objects — validate.analysis","text":"x analysis results object (class \"jackknife\", \"bootstrap\", \"rubin\"). ... used.","code":""},{"path":"/reference/validate.draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate draws object — validate.draws","title":"Validate draws object — validate.draws","text":"Validate draws object","code":""},{"path":"/reference/validate.draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate draws object — validate.draws","text":"","code":"# S3 method for class 'draws' validate(x, ...)"},{"path":"/reference/validate.draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate draws object — validate.draws","text":"x draws object generated as_draws(). ... used.","code":""},{"path":"/reference/validate.html","id":null,"dir":"Reference","previous_headings":"","what":"Generic validation method — validate","title":"Generic validation method — validate","text":"function used perform assertions object conforms expected structure basic assumptions violated. throw error checks pass.","code":""},{"path":"/reference/validate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generic validation method — validate","text":"","code":"validate(x, ...)"},{"path":"/reference/validate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generic validation method — validate","text":"x object validated. ... 
additional arguments pass specific validation method.","code":""},{"path":"/reference/validate.is_mar.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate is_mar for a given subject — validate.is_mar","title":"Validate is_mar for a given subject — validate.is_mar","text":"Checks longitudinal data patient divided MAR followed non-MAR data; non-MAR observation followed MAR observation allowed.","code":""},{"path":"/reference/validate.is_mar.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate is_mar for a given subject — validate.is_mar","text":"","code":"# S3 method for class 'is_mar' validate(x, ...)"},{"path":"/reference/validate.is_mar.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate is_mar for a given subject — validate.is_mar","text":"x Object class is_mar. Logical vector indicating whether observations MAR. ... used.","code":""},{"path":"/reference/validate.is_mar.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate is_mar for a given subject — validate.is_mar","text":"error issue otherwise return TRUE.","code":""},{"path":"/reference/validate.ivars.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate inputs for vars — validate.ivars","title":"Validate inputs for vars — validate.ivars","text":"Checks required variable names defined within vars appropriate datatypes","code":""},{"path":"/reference/validate.ivars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate inputs for vars — validate.ivars","text":"","code":"# S3 method for class 'ivars' validate(x, ...)"},{"path":"/reference/validate.ivars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate inputs for vars — validate.ivars","text":"x named list indicating names key variables source dataset ... 
used","code":""},{"path":"/reference/validate.references.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate user supplied references — validate.references","title":"Validate user supplied references — validate.references","text":"Checks ensure user specified references expect values (.e. found within source data).","code":""},{"path":"/reference/validate.references.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate user supplied references — validate.references","text":"","code":"# S3 method for class 'references' validate(x, control, ...)"},{"path":"/reference/validate.references.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate user supplied references — validate.references","text":"x named character vector. control factor variable (group variable source dataset). ... used.","code":""},{"path":"/reference/validate.references.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate user supplied references — validate.references","text":"error issue otherwise return TRUE.","code":""},{"path":"/reference/validate.sample_list.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate sample_list object — validate.sample_list","title":"Validate sample_list object — validate.sample_list","text":"Validate sample_list object","code":""},{"path":"/reference/validate.sample_list.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate sample_list object — validate.sample_list","text":"","code":"# S3 method for class 'sample_list' validate(x, ...)"},{"path":"/reference/validate.sample_list.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate sample_list object — validate.sample_list","text":"x sample_list object generated sample_list(). ... 
used.","code":""},{"path":"/reference/validate.sample_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate sample_single object — validate.sample_single","title":"Validate sample_single object — validate.sample_single","text":"Validate sample_single object","code":""},{"path":"/reference/validate.sample_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate sample_single object — validate.sample_single","text":"","code":"# S3 method for class 'sample_single' validate(x, ...)"},{"path":"/reference/validate.sample_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate sample_single object — validate.sample_single","text":"x sample_single object generated sample_single(). ... used.","code":""},{"path":"/reference/validate.simul_pars.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a simul_pars object — validate.simul_pars","title":"Validate a simul_pars object — validate.simul_pars","text":"Validate simul_pars object","code":""},{"path":"/reference/validate.simul_pars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a simul_pars object — validate.simul_pars","text":"","code":"# S3 method for class 'simul_pars' validate(x, ...)"},{"path":"/reference/validate.simul_pars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a simul_pars object — validate.simul_pars","text":"x simul_pars object generated set_simul_pars(). ... 
used.","code":""},{"path":"/reference/validate.stan_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a stan_data object — validate.stan_data","title":"Validate a stan_data object — validate.stan_data","text":"Validate stan_data object","code":""},{"path":"/reference/validate.stan_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a stan_data object — validate.stan_data","text":"","code":"# S3 method for class 'stan_data' validate(x, ...)"},{"path":"/reference/validate.stan_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a stan_data object — validate.stan_data","text":"x stan_data object. ... used.","code":""},{"path":"/reference/validate_analyse_pars.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate analysis results — validate_analyse_pars","title":"Validate analysis results — validate_analyse_pars","text":"Validates analysis results generated analyse().","code":""},{"path":"/reference/validate_analyse_pars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate analysis results — validate_analyse_pars","text":"","code":"validate_analyse_pars(results, pars)"},{"path":"/reference/validate_analyse_pars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate analysis results — validate_analyse_pars","text":"results list results generated analysis fun used analyse(). pars list expected parameters analysis. lists .e. 
c(\"est\", \"se\", \"df\").","code":""},{"path":"/reference/validate_datalong.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a longdata object — validate_datalong","title":"Validate a longdata object — validate_datalong","text":"Validate longdata object","code":""},{"path":"/reference/validate_datalong.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a longdata object — validate_datalong","text":"","code":"validate_datalong(data, vars) validate_datalong_varExists(data, vars) validate_datalong_types(data, vars) validate_datalong_notMissing(data, vars) validate_datalong_complete(data, vars) validate_datalong_unifromStrata(data, vars) validate_dataice(data, data_ice, vars, update = FALSE)"},{"path":"/reference/validate_datalong.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a longdata object — validate_datalong","text":"data data.frame containing longitudinal outcome data + covariates multiple subjects vars vars object created set_vars() data_ice data.frame containing subjects ICE data. See draws() details. update logical, indicates ICE data set first time update applied","code":""},{"path":"/reference/validate_datalong.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Validate a longdata object — validate_datalong","text":"functions used validate various different parts longdata object used draws(), impute(), analyse() pool(). particular: validate_datalong_varExists - Checks variable listed vars actually exists data validate_datalong_types - Checks types key variable expected .e. visit factor variable validate_datalong_notMissing - Checks none key variables (except outcome variable) contain missing values validate_datalong_complete - Checks data complete .e. 1 row subject * visit combination. e.g. 
nrow(data) == length(unique(subjects)) * length(unique(visits)) validate_datalong_unifromStrata - Checks make sure variables listed stratification variables vary time. e.g. subjects switch stratification groups.","code":""},{"path":"/reference/validate_strategies.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate user specified strategies — validate_strategies","title":"Validate user specified strategies — validate_strategies","text":"Compares user provided strategies required (reference). throw error values reference defined.","code":""},{"path":"/reference/validate_strategies.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate user specified strategies — validate_strategies","text":"","code":"validate_strategies(strategies, reference)"},{"path":"/reference/validate_strategies.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate user specified strategies — validate_strategies","text":"strategies named list strategies. reference list character vector strategies need defined.","code":""},{"path":"/reference/validate_strategies.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate user specified strategies — validate_strategies","text":"throw error issue otherwise return TRUE.","code":""},{"path":"/news/index.html","id":"rbmi-131","dir":"Changelog","previous_headings":"","what":"rbmi 1.3.1","title":"rbmi 1.3.1","text":"Fixed bug stale caches rstan model correctly cleared (#459)","code":""},{"path":"/news/index.html","id":"rbmi-130","dir":"Changelog","previous_headings":"","what":"rbmi 1.3.0","title":"rbmi 1.3.0","text":"CRAN release: 2024-10-16","code":""},{"path":"/news/index.html","id":"breaking-changes-1-3-0","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"rbmi 1.3.0","text":"Convert rstan suggested package simplify installation process. 
means Bayesian imputation functionality available default. use feature, need install rstan separately (#441) Deprecated seed argument method_bayes() favour using base set.seed() function (#431)","code":""},{"path":"/news/index.html","id":"new-features-1-3-0","dir":"Changelog","previous_headings":"","what":"New Features","title":"rbmi 1.3.0","text":"Added vignette implement retrieved dropout models time-varying intercurrent event (ICE) indicators (#414) Added vignette obtain frequentist information-anchored inference conditional mean imputation using rbmi (#406) Added FAQ vignette including statement validation (#407 #440) Renamed lsmeans(..., weights = \"proportional\") lsmeans(..., weights = \"counterfactual\") accurately reflect weights used calculation. Added lsmeans(..., weights = \"proportional_em\") provides consistent results emmeans(..., weights = \"proportional\") lsmeans(..., weights = \"proportional\") left package backwards compatibility alias lsmeans(..., weights = \"counterfactual\") now gives message prompting users use either “proportional_em” “counterfactual” instead. 
Added support parallel processing analyse() function (#370) Added documentation clarifying potential false-positive warnings rstan (#288) Added support covariance structures supported mmrm package (#437) Updated rbmi citation detail (#423 #425)","code":""},{"path":"/news/index.html","id":"miscellaneous-bug-fixes-1-3-0","dir":"Changelog","previous_headings":"","what":"Miscellaneous Bug Fixes","title":"rbmi 1.3.0","text":"Stopped warning messages accidentally suppressed changing ICE type impute() (#408) Fixed equations rendering properly pkgdown website (#433)","code":""},{"path":"/news/index.html","id":"rbmi-126","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.6","title":"rbmi 1.2.6","text":"CRAN release: 2023-11-24 Updated unit tests fix false-positive error CRAN’s testing servers","code":""},{"path":"/news/index.html","id":"rbmi-125","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.5","title":"rbmi 1.2.5","text":"CRAN release: 2023-09-20 Updated internal Stan code ensure future compatibility (@andrjohns, #390) Updated package description include relevant references (#393) Fixed documentation typos (#393)","code":""},{"path":"/news/index.html","id":"rbmi-123","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.3","title":"rbmi 1.2.3","text":"CRAN release: 2022-11-14 Minor internal tweaks ensure compatibility packages rbmi depends ","code":""},{"path":"/news/index.html","id":"rbmi-121","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.1","title":"rbmi 1.2.1","text":"CRAN release: 2022-10-25 Removed native pipes |> testing code package backwards compatible older servers Replaced glmmTMB dependency mmrm package. 
resulted package stable (less model fitting convergence issues) well speeding run times 3-fold.","code":""},{"path":"/news/index.html","id":"rbmi-114","dir":"Changelog","previous_headings":"","what":"rbmi 1.1.4","title":"rbmi 1.1.4","text":"CRAN release: 2022-05-18 Updated urls references vignettes Fixed bug visit factor levels re-constructed incorrectly delta_template() Fixed bug wrong visit displayed error message specific visit doesn’t data draws() Fixed bug wrong input parameter displayed error message simulate_data()","code":""},{"path":"/news/index.html","id":"rbmi-111--113","dir":"Changelog","previous_headings":"","what":"rbmi 1.1.1 & 1.1.3","title":"rbmi 1.1.1 & 1.1.3","text":"CRAN release: 2022-03-08 change functionality 1.1.0 Various minor tweaks address CRAN checks messages","code":""},{"path":"/news/index.html","id":"rbmi-110","dir":"Changelog","previous_headings":"","what":"rbmi 1.1.0","title":"rbmi 1.1.0","text":"CRAN release: 2022-03-02 Initial public release","code":""}] +[{"path":"/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contributing to rbmi","title":"Contributing to rbmi","text":"file outlines propose make changes rbmi well providing details obscure aspects package’s development process.","code":""},{"path":"/CONTRIBUTING.html","id":"setup","dir":"","previous_headings":"","what":"Setup","title":"Contributing to rbmi","text":"order develop contribute rbmi need access C/C++ compiler. Windows install rtools macOS install Xcode. Likewise, also need install package’s development dependencies. can done launching R within project root executing:","code":"devtools::install_dev_deps()"},{"path":"/CONTRIBUTING.html","id":"code-changes","dir":"","previous_headings":"","what":"Code changes","title":"Contributing to rbmi","text":"want make code contribution, ’s good idea first file issue make sure someone team agrees ’s needed. 
’ve found bug, please file issue illustrates bug minimal reprex (also help write unit test, needed).","code":""},{"path":"/CONTRIBUTING.html","id":"pull-request-process","dir":"","previous_headings":"Code changes","what":"Pull request process","title":"Contributing to rbmi","text":"project uses simple GitHub flow model development. , code changes done feature branch based main branch merged back main branch complete. Pull Requests accepted unless CI/CD checks passed. (See CI/CD section information). Pull Requests relating package’s core R code must accompanied corresponding unit test. pull requests containing changes core R code contain unit test demonstrate working intended accepted. (See Unit Testing section information). Pull Requests add lines changed NEWS.md file.","code":""},{"path":"/CONTRIBUTING.html","id":"coding-considerations","dir":"","previous_headings":"Code changes","what":"Coding Considerations","title":"Contributing to rbmi","text":"use roxygen2, Markdown syntax, documentation. Please ensure code conforms lintr. can check running lintr::lint(\"FILE NAME\") files modified ensuring findings kept possible. hard requirements following lintr’s conventions encourage developers follow guidance closely possible. project uses 4 space indents, contributions following accepted. project makes use S3 R6 OOP. Usage S4 OOP systems avoided unless absolutely necessary ensure consistency. said recommended stick S3 unless modification place R6 specific features required. current desire package keep dependency tree small possible. end discouraged adding additional packages “Depends” / “Imports” section unless absolutely essential. importing package just use single function consider just copying source code function instead, though please check licence include proper attribution/notices. 
expectations “Suggests” free use package vignettes / unit tests, though please mindful unnecessarily excessive .","code":""},{"path":"/CONTRIBUTING.html","id":"unit-testing--cicd","dir":"","previous_headings":"","what":"Unit Testing & CI/CD","title":"Contributing to rbmi","text":"project uses testthat perform unit testing combination GitHub Actions CI/CD.","code":""},{"path":"/CONTRIBUTING.html","id":"scheduled-testing","dir":"","previous_headings":"Unit Testing & CI/CD","what":"Scheduled Testing","title":"Contributing to rbmi","text":"Due stochastic nature package unit tests take considerable amount time execute. avoid issues usability, unit tests take couple seconds run deferred scheduled testing. tests run occasionally periodic basis (currently twice month) every pull request / push event. defer test scheduled build simply include skip_if_not(is_full_test()) top test_that() block .e. scheduled tests can also manually activated going “https://github.com/insightsengineering/rbmi” -> “Actions” -> “Bi-Weekly” -> “Run Workflow”. advisable releasing CRAN.","code":"test_that(\"some unit test\", { skip_if_not(is_full_test()) expect_equal(1,1) })"},{"path":"/CONTRIBUTING.html","id":"docker-images","dir":"","previous_headings":"Unit Testing & CI/CD","what":"Docker Images","title":"Contributing to rbmi","text":"support CI/CD, terms reducing setup time, Docker images created contains packages system dependencies required project. image can found : ghcr.io/insightsengineering/rbmi:latest image automatically re-built month contain latest version R packages. code create images can found misc/docker. build image locally run following project root directory:","code":"docker build -f misc/docker/Dockerfile -t rbmi:latest ."},{"path":"/CONTRIBUTING.html","id":"reproducibility-print-tests--snaps","dir":"","previous_headings":"Unit Testing & CI/CD","what":"Reproducibility, Print Tests & Snaps","title":"Contributing to rbmi","text":"particular issue testing package reproducibility. 
part handled well via set.seed() however stan/rstan guarantee reproducibility even seed run different hardware. issue surfaces testing print messages pool object displays treatment estimates thus identical run different machines. address issue pre-made pool objects generated stored R/sysdata.rda (generated data-raw/create_print_test_data.R). generated print messages compared expected values stored tests/testthat/_snaps/ (automatically created testthat::expect_snapshot())","code":""},{"path":"/CONTRIBUTING.html","id":"fitting-mmrms","dir":"","previous_headings":"","what":"Fitting MMRM’s","title":"Contributing to rbmi","text":"package currently uses mmrm package fit MMRM models. package still fairly new far proven stable, fast reliable. spot issues MMRM package please raise corresponding GitHub Repository - link mmrm package uses TMB uncommon see warnings either inconsistent versions TMB Matrix package compiled . order resolve may wish re-compile packages source using: Note need rtools installed Windows machine Xcode running macOS (somehow else access C/C++ compiler).","code":"install.packages(c(\"TMB\", \"mmrm\"), type = \"source\")"},{"path":"/CONTRIBUTING.html","id":"rstan","dir":"","previous_headings":"","what":"rstan","title":"Contributing to rbmi","text":"Bayesian models fitted package implemented via stan/rstan. code can found inst/stan/MMRM.stan. Note package automatically take care compiling code install run devtools::load_all(). Please note package won’t recompile code unless changed source code delete src directory.","code":""},{"path":"/CONTRIBUTING.html","id":"vignettes","dir":"","previous_headings":"","what":"Vignettes","title":"Contributing to rbmi","text":"CRAN imposes 10-minute run limit building, compiling testing package. keep limit vignettes pre-built; say simply changing source code automatically update vignettes, need manually re-build . need run: re-built need commit updated *.html files git repository. 
reference static vignette process works using “asis” vignette engine provided R.rsp. works getting R recognise vignettes files ending *.html.asis; builds simply copying corresponding files ending *.html relevant docs/ folder built package.","code":"Rscript vignettes/build.R"},{"path":"/CONTRIBUTING.html","id":"misc--local-folders","dir":"","previous_headings":"","what":"Misc & Local Folders","title":"Contributing to rbmi","text":"misc/ folder project used hold useful scripts, analyses, simulations & infrastructure code wish keep isn’t essential build deployment package. Feel free store additional stuff feel worth keeping. Likewise, local/ added .gitignore file meaning anything stored folder won’t committed repository. example, may find useful storing personal scripts testing generally exploring package development.","code":""},{"path":"/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"Apache License","title":"Apache License","text":"Version 2.0, January 2004 ","code":""},{"path":[]},{"path":"/LICENSE.html","id":"id_1-definitions","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"1. Definitions","title":"Apache License","text":"“License” shall mean terms conditions use, reproduction, distribution defined Sections 1 9 document. “Licensor” shall mean copyright owner entity authorized copyright owner granting License. “Legal Entity” shall mean union acting entity entities control, controlled , common control entity. purposes definition, “control” means () power, direct indirect, cause direction management entity, whether contract otherwise, (ii) ownership fifty percent (50%) outstanding shares, (iii) beneficial ownership entity. “” (“”) shall mean individual Legal Entity exercising permissions granted License. “Source” form shall mean preferred form making modifications, including limited software source code, documentation source, configuration files. 
“Object” form shall mean form resulting mechanical transformation translation Source form, including limited compiled object code, generated documentation, conversions media types. “Work” shall mean work authorship, whether Source Object form, made available License, indicated copyright notice included attached work (example provided Appendix ). “Derivative Works” shall mean work, whether Source Object form, based (derived ) Work editorial revisions, annotations, elaborations, modifications represent, whole, original work authorship. purposes License, Derivative Works shall include works remain separable , merely link (bind name) interfaces , Work Derivative Works thereof. “Contribution” shall mean work authorship, including original version Work modifications additions Work Derivative Works thereof, intentionally submitted Licensor inclusion Work copyright owner individual Legal Entity authorized submit behalf copyright owner. purposes definition, “submitted” means form electronic, verbal, written communication sent Licensor representatives, including limited communication electronic mailing lists, source code control systems, issue tracking systems managed , behalf , Licensor purpose discussing improving Work, excluding communication conspicuously marked otherwise designated writing copyright owner “Contribution.” “Contributor” shall mean Licensor individual Legal Entity behalf Contribution received Licensor subsequently incorporated within Work.","code":""},{"path":"/LICENSE.html","id":"id_2-grant-of-copyright-license","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"2. 
Grant of Copyright License","title":"Apache License","text":"Subject terms conditions License, Contributor hereby grants perpetual, worldwide, non-exclusive, -charge, royalty-free, irrevocable copyright license reproduce, prepare Derivative Works , publicly display, publicly perform, sublicense, distribute Work Derivative Works Source Object form.","code":""},{"path":"/LICENSE.html","id":"id_3-grant-of-patent-license","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"3. Grant of Patent License","title":"Apache License","text":"Subject terms conditions License, Contributor hereby grants perpetual, worldwide, non-exclusive, -charge, royalty-free, irrevocable (except stated section) patent license make, made, use, offer sell, sell, import, otherwise transfer Work, license applies patent claims licensable Contributor necessarily infringed Contribution(s) alone combination Contribution(s) Work Contribution(s) submitted. institute patent litigation entity (including cross-claim counterclaim lawsuit) alleging Work Contribution incorporated within Work constitutes direct contributory patent infringement, patent licenses granted License Work shall terminate date litigation filed.","code":""},{"path":"/LICENSE.html","id":"id_4-redistribution","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"4. 
Redistribution","title":"Apache License","text":"may reproduce distribute copies Work Derivative Works thereof medium, without modifications, Source Object form, provided meet following conditions: () must give recipients Work Derivative Works copy License; (b) must cause modified files carry prominent notices stating changed files; (c) must retain, Source form Derivative Works distribute, copyright, patent, trademark, attribution notices Source form Work, excluding notices pertain part Derivative Works; (d) Work includes “NOTICE” text file part distribution, Derivative Works distribute must include readable copy attribution notices contained within NOTICE file, excluding notices pertain part Derivative Works, least one following places: within NOTICE text file distributed part Derivative Works; within Source form documentation, provided along Derivative Works; , within display generated Derivative Works, wherever third-party notices normally appear. contents NOTICE file informational purposes modify License. may add attribution notices within Derivative Works distribute, alongside addendum NOTICE text Work, provided additional attribution notices construed modifying License. may add copyright statement modifications may provide additional different license terms conditions use, reproduction, distribution modifications, Derivative Works whole, provided use, reproduction, distribution Work otherwise complies conditions stated License.","code":""},{"path":"/LICENSE.html","id":"id_5-submission-of-contributions","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"5. Submission of Contributions","title":"Apache License","text":"Unless explicitly state otherwise, Contribution intentionally submitted inclusion Work Licensor shall terms conditions License, without additional terms conditions. 
Notwithstanding , nothing herein shall supersede modify terms separate license agreement may executed Licensor regarding Contributions.","code":""},{"path":"/LICENSE.html","id":"id_6-trademarks","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"6. Trademarks","title":"Apache License","text":"License grant permission use trade names, trademarks, service marks, product names Licensor, except required reasonable customary use describing origin Work reproducing content NOTICE file.","code":""},{"path":"/LICENSE.html","id":"id_7-disclaimer-of-warranty","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"7. Disclaimer of Warranty","title":"Apache License","text":"Unless required applicable law agreed writing, Licensor provides Work (Contributor provides Contributions) “” BASIS, WITHOUT WARRANTIES CONDITIONS KIND, either express implied, including, without limitation, warranties conditions TITLE, NON-INFRINGEMENT, MERCHANTABILITY, FITNESS PARTICULAR PURPOSE. solely responsible determining appropriateness using redistributing Work assume risks associated exercise permissions License.","code":""},{"path":"/LICENSE.html","id":"id_8-limitation-of-liability","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"8. 
Limitation of Liability","title":"Apache License","text":"event legal theory, whether tort (including negligence), contract, otherwise, unless required applicable law (deliberate grossly negligent acts) agreed writing, shall Contributor liable damages, including direct, indirect, special, incidental, consequential damages character arising result License use inability use Work (including limited damages loss goodwill, work stoppage, computer failure malfunction, commercial damages losses), even Contributor advised possibility damages.","code":""},{"path":"/LICENSE.html","id":"id_9-accepting-warranty-or-additional-liability","dir":"","previous_headings":"Terms and Conditions for use, reproduction, and distribution","what":"9. Accepting Warranty or Additional Liability","title":"Apache License","text":"redistributing Work Derivative Works thereof, may choose offer, charge fee , acceptance support, warranty, indemnity, liability obligations /rights consistent License. However, accepting obligations, may act behalf sole responsibility, behalf Contributor, agree indemnify, defend, hold Contributor harmless liability incurred , claims asserted , Contributor reason accepting warranty additional liability. END TERMS CONDITIONS","code":""},{"path":"/LICENSE.html","id":"appendix-how-to-apply-the-apache-license-to-your-work","dir":"","previous_headings":"","what":"APPENDIX: How to apply the Apache License to your work","title":"Apache License","text":"apply Apache License work, attach following boilerplate notice, fields enclosed brackets [] replaced identifying information. (Don’t include brackets!) text enclosed appropriate comment syntax file format. also recommend file class name description purpose included “printed page” copyright notice easier identification within third-party archives.","code":"Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."},{"path":"/articles/CondMean_Inference.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Inference with Conditional Mean Imputation","text":"described section 3.10.2 statistical specifications package (vignette(topic = \"stat_specs\", package = \"rbmi\")), two different types variance estimators proposed reference-based imputation methods statistical literature (Bartlett (2023)). first frequentist variance describes actual repeated sampling variability estimator results inference correct frequentist sense, .e. hypothesis tests accurate type error control confidence intervals correct coverage probabilities repeated sampling reference-based assumption correctly specified (Bartlett (2023), Wolbers et al. (2022)). Reference-based missing data assumption strong borrow information control arm imputation active arm. consequence, size frequentist standard errors treatment effects may decrease increasing amounts missing data. second -called “information-anchored” variance originally proposed context sensitivity analyses (Cro, Carpenter, Kenward (2019)). variance estimator based disentangling point estimation variance estimation altogether. resulting information-anchored variance typically similar variance missing--random (MAR) imputation increases increasing amounts missing data approximately rate MAR imputation. However, information-anchored variance reflect actual variability reference-based estimator resulting frequentist inference highly conservative resulting substantial power loss. 
Reference-based conditional mean imputation combined resampling method jackknife bootstrap first introduced Wolbers et al. (2022). approach naturally targets frequentist variance. information-anchored variance typically estimated using Rubin’s rules Bayesian multiple imputation applicable within conditional mean imputation framework. However, alternative information-anchored variance proposed Lu (2021) can easily obtained show . basic idea Lu (2021) obtain information-anchored variance via MAR imputation combined delta-adjustment delta selected data-driven way match reference-based estimator. conditional mean imputation, proposal Lu (2021) can implemented choosing delta-adjustment difference conditional mean imputation chosen reference-based assumption MAR original dataset. variance can obtained via jackknife bootstrap keeping delta-adjustment fixed. resulting variance estimate similar Rubin’s variance. Moreover, shown Cro, Carpenter, Kenward (2019), variance MAR-imputation combined delta-adjustment achieves even better information-anchoring properties Rubin’s variance reference-based imputation. Reference-based missing data assumptions strong borrow information control arm imputation active arm. vignette demonstrates first obtain frequentist inference using reference-based conditional mean imputation using rbmi, shows information-anchored inference can also easily implemented using package.","code":""},{"path":"/articles/CondMean_Inference.html","id":"data-and-model-specification","dir":"Articles","previous_headings":"","what":"Data and model specification","title":"rbmi: Inference with Conditional Mean Imputation","text":"use publicly available example dataset antidepressant clinical trial active drug versus placebo. relevant endpoint Hamilton 17-item depression rating scale (HAMD17) assessed baseline weeks 1, 2, 4, 6. Study drug discontinuation occurred 24% subjects active drug 26% subjects placebo. 
data study drug discontinuation missing single additional intermittent missing observation. consider imputation model mean change baseline HAMD17 score outcome (variable CHANGE dataset). following covariates included imputation model: treatment group (THERAPY), (categorical) visit (VISIT), treatment--visit interactions, baseline HAMD17 score (BASVAL), baseline HAMD17 score--visit interactions. common unstructured covariance matrix structure assumed groups. analysis model ANCOVA model treatment group primary factor adjustment baseline HAMD17 score. example, assume imputation strategy ICE “study-drug discontinuation” Jump Reference (JR) subjects imputation based conditional mean imputation combined jackknife resampling (bootstrap also selected).","code":""},{"path":"/articles/CondMean_Inference.html","id":"reference-based-conditional-mean-imputation---frequentist-inference","dir":"Articles","previous_headings":"","what":"Reference-based conditional mean imputation - frequentist inference","title":"rbmi: Inference with Conditional Mean Imputation","text":"Conditional mean imputation combined resampling method jackknife bootstrap naturally targets frequentist estimation standard error treatment effect, thus providing valid frequentist inference. provide code obtain frequentist inference reference-based conditional mean imputation using rbmi. code used section almost identical code quickstart vignette (vignette(topic = \"quickstart\", package = \"rbmi\")) except use conditional mean imputation combined jackknife (method_condmean(type = \"jackknife\")) rather Bayesian multiple imputation (method_bayes()). 
therefore refer vignette help files individual functions explanations details.","code":""},{"path":"/articles/CondMean_Inference.html","id":"draws","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Draws","title":"rbmi: Inference with Conditional Mean Imputation","text":"make use rbmi::expand_locf() expand dataset order one row per subject per visit missing outcomes denoted NA. construct data_ice, vars method input arguments first core rbmi function, draws(). Finally, call function draws() derive parameter estimates base imputation model full dataset leave-one-subject-samples.","code":"library(rbmi) library(dplyr) #> #> Attaching package: 'dplyr' #> The following objects are masked from 'package:stats': #> #> filter, lag #> The following objects are masked from 'package:base': #> #> intersect, setdiff, setequal, union dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to # the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") ) # create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing value which # does not correspond to a study drug discontinuation. We therefore remove # this subject from `dat_ice`. (In the later imputation step, it will # automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] # Define the names of key variables in our dataset and # the covariates included in the imputation model using `set_vars()` vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: conditional mean imputation # with jackknife as resampling) method <- method_condmean(type = \"jackknife\") # Create samples for the imputation parameters by running the draws() function drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) drawObj #> #> Draws Object #> ------------ #> Number of Samples: 1 + 172 #> Number of Failed Samples: 0 #> Model Formula: CHANGE ~ 1 + THERAPY + VISIT + BASVAL * VISIT + THERAPY * VISIT #> Imputation Type: condmean #> Method: #> name: Conditional Mean #> covariance: us #> threshold: 0.01 #> same_cov: TRUE #> REML: TRUE #> type: jackknife"},{"path":"/articles/CondMean_Inference.html","id":"impute","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Impute","title":"rbmi: Inference with Conditional Mean Imputation","text":"can use now function impute() perform imputation original dataset leave-one-samples using results obtained previous step.","code":"references <- c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") imputeObj <- impute(drawObj, references) imputeObj #> #> Imputation Object #> ----------------- #> Number of Imputed Datasets: 1 + 172 #> Fraction of Missing Data (Original Dataset): #> 4: 0% #> 5: 8% #> 6: 13% #> 7: 25% #> References: #> DRUG -> PLACEBO #> PLACEBO -> PLACEBO"},{"path":"/articles/CondMean_Inference.html","id":"analyse","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Analyse","title":"rbmi: Inference with Conditional Mean Imputation","text":"datasets 
imputed, can call analyse() function apply complete-data analysis model (ANCOVA) imputed dataset.","code":"# Set analysis variables using `rbmi` function \"set_vars\" vars_an <- set_vars( group = vars$group, visit = vars$visit, outcome = vars$outcome, covariates = \"BASVAL\" ) # Analyse MAR imputation with derived delta adjustment anaObj <- analyse( imputeObj, rbmi::ancova, vars = vars_an ) anaObj #> #> Analysis Object #> --------------- #> Number of Results: 1 + 172 #> Analysis Function: rbmi::ancova #> Delta Applied: FALSE #> Analysis Estimates: #> trt_4 #> lsm_ref_4 #> lsm_alt_4 #> trt_5 #> lsm_ref_5 #> lsm_alt_5 #> trt_6 #> lsm_ref_6 #> lsm_alt_6 #> trt_7 #> lsm_ref_7 #> lsm_alt_7"},{"path":"/articles/CondMean_Inference.html","id":"pool","dir":"Articles","previous_headings":"3 Reference-based conditional mean imputation - frequentist inference","what":"Pool","title":"rbmi: Inference with Conditional Mean Imputation","text":"Finally, can extract treatment effect estimates perform inference using jackknife variance estimator. done calling pool() function. 
gives estimated treatment effect 2.13 (95% CI 0.44 3.81) last visit associated p-value 0.013.","code":"poolObj <- pool(anaObj) poolObj #> #> Pool Object #> ----------- #> Number of Results Combined: 1 + 172 #> Method: jackknife #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_4 -0.092 0.695 -1.453 1.27 0.895 #> lsm_ref_4 -1.616 0.588 -2.767 -0.464 0.006 #> lsm_alt_4 -1.708 0.396 -2.484 -0.931 <0.001 #> trt_5 1.305 0.878 -0.416 3.027 0.137 #> lsm_ref_5 -4.133 0.688 -5.481 -2.785 <0.001 #> lsm_alt_5 -2.828 0.604 -4.011 -1.645 <0.001 #> trt_6 1.929 0.862 0.239 3.619 0.025 #> lsm_ref_6 -6.088 0.671 -7.402 -4.773 <0.001 #> lsm_alt_6 -4.159 0.686 -5.503 -2.815 <0.001 #> trt_7 2.126 0.858 0.444 3.807 0.013 #> lsm_ref_7 -6.965 0.685 -8.307 -5.622 <0.001 #> lsm_alt_7 -4.839 0.762 -6.333 -3.346 <0.001 #> --------------------------------------------------"},{"path":"/articles/CondMean_Inference.html","id":"reference-based-conditional-mean-imputation---information-anchored-inference","dir":"Articles","previous_headings":"","what":"Reference-based conditional mean imputation - information-anchored inference","title":"rbmi: Inference with Conditional Mean Imputation","text":"section, present estimation process based conditional mean imputation combined jackknife can adapted obtain information-anchored variance following proposal Lu (2021).","code":""},{"path":"/articles/CondMean_Inference.html","id":"draws-1","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Draws","title":"rbmi: Inference with Conditional Mean Imputation","text":"code pre-processing dataset “draws” step equivalent code provided frequentist inference. 
Please refer section details step.","code":"library(rbmi) library(dplyr) dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to # the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") ) # create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing value which # does not correspond to a study drug discontinuation. We therefore remove # this subject from `dat_ice`. (In the later imputation step, it will # automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] # Define the names of key variables in our dataset and # the covariates included in the imputation model using `set_vars()` vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: conditional mean imputation # with jackknife as resampling) method <- method_condmean(type = \"jackknife\") # Create samples for the imputation parameters by running the draws() function drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) drawObj"},{"path":"/articles/CondMean_Inference.html","id":"imputation-step-including-calculation-of-delta-adjustment","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Imputation step including calculation of delta-adjustment","title":"rbmi: Inference with Conditional Mean Imputation","text":"proposal Lu (2021) replace reference-based imputation MAR imputation combined delta-adjustment delta selected data-driven way match reference-based estimator. rbmi, implemented first performing imputation defined reference-based imputation strategy (JR) well MAR separately. Second, delta-adjustment defined difference conditional mean imputation reference-based MAR imputation, respectively, original dataset. simplify implementation, written function get_delta_match_refBased performs step. function takes input arguments draws object, data_ice (.e. data.frame containing information intercurrent events imputation strategies), references, named vector identifies references used reference-based imputation methods. 
function returns list containing imputation objects reference-based MAR imputation, plus data.frame contains delta-adjustment.","code":"#' Get delta adjustment that matches reference-based imputation #' #' @param draws: A `draws` object created by `draws()`. #' @param data_ice: `data.frame` containing the information about the intercurrent #' events and the imputation strategies. Must represent the desired imputation #' strategy and not the MAR-variant. #' @param references: A named vector. Identifies the references to be used #' for reference-based imputation methods. #' #' @return #' The function returns a list containing the imputation objects under both #' reference-based and MAR imputation, plus a `data.frame` which contains the #' delta-adjustment. #' #' @seealso `draws()`, `impute()`. get_delta_match_refBased <- function(draws, data_ice, references) { # Impute according to `data_ice` imputeObj <- impute( draws = drawObj, update_strategy = data_ice, references = references ) vars <- imputeObj$data$vars # Access imputed dataset (index=1 for method_condmean(type = \"jackknife\")) cmi <- extract_imputed_dfs(imputeObj, index = 1, idmap = TRUE)[[1]] idmap <- attributes(cmi)$idmap cmi <- cmi[, c(vars$subjid, vars$visit, vars$outcome)] colnames(cmi)[colnames(cmi) == vars$outcome] <- \"y_imp\" # Map back original patients id since `rbmi` re-code ids to ensure id uniqueness cmi[[vars$subjid]] <- idmap[match(cmi[[vars$subjid]], names(idmap))] # Derive conditional mean imputations under MAR dat_ice_MAR <- data_ice dat_ice_MAR[[vars$strategy]] <- \"MAR\" # Impute under MAR # Note that in this specific context, it is desirable that an update # from a reference-based strategy to MAR uses the exact same data for # fitting the imputation models, i.e. that available post-ICE data are # omitted from the imputation model for both. This is the case when # using argument update_strategy in function impute(). # However, for other settings (i.e. 
if one is interested in switching to # a standard MAR imputation strategy altogether), this behavior is # undesirable and, consequently, the function throws a warning which # we suppress here. suppressWarnings( imputeObj_MAR <- impute( draws, update_strategy = dat_ice_MAR ) ) # Access imputed dataset (index=1 for method_condmean(type = \"jackknife\")) cmi_MAR <- extract_imputed_dfs(imputeObj_MAR, index = 1, idmap = TRUE)[[1]] idmap <- attributes(cmi_MAR)$idmap cmi_MAR <- cmi_MAR[, c(vars$subjid, vars$visit, vars$outcome)] colnames(cmi_MAR)[colnames(cmi_MAR) == vars$outcome] <- \"y_MAR\" # Map back original patients id since `rbmi` re-code ids to ensure id uniqueness cmi_MAR[[vars$subjid]] <- idmap[match(cmi_MAR[[vars$subjid]], names(idmap))] # Derive delta adjustment \"aligned with ref-based imputation\", # i.e. difference between ref-based imputation and MAR imputation delta_adjust <- merge(cmi, cmi_MAR, by = c(vars$subjid, vars$visit), all = TRUE) delta_adjust$delta <- delta_adjust$y_imp - delta_adjust$y_MAR ret_obj <- list( imputeObj = imputeObj, imputeObj_MAR = imputeObj_MAR, delta_adjust = delta_adjust ) return(ret_obj) } references <- c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") res_delta_adjust <- get_delta_match_refBased(drawObj, dat_ice, references)"},{"path":"/articles/CondMean_Inference.html","id":"analyse-1","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Analyse","title":"rbmi: Inference with Conditional Mean Imputation","text":"use function analyse() add delta-adjustment perform analysis imputed datasets MAR. analyse() take input argument imputations = res_delta_adjust$imputeObj_MAR, .e. imputation object corresponding MAR imputation (JR imputation). 
argument delta can used add delta-adjustment prior analysis set delta-adjustment obtained previous step: delta = res_delta_adjust$delta_adjust.","code":"# Set analysis variables using `rbmi` function \"set_vars\" vars_an <- set_vars( group = vars$group, visit = vars$visit, outcome = vars$outcome, covariates = \"BASVAL\" ) # Analyse MAR imputation with derived delta adjustment anaObj_MAR_delta <- analyse( res_delta_adjust$imputeObj_MAR, rbmi::ancova, delta = res_delta_adjust$delta_adjust, vars = vars_an )"},{"path":"/articles/CondMean_Inference.html","id":"pool-1","dir":"Articles","previous_headings":"4 Reference-based conditional mean imputation - information-anchored inference","what":"Pool","title":"rbmi: Inference with Conditional Mean Imputation","text":"can finally use pool() function extract treatment effect estimate (well estimated marginal means) visit apply jackknife variance estimator analysis estimates imputed leave-one-samples. gives estimated treatment effect 2.13 (95% CI -0.08 4.33) last visit associated p-value 0.058. Per construction delta-adjustment, point estimate identical frequentist analysis. However, standard error much larger (1.12 vs. 0.86). Indeed, information-anchored standard error (resulting inference) similar results Bayesian multiple imputation using Rubin’s rules standard error 1.13 reported quickstart vignette (vignette(topic = \"quickstart\", package = \"rbmi\")). note, shown e.g. Wolbers et al. (2022), hypothesis testing based information-anchored inference conservative, .e. actual type error much lower nominal value. 
Hence, confidence intervals \\(p\\)-values based information-anchored inference interpreted caution.","code":"poolObj_MAR_delta <- pool(anaObj_MAR_delta) poolObj_MAR_delta #> #> Pool Object #> ----------- #> Number of Results Combined: 1 + 172 #> Method: jackknife #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_4 -0.092 0.695 -1.453 1.27 0.895 #> lsm_ref_4 -1.616 0.588 -2.767 -0.464 0.006 #> lsm_alt_4 -1.708 0.396 -2.484 -0.931 <0.001 #> trt_5 1.305 0.944 -0.545 3.156 0.167 #> lsm_ref_5 -4.133 0.738 -5.579 -2.687 <0.001 #> lsm_alt_5 -2.828 0.603 -4.01 -1.646 <0.001 #> trt_6 1.929 0.993 -0.018 3.876 0.052 #> lsm_ref_6 -6.088 0.758 -7.574 -4.602 <0.001 #> lsm_alt_6 -4.159 0.686 -5.504 -2.813 <0.001 #> trt_7 2.126 1.123 -0.076 4.327 0.058 #> lsm_ref_7 -6.965 0.85 -8.63 -5.299 <0.001 #> lsm_alt_7 -4.839 0.763 -6.335 -3.343 <0.001 #> --------------------------------------------------"},{"path":[]},{"path":"/articles/FAQ.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Frequently Asked Questions","text":"document provides answers common questions rbmi package. intended read rbmi: Quickstart vignette.","code":""},{"path":"/articles/FAQ.html","id":"is-rbmi-validated","dir":"Articles","previous_headings":"1 Introduction","what":"Is rbmi validated?","title":"rbmi: Frequently Asked Questions","text":"regards software pharmaceutical industry, validation act ensuring software meets needs requirements users given conditions actual use. FDA provides general principles guidance validation leaves individual sponsors define specific validation processes. Therefore, individual R package can claim ‘validated’ independently, validation depends entire software stack specific processes company. 
said, core components validation process design specification (software supposed ) well testing / test results demonstrate design specification met. rbmi, design specification documented extensively, macro level vignettes literature publications, micro level detailed function manuals. supported extensive suite unit integration tests, ensure software consistently produces correct output across wide range input scenarios. documentation test coverage enable rbmi easily installed integrated R system, alignment system’s broader validation process.","code":""},{"path":"/articles/FAQ.html","id":"how-do-the-methods-in-rbmi-compare-to-the-mixed-model-for-repeated-measures-mmrm-implemented-in-the-mmrm-package","dir":"Articles","previous_headings":"1 Introduction","what":"How do the methods in rbmi compare to the mixed model for repeated measures (MMRM) implemented in the mmrm package?","title":"rbmi: Frequently Asked Questions","text":"rbmi designed complement , occasionally, replace standard MMRM analyses clinical trials longitudinal endpoints. Strengths rbmi compared standard MMRM model : rbmi designed allow analyses fully aligned estimand definition. facilitate , implements methods range different missing data assumptions including standard missing--random (MAR), extended MAR (via inclusion time-varying covariates), reference-based missingness, missing--random random (NMAR; via \\(\\delta\\)-adjustments). contrast, standard MMRM model valid standard MAR assumption always plausible. example, standard MAR assumption rather implausible implementing treatment policy strategy intercurrent event “treatment discontinuation” substantial proportion subjects lost--follow-discontinuation. \\(\\delta\\)-adjustment methods implemented rbmi can used sensitivity analyses primary MMRM- rbmi-type analysis. Weaknesses rbmi compared standard MMRM model : MMRM models de-facto standard analysis method decade. rbmi currently less established. 
rbmi computationally intensive using requires careful planning.","code":""},{"path":"/articles/FAQ.html","id":"how-does-rbmi-compare-to-general-purpose-software-for-multiple-imputation-mi-such-as-mice","dir":"Articles","previous_headings":"1 Introduction","what":"How does rbmi compare to general-purpose software for multiple imputation (MI) such as mice?","title":"rbmi: Frequently Asked Questions","text":"rbmi covers “MMRM-type” settings, .e. settings single longitudinal continuous outcome may missing visits hence require imputation. settings, several advantages general-purpose MI software: rbmi supports imputation range different missing data assumptions whereas general-purpose MI software mostly focused MAR-based imputation. particular, unclear implement jump reference (JR) copy increments reference (CIR) methods software. rbmi interface fully streamlined setting arguably makes implementation straightforward general-purpose MI software. MICE algorithm stochastic inference always based Rubin’s rules. contrast, method “conditional mean imputation plus jackknifing” (method=\"method_condmean(type = \"jackknife\")\") rbmi require tuning parameters, fully deterministic, provides frequentist-consistent inference also reference-based imputations (Rubin’s rule conservative leading actual type error rates can far nominal values). However, rbmi much limited functionality general-purpose MI software.","code":""},{"path":"/articles/FAQ.html","id":"how-to-handle-missing-data-in-baseline-covariates-in-rbmi","dir":"Articles","previous_headings":"1 Introduction","what":"How to handle missing data in baseline covariates in rbmi?","title":"rbmi: Frequently Asked Questions","text":"rbmi support imputation missing baseline covariates. Therefore, missing baseline covariates need handled outside rbmi. 
best approach handling missing baseline covariates needs made case--case basis context randomized trials, relatively simple approach often sufficient (White Thompson (2005)).","code":""},{"path":"/articles/FAQ.html","id":"why-does-rbmi-by-default-use-an-ancova-analysis-model-and-not-an-mmrm-analysis-model","dir":"Articles","previous_headings":"1 Introduction","what":"Why does rbmi by default use an ANCOVA analysis model and not an MMRM analysis model?","title":"rbmi: Frequently Asked Questions","text":"theoretical justification conditional mean imputation method requires analysis model leads point estimator linear function outcome vector (Wolbers et al. (2022)). case ANCOVA general MMRM models. imputation methods, ANCOVA MMRM valid analysis methods. MMRM analysis model implemented providing custom analysis function analyse() function. explanations, also cite end section 2.4 conditional mean imputation paper (Wolbers et al. (2022)): proof relies fact ANCOVA estimator linear function outcome vector. complete data, ANCOVA estimator leads identical parameter estimates MMRM model longitudinal outcomes arbitrary common covariance structure across treatment groups treatment--visit interactions well covariate--visit-interactions included analysis model covariates,17 (p. 197). Hence, proof also applies MMRM models. expect conditional mean imputation also valid general MMRM model used analysis involved argument required formally justify .","code":""},{"path":"/articles/FAQ.html","id":"how-can-i-analyse-the-change-from-baseline-in-the-analysis-model-when-imputation-was-done-on-the-original-outcomes","dir":"Articles","previous_headings":"1 Introduction","what":"How can I analyse the change-from-baseline in the analysis model when imputation was done on the original outcomes?","title":"rbmi: Frequently Asked Questions","text":"can achieved using custom analysis functions outlined Section 7 Advanced Vignette. e.g.","code":"ancova_modified <- function(data, ...) 
{ data2 <- data %>% mutate(ENDPOINT = ENDPOINT - BASELINE) rbmi::ancova(data2, ...) } anaObj <- rbmi::analyse( imputeObj, ancova_modified, vars = vars )"},{"path":"/articles/advanced.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Advanced Functionality","text":"purpose vignette provide overview advanced features rbmi package. sections vignette relatively self-contained, .e. readers able jump directly section covers functionality interested .","code":""},{"path":"/articles/advanced.html","id":"sec:dataSimul","dir":"Articles","previous_headings":"","what":"Data simulation using function simulate_data()","title":"rbmi: Advanced Functionality","text":"order demonstrate advanced functions first create simulated dataset rbmi function simulate_data(). simulate_data() function generates data randomized clinical trial longitudinal continuous outcomes two different types intercurrent events (ICEs). One intercurrent event (ICE1) may thought discontinuation study treatment due study drug condition related (SDCR) reasons. event (ICE2) may thought discontinuation study treatment due study drug condition related (NSDCR) reasons. purpose vignette, simulate data similarly simulation study reported Wolbers et al. (2022) (though change simulation parameters) include one ICE type (ICE1). Specifically, simulate 1:1 randomized trial active drug (intervention) versus placebo (control) 100 subjects per group 6 post-baseline assessments (bi-monthly visits 12 months) following assumptions: mean outcome trajectory placebo group increases linearly 50 baseline (visit 0) 60 visit 6, .e. slope 10 points/year. mean outcome trajectory intervention group identical placebo group visit 2. visit 2 onward, slope decreases 50% 5 points/year. covariance structure baseline follow-values groups implied random intercept slope model standard deviation 5 intercept slope, correlation 0.25. 
In addition, an independent residual error with a standard deviation of 2.5 is added to each assessment. The probability of study drug discontinuation after each visit is calculated according to a logistic model which depends on the observed outcome at that visit. Specifically, a visit-wise discontinuation probability of 2% and 3% in the control and intervention group, respectively, is specified in case the observed outcome is equal to 50 (the mean value at baseline). The odds of a discontinuation are simulated to increase by +10% for each +1 point increase in the observed outcome. Study drug discontinuation is simulated to have no effect on the mean trajectory in the placebo group. In the intervention group, subjects who discontinue follow the slope of the mean trajectory in the placebo group from that time point onward. This is compatible with a copy increments in reference (CIR) assumption. Study drop-out at the study drug discontinuation visit occurs with a probability of 50%, leading to missing outcome data from that time point onward. The function simulate_data() requires 3 arguments (see the function documentation help(simulate_data) for details): pars_c: The simulation parameters of the control group pars_t: The simulation parameters of the intervention group post_ice1_traj: Specifies how observed outcomes after ICE1 are simulated Below, we report how data according to these specifications can be simulated with the function simulate_data():","code":"library(rbmi) library(dplyr) library(ggplot2) library(purrr) set.seed(122) n <- 100 time <- c(0, 2, 4, 6, 8, 10, 12) # Mean trajectory control muC <- c(50.0, 51.66667, 53.33333, 55.0, 56.66667, 58.33333, 60.0) # Mean trajectory intervention muT <- c(50.0, 51.66667, 53.33333, 54.16667, 55.0, 55.83333, 56.66667) # Create Sigma sd_error <- 2.5 covRE <- rbind( c(25.0, 6.25), c(6.25, 25.0) ) Sigma <- cbind(1, time / 12) %*% covRE %*% rbind(1, time / 12) + diag(sd_error^2, nrow = length(time)) # Set probability of discontinuation probDisc_C <- 0.02 probDisc_T <- 0.03 or_outcome <- 1.10 # +1 point increase => +10% odds of discontinuation # Set drop-out rate following discontinuation prob_dropout <- 0.5 # Set simulation parameters of the control group parsC <- set_simul_pars( mu = muC, sigma = Sigma, n = n, prob_ice1 = probDisc_C, or_outcome_ice1 = or_outcome, 
prob_post_ice1_dropout = prob_dropout ) # Set simulation parameters of the intervention group parsT <- parsC parsT$mu <- muT parsT$prob_ice1 <- probDisc_T # Set assumption about post-ice trajectory post_ice_traj <- \"CIR\" # Simulate data data <- simulate_data( pars_c = parsC, pars_t = parsT, post_ice1_traj = post_ice_traj ) head(data) #> id visit group outcome_bl outcome_noICE ind_ice1 ind_ice2 dropout_ice1 #> 1 id_1 0 Control 57.32704 57.32704 0 0 0 #> 2 id_1 1 Control 57.32704 54.69751 1 0 1 #> 3 id_1 2 Control 57.32704 58.60702 1 0 1 #> 4 id_1 3 Control 57.32704 61.50119 1 0 1 #> 5 id_1 4 Control 57.32704 56.68363 1 0 1 #> 6 id_1 5 Control 57.32704 66.14799 1 0 1 #> outcome #> 1 57.32704 #> 2 NA #> 3 NA #> 4 NA #> 5 NA #> 6 NA # As a simple descriptive of the simulated data, summarize the number of subjects with ICEs and missing data data %>% group_by(id) %>% summarise( group = group[1], any_ICE = (any(ind_ice1 == 1)), any_NA = any(is.na(outcome))) %>% group_by(group) %>% summarise( subjects_with_ICE = sum(any_ICE), subjects_with_missings = sum(any_NA) ) #> # A tibble: 2 × 3 #> group subjects_with_ICE subjects_with_missings #> #> 1 Control 18 8 #> 2 Intervention 25 14"},{"path":"/articles/advanced.html","id":"sec:postICEobs","dir":"Articles","previous_headings":"","what":"Handling of observed post-ICE data in rbmi under reference-based imputation","title":"rbmi: Advanced Functionality","text":"rbmi always uses all non-missing outcome data from the input data set, i.e. such data are never overwritten in the imputation step or removed in the analysis step. This implies that if some data are considered irrelevant for treatment effect estimation (e.g. data after an ICE for which the estimand specified a hypothetical strategy), then such data need to be removed from the input data set by the user prior to calling the rbmi functions. For imputation under a missing at random (MAR) strategy, all observed outcome data are also included in the fitting of the base imputation model. However, for ICEs handled using reference-based imputation methods (CIR, CR, and JR), rbmi excludes observed post-ICE data from the base imputation model. 
If these data were not excluded, the base imputation model would mistakenly estimate mean trajectories based on a mixture of observed pre- and post-ICE data, which is not relevant for reference-based imputations. However, the observed post-ICE data are added back into the data set after the fitting of the base imputation model and are included in the subsequent imputation and analysis steps. Post-ICE data in the control or reference group are also excluded from the base imputation model if the user specifies a reference-based imputation strategy for these ICEs. This ensures that an ICE has the same impact on the data included in the base imputation model regardless of whether the ICE occurred in the control or the intervention group. On the other hand, imputation in the reference group is based on a MAR assumption even for reference-based imputation methods, and it may be preferable in some settings to include post-ICE data from the control group in the base imputation model. This can be implemented by specifying a MAR strategy for the ICE in the control group and a reference-based strategy for the ICE in the intervention group. We use the latter approach in the example below. The simulated trial data from section 2 assumed that the outcomes in the intervention group observed after the ICE “treatment discontinuation” follow the increments observed in the control group. Thus, imputation of missing data in the intervention group after treatment discontinuation might be performed under a reference-based copy increments in reference (CIR) assumption. Specifically, we implement an estimator under the following assumptions: The endpoint of interest is the change in the outcome from baseline at each visit. The imputation model includes treatment group, (categorical) visit, treatment-by-visit interactions, the baseline outcome, and baseline outcome-by-visit interactions as covariates. The imputation model assumes a common unstructured covariance matrix in both treatment groups. In the control group, missing data are imputed under MAR, whereas in the intervention group, missing post-ICE data are imputed under the CIR assumption. The analysis model of the endpoint in the imputed datasets is a separate ANCOVA model for each visit with treatment group as the primary covariate and adjustment for the baseline outcome value. For illustration purposes, we chose MI based on approximate Bayesian posterior draws with 20 random imputations, which is not very demanding from a computational perspective. For practical applications, the number of random imputations may need to be increased. Moreover, other imputation methods are also supported by rbmi. 
For guidance regarding the choice of the imputation approach, we refer the user to the comparison of the implemented approaches in Section 3.9 of the “Statistical Specifications” vignette (vignette(\"stat_specs\", package = \"rbmi\")). We first report the code to set the variables of the imputation and analysis models. If you are not yet familiar with the syntax, we recommend that you first check the “quickstart” vignette (vignette(\"quickstart\", package = \"rbmi\")). The chosen imputation method can be set with the function method_approxbayes() as follows: We can now sequentially call the 4 key functions of rbmi to perform the multiple imputation. Please note that the management of observed post-ICE data is performed without additional complexity for the user. draws() automatically excludes post-ICE data handled with a reference-based method (but keeps post-ICE data handled using MAR) using the information provided in the argument data_ice. impute() will impute only truly missing data in data[[vars$outcome]]. The last output gives an estimated difference of -4.537 (95% CI -6.420 to -2.655) between the two groups at the last visit with an associated p-value lower than 0.001.","code":"# Create data_ice including the subject's first visit affected by the ICE and the imputation strategy # Imputation strategy for post-ICE data is CIR in the intervention group and MAR for the control group # (note that ICEs which are handled using MAR are optional and do not impact the analysis # because imputation of missing data under MAR is the default) data_ice_CIR <- data %>% group_by(id) %>% filter(ind_ice1 == 1) %>% # select visits with ICEs mutate(strategy = ifelse(group == \"Intervention\", \"CIR\", \"MAR\")) %>% summarise( visit = visit[1], # Select first visit affected by the ICE strategy = strategy[1] ) # Compute endpoint of interest: change from baseline and # remove rows corresponding to baseline visits data <- data %>% filter(visit != 0) %>% mutate( change = outcome - outcome_bl, visit = factor(visit, levels = unique(visit)) ) # Define key variables for the imputation and analysis models vars <- set_vars( subjid = \"id\", visit = \"visit\", outcome = \"change\", group = \"group\", covariates = 
c(\"visit*outcome_bl\", \"visit*group\"), strategy = \"strategy\" ) vars_an <- vars vars_an$covariates <- \"outcome_bl\" method <- method_approxbayes(n_sample = 20) draw_obj <- draws( data = data, data_ice = data_ice_CIR, vars = vars, method = method, quiet = TRUE, ncores = 2 ) impute_obj_CIR <- impute( draw_obj, references = c(\"Control\" = \"Control\", \"Intervention\" = \"Control\") ) ana_obj_CIR <- analyse( impute_obj_CIR, vars = vars_an ) pool_obj_CIR <- pool(ana_obj_CIR) pool_obj_CIR #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.486 0.512 -1.496 0.524 0.343 #> lsm_ref_1 2.62 0.362 1.907 3.333 <0.001 #> lsm_alt_1 2.133 0.362 1.42 2.847 <0.001 #> trt_2 -0.066 0.542 -1.135 1.004 0.904 #> lsm_ref_2 3.707 0.384 2.95 4.464 <0.001 #> lsm_alt_2 3.641 0.383 2.885 4.397 <0.001 #> trt_3 -1.782 0.607 -2.979 -0.585 0.004 #> lsm_ref_3 5.841 0.428 4.997 6.685 <0.001 #> lsm_alt_3 4.059 0.428 3.214 4.904 <0.001 #> trt_4 -2.518 0.692 -3.884 -1.152 <0.001 #> lsm_ref_4 7.656 0.492 6.685 8.627 <0.001 #> lsm_alt_4 5.138 0.488 4.176 6.1 <0.001 #> trt_5 -3.658 0.856 -5.346 -1.97 <0.001 #> lsm_ref_5 9.558 0.598 8.379 10.737 <0.001 #> lsm_alt_5 5.9 0.608 4.699 7.101 <0.001 #> trt_6 -4.537 0.954 -6.42 -2.655 <0.001 #> lsm_ref_6 11.048 0.666 9.735 12.362 <0.001 #> lsm_alt_6 6.511 0.674 5.181 7.841 <0.001 #> --------------------------------------------------"},{"path":"/articles/advanced.html","id":"efficiently-changing-reference-based-imputation-strategies","dir":"Articles","previous_headings":"","what":"Efficiently changing reference-based imputation strategies","title":"rbmi: Advanced Functionality","text":"draws() function far computationally intensive function rbmi. 
In some settings, it may be important to explore the impact of a change in the reference-based imputation strategy on the results. Such a change does not affect the base imputation model but only the subsequent imputation step. In order to allow changes in the imputation strategy without the need to re-run the draws() function, the function impute() has an additional argument update_strategy. However, please note that this functionality comes with important limitations: As described at the beginning of Section 3, post-ICE outcomes are included in the input dataset for the base imputation model if the imputation method is MAR, but they are excluded for the reference-based imputation methods (CIR, CR, and JR). Therefore, update_strategy cannot be applied if the imputation strategy is changed from a MAR to a non-MAR strategy in the presence of observed post-ICE outcomes. Similarly, a change from a non-MAR strategy to MAR triggers a warning in the presence of observed post-ICE outcomes because the base imputation model was not fitted to all relevant data under MAR. Finally, update_strategy cannot be applied if the timing of any of the ICEs is changed (in the argument data_ice) in addition to the imputation strategy. As an example, we described an analysis under the copy increments in reference (CIR) assumption in the previous section. Let's assume that we want to change the strategy to a jump to reference imputation strategy for a sensitivity analysis. 
This can be efficiently implemented using update_strategy as follows: For imputations under the jump to reference assumption, we get an estimated difference of -4.360 (95% CI -6.238 to -2.482) between the two groups at the last visit with an associated p-value <0.001.","code":"# Change ICE strategy from CIR to JR data_ice_JR <- data_ice_CIR %>% mutate(strategy = ifelse(strategy == \"CIR\", \"JR\", strategy)) impute_obj_JR <- impute( draw_obj, references = c(\"Control\" = \"Control\", \"Intervention\" = \"Control\"), update_strategy = data_ice_JR ) ana_obj_JR <- analyse( impute_obj_JR, vars = vars_an ) pool_obj_JR <- pool(ana_obj_JR) pool_obj_JR #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.485 0.513 -1.496 0.526 0.346 #> lsm_ref_1 2.609 0.363 1.892 3.325 <0.001 #> lsm_alt_1 2.124 0.361 1.412 2.836 <0.001 #> trt_2 -0.06 0.535 -1.115 0.995 0.911 #> lsm_ref_2 3.694 0.378 2.948 4.441 <0.001 #> lsm_alt_2 3.634 0.381 2.882 4.387 <0.001 #> trt_3 -1.767 0.598 -2.948 -0.587 0.004 #> lsm_ref_3 5.845 0.422 5.012 6.677 <0.001 #> lsm_alt_3 4.077 0.432 3.225 4.93 <0.001 #> trt_4 -2.529 0.686 -3.883 -1.175 <0.001 #> lsm_ref_4 7.637 0.495 6.659 8.614 <0.001 #> lsm_alt_4 5.108 0.492 4.138 6.078 <0.001 #> trt_5 -3.523 0.856 -5.212 -1.833 <0.001 #> lsm_ref_5 9.554 0.61 8.351 10.758 <0.001 #> lsm_alt_5 6.032 0.611 4.827 7.237 <0.001 #> trt_6 -4.36 0.952 -6.238 -2.482 <0.001 #> lsm_ref_6 11.003 0.676 9.669 12.337 <0.001 #> lsm_alt_6 6.643 0.687 5.287 8 <0.001 #> --------------------------------------------------"},{"path":"/articles/advanced.html","id":"imputation-under-mar-with-time-varying-covariates","dir":"Articles","previous_headings":"","what":"Imputation under MAR with time-varying covariates","title":"rbmi: Advanced Functionality","text":"The rbmi package supports the inclusion of time-varying 
covariates in the imputation model. This is particularly useful for implementing so-called retrieved dropout models. The vignette “Implementation of retrieved-dropout models using rbmi” (vignette(topic = \"retrieved_dropout\", package = \"rbmi\")) contains examples of such models.","code":""},{"path":"/articles/advanced.html","id":"custom-imputation-strategies","dir":"Articles","previous_headings":"","what":"Custom imputation strategies","title":"rbmi: Advanced Functionality","text":"The following imputation strategies are implemented in rbmi: Missing at Random (MAR) Jump to Reference (JR) Copy Reference (CR) Copy Increments in Reference (CIR) Last Mean Carried Forward (LMCF) In addition, rbmi allows the user to implement their own imputation strategy. To do this, the user needs to do three things: Define a function implementing the new imputation strategy. Specify which patients use this strategy in the data_ice dataset provided to draws(). Provide the imputation strategy function to impute(). The imputation strategy function must take 3 arguments (pars_group, pars_ref, and index_mar) and calculates the mean and covariance matrix of the subject's marginal imputation distribution which will be applied to the subjects to which the strategy applies. Here, pars_group contains the predicted mean trajectory (pars_group$mu, a numeric vector) and covariance matrix (pars_group$sigma) for a subject conditional on their assigned treatment group and covariates. pars_ref contains the corresponding mean trajectory and covariance matrix conditional on the reference group and the subject's covariates. index_mar is a logical vector which specifies for each visit whether the visit is unaffected by an ICE handled using a non-MAR method or not. As an example, the user can check how the CIR strategy is implemented by looking at the function strategy_CIR(). To illustrate this with a simple example, assume that a new strategy is to be implemented as follows: - The marginal mean of the imputation distribution is equal to the marginal mean trajectory for the subject according to their assigned group and covariates up to the ICE. - After the ICE, the marginal mean of the imputation distribution is equal to the average of the visit-wise marginal means based on the subject's covariates and the assigned group or the reference group, respectively. - For the covariance matrix of the marginal imputation distribution, the covariance matrix of the assigned group is taken. 
To do this, we first need to define the imputation function, which for this example could be coded as follows: Here is an example showing its use: To incorporate this into rbmi, data_ice needs to be updated such that the strategy AVG is specified for visits affected by the ICE. Additionally, the function needs to be provided to impute() via the getStrategies() function as shown below: Then, the analysis can proceed by calling analyse() and pool() as before.","code":"strategy_CIR #> function (pars_group, pars_ref, index_mar) #> { #> if (all(index_mar)) { #> return(pars_group) #> } #> else if (all(!index_mar)) { #> return(pars_ref) #> } #> mu <- pars_group$mu #> last_mar <- which(!index_mar)[1] - 1 #> increments_from_last_mar_ref <- pars_ref$mu[!index_mar] - #> pars_ref$mu[last_mar] #> mu[!index_mar] <- mu[last_mar] + increments_from_last_mar_ref #> sigma <- compute_sigma(sigma_group = pars_group$sigma, sigma_ref = pars_ref$sigma, #> index_mar = index_mar) #> pars <- list(mu = mu, sigma = sigma) #> return(pars) #> } #> #> strategy_AVG <- function(pars_group, pars_ref, index_mar) { mu_mean <- (pars_group$mu + pars_ref$mu) / 2 x <- pars_group x$mu[!index_mar] <- mu_mean[!index_mar] return(x) } pars_group <- list( mu = c(1, 2, 3), sigma = as_vcov(c(1, 3, 2), c(0.4, 0.5, 0.45)) ) pars_ref <- list( mu = c(5, 6, 7), sigma = as_vcov(c(2, 1, 1), c(0.7, 0.8, 0.5)) ) index_mar <- c(TRUE, TRUE, FALSE) strategy_AVG(pars_group, pars_ref, index_mar) #> $mu #> [1] 1 2 5 #> #> $sigma #> [,1] [,2] [,3] #> [1,] 1.0 1.2 1.0 #> [2,] 1.2 9.0 2.7 #> [3,] 1.0 2.7 4.0 data_ice_AVG <- data_ice_CIR %>% mutate(strategy = ifelse(strategy == \"CIR\", \"AVG\", strategy)) draw_obj <- draws( data = data, data_ice = data_ice_AVG, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj, references = c(\"Control\" = \"Control\", \"Intervention\" = \"Control\"), strategies = getStrategies(AVG = strategy_AVG) )"},{"path":"/articles/advanced.html","id":"custom-analysis-functions","dir":"Articles","previous_headings":"","what":"Custom analysis functions","title":"rbmi: Advanced Functionality","text":"By default, rbmi analyses the data using the ancova() 
function. This analysis function fits an ANCOVA model to the outcomes of each visit separately, and returns the “treatment effect” estimate as well as the corresponding least square means for each group. If the user wants to perform a different analysis, or return different statistics from the analysis, then this can be done by using a custom analysis function. Beware that the validity of the conditional mean imputation method has only been formally established for analysis functions corresponding to linear models (such as ANCOVA), and caution is required when applying alternative analysis functions to this method. The custom analysis function must take a data.frame as its first argument and return a named list with each element itself being a list containing at a minimum a point estimate, called est. For the methods method_bayes() or method_approxbayes(), the list must additionally contain the standard error (element se) and, if available, the degrees of freedom of the complete-data analysis model (element df). As a first simple example, we replicate the ANCOVA analysis at the last visit for the CIR-based imputations with a user-defined analysis function below: As a second example, assume that for a supplementary analysis the user wants to compare the proportion of subjects with a change from baseline of >10 points at the last visit between the treatment groups, with the baseline outcome as an additional covariate. This could lead to the following basic analysis function: Note that if the user wants rbmi to use a normal approximation for the pooled test statistics, the degrees of freedom need to be set to df = NA (as per the example above). If the degrees of freedom of the complete data test statistics are known and the degrees of freedom are set to df = Inf, then rbmi pools the degrees of freedom across imputed datasets according to the rule by Barnard and Rubin (see the “Statistical Specifications” vignette (vignette(\"stat_specs\", package = \"rbmi\")) for details). According to this rule, infinite degrees of freedom for the complete data analysis do not imply that the pooled degrees of freedom are also infinite. Rather, in this case the pooled degrees of freedom are (M-1)/lambda^2, where M is the number of imputations and lambda is the fraction of missing information (see Barnard and Rubin (1999) for details).","code":"compare_change_lastvisit <- function(data, ...) 
{ fit <- lm(change ~ group + outcome_bl, data = data, subset = (visit == 6) ) res <- list( trt = list( est = coef(fit)[\"groupIntervention\"], se = sqrt(vcov(fit)[\"groupIntervention\", \"groupIntervention\"]), df = df.residual(fit) ) ) return(res) } ana_obj_CIR6 <- analyse( impute_obj_CIR, fun = compare_change_lastvisit, vars = vars_an ) pool(ana_obj_CIR6) #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================= #> parameter est se lci uci pval #> ------------------------------------------------- #> trt -4.537 0.954 -6.42 -2.655 <0.001 #> ------------------------------------------------- compare_prop_lastvisit <- function(data, ...) { fit <- glm( I(change > 10) ~ group + outcome_bl, family = binomial(), data = data, subset = (visit == 6) ) res <- list( trt = list( est = coef(fit)[\"groupIntervention\"], se = sqrt(vcov(fit)[\"groupIntervention\", \"groupIntervention\"]), df = NA ) ) return(res) } ana_obj_prop <- analyse( impute_obj_CIR, fun = compare_prop_lastvisit, vars = vars_an ) pool_obj_prop <- pool(ana_obj_prop) pool_obj_prop #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================= #> parameter est se lci uci pval #> ------------------------------------------------- #> trt -1.052 0.314 -1.667 -0.438 0.001 #> ------------------------------------------------- tmp <- as.data.frame(pool_obj_prop) %>% mutate( OR = exp(est), OR.lci = exp(lci), OR.uci = exp(uci) ) %>% select(parameter, OR, OR.lci, OR.uci) tmp #> parameter OR OR.lci OR.uci #> 1 trt 0.3491078 0.188807 0.6455073"},{"path":"/articles/advanced.html","id":"sensitivity-analyses-delta-adjustments-and-tipping-point-analyses","dir":"Articles","previous_headings":"","what":"Sensitivity analyses: Delta adjustments 
and tipping point analyses","title":"rbmi: Advanced Functionality","text":"Delta-adjustments are used to impute missing data under a not missing at random (NMAR) assumption. This reflects the belief that unobserved outcomes would have been systematically “worse” (or “better”) than “comparable” observed outcomes. For an extensive discussion of delta-adjustment methods, we refer to Cro et al. (2020). In rbmi, a marginal delta-adjustment approach is implemented. This means that the delta-adjustment is applied to the dataset after data imputation under MAR or reference-based missing data assumptions and prior to the analysis of the imputed data. Sensitivity analyses using delta-adjustments can therefore be performed without the need to re-fit the imputation model. In rbmi, they are implemented via the delta argument of the analyse() function.","code":""},{"path":"/articles/advanced.html","id":"simple-delta-adjustments-and-tipping-point-analyses","dir":"Articles","previous_headings":"8 Sensitivity analyses: Delta adjustments and tipping point analyses","what":"Simple delta adjustments and tipping point analyses","title":"rbmi: Advanced Functionality","text":"The delta argument of analyse() allows users to modify the outcome variable prior to the analysis. To do this, the user needs to provide a data.frame which contains columns for the subject and visit (to identify the observation to be adjusted) plus an additional column called delta which specifies the value that will be added to the outcomes prior to the analysis. The delta_template() function supports the user in creating this data.frame: it creates a skeleton data.frame containing one row per subject and visit with the value of delta set to 0 for all observations: Note that the output of delta_template() contains additional information which can be used to properly re-set the variable delta. For example, assume that the user wants to implement a delta-adjustment to the imputed values under CIR as described in section 3. Specifically, assume that a fixed “worsening adjustment” of +5 points is applied to all imputed values regardless of the treatment group. This could be programmed as follows: The same approach can be used to implement a tipping point analysis. Here, we apply different delta-adjustments to the imputed data of the control and the intervention group, respectively. Assume that delta-adjustments of less than -5 points or more than +15 points are considered implausible from a clinical perspective. 
Therefore, we vary the delta-values in each group between -5 and +15 points to investigate which delta combinations lead to a “tipping” of the primary analysis result, defined as an analysis p-value \(\geq 0.05\). According to this analysis, the significant test result from the primary analysis under CIR could only be tipped to a non-significant result for rather extreme delta-adjustments. Please note that for a real analysis it is recommended to use a smaller step size in the grid than was used here.","code":"dat_delta <- delta_template(imputations = impute_obj_CIR) head(dat_delta) #> id visit group is_mar is_missing is_post_ice strategy delta #> 1 id_1 1 Control TRUE TRUE TRUE MAR 0 #> 2 id_1 2 Control TRUE TRUE TRUE MAR 0 #> 3 id_1 3 Control TRUE TRUE TRUE MAR 0 #> 4 id_1 4 Control TRUE TRUE TRUE MAR 0 #> 5 id_1 5 Control TRUE TRUE TRUE MAR 0 #> 6 id_1 6 Control TRUE TRUE TRUE MAR 0 # Set delta-value to 5 for all imputed (previously missing) outcomes and 0 for all other outcomes dat_delta <- delta_template(imputations = impute_obj_CIR) %>% mutate(delta = is_missing * 5) # Repeat the analyses with the delta-adjusted values and pool results ana_delta <- analyse( impute_obj_CIR, delta = dat_delta, vars = vars_an ) pool(ana_delta) #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.482 0.524 -1.516 0.552 0.359 #> lsm_ref_1 2.718 0.37 1.987 3.448 <0.001 #> lsm_alt_1 2.235 0.37 1.505 2.966 <0.001 #> trt_2 -0.016 0.56 -1.12 1.089 0.978 #> lsm_ref_2 3.907 0.396 3.125 4.688 <0.001 #> lsm_alt_2 3.891 0.395 3.111 4.671 <0.001 #> trt_3 -1.684 0.641 -2.948 -0.42 0.009 #> lsm_ref_3 6.092 0.452 5.201 6.983 <0.001 #> lsm_alt_3 4.408 0.452 3.515 5.3 <0.001 #> trt_4 -2.359 0.741 -3.821 -0.897 0.002 #> lsm_ref_4 7.951 0.526 6.913 8.99 <0.001 #> lsm_alt_4 5.593 0.522 4.563 6.623 <0.001 #> trt_5 -3.34 0.919 -5.153 -1.526 <0.001 #> lsm_ref_5 9.899 0.643 8.631 11.168 <0.001 
#> lsm_alt_5 6.559 0.653 5.271 7.848 <0.001 #> trt_6 -4.21 1.026 -6.236 -2.184 <0.001 #> lsm_ref_6 11.435 0.718 10.019 12.851 <0.001 #> lsm_alt_6 7.225 0.725 5.793 8.656 <0.001 #> -------------------------------------------------- perform_tipp_analysis <- function(delta_control, delta_intervention, cl) { # Derive delta offset based on control and intervention specific deltas delta_df <- delta_df_init %>% mutate( delta_ctl = (group == \"Control\") * is_missing * delta_control, delta_int = (group == \"Intervention\") * is_missing * delta_intervention, delta = delta_ctl + delta_int ) ana_delta <- analyse( impute_obj_CIR, fun = compare_change_lastvisit, vars = vars_an, delta = delta_df, ncores = cl ) pool_delta <- as.data.frame(pool(ana_delta)) list( trt_effect_6 = pool_delta[[\"est\"]], pval_6 = pool_delta[[\"pval\"]] ) } # Get initial delta template delta_df_init <- delta_template(impute_obj_CIR) tipp_frame_grid <- expand.grid( delta_control = seq(-5, 15, by = 2), delta_intervention = seq(-5, 15, by = 2) ) %>% as_tibble() # parallelise to speed up computation cl <- make_rbmi_cluster(2) tipp_frame <- tipp_frame_grid %>% mutate( results_list = map2(delta_control, delta_intervention, perform_tipp_analysis, cl = cl), trt_effect_6 = map_dbl(results_list, \"trt_effect_6\"), pval_6 = map_dbl(results_list, \"pval_6\") ) %>% select(-results_list) %>% mutate( pval = cut( pval_6, c(0, 0.001, 0.01, 0.05, 0.2, 1), right = FALSE, labels = c(\"<0.001\", \"0.001 - <0.01\", \"0.01- <0.05\", \"0.05 - <0.20\", \">= 0.20\") ) ) # Close cluster when done with it parallel::stopCluster(cl) # Show delta values which lead to non-significant analysis results tipp_frame %>% filter(pval_6 >= 0.05) #> # A tibble: 3 × 5 #> delta_control delta_intervention trt_effect_6 pval_6 pval #> #> 1 -5 15 -1.99 0.0935 0.05 - <0.20 #> 2 -3 15 -2.15 0.0704 0.05 - <0.20 #> 3 -1 15 -2.31 0.0527 0.05 - <0.20 ggplot(tipp_frame, aes(delta_control, delta_intervention, fill = pval)) + geom_raster() + 
scale_fill_manual(values = c(\"darkgreen\", \"lightgreen\", \"lightyellow\", \"orange\", \"red\"))"},{"path":"/articles/advanced.html","id":"more-flexible-delta-adjustments-using-the-dlag-and-delta-arguments-of-delta_template","dir":"Articles","previous_headings":"8 Sensitivity analyses: Delta adjustments and tipping point analyses","what":"More flexible delta-adjustments using the dlag and delta arguments of delta_template()","title":"rbmi: Advanced Functionality","text":"So far, we have discussed simple delta arguments which add the same value to all imputed values. However, the user may want to apply more flexible delta-adjustments to missing values after an intercurrent event (ICE) and vary the magnitude of the delta adjustment depending on how far away the visit in question is from the ICE visit. To facilitate the creation of such flexible delta-adjustments, the delta_template() function has two optional additional arguments, delta and dlag. The delta argument specifies the default amount of delta that should be applied to each post-ICE visit, whilst dlag specifies the scaling coefficient to be applied based upon the visit's proximity to the first visit affected by the ICE. By default, delta will only be added to unobserved (i.e. imputed) post-ICE outcomes, but this can be changed by setting the optional argument missing_only = FALSE. The usage of the delta and dlag arguments is best illustrated with some examples: Assume a setting with 4 visits and that the user specified delta = c(5,6,7,8) and dlag=c(1,2,3,4). For a subject whose first visit affected by the ICE is visit 2, these values of delta and dlag imply the following delta offset: That is, the subject would have a delta offset of 0 applied to visit v1, 6 for visit v2, 20 for visit v3 and 44 for visit v4. Assume instead that the subject's first visit affected by the ICE is visit 3. Then, these values of delta and dlag would imply the following delta offset: To apply a constant delta value of +5 to all visits affected by the ICE regardless of their proximity to the first ICE visit, one could set delta = c(5,5,5,5) and dlag = c(1,0,0,0). Alternatively, it may be more straightforward in this setting to call the delta_template() function without the delta and dlag arguments and then overwrite the delta column of the resulting data.frame as described in the previous section (additionally relying on the is_post_ice variable). Another way of using these arguments is to set delta to the difference in time between visits and dlag to the amount of delta per unit of time. 
For example, let's say that visits occur on weeks 1, 5, 6 and 9, and that we want a delta of 3 to be applied for each week after the ICE. For simplicity, we assume that the ICE occurs immediately after the subject's last visit which is unaffected by the ICE. This could be achieved by setting delta = c(1,4,1,3) (the difference in weeks between each visit) and dlag = c(3, 3, 3, 3). Assuming that the subject's first visit affected by the ICE is visit v2, these values of delta and dlag would imply the following delta offsets: To wrap up, we show this in action for our simulated dataset from section 2 and the imputed datasets based on the CIR assumption from section 3. The simulation setting specified follow-up visits at months 2, 4, 6, 8, 10 and 12. Assume that we want to apply a delta-adjustment of 1 for every month after an ICE to unobserved post-ICE visits in the intervention group only. (E.g. if the ICE occurred immediately after the month 4 visit, the total delta applied to a missing value from the month 10 visit would be 6.) To program this, we first use the delta and dlag arguments of delta_template() to set up the corresponding template data.frame: Next, we can use the additional metadata variables provided by delta_template() to manually reset the delta values for the control group back to 0: Finally, we can use this delta data.frame to apply the desired delta offset to our analysis:","code":"v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 1 2 3 # scaling starting from the first visit after the subjects ICE -------------- 0 6 14 24 # delta * scaling -------------- 0 6 20 44 # cumulative sum (i.e. delta) to be applied to each visit v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 0 1 2 # scaling starting from the first visit after the subjects ICE -------------- 0 0 7 16 # delta * scaling -------------- 0 0 7 23 # cumulative sum (i.e. delta) to be applied to each visit v1 v2 v3 v4 -------------- 1 4 1 3 # delta assigned to each visit 0 3 3 3 # scaling starting from the first visit after the subjects ICE -------------- 0 12 3 9 # delta * scaling -------------- 0 12 15 24 # cumulative sum (i.e. 
delta) to be applied to each visit delta_df <- delta_template( impute_obj_CIR, delta = c(2, 2, 2, 2, 2, 2), dlag = c(1, 1, 1, 1, 1, 1) ) head(delta_df) #> id visit group is_mar is_missing is_post_ice strategy delta #> 1 id_1 1 Control TRUE TRUE TRUE MAR 2 #> 2 id_1 2 Control TRUE TRUE TRUE MAR 4 #> 3 id_1 3 Control TRUE TRUE TRUE MAR 6 #> 4 id_1 4 Control TRUE TRUE TRUE MAR 8 #> 5 id_1 5 Control TRUE TRUE TRUE MAR 10 #> 6 id_1 6 Control TRUE TRUE TRUE MAR 12 delta_df2 <- delta_df %>% mutate(delta = if_else(group == \"Control\", 0, delta)) head(delta_df2) #> id visit group is_mar is_missing is_post_ice strategy delta #> 1 id_1 1 Control TRUE TRUE TRUE MAR 0 #> 2 id_1 2 Control TRUE TRUE TRUE MAR 0 #> 3 id_1 3 Control TRUE TRUE TRUE MAR 0 #> 4 id_1 4 Control TRUE TRUE TRUE MAR 0 #> 5 id_1 5 Control TRUE TRUE TRUE MAR 0 #> 6 id_1 6 Control TRUE TRUE TRUE MAR 0 ana_delta <- analyse(impute_obj_CIR, delta = delta_df2, vars = vars_an) pool(ana_delta) #> #> Pool Object #> ----------- #> Number of Results Combined: 20 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_1 -0.446 0.514 -1.459 0.567 0.386 #> lsm_ref_1 2.62 0.363 1.904 3.335 <0.001 #> lsm_alt_1 2.173 0.363 1.458 2.889 <0.001 #> trt_2 0.072 0.546 -1.006 1.15 0.895 #> lsm_ref_2 3.708 0.387 2.945 4.471 <0.001 #> lsm_alt_2 3.78 0.386 3.018 4.542 <0.001 #> trt_3 -1.507 0.626 -2.743 -0.272 0.017 #> lsm_ref_3 5.844 0.441 4.973 6.714 <0.001 #> lsm_alt_3 4.336 0.442 3.464 5.209 <0.001 #> trt_4 -2.062 0.731 -3.504 -0.621 0.005 #> lsm_ref_4 7.658 0.519 6.634 8.682 <0.001 #> lsm_alt_4 5.596 0.515 4.58 6.612 <0.001 #> trt_5 -2.938 0.916 -4.746 -1.13 0.002 #> lsm_ref_5 9.558 0.641 8.293 10.823 <0.001 #> lsm_alt_5 6.62 0.651 5.335 7.905 <0.001 #> trt_6 -3.53 1.045 -5.591 -1.469 0.001 #> lsm_ref_6 11.045 0.73 9.604 12.486 <0.001 #> lsm_alt_6 7.515 
0.738 6.058 8.971 <0.001 #> --------------------------------------------------"},{"path":[]},{"path":"/articles/quickstart.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"rbmi: Quickstart","text":"purpose vignette provide 15 minute quickstart guide core functions rbmi package. rbmi package consists 4 core functions (plus several helper functions) typically called sequence: draws() - fits imputation models stores parameters impute() - creates multiple imputed datasets analyse() - analyses multiple imputed datasets pool() - combines analysis results across imputed datasets single statistic example vignette makes use Bayesian multiple imputation; functionality requires installation suggested package rstan.","code":"install.packages(\"rstan\")"},{"path":"/articles/quickstart.html","id":"the-data","dir":"Articles","previous_headings":"","what":"The Data","title":"rbmi: Quickstart","text":"use publicly available example dataset antidepressant clinical trial active drug versus placebo. relevant endpoint Hamilton 17-item depression rating scale (HAMD17) assessed baseline weeks 1, 2, 4, 6. Study drug discontinuation occurred 24% subjects active drug 26% subjects placebo. data study drug discontinuation missing single additional intermittent missing observation. consider imputation model mean change baseline HAMD17 score outcome (variable CHANGE dataset). following covariates included imputation model: treatment group (THERAPY), (categorical) visit (VISIT), treatment--visit interactions, baseline HAMD17 score (BASVAL), baseline HAMD17 score--visit interactions. common unstructured covariance matrix structure assumed groups. analysis model ANCOVA model treatment group primary factor adjustment baseline HAMD17 score. rbmi expects input dataset complete; , must one row per subject visit. Missing outcome values coded NA, missing covariate values allowed. 
dataset incomplete, expand_locf() helper function can used add missing rows, using LOCF imputation carry forward observed baseline covariate values visits missing outcomes. Rows corresponding missing outcomes present antidepressant trial dataset. address therefore use expand_locf() function follows:","code":"library(rbmi) library(dplyr) #> #> Attaching package: 'dplyr' #> The following objects are masked from 'package:stats': #> #> filter, lag #> The following objects are masked from 'package:base': #> #> intersect, setdiff, setequal, union data(\"antidepressant_data\") dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") )"},{"path":"/articles/quickstart.html","id":"draws","dir":"Articles","previous_headings":"","what":"Draws","title":"rbmi: Quickstart","text":"draws() function fits imputation models stores corresponding parameter estimates Bayesian posterior parameter draws. three main inputs draws() function : data - primary longitudinal data.frame containing outcome variable covariates. data_ice - data.frame specifies first visit affected intercurrent event (ICE) imputation strategy handling missing outcome data ICE. one ICE imputed non-MAR strategy allowed per subject. method - statistical method used fit imputation models create imputed datasets. antidepressant trial data, dataset data_ice provided. However, can derived , dataset, subject’s first visit affected ICE “study drug discontinuation” corresponds first terminal missing observation. first derive dataset data_ice create 150 Bayesian posterior draws imputation model parameters. 
example, assume imputation strategy ICE Jump Reference (JR) subjects 150 multiple imputed datasets using Bayesian posterior draws imputation model created. Note use set_vars() specifies names key variables within dataset imputation model. Additionally, note whilst vars$group vars$visit added terms imputation model default, interaction , thus inclusion group * visit list covariates. Available imputation methods include: Bayesian multiple imputation - method_bayes() Approximate Bayesian multiple imputation - method_approxbayes() Conditional mean imputation (bootstrap) - method_condmean(type = \"bootstrap\") Conditional mean imputation (jackknife) - method_condmean(type = \"jackknife\") Bootstrapped multiple imputation - method = method_bmlmi() comparison methods, refer stat_specs vignette (Section 3.10). “statistical specifications” vignette (Section 3.10): vignette(\"stat_specs\",package=\"rbmi\"). Available imputation strategies include: Missing Random - \"MAR\" Jump Reference - \"JR\" Copy Reference - \"CR\" Copy Increments Reference - \"CIR\" Last Mean Carried Forward - \"LMCF\"","code":"# create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing values which does not correspond # to a study drug discontinuation. We therefore remove this subject from `dat_ice`. # (In the later imputation step, it will automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] dat_ice #> # A tibble: 43 × 3 #> PATIENT VISIT strategy #> #> 1 1513 5 JR #> 2 1514 5 JR #> 3 1517 5 JR #> 4 1804 7 JR #> 5 2104 7 JR #> 6 2118 5 JR #> 7 2218 6 JR #> 8 2230 6 JR #> 9 2721 5 JR #> 10 2729 5 JR #> # ℹ 33 more rows # Define the names of key variables in our dataset and # the covariates included in the imputation model using `set_vars()` # Note that the covariates argument can also include interaction terms vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: Bayesian multiple imputation with 150 imputed datasets) method <- method_bayes( burn_in = 200, burn_between = 5, n_samples = 150 ) # Create samples for the imputation parameters by running the draws() function set.seed(987) drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) drawObj #> #> Draws Object #> ------------ #> Number of Samples: 150 #> Number of Failed Samples: 0 #> Model Formula: CHANGE ~ 1 + THERAPY + VISIT + BASVAL * VISIT + THERAPY * VISIT #> Imputation Type: random #> Method: #> name: Bayes #> burn_in: 200 #> burn_between: 5 #> same_cov: TRUE #> n_samples: 150"},{"path":"/articles/quickstart.html","id":"impute","dir":"Articles","previous_headings":"","what":"Impute","title":"rbmi: Quickstart","text":"next step use parameters imputation model generate imputed datasets. done via impute() function. function two key inputs: imputation model output draws() reference groups relevant reference-based imputation methods. ’s usage thus: instance, specifying PLACEBO group reference group well DRUG group (standard imputation using reference-based methods). Generally speaking, need see directly interact imputed datasets. 
However, wish inspect , can extracted imputation object using extract_imputed_dfs() helper function, .e.: Note case method_bayes() method_approxbayes(), imputed datasets correspond random imputations original dataset. method_condmean(), first imputed dataset always correspond completed original dataset containing subjects. method_condmean(type=\"jackknife\"), remaining datasets correspond conditional mean imputations leave-one-subject-datasets, whereas method_condmean(type=\"bootstrap\"), subsequent dataset corresponds conditional mean imputation bootstrapped datasets. method_bmlmi(), imputed datasets correspond sets random imputations bootstrapped datasets.","code":"imputeObj <- impute( drawObj, references = c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") ) imputeObj #> #> Imputation Object #> ----------------- #> Number of Imputed Datasets: 150 #> Fraction of Missing Data (Original Dataset): #> 4: 0% #> 5: 8% #> 6: 13% #> 7: 25% #> References: #> DRUG -> PLACEBO #> PLACEBO -> PLACEBO imputed_dfs <- extract_imputed_dfs(imputeObj) head(imputed_dfs[[10]], 12) # first 12 rows of 10th imputed dataset #> PATIENT HAMATOTL PGIIMP RELDAYS VISIT THERAPY GENDER POOLINV BASVAL #> 1 new_pt_1 21 2 7 4 DRUG F 006 32 #> 2 new_pt_1 19 2 14 5 DRUG F 006 32 #> 3 new_pt_1 21 3 28 6 DRUG F 006 32 #> 4 new_pt_1 17 4 42 7 DRUG F 006 32 #> 5 new_pt_2 18 3 7 4 PLACEBO F 006 14 #> 6 new_pt_2 18 2 15 5 PLACEBO F 006 14 #> 7 new_pt_2 14 3 29 6 PLACEBO F 006 14 #> 8 new_pt_2 8 2 42 7 PLACEBO F 006 14 #> 9 new_pt_3 18 3 7 4 DRUG F 006 21 #> 10 new_pt_3 17 3 14 5 DRUG F 006 21 #> 11 new_pt_3 12 3 28 6 DRUG F 006 21 #> 12 new_pt_3 9 3 44 7 DRUG F 006 21 #> HAMDTL17 CHANGE #> 1 21 -11 #> 2 20 -12 #> 3 19 -13 #> 4 17 -15 #> 5 11 -3 #> 6 14 0 #> 7 9 -5 #> 8 5 -9 #> 9 20 -1 #> 10 18 -3 #> 11 16 -5 #> 12 13 -8"},{"path":"/articles/quickstart.html","id":"analyse","dir":"Articles","previous_headings":"","what":"Analyse","title":"rbmi: Quickstart","text":"next step run analysis model imputed 
dataset. done defining analysis function calling analyse() apply function imputed dataset. vignette use ancova() function provided rbmi package fits separate ANCOVA model outcomes visit returns treatment effect estimate corresponding least square means group per visit. Note , similar draws(), ancova() function uses set_vars() function determines names key variables within data covariates (addition treatment group) analysis model adjusted. Please also note names analysis estimates contain “ref” “alt” refer two treatment arms. particular “ref” refers first factor level vars$group necessarily coincide control arm. example, since levels(dat[[vars$group]]) = c(\"DRUG\", PLACEBO), results associated “ref” correspond intervention arm, associated “alt” correspond control arm. Additionally, can use delta argument analyse() perform delta adjustments imputed datasets prior analysis. brief, implemented specifying data.frame contains amount adjustment added longitudinal outcome subject visit, .e.  data.frame must contain columns subjid, visit, delta. appreciated carrying procedure potentially tedious, therefore delta_template() helper function provided simplify . particular, delta_template() returns shell data.frame delta-adjustment set 0 patients. Additionally delta_template() adds several meta-variables onto shell data.frame can used manual derivation manipulation delta-adjustment. example lets say want add delta-value 5 imputed values (.e. values missing original dataset) drug arm. 
implemented follows:","code":"anaObj <- analyse( imputeObj, ancova, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) ) anaObj #> #> Analysis Object #> --------------- #> Number of Results: 150 #> Analysis Function: ancova #> Delta Applied: FALSE #> Analysis Estimates: #> trt_4 #> lsm_ref_4 #> lsm_alt_4 #> trt_5 #> lsm_ref_5 #> lsm_alt_5 #> trt_6 #> lsm_ref_6 #> lsm_alt_6 #> trt_7 #> lsm_ref_7 #> lsm_alt_7 # For reference show the additional meta variables provided delta_template(imputeObj) %>% as_tibble() #> # A tibble: 688 × 8 #> PATIENT VISIT THERAPY is_mar is_missing is_post_ice strategy delta #> #> 1 1503 4 DRUG TRUE FALSE FALSE NA 0 #> 2 1503 5 DRUG TRUE FALSE FALSE NA 0 #> 3 1503 6 DRUG TRUE FALSE FALSE NA 0 #> 4 1503 7 DRUG TRUE FALSE FALSE NA 0 #> 5 1507 4 PLACEBO TRUE FALSE FALSE NA 0 #> 6 1507 5 PLACEBO TRUE FALSE FALSE NA 0 #> 7 1507 6 PLACEBO TRUE FALSE FALSE NA 0 #> 8 1507 7 PLACEBO TRUE FALSE FALSE NA 0 #> 9 1509 4 DRUG TRUE FALSE FALSE NA 0 #> 10 1509 5 DRUG TRUE FALSE FALSE NA 0 #> # ℹ 678 more rows delta_df <- delta_template(imputeObj) %>% as_tibble() %>% mutate(delta = if_else(THERAPY == \"DRUG\" & is_missing , 5, 0)) %>% select(PATIENT, VISIT, delta) delta_df #> # A tibble: 688 × 3 #> PATIENT VISIT delta #> #> 1 1503 4 0 #> 2 1503 5 0 #> 3 1503 6 0 #> 4 1503 7 0 #> 5 1507 4 0 #> 6 1507 5 0 #> 7 1507 6 0 #> 8 1507 7 0 #> 9 1509 4 0 #> 10 1509 5 0 #> # ℹ 678 more rows anaObj_delta <- analyse( imputeObj, ancova, delta = delta_df, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) )"},{"path":"/articles/quickstart.html","id":"pool","dir":"Articles","previous_headings":"","what":"Pool","title":"rbmi: Quickstart","text":"Finally, pool() function can used summarise analysis results across multiple imputed datasets provide overall statistic standard error, confidence intervals p-value hypothesis 
test null hypothesis effect equal 0. Note pooling method automatically derived based method specified original call draws(): method_bayes() method_approxbayes() pooling inference based Rubin’s rules. method_condmean(type = \"bootstrap\") inference either based normal approximation using bootstrap standard error (pool(..., type = \"normal\")) bootstrap percentiles (pool(..., type = \"percentile\")). method_condmean(type = \"jackknife\") inference based normal approximation using jackknife estimate standard error. method = method_bmlmi() inference according methods described von Hippel Bartlett (see stat_specs vignette details) Since used Bayesian multiple imputation vignette, pool() function automatically use Rubin’s rules. table values shown print message poolObj can also extracted using .data.frame() function: outputs gives estimated difference 2.180 (95% CI -0.080 4.439) two groups last visit associated p-value 0.059.","code":"poolObj <- pool( anaObj, conf.level = 0.95, alternative = \"two.sided\" ) poolObj #> #> Pool Object #> ----------- #> Number of Results Combined: 150 #> Method: rubin #> Confidence Level: 0.95 #> Alternative: two.sided #> #> Results: #> #> ================================================== #> parameter est se lci uci pval #> -------------------------------------------------- #> trt_4 -0.092 0.683 -1.439 1.256 0.893 #> lsm_ref_4 -1.616 0.486 -2.576 -0.656 0.001 #> lsm_alt_4 -1.708 0.475 -2.645 -0.77 <0.001 #> trt_5 1.332 0.925 -0.495 3.159 0.152 #> lsm_ref_5 -4.157 0.661 -5.462 -2.852 <0.001 #> lsm_alt_5 -2.825 0.646 -4.1 -1.55 <0.001 #> trt_6 1.927 1.005 -0.059 3.913 0.057 #> lsm_ref_6 -6.097 0.721 -7.522 -4.671 <0.001 #> lsm_alt_6 -4.17 0.7 -5.553 -2.786 <0.001 #> trt_7 2.18 1.143 -0.08 4.439 0.059 #> lsm_ref_7 -6.994 0.826 -8.628 -5.36 <0.001 #> lsm_alt_7 -4.815 0.791 -6.379 -3.25 <0.001 #> -------------------------------------------------- as.data.frame(poolObj) #> parameter est se lci uci pval #> 1 trt_4 -0.09180645 0.6826279 
-1.43949684 1.2558839 8.931772e-01 #> 2 lsm_ref_4 -1.61581996 0.4862316 -2.57577141 -0.6558685 1.093708e-03 #> 3 lsm_alt_4 -1.70762640 0.4749573 -2.64531931 -0.7699335 4.262148e-04 #> 4 trt_5 1.33217342 0.9248889 -0.49452471 3.1588715 1.517381e-01 #> 5 lsm_ref_5 -4.15685743 0.6607638 -5.46196249 -2.8517524 2.982856e-09 #> 6 lsm_alt_5 -2.82468402 0.6455730 -4.09978956 -1.5495785 2.197441e-05 #> 7 trt_6 1.92723926 1.0050687 -0.05860912 3.9130876 5.706399e-02 #> 8 lsm_ref_6 -6.09679600 0.7213490 -7.52226719 -4.6713248 2.489617e-14 #> 9 lsm_alt_6 -4.16955674 0.7003707 -5.55341225 -2.7857012 1.784937e-08 #> 10 trt_7 2.17964370 1.1426199 -0.07965819 4.4389456 5.852211e-02 #> 11 lsm_ref_7 -6.99418014 0.8260358 -8.62803604 -5.3603242 4.048404e-14 #> 12 lsm_alt_7 -4.81453644 0.7913711 -6.37916058 -3.2499123 1.067031e-08"},{"path":"/articles/quickstart.html","id":"code","dir":"Articles","previous_headings":"","what":"Code","title":"rbmi: Quickstart","text":"report code presented vignette.","code":"library(rbmi) library(dplyr) data(\"antidepressant_data\") dat <- antidepressant_data # Use expand_locf to add rows corresponding to visits with missing outcomes to the dataset dat <- expand_locf( dat, PATIENT = levels(dat$PATIENT), # expand by PATIENT and VISIT VISIT = levels(dat$VISIT), vars = c(\"BASVAL\", \"THERAPY\"), # fill with LOCF BASVAL and THERAPY group = c(\"PATIENT\"), order = c(\"PATIENT\", \"VISIT\") ) # Create data_ice and set the imputation strategy to JR for # each patient with at least one missing observation dat_ice <- dat %>% arrange(PATIENT, VISIT) %>% filter(is.na(CHANGE)) %>% group_by(PATIENT) %>% slice(1) %>% ungroup() %>% select(PATIENT, VISIT) %>% mutate(strategy = \"JR\") # In this dataset, subject 3618 has an intermittent missing values which does not correspond # to a study drug discontinuation. We therefore remove this subject from `dat_ice`. # (In the later imputation step, it will automatically be imputed under the default MAR assumption.) 
dat_ice <- dat_ice[-which(dat_ice$PATIENT == 3618),] # Define the names of key variables in our dataset using `set_vars()` # and the covariates included in the imputation model # Note that the covariates argument can also include interaction terms vars <- set_vars( outcome = \"CHANGE\", visit = \"VISIT\", subjid = \"PATIENT\", group = \"THERAPY\", covariates = c(\"BASVAL*VISIT\", \"THERAPY*VISIT\") ) # Define which imputation method to use (here: Bayesian multiple imputation with 150 imputed datasets) method <- method_bayes( burn_in = 200, burn_between = 5, n_samples = 150 ) # Create samples for the imputation parameters by running the draws() function set.seed(987) drawObj <- draws( data = dat, data_ice = dat_ice, vars = vars, method = method, quiet = TRUE ) # Impute the data imputeObj <- impute( drawObj, references = c(\"DRUG\" = \"PLACEBO\", \"PLACEBO\" = \"PLACEBO\") ) # Fit the analysis model on each imputed dataset anaObj <- analyse( imputeObj, ancova, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) ) # Apply a delta adjustment # Add a delta-value of 5 to all imputed values (i.e. those values # which were missing in the original dataset) in the drug arm. 
delta_df <- delta_template(imputeObj) %>% as_tibble() %>% mutate(delta = if_else(THERAPY == \"DRUG\" & is_missing , 5, 0)) %>% select(PATIENT, VISIT, delta) # Repeat the analyses with the adjusted values anaObj_delta <- analyse( imputeObj, ancova, delta = delta_df, vars = set_vars( subjid = \"PATIENT\", outcome = \"CHANGE\", visit = \"VISIT\", group = \"THERAPY\", covariates = c(\"BASVAL\") ) ) # Pool the results poolObj <- pool( anaObj, conf.level = 0.95, alternative = \"two.sided\" )"},{"path":"/articles/retrieved_dropout.html","id":"retrieved-dropout-models-in-a-nutshell","dir":"Articles","previous_headings":"","what":"Retrieved dropout models in a nutshell","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"Retrieved dropout models proposed analysis estimands using treatment policy strategy addressing ICE. models, missing outcomes multiply imputed conditional upon whether occur pre- post-ICE. Retrieved dropout models typically rely extended missing--random (MAR) assumption, .e., assume missing outcome data similar observed data subjects treatment group observed outcome history, ICE status. comprehensive description evaluation retrieved dropout models, refer Guizzaro et al. (2021), Polverejan Dragalin (2020), Noci et al. (2023), Drury et al. (2024), Bell et al. (2024). Broadly, publications find retrieved dropout models reduce bias compared alternative analysis approaches based imputation basic MAR assumption reference-based missing data assumption. However, several issues retrieved dropout models also highlighted. Retrieved dropout models require enough post-ICE data collected inform imputation model. Even relatively small amounts missingness, complex retrieved dropout models may face identifiability issues. Another drawback models general loss power relative reference-based imputation methods, becomes meaningful post-ICE observation percentages 50% increases accelerating rate percentage decreases (Bell et al. 
2024).","code":""},{"path":"/articles/retrieved_dropout.html","id":"sec:dataSimul","dir":"Articles","previous_headings":"","what":"Data simulation using function simulate_data()","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"purposes vignette first create simulated dataset rbmi function simulate_data(). simulate_data() function generates data randomized clinical trial longitudinal continuous outcomes two different types ICEs. Specifically, simulate 1:1 randomized trial active drug (intervention) versus placebo (control) 100 subjects per group 4 post-baseline assessments (3-monthly visits 12 months): mean outcome trajectory placebo group increases linearly 50 baseline (visit 0) 60 visit 4, .e. slope 10 points/year (2.5 points every 3 months). mean outcome trajectory intervention group identical placebo group month 6. month 6 onward, slope decreases 50% 5 points/year (.e. 1.25 points every 3 months). covariance structure baseline follow-values groups implied random intercept slope model standard deviation 5 intercept slope, correlation 0.25. addition, independent residual error standard deviation 2.5 added assessment. probability intercurrent event study drug discontinuation visit calculated according logistic model depends observed outcome visit. Specifically, visit-wise discontinuation probability 3% 4% control intervention group, respectively, specified case observed outcome equal 50 (mean value baseline). odds discontinuation simulated increase +10% +1 point increase observed outcome. Study drug discontinuation simulated effect mean trajectory placebo group. intervention group, subjects discontinue follow slope mean trajectory placebo group time point onward. compatible copy increments reference (CIR) assumption. Study dropout study drug discontinuation visit occurs probability 50% leading missing outcome data time point onward. 
function simulate_data() requires 3 arguments (see function documentation help(simulate_data) details): pars_c: simulation parameters control group. pars_t: simulation parameters intervention group. post_ice1_traj: Specifies observed outcomes ICE1 simulated. , report data according specifications can simulated function simulate_data(): frequency ICE proportion data collected ICE impacts variance treatment effect retrieved dropout models. example, large proportion ICE combined small proportion data collected ICE might result substantial variance inflation, especially complex retrieved dropout models. proportion subjects ICE proportion subjects withdrew simulated study summarized : study 23% study participants discontinued study treatment control arm 24% intervention arm. Approximately half participants discontinued treatment dropped-study discontinuation visit leading missing outcomes subsequent visits.","code":"library(rbmi) library(dplyr) ## ## Attaching package: 'dplyr' ## The following objects are masked from 'package:stats': ## ## filter, lag ## The following objects are masked from 'package:base': ## ## intersect, setdiff, setequal, union set.seed(1392) time <- c(0, 3, 6, 9, 12) # Mean trajectory control muC <- c(50.0, 52.5, 55.0, 57.5, 60.0) # Mean trajectory intervention muT <- c(50.0, 52.5, 55.0, 56.25, 57.50) # Create Sigma sd_error <- 2.5 covRE <- rbind( c(25.0, 6.25), c(6.25, 25.0) ) Sigma <- cbind(1, time / 12) %*% covRE %*% rbind(1, time / 12) + diag(sd_error^2, nrow = length(time)) # Set simulation parameters of the control group parsC <- set_simul_pars( mu = muC, sigma = Sigma, n = 100, # sample size prob_ice1 = 0.03, # prob of discontinuation for outcome equal to 50 or_outcome_ice1 = 1.10, # +1 point increase => +10% odds of discontinuation prob_post_ice1_dropout = 0.5 # dropout rate following discontinuation ) # Set simulation parameters of the intervention group parsT <- parsC parsT$mu <- muT parsT$prob_ice1 <- 0.04 # Simulate data data <- 
simulate_data( pars_c = parsC, pars_t = parsT, post_ice1_traj = \"CIR\" # Assumption about post-ice trajectory ) %>% select(-c(outcome_noICE, ind_ice2)) # remove unnecessary columns head(data) ## id visit group outcome_bl ind_ice1 dropout_ice1 outcome ## 1 id_1 0 Control 53.35397 0 0 53.35397 ## 2 id_1 1 Control 53.35397 0 0 55.15100 ## 3 id_1 2 Control 53.35397 0 0 59.81038 ## 4 id_1 3 Control 53.35397 0 0 61.59709 ## 5 id_1 4 Control 53.35397 0 0 67.08044 ## 6 id_2 0 Control 53.31025 0 0 53.31025 # Compute endpoint of interest: change from baseline data <- data %>% filter(visit != \"0\") %>% mutate( change = outcome - outcome_bl, visit = factor(visit, levels = unique(visit)) ) data %>% group_by(visit) %>% summarise( freq_disc_ctrl = mean(ind_ice1[group == \"Control\"] == 1), freq_dropout_ctrl = mean(dropout_ice1[group == \"Control\"] == 1), freq_disc_interv = mean(ind_ice1[group == \"Intervention\"] == 1), freq_dropout_interv = mean(dropout_ice1[group == \"Intervention\"] == 1) ) ## # A tibble: 4 × 5 ## visit freq_disc_ctrl freq_dropout_ctrl freq_disc_interv freq_dropout_interv ## ## 1 1 0.03 0.01 0.06 0.03 ## 2 2 0.1 0.03 0.1 0.04 ## 3 3 0.19 0.09 0.17 0.06 ## 4 4 0.23 0.12 0.24 0.1"},{"path":"/articles/retrieved_dropout.html","id":"estimators-based-on-retrieved-dropout-models","dir":"Articles","previous_headings":"","what":"Estimators based on retrieved dropout models","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"consider retrieved dropout methods model pre- post-ICE outcomes jointly including time-varying ICE indicators imputation model, .e. allow occurrence ICE impact mean structure covariance matrix. Imputation missing outcomes performed MAR assumption including observed data. analysis completed data, use standard ANCOVA model outcome follow-visit, respectively, treatment assignment main covariate adjustment baseline outcome. 
Specifically, consider following imputation models: Imputation basic MAR assumption (basic MAR): model ignores whether outcome observed pre- post-ICE, .e. retrieved dropout model. Rather, asymptotically equivalent standard MMRM model analogous “MI1” model Bell et al. (2024). difference “MI1” model rbmi based sequential imputation rather, missing outcomes imputed simultaneously based MMRM-type imputation model. include baseline outcome visit treatment group visit interaction terms imputation model form: change ~ outcome_bl*visit + group*visit. Retrieved dropout model 1 (RD1): model uses following imputation model: change ~ outcome_bl*visit + group*visit + time_since_ice1*group, time_since_ice1 set 0 treatment discontinuation time treatment discontinuation (months) subsequent visits. implies change slope outcome trajectories ICE, modeled separately treatment arm. model similar “TV2-MAR” estimator Noci et al. (2023). Compared basic MAR model, model requires estimation 2 additional parameters. Retrieved dropout model 2 (RD2): model uses following imputation model: change ~ outcome_bl*visit + group*visit + ind_ice1*group*visit. assumes constant shift outcomes ICE, modeled separately treatment arm visit. model analogous “MI2” model Bell et al. (2024). Compared basic MAR model, model requires estimation 2 times “number visits” additional parameters. makes different though rather weaker assumptions RD1 model might also harder fit post-ICE data collection sparse visits.","code":""},{"path":"/articles/retrieved_dropout.html","id":"implementation-of-the-defined-retrieved-dropout-models-in-rbmi","dir":"Articles","previous_headings":"","what":"Implementation of the defined retrieved dropout models in rbmi","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"rbmi supports inclusion time-varying covariates imputation model. requirement time-varying covariate non-missing visits including outcome might missing. 
Imputation performed (extended) MAR assumption. Therefore, imputation approaches implemented rbmi valid yield comparable estimators standard errors. vignette, used conditional mean imputation approach combined jackknife.","code":""},{"path":"/articles/retrieved_dropout.html","id":"basic-mar-model","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Basic MAR model","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"","code":"# Define key variables for the imputation and analysis models vars <- set_vars( subjid = \"id\", visit = \"visit\", outcome = \"change\", group = \"group\", covariates = c(\"outcome_bl*visit\", \"group*visit\") ) vars_an <- vars vars_an$covariates <- \"outcome_bl\" # Define imputation method method <- method_condmean(type = \"jackknife\") draw_obj <- draws( data = data, data_ice = NULL, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj ) ana_obj <- analyse( impute_obj, vars = vars_an ) pool_obj_basicMAR <- pool(ana_obj) pool_obj_basicMAR ## ## Pool Object ## ----------- ## Number of Results Combined: 1 + 200 ## Method: jackknife ## Confidence Level: 0.95 ## Alternative: two.sided ## ## Results: ## ## ================================================== ## parameter est se lci uci pval ## -------------------------------------------------- ## trt_1 -0.991 0.557 -2.083 0.101 0.075 ## lsm_ref_1 3.117 0.401 2.331 3.902 <0.001 ## lsm_alt_1 2.126 0.391 1.36 2.892 <0.001 ## trt_2 -0.937 0.611 -2.134 0.26 0.125 ## lsm_ref_2 5.814 0.447 4.938 6.69 <0.001 ## lsm_alt_2 4.877 0.414 4.066 5.688 <0.001 ## trt_3 -1.491 0.743 -2.948 -0.034 0.045 ## lsm_ref_3 7.725 0.526 6.694 8.757 <0.001 ## lsm_alt_3 6.234 0.522 5.211 7.258 <0.001 ## trt_4 -2.872 0.945 -4.723 -1.02 0.002 ## lsm_ref_4 10.787 0.661 9.491 12.083 <0.001 ## lsm_alt_4 7.915 0.67 6.603 9.228 <0.001 ## 
--------------------------------------------------"},{"path":"/articles/retrieved_dropout.html","id":"retrieved-dropout-model-1-rd1","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Retrieved dropout model 1 (RD1)","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"","code":"# derive variable \"time_since_ice1\" (time since ICE in months) data <- data %>% group_by(id) %>% mutate(time_since_ice1 = cumsum(ind_ice1)*3) vars$covariates <- c(\"outcome_bl*visit\", \"group*visit\", \"time_since_ice1*group\") draw_obj <- draws( data = data, data_ice = NULL, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj ) ana_obj <- analyse( impute_obj, vars = vars_an ) pool_obj_RD1 <- pool(ana_obj) pool_obj_RD1 ## ## Pool Object ## ----------- ## Number of Results Combined: 1 + 200 ## Method: jackknife ## Confidence Level: 0.95 ## Alternative: two.sided ## ## Results: ## ## ================================================== ## parameter est se lci uci pval ## -------------------------------------------------- ## trt_1 -0.931 0.558 -2.025 0.163 0.095 ## lsm_ref_1 3.119 0.4 2.334 3.903 <0.001 ## lsm_alt_1 2.188 0.393 1.419 2.957 <0.001 ## trt_2 -0.805 0.616 -2.013 0.403 0.192 ## lsm_ref_2 5.822 0.445 4.949 6.695 <0.001 ## lsm_alt_2 5.017 0.424 4.186 5.849 <0.001 ## trt_3 -1.263 0.758 -2.748 0.222 0.096 ## lsm_ref_3 7.749 0.52 6.729 8.768 <0.001 ## lsm_alt_3 6.486 0.549 5.41 7.562 <0.001 ## trt_4 -2.506 0.969 -4.406 -0.606 0.01 ## lsm_ref_4 10.837 0.653 9.558 12.116 <0.001 ## lsm_alt_4 8.331 0.718 6.924 9.737 <0.001 ## --------------------------------------------------"},{"path":"/articles/retrieved_dropout.html","id":"retrieved-dropout-model-2-rd2","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Retrieved dropout model 2 (RD2)","title":"rbmi: Implementation of retrieved-dropout models using 
rbmi","text":"","code":"vars$covariates <- c(\"outcome_bl*visit\", \"group*visit\", \"ind_ice1*group*visit\") draw_obj <- draws( data = data, data_ice = NULL, vars = vars, method = method, quiet = TRUE ) impute_obj <- impute( draw_obj ) ana_obj <- analyse( impute_obj, vars = vars_an ) pool_obj_RD2 <- pool(ana_obj) pool_obj_RD2 ## ## Pool Object ## ----------- ## Number of Results Combined: 1 + 200 ## Method: jackknife ## Confidence Level: 0.95 ## Alternative: two.sided ## ## Results: ## ## ================================================== ## parameter est se lci uci pval ## -------------------------------------------------- ## trt_1 -0.927 0.558 -2.021 0.167 0.097 ## lsm_ref_1 3.125 0.4 2.341 3.908 <0.001 ## lsm_alt_1 2.198 0.395 1.424 2.972 <0.001 ## trt_2 -0.889 0.612 -2.089 0.311 0.146 ## lsm_ref_2 5.837 0.443 4.97 6.705 <0.001 ## lsm_alt_2 4.948 0.421 4.124 5.772 <0.001 ## trt_3 -1.305 0.757 -2.788 0.178 0.085 ## lsm_ref_3 7.648 0.54 6.59 8.707 <0.001 ## lsm_alt_3 6.343 0.528 5.308 7.378 <0.001 ## trt_4 -2.617 0.975 -4.528 -0.706 0.007 ## lsm_ref_4 10.883 0.665 9.58 12.186 <0.001 ## lsm_alt_4 8.267 0.715 6.866 9.667 <0.001 ## --------------------------------------------------"},{"path":"/articles/retrieved_dropout.html","id":"brief-summary-of-results","dir":"Articles","previous_headings":"4 Implementation of the defined retrieved dropout models in rbmi","what":"Brief summary of results","title":"rbmi: Implementation of retrieved-dropout models using rbmi","text":"point estimators treatment effect last visit -2.872, -2.506, -2.617 basic MAR, RD1, RD2 estimators, respectively, .e. slightly smaller retrieved dropout models compared basic MAR model. corresponding standard errors 3 estimators 0.945, 0.969, 0.975, .e. 
slightly larger for the retrieved dropout models compared to the basic MAR model.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"scope-of-this-document","dir":"Articles","previous_headings":"","what":"Scope of this document","title":"rbmi: Statistical Specifications","text":"This document describes the statistical methods implemented in the rbmi R package for standard and reference-based multiple imputation of continuous longitudinal outcomes. The package implements three classes of multiple imputation (MI) approaches: Conventional MI methods based on Bayesian (or approximate Bayesian) posterior draws of the model parameters combined with Rubin’s rules to make inferences as described in Carpenter, Roger, and Kenward (2013) and Cro et al. (2020). Conditional mean imputation methods combined with re-sampling techniques as described in Wolbers et al. (2022). Bootstrapped MI methods as described in von Hippel and Bartlett (2021). The document is structured as follows: we first provide an informal introduction to estimands and the corresponding treatment effect estimation based on MI (section 2). The core of the document is section 3, which describes the statistical methodology in detail and also contains a comparison of the implemented approaches (section 3.10). The link between the theory and the functions included in the package rbmi is described in section 4. We conclude with a comparison of the package to alternative software implementations of reference-based imputation methods (section 5).","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"estimands","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods","what":"Estimands","title":"rbmi: Statistical Specifications","text":"The ICH E9(R1) addendum on estimands and sensitivity analyses describes a systematic approach to ensure alignment among clinical trial objectives, trial execution/conduct, statistical analyses, and the interpretation of results (ICH E9 working group (2019)). As per the addendum, an estimand is a precise description of the treatment effect reflecting the clinical question posed by the trial objective, and it summarizes the population-level outcomes when patients under the different treatment conditions are compared.
One important attribute of an estimand is the list of possible intercurrent events (ICEs), i.e. events occurring after treatment initiation that affect either the interpretation or the existence of the measurements associated with the clinical question of interest, and the definition of appropriate strategies to deal with these ICEs. The three strategies relevant for the purpose of this document are the hypothetical strategy, the treatment policy strategy, and the composite strategy. Under the hypothetical strategy, a scenario is envisaged in which the ICE would not occur. In this scenario, endpoint values after the ICE are not directly observable and are treated using models for missing data. Under the treatment policy strategy, the treatment effect in the presence of ICEs is targeted and analyses are based on the observed outcomes regardless of whether the subject had an ICE or not. Under the composite strategy, the ICE is included as a component of the endpoint.","code":""},{"path":"/articles/stat_specs.html","id":"alignment-between-the-estimand-and-the-estimation-method","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods","what":"Alignment between the estimand and the estimation method","title":"rbmi: Statistical Specifications","text":"The ICH E9(R1) addendum distinguishes between ICEs and missing data (ICH E9 working group (2019)). Whereas ICEs such as treatment discontinuations reflect clinical practice, the amount of missing data can be minimized by the conduct of the clinical trial. However, there are many connections between missing data and ICEs. For example, it is often difficult to retain subjects in a clinical trial after treatment discontinuation, and a subject’s dropout from the trial leads to missing data. As another example, outcome values after ICEs that are addressed using the hypothetical strategy are not directly observable in the hypothetical scenario. Consequently, observed outcome values after such ICEs are typically discarded and treated as missing data. The addendum proposes that the estimation methods which address the problem presented by the missing data be selected to align with the estimand. A recent overview of methods that align the estimator with the estimand is given in Mallinckrodt et al. (2020). A short introduction to estimation methods for studies with longitudinal endpoints can also be found in Wolbers et al. (2022).
One prominent statistical method for this purpose is multiple imputation (MI), the target of the rbmi package.","code":""},{"path":"/articles/stat_specs.html","id":"missing-data-prior-to-ices","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Missing data prior to ICEs","title":"rbmi: Statistical Specifications","text":"Missing data may occur in subjects without an ICE or prior to the occurrence of an ICE. For missing outcomes that are not associated with an ICE, it is often plausible to impute them under a missing-at-random (MAR) assumption using a standard MMRM imputation model for the longitudinal outcomes. Informally, MAR occurs if the missing data can be fully accounted for by the baseline variables included in the model and the observed longitudinal outcomes, provided the model is correctly specified.","code":""},{"path":"/articles/stat_specs.html","id":"implementation-of-the-hypothetical-strategy","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Implementation of the hypothetical strategy","title":"rbmi: Statistical Specifications","text":"The MAR imputation model described above is often also a good starting point for imputing data after an ICE handled using the hypothetical strategy (Mallinckrodt et al. (2020)). Informally, this assumes that unobserved values after the ICE are similar to the observed data from subjects without the ICE who remained in follow-up. However, in some situations, it may be more reasonable to assume that the missingness is “informative”, i.e. that it indicates a systematically better or worse outcome than that of the observed subjects. In such situations, MNAR imputation with a \\(\\delta\\)-adjustment can be explored as a sensitivity analysis. \\(\\delta\\)-adjustments add a fixed or random quantity to the imputations in order to make the imputed outcomes systematically worse or better than the observed ones, as described in Cro et al. (2020).
In rbmi, only fixed \\(\\delta\\)-adjustments are implemented.","code":""},{"path":"/articles/stat_specs.html","id":"implementation-of-the-treatment-policy-strategy","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Implementation of the treatment policy strategy","title":"rbmi: Statistical Specifications","text":"Ideally, data collection continues after an ICE handled with the treatment policy strategy so that no missing data arises. Indeed, post-ICE data are increasingly systematically collected in RCTs. However, despite best efforts, missing data after an ICE such as study treatment discontinuation may still occur if a subject drops out of the study after the discontinuation. It is difficult to give definite recommendations regarding the implementation of the treatment policy strategy in the presence of missing data at this stage because the optimal method is highly context dependent and the topic of ongoing statistical research. If ICEs are thought to have a negligible effect on the efficacy outcomes, a standard MAR-based imputation which ignores whether an outcome was observed pre- or post-ICE may be appropriate. In contrast, an ICE such as treatment discontinuation may be expected to have a substantial impact on the efficacy outcomes. In such settings, the MAR assumption may still be plausible after conditioning on the subject’s time-varying treatment status (Guizzaro et al. (2021)). In this case, one option is to impute missing post-discontinuation data based on subjects who also discontinued treatment but continued to be followed up. Another option, which may require somewhat less post-discontinuation data, is to include all subjects in the imputation procedure and to model the post-discontinuation data using time-varying treatment status indicators (Guizzaro et al. (2021), Polverejan and Dragalin (2020), Noci et al. (2023), Drury et al. (2024), Bell et al. (2024)). In this approach, the post-ICE outcomes are included in every step of the analysis, including the fitting of the imputation model. It assumes that the ICEs may impact post-ICE outcomes but that otherwise the missingness is non-informative.
The approach also assumes that the time-varying covariates do not contain missing values, that deviations of the outcomes after the ICE are correctly modeled by the time-varying covariates, and that sufficient post-ICE data are available to inform the regression coefficients of the time-varying covariates. The resulting imputation models are called “retrieved dropout models” in the statistical literature. Such models tend to have less bias than alternative analysis approaches based on imputation under a basic MAR assumption or a reference-based missing data assumption. However, retrieved dropout models are associated with inflated standard errors of the associated treatment effect estimators, which has a detrimental effect on study power. In particular, if the observed post-ICE observation percentage falls below 50%, the power loss can be quite dramatic (Bell et al. 2024). We illustrate the implementation of retrieved dropout models in the vignette “Implementation of retrieved-dropout models using rbmi” (vignette(topic = \"retrieved_dropout\", package = \"rbmi\")). In many trial settings, subjects discontinue from the randomized treatment. In some settings, treatment discontinuation rates are higher and it is more difficult to retain subjects in the trial after treatment discontinuation, leading to sparse data collection after treatment discontinuation. In such settings, the amount of available data after treatment discontinuation may be insufficient to inform an imputation model which explicitly models the post-discontinuation data. Depending on the disease area and the anticipated mechanism of action of the intervention, it may then be plausible to assume that subjects in the intervention group behave similarly to subjects in the control group after an ICE such as treatment discontinuation. In this case, reference-based imputation methods are an option (Mallinckrodt et al. (2020)). Reference-based imputation methods formalize this idea by imputing missing data in the intervention group based on data from the control or reference group. For a general description and review of reference-based imputation methods, we refer to Carpenter, Roger, and Kenward (2013), Cro et al. (2020), I. White, Royes, and Best (2020), and Wolbers et al. (2022).
For a technical description of the implemented statistical methodology for reference-based imputation, we refer to section 3 (in particular section 3.4).","code":""},{"path":"/articles/stat_specs.html","id":"implementation-of-the-composite-strategy","dir":"Articles","previous_headings":"2 Introduction to estimands and estimation methods > 2.2 Alignment between the estimand and the estimation method","what":"Implementation of the composite strategy","title":"rbmi: Statistical Specifications","text":"The composite strategy is typically applied to binary or time-to-event outcomes but can also be used for continuous outcomes by ascribing a suitably unfavorable value to patients who experience the ICEs for which a composite strategy is defined. One possibility to implement this is to use MI with a \\(\\delta\\)-adjustment of the post-ICE data as described in Darken et al. (2020).","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"sec:methodsOverview","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Overview of the imputation procedure","title":"rbmi: Statistical Specifications","text":"Analyses of datasets with missing data always rely on missing data assumptions. The methods described here can be used to produce valid imputations under a MAR assumption or under reference-based imputation assumptions. MNAR imputation based on fixed \\(\\delta\\)-adjustments, which is typically used for sensitivity analyses such as tipping-point analyses, is also supported. Three general imputation approaches are implemented in rbmi: Conventional MI based on Bayesian (or approximate Bayesian) posterior draws from the imputation model combined with Rubin’s rules for inference as described in Carpenter, Roger, and Kenward (2013) and Cro et al. (2020). Conditional mean imputation based on the REML estimate of the imputation model combined with resampling techniques (jackknife or bootstrap) for inference as described in Wolbers et al. (2022).
Bootstrapped MI methods based on REML estimates of the imputation model as described in von Hippel and Bartlett (2021).","code":""},{"path":"/articles/stat_specs.html","id":"conventional-mi","dir":"Articles","previous_headings":"3 Statistical methodology > 3.1 Overview of the imputation procedure","what":"Conventional MI","title":"rbmi: Statistical Specifications","text":"Conventional MI approaches include the following steps: Base imputation model fitting step (Section 3.3): Fit a Bayesian multivariate normal mixed model for repeated measures (MMRM) to the observed longitudinal outcomes with the exclusion of data after ICEs for which reference-based missing data imputation is desired (Section 3.3.3). Draw \\(M\\) posterior samples of the estimated parameters (regression coefficients and covariance matrices) from this model. Alternatively, \\(M\\) approximate posterior draws from the posterior distribution can be sampled by repeatedly applying conventional restricted maximum-likelihood (REML) parameter estimation of the MMRM model to nonparametric bootstrap samples of the original dataset (Section 3.3.4). Imputation step (Section 3.4): Take a single sample \\(m\\) (\\(m \\in 1,\\ldots, M\\)) from the posterior distribution of the imputation model parameters. For each subject, use the sampled parameters and the defined imputation strategy to determine the mean and covariance matrix describing the subject’s marginal outcome distribution of all longitudinal outcome assessments (i.e. of both observed and missing outcomes). For all subjects, construct the conditional multivariate normal distribution of the missing outcomes given the observed outcomes (not including observed outcomes after ICEs for which a reference-based assumption is desired). For each subject, draw a single sample from this conditional distribution to impute the missing outcomes, leading to a complete imputed dataset. For sensitivity analyses, a pre-defined \\(\\delta\\)-adjustment may be applied to the imputed data prior to the analysis step (Section 3.5). Analysis step (Section 3.6): Analyze the imputed dataset using the analysis model (e.g. ANCOVA), resulting in a point estimate and standard error (with corresponding degrees of freedom) for the treatment effect. Pooling step and inference (Section 3.7): Repeat steps 2. and 3. for each
posterior sample \\(m\\), resulting in \\(M\\) complete datasets, \\(M\\) point estimates of the treatment effect, and \\(M\\) standard errors (with corresponding degrees of freedom). Pool the \\(M\\) treatment effect estimates, standard errors, and degrees of freedom using the rules of Barnard and Rubin to obtain the final pooled treatment effect estimator, standard error, and degrees of freedom.","code":""},{"path":"/articles/stat_specs.html","id":"conditional-mean-imputation","dir":"Articles","previous_headings":"3 Statistical methodology > 3.1 Overview of the imputation procedure","what":"Conditional mean imputation","title":"rbmi: Statistical Specifications","text":"The conditional mean imputation approach includes the following steps: Base imputation model fitting step (Section 3.3): Fit a conventional multivariate normal/MMRM model using restricted maximum likelihood (REML) to the observed longitudinal outcomes with the exclusion of data after ICEs for which reference-based missing data imputation is desired (Section 3.3.2). Imputation step (Section 3.4): For each subject, use the fitted parameters from step 1. to construct the conditional distribution of the missing outcomes given the observed outcomes (not including observed outcomes after ICEs for which reference-based missing data imputation is desired) as described below. For each subject, impute the missing data deterministically by the mean of this conditional distribution, leading to a complete imputed dataset. For sensitivity analyses, a pre-defined \\(\\delta\\)-adjustment may be applied to the imputed data prior to the analysis step (Section 3.5). Analysis step (Section 3.6): Apply the analysis model (e.g. ANCOVA) to the completed dataset, resulting in a point estimate of the treatment effect. Jackknife or bootstrap inference step (Section 3.8): Inference for the treatment effect estimate from step 3. is based on re-sampling techniques. Both the jackknife and the bootstrap are supported. Importantly, both methods require repeating all steps of the imputation procedure (i.e. the base
imputation, conditional mean imputation, and analysis steps) on all resampled datasets.","code":""},{"path":"/articles/stat_specs.html","id":"bootstrapped-mi","dir":"Articles","previous_headings":"3 Statistical methodology > 3.1 Overview of the imputation procedure","what":"Bootstrapped MI","title":"rbmi: Statistical Specifications","text":"The bootstrapped MI approach includes the following steps: Base imputation model fitting step (Section 3.3): Apply conventional restricted maximum-likelihood (REML) parameter estimation of the MMRM model to \\(B\\) nonparametric bootstrap samples of the original dataset, using the observed longitudinal outcomes with the exclusion of data after ICEs for which reference-based missing data imputation is desired. Imputation step (Section 3.4): Take the bootstrapped dataset \\(b\\) (\\(b \\in 1,\\ldots, B\\)) and the corresponding imputation model parameter estimates. For each subject (of the bootstrapped dataset), use the parameter estimates and the defined strategy for dealing with ICEs to determine the mean and covariance matrix describing the subject’s marginal outcome distribution of all longitudinal outcome assessments (i.e. of both observed and missing outcomes). For all subjects (of the bootstrapped dataset), construct the conditional multivariate normal distribution of the missing outcomes given the observed outcomes (not including observed outcomes after ICEs for which reference-based missing data imputation is desired). For each subject (of the bootstrapped dataset), draw \\(D\\) samples from these conditional distributions to impute the missing outcomes, leading to \\(D\\) complete imputed datasets for bootstrap sample \\(b\\). For sensitivity analyses, a pre-defined \\(\\delta\\)-adjustment may be applied to the imputed data prior to the analysis step (Section 3.5). Analysis step (Section 3.6): Analyze the \\(B\\times D\\) imputed datasets using the analysis model (e.g. ANCOVA), resulting in \\(B\\times D\\) point estimates of the treatment effect.
Pooling step inference (Section 3.9) Pool \\(B\\times D\\) treatment effect estimates described von Hippel Bartlett (2021) obtain final pooled treatment effect estimate, standard error, degrees freedom.","code":""},{"path":"/articles/stat_specs.html","id":"setting-notation-and-missing-data-assumptions","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Setting, notation, and missing data assumptions","title":"rbmi: Statistical Specifications","text":"Assume data study \\(n\\) subjects total subject \\(\\) (\\(=1,\\ldots,n\\)) \\(J\\) scheduled follow-visits outcome interest assessed. applications, data randomized trial intervention vs control group treatment effect interest comparison outcomes specific visit randomized groups. However, single-arm trials multi-arm trials principle also supported rbmi implementation. Denote observed outcome vector length \\(J\\) subject \\(\\) \\(Y_i\\) (missing assessments coded NA (available)) non-missing missing components \\(Y_{!}\\) \\(Y_{?}\\), respectively. default, imputation missing outcomes \\(Y_{}\\) performed MAR assumption rbmi. Therefore, missing data following ICE handled using MAR imputation, compatible default assumption. discussed Section 2, MAR assumption often good starting point implementing hypothetical strategy. also note observed outcome data ICE handled using hypothetical strategy compatible strategy. Therefore, assume post-ICE data ICEs handled using hypothetical strategy already set NA \\(Y_i\\) prior calling rbmi functions. However, observed outcomes ICEs handled using treatment policy strategy included \\(Y_i\\) compatible strategy. Subjects may also experience one ICE missing data imputation according reference-based imputation method foreseen. subject \\(\\) ICE, denote first visit affected ICE \\(\\tilde{t}_i \\\\{1,\\ldots,J\\}\\). subjects, set \\(\\tilde{t}_i=\\infty\\). subject’s outcome vector setting observed outcomes visit \\(\\tilde{t}_i\\) onwards missing (.e. 
NA) denoted \\(Y'_i\\) corresponding data vector removal NA elements \\(Y'_{!}\\). MNAR \\(\\delta\\)-adjustments added imputed datasets formal imputation steps. covered separate section (Section 3.5).","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"sec:imputationModelSpecs","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Included data and model specification","title":"rbmi: Statistical Specifications","text":"purpose imputation model estimate (covariate-dependent) mean trajectories covariance matrices group absence ICEs handled using reference-based imputation methods. Conventionally, publications reference-based imputation methods implicitly assumed corresponding post-ICE data missing subjects (Carpenter, Roger, Kenward (2013)). also allow situation post-ICE data available subjects needs imputed using reference-based methods others. However, observed data ICEs reference-based imputation methods specified compatible imputation model described therefore removed considered missing purpose estimating imputation model, purpose . example, patient ICE addressed reference-based method outcomes ICE collected, post-ICE outcomes excluded fitting base imputation model (included following steps). , base imputation model fitted \\(Y'_{!}\\) \\(Y_{!}\\). exclude data, imputation model mistakenly estimate mean trajectories based mixture observed pre- post-ICE data relevant reference-based imputations. Observed post-ICE outcomes control reference group also excluded base imputation model user specifies reference-based imputation strategy ICEs. ensures ICE impact data included imputation model regardless whether ICE occurred control intervention group. hand, imputation reference group based MAR assumption even reference-based imputation methods may preferable settings include post-ICE data control group base imputation model. 
can implemented specifying MAR strategy ICE control group reference-based strategy ICE intervention group. base imputation model longitudinal outcomes \\(Y'_i\\) assumes mean structure linear function covariates. Full flexibility specification linear predictor model supported. minimum covariates include treatment group, (categorical) visit, treatment--visit interactions. Typically, covariates including baseline outcome also included. External time-varying covariates (e.g. calendar time visit) well internal time-varying (e.g. time-varying indicators treatment discontinuation initiation rescue treatment) may principle also included indicated (Guizzaro et al. (2021)). Missing covariate values allowed. means values time-varying covariates must non-missing every visit regardless whether outcome measured missing. Denote \\(J\\times p\\) design matrix subject \\(\\) corresponding mean structure model \\(X_i\\) matrix removal rows corresponding missing outcomes \\(Y'_{!}\\) \\(X'_{!}\\). \\(p\\) number parameters mean structure model elements \\(Y'_{!}\\). base imputation model observed outcomes defined : \\[ Y'_{!} = X'_{!}\\beta + \\epsilon_{!} \\mbox{ } \\epsilon_{!}\\sim N(0,\\Sigma_{!!})\\] \\(\\beta\\) vector regression coefficients \\(\\Sigma_{!!}\\) covariance matrix obtained complete-data \\(J\\times J\\)-covariance matrix \\(\\Sigma\\) omitting rows columns corresponding missing outcome assessments subject \\(\\). Typically, common unstructured covariance matrix subjects assumed \\(\\Sigma\\) separate covariate matrices per treatment group also supported. Indeed, implementation also supports specification separate covariate matrices according arbitrarily defined categorical variable groups subjects disjoint subset. example, useful different covariance matrices suspected different subject strata. Finally, imputation methods described rely Bayesian model fitting MCMC, flexibility choice covariance structure, .e. 
unstructured (default), heterogeneous Toeplitz, heterogeneous compound symmetry, AR(1) covariance structures supported.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationModelREML","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Restricted maximum likelihood estimation (REML)","title":"rbmi: Statistical Specifications","text":"Frequentist parameter estimation base imputation based REML. use REML improved alternative maximum likelihood (ML) covariance parameter estimation originally proposed Patterson Thompson (1971). Since , become default method parameter estimation linear mixed effects models. rbmi allows choose ML REML methods estimate model parameters, REML default option.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationModelBayes","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Bayesian model fitting","title":"rbmi: Statistical Specifications","text":"Bayesian imputation model fitted R package rstan (Stan Development Team (2020)). rstan R interface Stan. Stan powerful flexible statistical software developed dedicated team implements Bayesian inference state---art MCMC sampling procedures. multivariate normal model missing data specified section 3.3.1 can considered generalization models described Stan user’s guide (see Stan Development Team (2020, sec. 3.5)). prior distributions SAS implementation “five macros” used (Roger (2021)), .e. improper flat priors regression coefficients weakly informative inverse Wishart prior covariance matrix (matrices). Specifically, let \\(S \\\\mathbb{R}^{J \\times J}\\) symmetric positive definite matrix \\(\\nu \\(J-1, \\infty)\\). 
symmetric positive definite matrix \\(x \\\\mathbb{R}^{J \\times J}\\) density: \\[ \\text{InvWish}(x \\vert \\nu, S) = \\frac{1}{2^{\\nu J/2}} \\frac{1}{\\Gamma_J(\\frac{\\nu}{2})} \\vert S \\vert^{\\nu/2} \\vert x \\vert ^{-(\\nu + J + 1)/2} \\text{exp}(-\\frac{1}{2} \\text{tr}(Sx^{-1})). \\] \\(\\nu > J+1\\) mean given : \\[ E[x] = \\frac{S}{\\nu - J - 1}. \\] choose \\(S\\) equal estimated covariance matrix frequentist REML fit \\(\\nu = J+2\\) lowest degrees freedom guarantee finite mean. Setting degrees freedom low \\(\\nu\\) ensures prior little impact posterior. Moreover, choice allows interpret parameter \\(S\\) mean prior distribution. “five macros”, MCMC algorithm initialized parameters frequentist REML fit (see section 3.3.2). described , using weakly informative priors parameters. Therefore, Markov chain essentially starting targeted stationary posterior distribution minimal amount burn-chain required.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationModelBoot","dir":"Articles","previous_headings":"3 Statistical methodology > 3.3 The base imputation model","what":"Approximate Bayesian posterior draws via the bootstrap","title":"rbmi: Statistical Specifications","text":"Several authors suggested stabler way get Bayesian posterior draws imputation model bootstrap incomplete data calculate REML estimates bootstrap sample (Little Rubin (2002), Efron (1994), Honaker King (2010), von Hippel Bartlett (2021)). method proper REML estimates bootstrap samples asymptotically equivalent sample posterior distribution may provide additional robustness model misspecification (Little Rubin (2002, sec. 10.2.3, part 6), Honaker King (2010)). 
order retain balance treatment groups stratification factors across bootstrap samples, user able provide stratification variables bootstrap rbmi implementation.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"sec:imputatioMNAR","dir":"Articles","previous_headings":"3 Statistical methodology > 3.4 Imputation step","what":"Marginal imputation distribution for a subject - MAR case","title":"rbmi: Statistical Specifications","text":"subject \\(\\), marginal distribution complete \\(J\\)-dimensional outcome vector assessment visits according imputation model multivariate normal distribution. mean \\(\\tilde{\\mu}_i\\) given predicted mean imputation model conditional subject’s baseline characteristics, group, , optionally, time-varying covariates. covariance matrix \\(\\tilde{\\Sigma}_i\\) given overall estimated covariance matrix , different covariance matrices assumed different groups, covariance matrix corresponding subject \\(\\)’s group.","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationRefBased","dir":"Articles","previous_headings":"3 Statistical methodology > 3.4 Imputation step","what":"Marginal imputation distribution for a subject - reference-based imputation methods","title":"rbmi: Statistical Specifications","text":"subject \\(\\), calculate mean covariance matrix complete \\(J\\)-dimensional outcome vector assessment visits MAR case denote \\(\\mu_i\\) \\(\\Sigma_i\\). reference-based imputation methods, corresponding reference group also required group. Typically, reference group intervention group control group. reference mean \\(\\mu_{ref,}\\) defined predicted mean imputation model conditional reference group (rather actual group subject \\(\\) belongs ) subject’s baseline characteristics. reference covariance matrix \\(\\Sigma_{ref,}\\) overall estimated covariance matrix , different covariance matrices assumed different groups, estimated covariance matrix corresponding reference group. 
principle, time-varying covariates also included reference-based imputation methods. However, sensible external time-varying covariates (e.g. calendar time visit) internal time-varying covariates (e.g. treatment discontinuation) latter likely depend actual treatment group typically sensible assume trajectory time-varying covariate reference group. Based means covariance matrices, subject’s marginal imputation distribution reference-based imputation methods calculated detailed Carpenter, Roger, Kenward (2013, sec. 4.3). Denote mean covariance matrix marginal imputation distribution \\(\\tilde{\\mu}_i\\) \\(\\tilde{\\Sigma}_i\\). Recall subject’s first visit affected ICE denoted \\(\\tilde{t}_i \\\\{1,\\ldots,J\\}\\) (visit \\(\\tilde{t}_i-1\\) last visit unaffected ICE). marginal distribution patient \\(\\) built according specific assumption data post ICE follows: Jump reference (JR): patient’s outcome distribution normally distributed following mean: \\[\\tilde{\\mu}_i = (\\mu_i[1], \\dots, \\mu_i[\\tilde{t}_i-1], \\mu_{ref,}[\\tilde{t}_i], \\dots, \\mu_{ref,}[J])^T.\\] covariance matrix constructed follows. First, partition covariance matrices \\(\\Sigma_i\\) \\(\\Sigma_{ref,}\\) blocks according time ICE \\(\\tilde{t}_i\\): \\[ \\Sigma_{} = \\begin{bmatrix} \\Sigma_{, 11} & \\Sigma_{, 12} \\\\ \\Sigma_{, 21} & \\Sigma_{,22} \\\\ \\end{bmatrix} \\] \\[ \\Sigma_{ref,} = \\begin{bmatrix} \\Sigma_{ref, , 11} & \\Sigma_{ref, , 12} \\\\ \\Sigma_{ref, , 21} & \\Sigma_{ref, ,22} \\\\ \\end{bmatrix}. \\] want covariance matrix \\(\\tilde{\\Sigma}_i\\) match \\(\\Sigma_i\\) pre-deviation measurements, \\(\\Sigma_{ref,}\\) conditional components post-deviation given pre-deviation measurements. solution derived Carpenter, Roger, Kenward (2013, sec. 
4.3) given : \\[ \\begin{matrix} \\tilde{\\Sigma}_{,11} = \\Sigma_{, 11} \\\\ \\tilde{\\Sigma}_{, 21} = \\Sigma_{ref,, 21} \\Sigma^{-1}_{ref,, 11} \\Sigma_{, 11} \\\\ \\tilde{\\Sigma}_{, 22} = \\Sigma_{ref, , 22} - \\Sigma_{ref,, 21} \\Sigma^{-1}_{ref,, 11} (\\Sigma_{ref,, 11} - \\Sigma_{,11}) \\Sigma^{-1}_{ref,, 11} \\Sigma_{ref,, 12}. \\end{matrix} \\] Copy increments reference (CIR): patient’s outcome distribution normally distributed following mean: \\[ \\begin{split} \\tilde{\\mu}_i =& (\\mu_i[1], \\dots, \\mu_i[\\tilde{t}_i-1], \\mu_i[\\tilde{t}_i-1] + (\\mu_{ref,}[\\tilde{t}_i] - \\mu_{ref,}[\\tilde{t}_i-1]), \\dots,\\\\ & \\mu_i[\\tilde{t}_i-1]+(\\mu_{ref,}[J] - \\mu_{ref,}[\\tilde{t}_i-1]))^T. \\end{split} \\] covariance matrix derived JR method. Copy reference (CR): patient’s outcome distribution normally distributed mean covariance matrix taken reference group: \\[ \\tilde{\\mu}_i = \\mu_{ref,} \\] \\[ \\tilde{\\Sigma}_i = \\Sigma_{ref,}. \\] Last mean carried forward (LMCF): patient’s outcome distribution normally distributed following mean: \\[ \\tilde{\\mu}_i = (\\mu_i[1], \\dots, \\mu_i[\\tilde{t}_i-1], \\mu_i[\\tilde{t}_i-1], \\dots, \\mu_i[\\tilde{t}_i-1])'\\] covariance matrix: \\[ \\tilde{\\Sigma}_i = \\Sigma_i.\\]","code":""},{"path":"/articles/stat_specs.html","id":"sec:imputationRandomConditionalMean","dir":"Articles","previous_headings":"3 Statistical methodology > 3.4 Imputation step","what":"Imputation of missing outcome data","title":"rbmi: Statistical Specifications","text":"joint marginal multivariate normal imputation distribution subject \\(\\)’s observed missing outcome data mean \\(\\tilde{\\mu}_i\\) covariance matrix \\(\\tilde{\\Sigma}_i\\) defined . actual imputation missing outcome data obtained conditioning marginal distribution subject’s observed outcome data. note, approach valid regardless whether subject intermittent terminal missing data. 
conditional distribution used imputation multivariate normal distribution explicit formulas conditional mean covariance readily available. completeness, report notation terminology setting. marginal distribution outcome patient \\(\\) \\(Y_i \\sim N(\\tilde{\\mu}_i, \\tilde{\\Sigma}_i)\\) outcome \\(Y_i\\) can decomposed observed (\\(Y_{,!}\\)) unobserved (\\(Y_{,?}\\)) components. Analogously mean \\(\\tilde{\\mu}_i\\) can decomposed \\((\\tilde{\\mu}_{,!},\\tilde{\\mu}_{,?})\\) covariance \\(\\tilde{\\Sigma}_i\\) : \\[ \\tilde{\\Sigma}_i = \\begin{bmatrix} \\tilde{\\Sigma}_{, !!} & \\tilde{\\Sigma}_{,!?} \\\\ \\tilde{\\Sigma}_{, ?!} & \\tilde{\\Sigma}_{, ??} \\end{bmatrix}. \\] conditional distribution \\(Y_{,?}\\) conditional \\(Y_{,!}\\) multivariate normal distribution expectation \\[ E(Y_{,?} \\vert Y_{,!})= \\tilde{\\mu}_{,?} + \\tilde{\\Sigma}_{, ?!} \\tilde{\\Sigma}_{,!!}^{-1} (Y_{,!} - \\tilde{\\mu}_{,!}) \\] covariance matrix \\[ Cov(Y_{,?} \\vert Y_{,!}) = \\tilde{\\Sigma}_{,??} - \\tilde{\\Sigma}_{,?!} \\tilde{\\Sigma}_{,!!}^{-1} \\tilde{\\Sigma}_{,!?}. \\] Conventional random imputation consists sampling conditional multivariate normal distribution. Conditional mean imputation imputes missing values deterministic conditional expectation \\(E(Y_{,?} \\vert Y_{,!})\\).","code":""},{"path":"/articles/stat_specs.html","id":"sec:deltaAdjustment","dir":"Articles","previous_headings":"3 Statistical methodology","what":"\\(\\delta\\)-adjustment","title":"rbmi: Statistical Specifications","text":"marginal \\(\\delta\\)-adjustment approach similar “five macros” SAS implemented (Roger (2021)), .e. fixed non-stochastic values added multivariate normal imputation step prior analysis. relevant sensitivity analyses order make imputed data systematically worse better, respectively, observed data. addition, authors suggested \\(\\delta\\)-type adjustments implement composite strategy continuous outcomes (Darken et al. (2020)). 
implementation provides full flexibility regarding specific implementation \\(\\delta\\)-adjustment, i.e. value added may depend randomized treatment group, timing subject’s ICE, factors. suggestions case studies regarding topic, refer Cro et al. (2020).","code":""},{"path":"/articles/stat_specs.html","id":"sec:analysis","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Analysis step","title":"rbmi: Statistical Specifications","text":"data imputation, standard analysis model can applied completed data resulting treatment effect estimate. imputed data longer contains missing values, analysis model often simple. example, can analysis covariance (ANCOVA) model outcome (change outcome baseline) specific visit j dependent variable, randomized treatment group primary covariate , typically, adjustment baseline covariates imputation model.","code":""},{"path":"/articles/stat_specs.html","id":"sec:pooling","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Pooling step for inference of (approximate) Bayesian MI and Rubin’s rules","title":"rbmi: Statistical Specifications","text":"Assume analysis model applied \\(M\\) multiple imputed random datasets resulted \\(M\\) treatment effect estimates \\(\\hat{\\theta}_m\\) (\\(m=1,\\ldots,M\\)) corresponding standard error \\(SE_m\\) (available) degrees freedom \\(\\nu_{com}\\). degrees freedom available analysis model, set \\(\\nu_{com}=\\infty\\) inference based normal distribution. Rubin’s rules used pooling treatment effect estimates corresponding variances estimates analysis steps across \\(M\\) multiple imputed datasets. According Rubin’s rules, final estimate treatment effect calculated sample mean \\(M\\) treatment effect estimates: \\[ \\hat{\\theta} = \\frac{1}{M} \\sum_{m = 1}^M \\hat{\\theta}_m. 
\\] pooled variance based two components reflect within and between variance treatment effects across multiple imputed datasets: \\[ V(\\hat{\\theta}) = V_W(\\hat{\\theta}) + (1 + \\frac{1}{M}) V_B(\\hat{\\theta}) \\] \\(V_W(\\hat{\\theta}) = \\frac{1}{M}\\sum_{m = 1}^M SE^2_m\\) within-variance \\(V_B(\\hat{\\theta}) = \\frac{1}{M-1} \\sum_{m = 1}^M (\\hat{\\theta}_m - \\hat{\\theta})^2\\) between-variance. Confidence intervals tests null hypothesis \\(H_0: \\theta=\\theta_0\\) based \\(t\\)-statistics \\(T\\): \\[ T= (\\hat{\\theta}-\\theta_0)/\\sqrt{V(\\hat{\\theta})}. \\] null hypothesis, \\(T\\) approximate \\(t\\)-distribution \\(\\nu\\) degrees freedom. \\(\\nu\\) calculated according Barnard Rubin approximation, see Barnard Rubin (1999) (formula 3) Little Rubin (2002) (formula (5.24), page 87): \\[ \\nu = \\frac{\\nu_{old}* \\nu_{obs}}{\\nu_{old} + \\nu_{obs}} \\] \\[ \\nu_{old} = \\frac{M-1}{\\lambda^2} \\quad\\mbox{and}\\quad \\nu_{obs} = \\frac{\\nu_{com} + 1}{\\nu_{com} + 3} \\nu_{com} (1 - \\lambda) \\] \\(\\lambda = \\frac{(1 + \\frac{1}{M})V_B(\\hat{\\theta})}{V(\\hat{\\theta})}\\) fraction missing information.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"point-estimate-of-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.8 Bootstrap and jackknife inference for conditional mean imputation","what":"Point estimate of the treatment effect","title":"rbmi: Statistical Specifications","text":"point estimator obtained applying analysis model (Section 3.6) single conditional mean imputation missing data (see Section 3.4.3) based REML estimator parameters imputation model (see Section 3.3.2). denote treatment effect estimator \\(\\hat{\\theta}\\). demonstrated Wolbers et al. (2022) (Section 2.4), treatment effect estimator valid analysis model ANCOVA model , generally, treatment effect estimator linear function imputed outcome vector. 
Indeed, case, estimator identical pooled treatment effect across multiple random REML imputation infinite number imputations corresponds computationally efficient implementation proposal von Hippel Bartlett (2021). expect conditional mean imputation method also applicable analysis models (e.g. general MMRM analysis models) not formally justified.","code":""},{"path":"/articles/stat_specs.html","id":"jackknife-standard-errors-confidence-intervals-ci-and-tests-for-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.8 Bootstrap and jackknife inference for conditional mean imputation","what":"Jackknife standard errors, confidence intervals (CI) and tests for the treatment effect","title":"rbmi: Statistical Specifications","text":"dataset containing \\(n\\) subjects, jackknife standard error depends treatment effect estimates \\(\\hat{\\theta}_{(-b)}\\) (\\(b=1,\\ldots,n\\)) samples original dataset leave observation subject \\(b\\). described previously, obtain treatment effect estimates leave-one-subject-datasets, steps imputation procedure (i.e. imputation, conditional mean imputation, analysis steps) need repeated new dataset. , jackknife standard error defined \\[\\hat{se}_{jack}=[\\frac{(n-1)}{n}\\cdot\\sum_{b=1}^{n} (\\hat{\\theta}_{(-b)}-\\bar{\\theta}_{(.)})^2]^{1/2}\\] \\(\\bar{\\theta}_{(.)}\\) denotes mean jackknife estimates (Efron Tibshirani (1994), chapter 10). corresponding two-sided normal approximation \\(1-\\alpha\\) CI defined \\(\\hat{\\theta}\\pm z^{1-\\alpha/2}\\cdot \\hat{se}_{jack}\\) \\(\\hat{\\theta}\\) treatment effect estimate original dataset. Tests null hypothesis \\(H_0: \\theta=\\theta_0\\) based \\(Z\\)-score \\(Z=(\\hat{\\theta}-\\theta_0)/\\hat{se}_{jack}\\) using standard normal approximation. simulation study reported Wolbers et al. 
(2022) demonstrated exact protection type I error jackknife-based inference relatively low sample size (n = 100 per group) substantial amount missing data (>25% subjects ICE).","code":""},{"path":"/articles/stat_specs.html","id":"bootstrap-standard-errors-confidence-intervals-ci-and-tests-for-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.8 Bootstrap and jackknife inference for conditional mean imputation","what":"Bootstrap standard errors, confidence intervals (CI) and tests for the treatment effect","title":"rbmi: Statistical Specifications","text":"alternative jackknife, bootstrap also implemented rbmi (Efron Tibshirani (1994), Davison Hinkley (1997)). Two different bootstrap methods implemented rbmi: Methods based bootstrap standard error normal approximation percentile bootstrap methods. Denote treatment effect estimates \\(B\\) bootstrap samples \\(\\hat{\\theta}^*_b\\) (\\(b=1,\\ldots,B\\)). bootstrap standard error \\(\\hat{se}_{boot}\\) defined empirical standard deviation bootstrapped treatment effect estimates. Confidence intervals tests based bootstrap standard error can constructed way jackknife. Confidence intervals using percentile bootstrap based empirical quantiles bootstrap distribution corresponding statistical tests implemented rbmi via inversion confidence interval. Explicit formulas bootstrap inference implemented rbmi package considerations regarding required number bootstrap samples included Appendix Wolbers et al. (2022). simulation study reported Wolbers et al. (2022) demonstrated small inflation type I error rate inference based bootstrap standard error (\\(5.3\\%\\) nominal type I error rate \\(5\\%\\)) sample size n = 100 per group substantial amount missing data (>25% subjects ICE). 
Based simulations, recommend jackknife bootstrap inference performed better simulation study typically much faster compute bootstrap.","code":""},{"path":"/articles/stat_specs.html","id":"sec:poolbmlmi","dir":"Articles","previous_headings":"3 Statistical methodology","what":"Pooling step for inference of the bootstrapped MI methods","title":"rbmi: Statistical Specifications","text":"Assume analysis model applied \\(B\\times D\\) multiple imputed random datasets resulted \\(B\\times D\\) treatment effect estimates \\(\\hat{\\theta}_{bd}\\) (\\(b=1,\\ldots,B\\); \\(d=1,\\ldots,D\\)). final estimate treatment effect calculated sample mean \\(B*D\\) treatment effect estimates: \\[ \\hat{\\theta} = \\frac{1}{BD} \\sum_{b = 1}^B \\sum_{d = 1}^D \\hat{\\theta}_{bd}. \\] pooled variance based two components reflect variability within and between imputed bootstrap samples (von Hippel Bartlett (2021), formula 8.4): \\[ V(\\hat{\\theta}) = (1 + \\frac{1}{B})\\frac{MSB - MSW}{D} + \\frac{MSW}{BD} \\] \\(MSB\\) mean square between bootstrapped datasets, \\(MSW\\) mean square within bootstrapped datasets between imputed datasets: \\[ \\begin{align*} MSB &= \\frac{D}{B-1} \\sum_{b = 1}^B (\\bar{\\theta_{b}} - \\hat{\\theta})^2 \\\\ MSW &= \\frac{1}{B(D-1)} \\sum_{b = 1}^B \\sum_{d = 1}^D (\\theta_{bd} - \\bar{\\theta_b})^2 \\end{align*} \\] \\(\\bar{\\theta_{b}}\\) mean across \\(D\\) estimates obtained random imputation \\(b\\)-th bootstrap sample. degrees freedom estimated following formula (von Hippel Bartlett (2021), formula 8.6): \\[ \\nu = \\frac{(MSB\\cdot (B+1) - MSW\\cdot B)^2}{\\frac{MSB^2\\cdot (B+1)^2}{B-1} + \\frac{MSW^2\\cdot B}{D-1}} \\] Confidence intervals tests null hypothesis \\(H_0: \\theta=\\theta_0\\) based \\(t\\)-statistics \\(T\\): \\[ T= (\\hat{\\theta}-\\theta_0)/\\sqrt{V(\\hat{\\theta})}. 
\\] null hypothesis, \\(T\\) approximate \\(t\\)-distribution \\(\\nu\\) degrees freedom.","code":""},{"path":[]},{"path":"/articles/stat_specs.html","id":"treatment-effect-estimation","dir":"Articles","previous_headings":"3 Statistical methodology > 3.10 Comparison between the implemented approaches","what":"Treatment effect estimation","title":"rbmi: Statistical Specifications","text":"approaches provide consistent treatment effect estimates standard reference-based imputation methods case analysis model completed datasets general linear model ANCOVA. Methods conditional mean imputation also valid analysis models. validity conditional mean imputation formally demonstrated analyses using general linear model (Wolbers et al. (2022, sec. 2.4)) though may also applicable widely (e.g. general MMRM analysis models). Treatment effects based conditional mean imputation deterministic. methods affected Monte Carlo sampling error precision estimates depends number imputations bootstrap samples, respectively.","code":""},{"path":"/articles/stat_specs.html","id":"standard-errors-of-the-treatment-effect","dir":"Articles","previous_headings":"3 Statistical methodology > 3.10 Comparison between the implemented approaches","what":"Standard errors of the treatment effect","title":"rbmi: Statistical Specifications","text":"approaches imputation MAR assumption provide consistent estimates frequentist standard error. reference-based imputation methods, situation complicated two different types variance estimators proposed statistical literature (Bartlett (2023)). first frequentist variance describes actual repeated sampling variability estimator. reference-based missing data assumption correctly specified, resulting inference based variance correct frequentist sense, i.e. hypothesis tests asymptotically correct type I error control confidence intervals correct coverage probabilities repeated sampling (Bartlett (2023), Wolbers et al. (2022)). 
Reference-based missing data assumptions strong borrow information reference arm imputation active arm. consequence, size frequentist standard errors treatment effects may decrease increasing amounts missing data. second proposal so-called “information-anchored” variance originally proposed context sensitivity analyses (Cro, Carpenter, Kenward (2019)). variance estimator based disentangling point estimation variance estimation altogether. information-anchoring principle described Cro, Carpenter, Kenward (2019) states relative increase variance treatment effect estimator MAR imputation increasing amounts missing data preserved reference-based imputation methods. resulting information-anchored variance typically similar variance MAR imputation typically increases increasing amounts missing data. However, information-anchored variance does not reflect actual variability reference-based estimator repeated sampling resulting inference highly conservative resulting substantial power loss (Wolbers et al. (2022)). Moreover, date, no Bayesian frequentist framework developed information-anchored variance provides correct inference reference-based missingness assumptions, not clear whether framework can even developed. Reference-based conditional mean imputation (method_condmean()) bootstrapped likelihood-based multiple methods (method = method_bmlmi()) obtain standard errors via resampling hence target frequentist variance (Wolbers et al. (2022), von Hippel Bartlett (2021)). finite samples, simulations sample size \\(n=100\\) per group reported Wolbers et al. (2022) demonstrated conditional mean imputation combined jackknife (method_condmean(type = \"jackknife\")) provided exact protection type one error rate whereas bootstrap (method_condmean(type = \"bootstrap\")) associated small type I error inflation (5.1% 5.3% nominal level 5%). reference-based conditional mean imputation, alternative information-anchored variance can obtained following proposal Lu (2021). 
basic idea Lu (2021) obtain information-anchored variance via MAR imputation combined delta-adjustment delta selected data-driven way match reference-based estimator. conditional mean imputation, proposal Lu (2021) can implemented choosing delta-adjustment difference conditional mean imputation chosen reference-based assumption MAR original dataset. illustration different variances can obtained conditional mean imputation rbmi provided vignette “Frequentist information-anchored inference reference-based conditional mean imputation” (vignette(topic = \"CondMean_Inference\", package = \"rbmi\")). Reference-based Bayesian (approximate Bayesian) multiple imputation methods combined Rubin’s rules (method_bayes() method_approxbayes()) target information-anchored variance (Cro, Carpenter, Kenward (2019)). frequentist variance methods principle obtained via bootstrap jackknife re-sampling treatment effect estimates computationally intensive not directly supported rbmi. view primary analyses, accurate type I error control (can obtained using frequentist variance) important adherence information anchoring principle , us, not fully compatible strong reference-based assumptions. case, reference-based imputation used primary analysis, critical chosen reference-based assumption can clinically justified, suitable sensitivity analyses conducted stress-test assumptions. Conditional mean imputation combined jackknife method leads deterministic standard error estimates , consequently, confidence intervals \\(p\\)-values also deterministic. 
particularly important regulatory setting important ascertain whether calculated \\(p\\)-value close critical boundary 5% truly below threshold rather uncertain Monte Carlo error.","code":""},{"path":"/articles/stat_specs.html","id":"computational-complexity","dir":"Articles","previous_headings":"3 Statistical methodology > 3.10 Comparison between the implemented approaches","what":"Computational complexity","title":"rbmi: Statistical Specifications","text":"Bayesian MI methods rely specification prior distributions usage Markov chain Monte Carlo (MCMC) methods. methods based multiple imputation bootstrapping require tuning parameters specification number imputations \\(M\\) bootstrap samples \\(B\\) rely numerical optimization fitting MMRM imputation models via REML. Conditional mean imputation combined jackknife has no tuning parameters. rbmi implementation, fitting MMRM imputation model via REML computationally expensive. MCMC sampling using rstan (Stan Development Team (2020)) typically relatively fast setting requires small burn-in and burn-between of the chains. addition, number random imputations reliable inference using Rubin's rules often smaller number resamples required jackknife bootstrap (see e.g. discussions I. R. White, Royston, Wood (2011, sec. 7) Bayesian MI Appendix Wolbers et al. (2022) bootstrap). Thus, many applications, expect conventional MI based Bayesian posterior draws fastest, followed conventional MI using approximate Bayesian posterior draws conditional mean imputation combined jackknife. Conditional mean imputation combined bootstrap bootstrapped MI methods typically computationally demanding. 
note, implemented methods conceptually straightforward parallelise parallelisation support provided rbmi.","code":""},{"path":"/articles/stat_specs.html","id":"sec:rbmiFunctions","dir":"Articles","previous_headings":"","what":"Mapping of statistical methods to rbmi functions","title":"rbmi: Statistical Specifications","text":"full documentation rbmi package functionality refer help pages functions package vignettes. give brief overview different steps imputation procedure mapped rbmi functions: Bayesian posterior parameter draws imputation model obtained via argument method = method_bayes(). Approximate Bayesian posterior parameter draws imputation model obtained via argument method = method_approxbayes(). ML REML parameter estimates imputation model parameters original dataset leave-one-subject-datasets (required jackknife) obtained via argument method = method_condmean(type = \"jackknife\"). ML REML parameter estimates imputation model parameters original dataset bootstrapped datasets obtained via argument method = method_condmean(type = \"bootstrap\"). Bootstrapped MI methods obtained via argument method = method_bmlmi(B=B, D=D) \\(B\\) refers number bootstrap samples \\(D\\) number random imputations bootstrap sample. imputation step using random imputation deterministic conditional mean imputation, respectively, implemented function impute(). Imputation can performed assuming already implemented imputation strategies presented section 3.4. Additionally, user-defined imputation strategies also supported. analysis step implemented function analyse() applies analysis model imputed datasets. default, analysis model (argument fun) ancova() function alternative analysis functions can also provided user. analyse() function also allows \\(\\delta\\)-adjustments imputed datasets prior analysis via argument delta. inference step implemented function pool() pools results across imputed datasets. Rubin Barnard rule applied case (approximate) Bayesian MI. 
conditional mean imputation, jackknife bootstrap (normal approximation percentile) inference supported. BMLMI, pooling inference steps performed via pool() case implements method described Section 3.9.","code":""},{"path":"/articles/stat_specs.html","id":"sec:otherSoftware","dir":"Articles","previous_headings":"","what":"Comparison to other software implementations","title":"rbmi: Statistical Specifications","text":"established software implementation reference-based imputation SAS so-called “five macros” James Roger (Roger (2021)). alternative R implementation also currently development R package RefBasedMI (McGrath White (2021)). rbmi several features not supported implementations: addition Bayesian MI approach implemented also packages, implementation provides three alternative MI approaches: approximate Bayesian MI, conditional mean imputation combined resampling, bootstrapped MI. rbmi allows usage data collected ICE. example, suppose want adopt treatment policy strategy ICE “treatment discontinuation”. possible implementation strategy use observed outcome data subjects remain study ICE use reference-based imputation case subject drops . implementation, implemented excluding observed post ICE data imputation model assumes MAR missingness including analysis model. knowledge, not directly supported implementations. RefBasedMI fits imputation model data treatment group separately implies covariate-treatment group interactions covariates pooled data treatment groups. contrast, Roger’s five macros assume joint model including data randomized groups covariate-treatment interactions covariates allowed. also chose implement joint model use flexible model linear predictor may or may not include interaction term covariate treatment group. addition, imputation model also allows inclusion time-varying covariates. implementation, grouping subjects purpose imputation model (definition reference group) need not correspond assigned treatment groups. 
provides additional flexibility imputation procedure. clear us whether feature supported Roger’s five macros RefBasedMI. believe R-based implementation modular RefBasedMI facilitate package enhancements. contrast, general causal model introduced I. White, Royes, Best (2020) available implementations currently supported .","code":""},{"path":[]},{"path":"/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Craig Gower-Page. Author, maintainer. Alessandro Noci. Author. Marcel Wolbers. Contributor. Isaac Gravestock. Author. F. Hoffmann-La Roche AG. Copyright holder, funder.","code":""},{"path":"/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Gower-Page C, Noci A, Gravestock I (2024). rbmi: Reference Based Multiple Imputation. R package version 1.3.1, https://github.com/insightsengineering/rbmi, https://insightsengineering.github.io/rbmi/. Gower-Page C, Noci A, Wolbers M (2022). “rbmi: R package standard reference-based multiple imputation methods.” Journal Open Source Software, 7(74), 4251. 
doi:10.21105/joss.04251, https://doi.org/10.21105/joss.04251.","code":"@Manual{, title = {rbmi: Reference Based Multiple Imputation}, author = {Craig Gower-Page and Alessandro Noci and Isaac Gravestock}, year = {2024}, note = {R package version 1.3.1, https://github.com/insightsengineering/rbmi}, url = {https://insightsengineering.github.io/rbmi/}, } @Article{, title = {rbmi: A R package for standard and reference-based multiple imputation methods}, author = {Craig Gower-Page and Alessandro Noci and Marcel Wolbers}, year = {2022}, publisher = {The Open Journal}, doi = {10.21105/joss.04251}, url = {https://doi.org/10.21105/joss.04251}, volume = {7}, number = {74}, pages = {4251}, journal = {Journal of Open Source Software}, }"},{"path":[]},{"path":"/index.html","id":"overview","dir":"","previous_headings":"","what":"Overview","title":"Reference Based Multiple Imputation","text":"rbmi package used imputation missing data clinical trials continuous multivariate normal longitudinal outcomes. supports imputation missing random (MAR) assumption, reference-based imputation methods, delta adjustments (required sensitivity analysis tipping point analyses). 
package implements Bayesian approximate Bayesian multiple imputation combined Rubin’s rules inference, frequentist conditional mean imputation combined (jackknife bootstrap) resampling.","code":""},{"path":"/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Reference Based Multiple Imputation","text":"package can installed directly CRAN via: Note usage Bayesian multiple imputation requires installation suggested package rstan.","code":"install.packages(\"rbmi\") install.packages(\"rstan\")"},{"path":"/index.html","id":"usage","dir":"","previous_headings":"","what":"Usage","title":"Reference Based Multiple Imputation","text":"package designed around 4 core functions: draws() - Fits multiple imputation models impute() - Imputes multiple datasets analyse() - Analyses multiple datasets pool() - Pools multiple results single statistic basic usage core functions described quickstart vignette:","code":"vignette(topic = \"quickstart\", package = \"rbmi\")"},{"path":"/index.html","id":"validation","dir":"","previous_headings":"","what":"Validation","title":"Reference Based Multiple Imputation","text":"clarification current validation status rbmi please see FAQ vignette.","code":""},{"path":"/index.html","id":"support","dir":"","previous_headings":"","what":"Support","title":"Reference Based Multiple Imputation","text":"help regards using package find bug please create GitHub issue","code":""},{"path":"/reference/QR_decomp.html","id":null,"dir":"Reference","previous_headings":"","what":"QR decomposition — QR_decomp","title":"QR decomposition — QR_decomp","text":"QR decomposition defined Stan user's guide (section 1.2).","code":""},{"path":"/reference/QR_decomp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"QR decomposition — 
QR_decomp","text":"","code":"QR_decomp(mat)"},{"path":"/reference/QR_decomp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"QR decomposition — QR_decomp","text":"mat matrix perform QR decomposition .","code":""},{"path":"/reference/Stack.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for a FIFO stack — Stack","title":"R6 Class for a FIFO stack — Stack","text":"simple stack object offering add / pop functionality","code":""},{"path":"/reference/Stack.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for a FIFO stack — Stack","text":"stack list containing current stack","code":""},{"path":[]},{"path":"/reference/Stack.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for a FIFO stack — Stack","text":"Stack$add() Stack$pop() Stack$clone()","code":""},{"path":"/reference/Stack.html","id":"method-add-","dir":"Reference","previous_headings":"","what":"Method add()","title":"R6 Class for a FIFO stack — Stack","text":"Adds content end stack (must list)","code":""},{"path":"/reference/Stack.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for a FIFO stack — Stack","text":"","code":"Stack$add(x)"},{"path":"/reference/Stack.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for a FIFO stack — Stack","text":"x content add stack","code":""},{"path":"/reference/Stack.html","id":"method-pop-","dir":"Reference","previous_headings":"","what":"Method pop()","title":"R6 Class for a FIFO stack — Stack","text":"Retrieve content stack","code":""},{"path":"/reference/Stack.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for a FIFO stack — 
Stack","text":"","code":"Stack$pop(i)"},{"path":"/reference/Stack.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for a FIFO stack — Stack","text":"number items retrieve stack. less items left stack just return everything left.","code":""},{"path":"/reference/Stack.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for a FIFO stack — Stack","text":"objects class cloneable method.","code":""},{"path":"/reference/Stack.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for a FIFO stack — Stack","text":"","code":"Stack$clone(deep = FALSE)"},{"path":"/reference/Stack.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for a FIFO stack — Stack","text":"deep Whether make deep clone.","code":""},{"path":"/reference/add_class.html","id":null,"dir":"Reference","previous_headings":"","what":"Add a class — add_class","title":"Add a class — add_class","text":"Utility function add class object. Adds new class existing classes.","code":""},{"path":"/reference/add_class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Add a class — add_class","text":"","code":"add_class(x, cls)"},{"path":"/reference/add_class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Add a class — add_class","text":"x object add class . 
cls class added.","code":""},{"path":"/reference/adjust_trajectories.html","id":null,"dir":"Reference","previous_headings":"","what":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"Adjust trajectories due intercurrent event (ICE)","code":""},{"path":"/reference/adjust_trajectories.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"","code":"adjust_trajectories( distr_pars_group, outcome, ids, ind_ice, strategy_fun, distr_pars_ref = NULL )"},{"path":"/reference/adjust_trajectories.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"distr_pars_group Named list containing simulation parameters multivariate normal distribution assumed given treatment group. contains following elements: mu: Numeric vector indicating mean outcome trajectory. include outcome baseline. sigma Covariance matrix outcome trajectory. outcome Numeric variable specifies longitudinal outcome. ids Factor variable specifies id subject. ind_ice binary variable takes value 1 corresponding outcome affected ICE 0 otherwise. strategy_fun Function implementing trajectories intercurrent event (ICE). Must one getStrategies(). See getStrategies() details. distr_pars_ref Optional. Named list containing simulation parameters reference arm. contains following elements: mu: Numeric vector indicating mean outcome trajectory assuming ICEs. include outcome baseline. 
sigma Covariance matrix outcome trajectory assuming ICEs.","code":""},{"path":"/reference/adjust_trajectories.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Adjust trajectories due to the intercurrent event (ICE) — adjust_trajectories","text":"numeric vector containing adjusted trajectories.","code":""},{"path":[]},{"path":"/reference/adjust_trajectories_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"Adjust trajectory subject's outcome due intercurrent event (ICE)","code":""},{"path":"/reference/adjust_trajectories_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"","code":"adjust_trajectories_single( distr_pars_group, outcome, strategy_fun, distr_pars_ref = NULL )"},{"path":"/reference/adjust_trajectories_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"distr_pars_group Named list containing simulation parameters multivariate normal distribution assumed given treatment group. contains following elements: mu: Numeric vector indicating mean outcome trajectory. include outcome baseline. sigma Covariance matrix outcome trajectory. outcome Numeric variable specifies longitudinal outcome. strategy_fun Function implementing trajectories intercurrent event (ICE). Must one getStrategies(). See getStrategies() details. distr_pars_ref Optional. Named list containing simulation parameters reference arm. 
contains following elements: mu: Numeric vector indicating mean outcome trajectory assuming ICEs. include outcome baseline. sigma Covariance matrix outcome trajectory assuming ICEs.","code":""},{"path":"/reference/adjust_trajectories_single.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"numeric vector containing adjusted trajectory single subject.","code":""},{"path":"/reference/adjust_trajectories_single.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Adjust trajectory of a subject's outcome due to the intercurrent event (ICE) — adjust_trajectories_single","text":"outcome specified --post-ICE observations (.e. observations adjusted) set NA.","code":""},{"path":"/reference/analyse.html","id":null,"dir":"Reference","previous_headings":"","what":"Analyse Multiple Imputed Datasets — analyse","title":"Analyse Multiple Imputed Datasets — analyse","text":"function takes multiple imputed datasets (generated impute() function) runs analysis function .","code":""},{"path":"/reference/analyse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Analyse Multiple Imputed Datasets — analyse","text":"","code":"analyse( imputations, fun = ancova, delta = NULL, ..., ncores = 1, .validate = TRUE )"},{"path":"/reference/analyse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Analyse Multiple Imputed Datasets — analyse","text":"imputations imputations object created impute(). fun analysis function applied imputed dataset. See details. delta data.frame containing delta transformation applied imputed datasets prior running fun. See details. ... Additional arguments passed onto fun. ncores number parallel processes use running function. Can also cluster object created make_rbmi_cluster(). See parallisation section . 
.validate imputations checked ensure conforms required format (default = TRUE) ? Can gain small performance increase set FALSE analysing large number samples.","code":""},{"path":"/reference/analyse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Analyse Multiple Imputed Datasets — analyse","text":"function works performing following steps: Extract dataset imputations object. Apply delta adjustments specified delta argument. Run analysis function fun dataset. Repeat steps 1-3 across datasets inside imputations object. Collect return analysis results. analysis function fun must take data.frame first argument. options analyse() passed onto fun via .... fun must return named list element list containing single numeric element called est (additionally se df originally specified method_bayes() method_approxbayes()) .e.: Please note vars$subjid column (defined original call draws()) scrambled data.frames provided fun. say contain original subject values hard coding subject ids strictly avoided. default fun ancova() function. Please note function requires vars object, created set_vars(), provided via vars argument e.g. analyse(imputeObj, vars = set_vars(...)). Please see documentation ancova() full details. Please also note theoretical justification conditional mean imputation method (method = method_condmean() draws()) relies fact ANCOVA linear transformation outcomes. Thus care required applying alternative analysis functions setting. delta argument can used specify offsets applied outcome variable imputed datasets prior analysis. typically used sensitivity tipping point analyses. delta dataset must contain columns vars$subjid, vars$visit (specified original call draws()) delta. Essentially data.frame merged onto imputed dataset vars$subjid vars$visit outcome variable modified : Please note order provide maximum flexibility, delta argument can used modify /outcome values including imputed. Care must taken defining offsets. 
recommend use helper function delta_template() define delta datasets provides utility variables is_missing can used identify exactly visits imputed.","code":"myfun <- function(dat, ...) { mod_1 <- lm(data = dat, outcome ~ group) mod_2 <- lm(data = dat, outcome ~ group + covar) x <- list( trt_1 = list( est = coef(mod_1)[[group]], se = sqrt(vcov(mod_1)[group, group]), df = df.residual(mod_1) ), trt_2 = list( est = coef(mod_2)[[group]], se = sqrt(vcov(mod_2)[group, group]), df = df.residual(mod_2) ) ) return(x) } imputed_data[[vars$outcome]] <- imputed_data[[vars$outcome]] + imputed_data[[\"delta\"]]"},{"path":"/reference/analyse.html","id":"parallelisation","dir":"Reference","previous_headings":"","what":"Parallelisation","title":"Analyse Multiple Imputed Datasets — analyse","text":"speed evaluation analyse() can use ncores argument enable parallelisation. Simply providing integer get rbmi automatically spawn many background processes parallelise across. using custom analysis function need ensure libraries global objects required function available sub-processes. need use make_rbmi_cluster() function example: Note significant overhead setting sub-processes transferring data back--forth main process sub-processes. parallelisation analyse() function tends worth > 2000 samples generated draws(). Conversely using parallelisation samples smaller may lead longer run times just running sequentially. important note implementation parallel processing within analyse() optimised around assumption parallel processes spawned machine remote cluster. One optimisation required data saved temporary file local disk read sub-process. done avoid overhead transferring data network. assumption stage need parallelising analysis remote cluster likely better parallelising across multiple rbmi runs rather within single rbmi run. Finally, tipping point analysis can get reasonable performance improvement re-using cluster call analyse() e.g.","code":"my_custom_fun <- function(...) 
cl <- make_rbmi_cluster( 4, objects = list(\"my_custom_fun\" = my_custom_fun), packages = c(\"dplyr\", \"nlme\") ) analyse( imputations = imputeObj, fun = my_custom_fun, ncores = cl ) parallel::stopCluster(cl) cl <- make_rbmi_cluster(4) ana_1 <- analyse( imputations = imputeObj, delta = delta_plan_1, ncores = cl ) ana_2 <- analyse( imputations = imputeObj, delta = delta_plan_2, ncores = cl ) ana_3 <- analyse( imputations = imputeObj, delta = delta_plan_3, ncores = cl ) parallel::stopCluster(cl)"},{"path":[]},{"path":"/reference/analyse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Analyse Multiple Imputed Datasets — analyse","text":"","code":"if (FALSE) { # \\dontrun{ vars <- set_vars( subjid = \"subjid\", visit = \"visit\", outcome = \"outcome\", group = \"group\", covariates = c(\"sex\", \"age\", \"sex*age\") ) analyse( imputations = imputeObj, vars = vars ) deltadf <- data.frame( subjid = c(\"Pt1\", \"Pt1\", \"Pt2\"), visit = c(\"Visit_1\", \"Visit_2\", \"Visit_2\"), delta = c( 5, 9, -10) ) analyse( imputations = imputeObj, delta = deltadf, vars = vars ) } # }"},{"path":"/reference/ancova.html","id":null,"dir":"Reference","previous_headings":"","what":"Analysis of Covariance — ancova","title":"Analysis of Covariance — ancova","text":"Performs analysis covariance two groups returning estimated \"treatment effect\" (.e. contrast two treatment groups) least square means estimates group.","code":""},{"path":"/reference/ancova.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Analysis of Covariance — ancova","text":"","code":"ancova( data, vars, visits = NULL, weights = c(\"counterfactual\", \"equal\", \"proportional_em\", \"proportional\") )"},{"path":"/reference/ancova.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Analysis of Covariance — ancova","text":"data data.frame containing data used model. vars vars object generated set_vars(). 
group, visit, outcome covariates elements required. See details. visits optional character vector specifying visits fit ancova model . NULL, separate ancova model fit outcomes visit (determined unique(data[[vars$visit]])). See details. weights Character, either \"counterfactual\" (default), \"equal\", \"proportional_em\" \"proportional\". Specifies weighting strategy used calculating lsmeans. See weighting section details.","code":""},{"path":"/reference/ancova.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Analysis of Covariance — ancova","text":"function works follows: Select first value visits. Subset data observations occurred visit. Fit linear model vars$outcome ~ vars$group + vars$covariates. Extract \"treatment effect\" & least square means treatment group. Repeat points 2-3 values visits. value visits provided set unique(data[[vars$visit]]). order meet formatting standards set analyse() results collapsed single list suffixed visit name, e.g.: Please note \"ref\" refers first factor level vars$group necessarily coincide control arm. Analogously, \"alt\" refers second factor level vars$group. \"trt\" refers model contrast translating mean difference second level first level. want include interaction terms model can done providing covariates argument set_vars() e.g. set_vars(covariates = c(\"sex*age\")).","code":"list( trt_visit_1 = list(est = ...), lsm_ref_visit_1 = list(est = ...), lsm_alt_visit_1 = list(est = ...), trt_visit_2 = list(est = ...), lsm_ref_visit_2 = list(est = ...), lsm_alt_visit_2 = list(est = ...), ... )"},{"path":[]},{"path":"/reference/ancova.html","id":"counterfactual","dir":"Reference","previous_headings":"","what":"Counterfactual","title":"Analysis of Covariance — ancova","text":"weights = \"counterfactual\" (default) lsmeans obtained taking average predicted values patient assigning patients arm turn. approach equivalent standardization g-computation. 
comparison emmeans approach equivalent : Note ensure backwards compatibility previous versions rbmi weights = \"proportional\" alias weights = \"counterfactual\". get results consistent emmeans's weights = \"proportional\" please use weights = \"proportional_em\".","code":"emmeans::emmeans(model, specs = \"\", counterfactual = \"\")"},{"path":"/reference/ancova.html","id":"equal","dir":"Reference","previous_headings":"","what":"Equal","title":"Analysis of Covariance — ancova","text":"weights = \"equal\" lsmeans obtained taking model fitted value hypothetical patient whose covariates defined follows: Continuous covariates set mean(X) Dummy categorical variables set 1/N N number levels Continuous * continuous interactions set mean(X) * mean(Y) Continuous * categorical interactions set mean(X) * 1/N Dummy categorical * categorical interactions set 1/N * 1/M comparison emmeans approach equivalent :","code":"emmeans::emmeans(model, specs = \"\", weights = \"equal\")"},{"path":"/reference/ancova.html","id":"proportional","dir":"Reference","previous_headings":"","what":"Proportional","title":"Analysis of Covariance — ancova","text":"weights = \"proportional_em\" lsmeans obtained per weights = \"equal\" except instead weighting observation equally weighted proportion given combination categorical values occurred data. comparison emmeans approach equivalent : Note confused weights = \"proportional\" alias weights = \"counterfactual\".","code":"emmeans::emmeans(model, specs = \"\", weights = \"proportional\")"},{"path":[]},{"path":"/reference/ancova_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"Performs analysis covariance. 
See ancova() full details.","code":""},{"path":"/reference/ancova_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"","code":"ancova_single( data, outcome, group, covariates, weights = c(\"counterfactual\", \"equal\", \"proportional_em\", \"proportional\") )"},{"path":"/reference/ancova_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"data data.frame containing data used model. outcome Character, name outcome variable data. group Character, name group variable data. covariates Character vector containing name additional covariates included model well interaction terms. weights Character, either \"counterfactual\" (default), \"equal\", \"proportional_em\" \"proportional\". Specifies weighting strategy used calculating lsmeans. See weighting section details.","code":""},{"path":"/reference/ancova_single.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"group must factor variable 2 levels. outcome must continuous numeric variable.","code":""},{"path":[]},{"path":"/reference/ancova_single.html","id":"counterfactual","dir":"Reference","previous_headings":"","what":"Counterfactual","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"weights = \"counterfactual\" (default) lsmeans obtained taking average predicted values patient assigning patients arm turn. approach equivalent standardization g-computation. comparison emmeans approach equivalent : Note ensure backwards compatibility previous versions rbmi weights = \"proportional\" alias weights = \"counterfactual\". 
get results consistent emmeans's weights = \"proportional\" please use weights = \"proportional_em\".","code":"emmeans::emmeans(model, specs = \"\", counterfactual = \"\")"},{"path":"/reference/ancova_single.html","id":"equal","dir":"Reference","previous_headings":"","what":"Equal","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"weights = \"equal\" lsmeans obtained taking model fitted value hypothetical patient whose covariates defined follows: Continuous covariates set mean(X) Dummy categorical variables set 1/N N number levels Continuous * continuous interactions set mean(X) * mean(Y) Continuous * categorical interactions set mean(X) * 1/N Dummy categorical * categorical interactions set 1/N * 1/M comparison emmeans approach equivalent :","code":"emmeans::emmeans(model, specs = \"\", weights = \"equal\")"},{"path":"/reference/ancova_single.html","id":"proportional","dir":"Reference","previous_headings":"","what":"Proportional","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"weights = \"proportional_em\" lsmeans obtained per weights = \"equal\" except instead weighting observation equally weighted proportion given combination categorical values occurred data. 
comparison emmeans approach equivalent : Note confused weights = \"proportional\" alias weights = \"counterfactual\".","code":"emmeans::emmeans(model, specs = \"\", weights = \"proportional\")"},{"path":[]},{"path":"/reference/ancova_single.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Implements an Analysis of Covariance (ANCOVA) — ancova_single","text":"","code":"if (FALSE) { # \\dontrun{ iris2 <- iris[ iris$Species %in% c(\"versicolor\", \"virginica\"), ] iris2$Species <- factor(iris2$Species) ancova_single(iris2, \"Sepal.Length\", \"Species\", c(\"Petal.Length * Petal.Width\")) } # }"},{"path":"/reference/antidepressant_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Antidepressant trial data — antidepressant_data","title":"Antidepressant trial data — antidepressant_data","text":"dataset containing data publicly available example data set antidepressant clinical trial. dataset available website Drug Information Association Scientific Working Group Estimands Missing Data. per website, original data antidepressant clinical trial four treatments; two doses experimental medication, positive control, placebo published Goldstein et al (2004). mask real data, week 8 observations removed two arms created: original placebo arm \"drug arm\" created randomly selecting patients three non-placebo arms.","code":""},{"path":"/reference/antidepressant_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Antidepressant trial data — antidepressant_data","text":"","code":"antidepressant_data"},{"path":"/reference/antidepressant_data.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Antidepressant trial data — antidepressant_data","text":"data.frame 608 rows 11 variables: PATIENT: patients IDs. HAMATOTL: total score Hamilton Anxiety Rating Scale. PGIIMP: patient's Global Impression Improvement Rating Scale. 
RELDAYS: number days visit baseline. VISIT: post-baseline visit. levels 4,5,6,7. THERAPY: treatment group variable. equal PLACEBO observations placebo arm, DRUG observations active arm. GENDER: patient's gender. POOLINV: pooled investigator. BASVAL: baseline outcome value. HAMDTL17: Hamilton 17-item rating scale value. CHANGE: change baseline Hamilton 17-item rating scale.","code":""},{"path":"/reference/antidepressant_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Antidepressant trial data — antidepressant_data","text":"relevant endpoint Hamilton 17-item rating scale depression (HAMD17) baseline weeks 1, 2, 4, 6 assessments included. Study drug discontinuation occurred 24% subjects active drug 26% placebo. data study drug discontinuation missing single additional intermittent missing observation.","code":""},{"path":"/reference/antidepressant_data.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Antidepressant trial data — antidepressant_data","text":"Goldstein, Lu, Detke, Wiltse, Mallinckrodt, Demitrack. Duloxetine treatment depression: double-blind placebo-controlled comparison paroxetine. J Clin Psychopharmacol 2004;24: 389-399.","code":""},{"path":"/reference/apply_delta.html","id":null,"dir":"Reference","previous_headings":"","what":"Applies delta adjustment — apply_delta","title":"Applies delta adjustment — apply_delta","text":"Takes delta dataset adjusts outcome variable adding corresponding delta.","code":""},{"path":"/reference/apply_delta.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Applies delta adjustment — apply_delta","text":"","code":"apply_delta(data, delta = NULL, group = NULL, outcome = NULL)"},{"path":"/reference/apply_delta.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Applies delta adjustment — apply_delta","text":"data data.frame outcome column adjusted. 
delta data.frame (must contain column called delta). group character vector variables data delta used merge 2 data.frames together . outcome character, name outcome variable data.","code":""},{"path":"/reference/as_analysis.html","id":null,"dir":"Reference","previous_headings":"","what":"Construct an analysis object — as_analysis","title":"Construct an analysis object — as_analysis","text":"Creates analysis object ensuring components correctly defined.","code":""},{"path":"/reference/as_analysis.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Construct an analysis object — as_analysis","text":"","code":"as_analysis(results, method, delta = NULL, fun = NULL, fun_name = NULL)"},{"path":"/reference/as_analysis.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Construct an analysis object — as_analysis","text":"results list lists contain analysis results imputation See analyse() details object look like. method method object specified draws(). delta delta dataset used. See analyse() details specified. fun analysis function used. 
fun_name character name analysis function (used printing) purposes.","code":""},{"path":"/reference/as_ascii_table.html","id":null,"dir":"Reference","previous_headings":"","what":"as_ascii_table — as_ascii_table","title":"as_ascii_table — as_ascii_table","text":"function takes data.frame attempts convert simple ascii format suitable printing screen assumed variable values .character() method order cast character.","code":""},{"path":"/reference/as_ascii_table.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"as_ascii_table — as_ascii_table","text":"","code":"as_ascii_table(dat, line_prefix = \" \", pcol = NULL)"},{"path":"/reference/as_ascii_table.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"as_ascii_table — as_ascii_table","text":"dat Input dataset convert ascii table line_prefix Symbols prefix infront every line table pcol name column handled p-value. Sets value <0.001 value 0 rounding","code":""},{"path":"/reference/as_class.html","id":null,"dir":"Reference","previous_headings":"","what":"Set Class — as_class","title":"Set Class — as_class","text":"Utility function set objects class.","code":""},{"path":"/reference/as_class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set Class — as_class","text":"","code":"as_class(x, cls)"},{"path":"/reference/as_class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set Class — as_class","text":"x object set class . 
cls class set.","code":""},{"path":"/reference/as_cropped_char.html","id":null,"dir":"Reference","previous_headings":"","what":"as_cropped_char — as_cropped_char","title":"as_cropped_char — as_cropped_char","text":"Makes character string x chars Reduce x char string ...","code":""},{"path":"/reference/as_cropped_char.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"as_cropped_char — as_cropped_char","text":"","code":"as_cropped_char(inval, crop_at = 30, ndp = 3)"},{"path":"/reference/as_cropped_char.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"as_cropped_char — as_cropped_char","text":"inval single element value crop_at character limit ndp Number decimal places display","code":""},{"path":"/reference/as_dataframe.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert object to dataframe — as_dataframe","title":"Convert object to dataframe — as_dataframe","text":"Convert object dataframe","code":""},{"path":"/reference/as_dataframe.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert object to dataframe — as_dataframe","text":"","code":"as_dataframe(x)"},{"path":"/reference/as_dataframe.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert object to dataframe — as_dataframe","text":"x data.frame like object Utility function convert \"data.frame-like\" object actual data.frame avoid issues inconsistency methods ( [() dplyr's grouped dataframes)","code":""},{"path":"/reference/as_draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a draws object — as_draws","title":"Creates a draws object — as_draws","text":"Creates draws object final output call draws().","code":""},{"path":"/reference/as_draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a draws object — as_draws","text":"","code":"as_draws(method, 
samples, data, formula, n_failures = NULL, fit = NULL)"},{"path":"/reference/as_draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a draws object — as_draws","text":"method method object generated either method_bayes(), method_approxbayes(), method_condmean() method_bmlmi(). samples list sample_single objects. See sample_single(). data R6 longdata object containing relevant input data information. formula Fixed effects formula object used model specification. n_failures Absolute number failures model fit. fit method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL.","code":""},{"path":"/reference/as_draws.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Creates a draws object — as_draws","text":"draws object named list containing following: data: R6 longdata object containing relevant input data information. method: method object generated either method_bayes(), method_approxbayes() method_condmean(). samples: list containing estimated parameters interest. element samples named list containing following: ids: vector characters containing ids subjects included original dataset. beta: numeric vector estimated regression coefficients. sigma: list estimated covariance matrices (one level vars$group). theta: numeric vector transformed covariances. failed: Logical. TRUE model fit failed. ids_samp: vector characters containing ids subjects included given sample. fit: method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL. n_failures: absolute number failures model fit. Relevant method_condmean(type = \"bootstrap\"), method_approxbayes() method_bmlmi(). 
formula: fixed effects formula object used model specification.","code":""},{"path":"/reference/as_imputation.html","id":null,"dir":"Reference","previous_headings":"","what":"Create an imputation object — as_imputation","title":"Create an imputation object — as_imputation","text":"function creates object returned impute(). Essentially glorified wrapper around list() ensuring required elements set class added expected.","code":""},{"path":"/reference/as_imputation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create an imputation object — as_imputation","text":"","code":"as_imputation(imputations, data, method, references)"},{"path":"/reference/as_imputation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create an imputation object — as_imputation","text":"imputations list imputations_list's created imputation_df() data longdata object created longDataConstructor() method method object created method_condmean(), method_bayes() method_approxbayes() references named vector. Identifies references used generating imputed values. form c(\"Group\" = \"Reference\", \"Group\" = \"Reference\").","code":""},{"path":"/reference/as_indices.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert indicator to index — as_indices","title":"Convert indicator to index — as_indices","text":"Converts string 0's 1's index positions 1's padding results 0's length","code":""},{"path":"/reference/as_indices.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert indicator to index — as_indices","text":"","code":"as_indices(x)"},{"path":"/reference/as_indices.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert indicator to index — as_indices","text":"x character vector whose values either \"0\" \"1\". 
elements vector must length","code":""},{"path":"/reference/as_indices.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Convert indicator to index — as_indices","text":".e.","code":"patmap(c(\"1101\", \"0001\")) -> list(c(1,2,4,999), c(4,999, 999, 999))"},{"path":"/reference/as_mmrm_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a ","title":"Creates a ","text":"Converts design matrix + key variables common format particular function following: Renames covariates V1, V2, etc avoid issues special characters variable names Ensures key variables right type Inserts outcome, visit subjid variables data.frame naming outcome, visit subjid provided also insert group variable data.frame named group","code":""},{"path":"/reference/as_mmrm_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a ","text":"","code":"as_mmrm_df(designmat, outcome, visit, subjid, group = NULL)"},{"path":"/reference/as_mmrm_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a ","text":"designmat data.frame matrix containing covariates use MMRM model. Dummy variables must already expanded , .e. via stats::model.matrix(). contain missing values outcome numeric vector. outcome value regressed MMRM model. visit character / factor vector. Indicates visit outcome value occurred . subjid character / factor vector. subject identifier used link separate visits belong subject. group character / factor vector. Indicates treatment group patient belongs .","code":""},{"path":"/reference/as_mmrm_formula.html","id":null,"dir":"Reference","previous_headings":"","what":"Create MMRM formula — as_mmrm_formula","title":"Create MMRM formula — as_mmrm_formula","text":"Derives MMRM model formula structure mmrm_df. 
returns formula object form:","code":""},{"path":"/reference/as_mmrm_formula.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create MMRM formula — as_mmrm_formula","text":"","code":"as_mmrm_formula(mmrm_df, cov_struct)"},{"path":"/reference/as_mmrm_formula.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create MMRM formula — as_mmrm_formula","text":"mmrm_df mmrm data.frame created as_mmrm_df() cov_struct Character - covariance structure used, must one \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\")","code":""},{"path":"/reference/as_mmrm_formula.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create MMRM formula — as_mmrm_formula","text":"","code":"outcome ~ 0 + V1 + V2 + V4 + ... + us(visit | group / subjid)"},{"path":"/reference/as_model_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Expand data.frame into a design matrix — as_model_df","title":"Expand data.frame into a design matrix — as_model_df","text":"Expands data.frame using formula create design matrix. 
Key details always place outcome variable first column return object.","code":""},{"path":"/reference/as_model_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Expand data.frame into a design matrix — as_model_df","text":"","code":"as_model_df(dat, frm)"},{"path":"/reference/as_model_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Expand data.frame into a design matrix — as_model_df","text":"dat data.frame frm formula","code":""},{"path":"/reference/as_model_df.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Expand data.frame into a design matrix — as_model_df","text":"outcome column may contain NA's none variables listed formula contain missing values","code":""},{"path":"/reference/as_simple_formula.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a simple formula object from a string — as_simple_formula","title":"Creates a simple formula object from a string — as_simple_formula","text":"Converts string list variables formula object","code":""},{"path":"/reference/as_simple_formula.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a simple formula object from a string — as_simple_formula","text":"","code":"as_simple_formula(outcome, covars)"},{"path":"/reference/as_simple_formula.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a simple formula object from a string — as_simple_formula","text":"outcome character (length 1 vector). Name outcome variable covars character (vector). 
Name covariates","code":""},{"path":"/reference/as_simple_formula.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Creates a simple formula object from a string — as_simple_formula","text":"formula","code":""},{"path":"/reference/as_stan_array.html","id":null,"dir":"Reference","previous_headings":"","what":"As array — as_stan_array","title":"As array — as_stan_array","text":"Converts numeric value length 1 1 dimension array. avoid type errors thrown stan length 1 numeric vectors provided R stan::vector inputs","code":""},{"path":"/reference/as_stan_array.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"As array — as_stan_array","text":"","code":"as_stan_array(x)"},{"path":"/reference/as_stan_array.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"As array — as_stan_array","text":"x numeric vector","code":""},{"path":"/reference/as_strata.html","id":null,"dir":"Reference","previous_headings":"","what":"Create vector of Stratas — as_strata","title":"Create vector of Stratas — as_strata","text":"Collapse multiple categorical variables distinct unique categories. e.g. return","code":"as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5)) c(1,2,3,3,4,1)"},{"path":"/reference/as_strata.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create vector of Stratas — as_strata","text":"","code":"as_strata(...)"},{"path":"/reference/as_strata.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create vector of Stratas — as_strata","text":"... 
numeric/character/factor vectors length","code":""},{"path":"/reference/as_strata.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create vector of Stratas — as_strata","text":"","code":"if (FALSE) { # \\dontrun{ as_strata(c(1,1,2,2,2,1), c(5,6,5,5,6,5)) } # }"},{"path":"/reference/assert_variables_exist.html","id":null,"dir":"Reference","previous_headings":"","what":"Assert that all variables exist within a dataset — assert_variables_exist","title":"Assert that all variables exist within a dataset — assert_variables_exist","text":"Performs assertion check ensure vector variable exists within data.frame expected.","code":""},{"path":"/reference/assert_variables_exist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Assert that all variables exist within a dataset — assert_variables_exist","text":"","code":"assert_variables_exist(data, vars)"},{"path":"/reference/assert_variables_exist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Assert that all variables exist within a dataset — assert_variables_exist","text":"data data.frame vars character vector variable names","code":""},{"path":"/reference/char2fct.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert character variables to factor — char2fct","title":"Convert character variables to factor — char2fct","text":"Provided vector variable names function converts character variables factors. 
affect numeric existing factor variables","code":""},{"path":"/reference/char2fct.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert character variables to factor — char2fct","text":"","code":"char2fct(data, vars = NULL)"},{"path":"/reference/char2fct.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert character variables to factor — char2fct","text":"data data.frame vars character vector variables data","code":""},{"path":"/reference/check_ESS.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostics of the MCMC based on ESS — check_ESS","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"Check quality MCMC draws posterior distribution checking whether relative ESS sufficiently large.","code":""},{"path":"/reference/check_ESS.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"","code":"check_ESS(stan_fit, n_draws, threshold_lowESS = 0.4)"},{"path":"/reference/check_ESS.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"stan_fit stanfit object. n_draws Number MCMC draws. threshold_lowESS number [0,1] indicating minimum acceptable value relative ESS. See details.","code":""},{"path":"/reference/check_ESS.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"warning message case detected problems.","code":""},{"path":"/reference/check_ESS.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Diagnostics of the MCMC based on ESS — check_ESS","text":"check_ESS() works follows: Extract ESS stan_fit parameter model. Compute relative ESS (.e. ESS divided number draws). Check whether parameter ESS lower threshold. 
least one parameter relative ESS threshold, warning thrown.","code":""},{"path":"/reference/check_hmc_diagn.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"Check : divergent iterations. Bayesian Fraction Missing Information (BFMI) sufficiently low. number iterations saturated max treedepth zero. Please see rstan::check_hmc_diagnostics() details.","code":""},{"path":"/reference/check_hmc_diagn.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"","code":"check_hmc_diagn(stan_fit)"},{"path":"/reference/check_hmc_diagn.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"stan_fit stanfit object.","code":""},{"path":"/reference/check_hmc_diagn.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Diagnostics of the MCMC based on HMC-related measures. — check_hmc_diagn","text":"warning message case detected problems.","code":""},{"path":"/reference/check_mcmc.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostics of the MCMC — check_mcmc","title":"Diagnostics of the MCMC — check_mcmc","text":"Diagnostics MCMC","code":""},{"path":"/reference/check_mcmc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostics of the MCMC — check_mcmc","text":"","code":"check_mcmc(stan_fit, n_draws, threshold_lowESS = 0.4)"},{"path":"/reference/check_mcmc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostics of the MCMC — check_mcmc","text":"stan_fit stanfit object. n_draws Number MCMC draws. 
threshold_lowESS number [0,1] indicating minimum acceptable value relative ESS. See details.","code":""},{"path":"/reference/check_mcmc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Diagnostics of the MCMC — check_mcmc","text":"warning message case detected problems.","code":""},{"path":"/reference/check_mcmc.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Diagnostics of the MCMC — check_mcmc","text":"Performs checks quality MCMC. See check_ESS() check_hmc_diagn() details.","code":""},{"path":"/reference/compute_sigma.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"Adapt covariance matrix reference-based methods. Used Copy Increments Reference (CIR) Jump Reference (JTR) methods, adapt covariance matrix different pre-deviation post deviation covariance structures. See Carpenter et al. (2013)","code":""},{"path":"/reference/compute_sigma.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"","code":"compute_sigma(sigma_group, sigma_ref, index_mar)"},{"path":"/reference/compute_sigma.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"sigma_group covariance matrix dimensions equal index_mar subjects original group sigma_ref covariance matrix dimensions equal index_mar subjects reference group index_mar logical vector indicating visits meet MAR assumption subject. .e. 
identifies observations non-MAR intercurrent event (ICE).","code":""},{"path":"/reference/compute_sigma.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compute covariance matrix for some reference-based methods (JR, CIR) — compute_sigma","text":"Carpenter, James R., James H. Roger, Michael G. Kenward. \"Analysis longitudinal trials protocol deviation: framework relevant, accessible assumptions, inference via multiple imputation.\" Journal Biopharmaceutical statistics 23.6 (2013): 1352-1371.","code":""},{"path":"/reference/convert_to_imputation_list_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","title":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","text":"Convert list imputation_list_single() objects imputation_list_df() object (.e. list imputation_df() objects's)","code":""},{"path":"/reference/convert_to_imputation_list_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","text":"","code":"convert_to_imputation_list_df(imputes, sample_ids)"},{"path":"/reference/convert_to_imputation_list_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert list of imputation_list_single() objects to an imputation_list_df() object (i.e. a list of imputation_df() objects's) — convert_to_imputation_list_df","text":"imputes list imputation_list_single() objects sample_ids list 1 element per required imputation_df. element must contain vector \"ID\"'s correspond imputation_single() ID's required dataset. 
total number ID's must equal total number rows within imputes$imputations accommodate method_bmlmi() impute_data_individual() function returns list imputation_list_single() objects 1 object per subject. imputation_list_single() stores subjects imputations matrix columns matrix correspond D method_bmlmi(). Note methods (.e. methods_*()) special case D = 1. number rows matrix varies subject equal number times patient selected imputation (non-conditional mean methods 1 per subject per imputed dataset). function best illustrated example: convert_to_imputation_df(imputes, sample_ids) result : Note different repetitions (.e. value set D) grouped together sequentially.","code":"imputes = list( imputation_list_single( id = \"Tom\", imputations = matrix( imputation_single_t_1_1, imputation_single_t_1_2, imputation_single_t_2_1, imputation_single_t_2_2, imputation_single_t_3_1, imputation_single_t_3_2 ) ), imputation_list_single( id = \"Tom\", imputations = matrix( imputation_single_h_1_1, imputation_single_h_1_2, ) ) ) sample_ids <- list( c(\"Tom\", \"Harry\", \"Tom\"), c(\"Tom\") ) imputation_list_df( imputation_df( imputation_single_t_1_1, imputation_single_h_1_1, imputation_single_t_2_1 ), imputation_df( imputation_single_t_1_2, imputation_single_h_1_2, imputation_single_t_2_2 ), imputation_df( imputation_single_t_3_1 ), imputation_df( imputation_single_t_3_2 ) )"},{"path":"/reference/d_lagscale.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate delta from a lagged scale coefficient — d_lagscale","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"Calculates delta value based upon baseline delta value post ICE scaling coefficient.","code":""},{"path":"/reference/d_lagscale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"","code":"d_lagscale(delta, dlag, 
is_post_ice)"},{"path":"/reference/d_lagscale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"delta numeric vector. Determines baseline amount delta applied visit. dlag numeric vector. Determines scaling applied delta based upon visit ICE occurred . Must length delta. is_post_ice logical vector. Indicates whether visit \"post-ICE\" .","code":""},{"path":"/reference/d_lagscale.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Calculate delta from a lagged scale coefficient — d_lagscale","text":"See delta_template() full details calculation performed.","code":""},{"path":"/reference/delta_template.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a delta data.frame template — delta_template","title":"Create a delta data.frame template — delta_template","text":"Creates data.frame format required analyse() use applying delta adjustment.","code":""},{"path":"/reference/delta_template.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a delta data.frame template — delta_template","text":"","code":"delta_template(imputations, delta = NULL, dlag = NULL, missing_only = TRUE)"},{"path":"/reference/delta_template.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a delta data.frame template — delta_template","text":"imputations imputation object created impute(). delta NULL numeric vector. Determines baseline amount delta applied visit. See details. numeric vector must length number unique visits original dataset. dlag NULL numeric vector. Determines scaling applied delta based upon visit ICE occurred . See details. numeric vector must length number unique visits original dataset. missing_only Logical, TRUE non-missing post-ICE data delta value 0 assigned. 
Note calculation (described details section) performed first overwritten 0's end (.e. delta values missing post-ICE visits stay regardless option).","code":""},{"path":"/reference/delta_template.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create a delta data.frame template — delta_template","text":"apply delta adjustment analyse() function expects delta data.frame 3 variables: vars$subjid, vars$visit delta (vars object supplied original call draws() created set_vars() function). function return data.frame aforementioned variables one row per subject per visit. delta argument function NULL delta column returned data.frame 0 observations. delta argument NULL delta calculated separately subject accumulative sum delta multiplied scaling coefficient dlag based upon many visits subject's intercurrent event (ICE) visit question . best illustrated example: Let delta = c(5,6,7,8) dlag=c(1,2,3,4) (.e. assuming 4 visits) lets say subject ICE visit 2. calculation follows: say subject delta offset 0 applied visit-1, 6 visit-2, 20 visit-3 44 visit-4. comparison, lets say subject instead ICE visit 3, calculation follows: terms practical usage, lets say wanted delta 5 used post ICE visits regardless proximity ICE visit. can achieved setting delta = c(5,5,5,5) dlag = c(1,0,0,0). example lets say subject ICE visit-1, calculation follows: Another way using arguments set delta difference time visits dlag amount delta per unit time. example lets say visit weeks 1, 5, 6 & 9 want delta 3 applied week ICE. can achieved setting delta = c(0,4,1,3) (difference weeks visit) dlag = c(3, 3, 3, 3). example lets say subject ICE week-5 (.e. visit-2) calculation : .e. week-6 (1 week ICE) delta 3 week-9 (4 weeks ICE) delta 12. Please note function also returns several utility variables user can create custom logic defining delta set . additional variables include: is_mar - observation missing regarded MAR? 
variable set FALSE observations occurred non-MAR ICE, otherwise set TRUE. is_missing - outcome variable observation missing. is_post_ice - observation occur patient's ICE defined data_ice dataset supplied draws(). strategy - imputation strategy assigned subject. design implementation function largely based upon functionality implemented called \"five macros\" James Roger. See Roger (2021).","code":"v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 1 2 3 # lagged scaling starting from the first visit after the subjects ICE -------------- 0 6 14 24 # delta * lagged scaling -------------- 0 6 20 44 # accumulative sum of delta to be applied to each visit v1 v2 v3 v4 -------------- 5 6 7 8 # delta assigned to each visit 0 0 1 2 # lagged scaling starting from the first visit after the subjects ICE -------------- 0 0 7 16 # delta * lagged scaling -------------- 0 0 7 23 # accumulative sum of delta to be applied to each visit v1 v2 v3 v4 -------------- 5 5 5 5 # delta assigned to each visit 1 0 0 0 # lagged scaling starting from the first visit after the subjects ICE -------------- 5 0 0 0 # delta * lagged scaling -------------- 5 5 5 5 # accumulative sum of delta to be applied to each visit v1 v2 v3 v4 -------------- 0 4 1 3 # delta assigned to each visit 0 0 3 3 # lagged scaling starting from the first visit after the subjects ICE -------------- 0 0 3 9 # delta * lagged scaling -------------- 0 0 3 12 # accumulative sum of delta to be applied to each visit"},{"path":"/reference/delta_template.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Create a delta data.frame template — delta_template","text":"Roger, James. Reference-based mi via multivariate normal rm (“five macros” miwithd), 2021. 
URL https://www.lshtm.ac.uk/research/centres-projects-groups/missing-data#dia-missing-data.","code":""},{"path":[]},{"path":"/reference/delta_template.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a delta data.frame template — delta_template","text":"","code":"if (FALSE) { # \\dontrun{ delta_template(imputeObj) delta_template(imputeObj, delta = c(5,6,7,8), dlag = c(1,2,3,4)) } # }"},{"path":"/reference/draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit the base imputation model and get parameter estimates — draws","title":"Fit the base imputation model and get parameter estimates — draws","text":"draws fits base imputation model observed outcome data according given multiple imputation methodology. According user's method specification, returns either draws posterior distribution model parameters required Bayesian multiple imputation frequentist parameter estimates original data bootstrapped leave-one-datasets required conditional mean imputation. purpose imputation model estimate model parameters absence intercurrent events (ICEs) handled using reference-based imputation methods. reason, observed outcome data ICEs, reference-based imputation methods specified, removed considered missing purpose estimating imputation model, purpose . imputation model mixed model repeated measures (MMRM) valid missing--random (MAR) assumption. can fit using maximum likelihood (ML) restricted ML (REML) estimation, Bayesian approach, approximate Bayesian approach according user's method specification. ML/REML approaches approximate Bayesian approach support several possible covariance structures, Bayesian approach based MCMC sampling supports unstructured covariance structure. 
case covariance matrix can assumed different across group.","code":""},{"path":"/reference/draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit the base imputation model and get parameter estimates — draws","text":"","code":"draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'approxbayes' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'condmean' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'bmlmi' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE) # S3 method for class 'bayes' draws(data, data_ice = NULL, vars, method, ncores = 1, quiet = FALSE)"},{"path":"/reference/draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit the base imputation model and get parameter estimates — draws","text":"data data.frame containing data used model. See details. data_ice data.frame specifies information related ICEs imputation strategies. See details. vars vars object generated set_vars(). See details. method method object generated either method_bayes(), method_approxbayes(), method_condmean() method_bmlmi(). specifies multiple imputation methodology used. See details. ncores single numeric specifying number cores use creating draws object. Note parameter ignored method_bayes() (Default = 1). Can also cluster object generated make_rbmi_cluster() quiet Logical, TRUE suppress printing progress information printed console.","code":""},{"path":"/reference/draws.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit the base imputation model and get parameter estimates — draws","text":"draws object named list containing following: data: R6 longdata object containing relevant input data information. method: method object generated either method_bayes(), method_approxbayes() method_condmean(). 
samples: list containing estimated parameters interest. element samples named list containing following: ids: vector characters containing ids subjects included original dataset. beta: numeric vector estimated regression coefficients. sigma: list estimated covariance matrices (one level vars$group). theta: numeric vector transformed covariances. failed: Logical. TRUE model fit failed. ids_samp: vector characters containing ids subjects included given sample. fit: method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL. n_failures: absolute number failures model fit. Relevant method_condmean(type = \"bootstrap\"), method_approxbayes() method_bmlmi(). formula: fixed effects formula object used model specification.","code":""},{"path":"/reference/draws.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Fit the base imputation model and get parameter estimates — draws","text":"draws performs first step multiple imputation (MI) procedure: fitting base imputation model. goal estimate parameters interest needed imputation phase (.e. regression coefficients covariance matrices MMRM model). function distinguishes following methods: Bayesian MI based MCMC sampling: draws returns draws posterior distribution parameters using Bayesian approach based MCMC sampling. method can specified using method = method_bayes(). Approximate Bayesian MI based bootstrapping: draws returns draws posterior distribution parameters using approximate Bayesian approach, sampling posterior distribution simulated fitting MMRM model bootstrap samples original dataset. method can specified using method = method_approxbayes()]. Conditional mean imputation bootstrap re-sampling: draws returns MMRM parameter estimates original dataset n_samples bootstrap samples. method can specified using method = method_condmean() argument type = \"bootstrap\". 
Conditional mean imputation jackknife re-sampling: draws returns MMRM parameter estimates original dataset leave-one-subject-sample. method can specified using method = method_condmean() argument type = \"jackknife\". Bootstrapped Maximum Likelihood MI: draws returns MMRM parameter estimates given number bootstrap samples needed perform random imputations bootstrapped samples. method can specified using method = method_bmlmi(). Bayesian MI based MCMC sampling proposed Carpenter, Roger, Kenward (2013) first introduced reference-based imputation methods. Approximate Bayesian MI discussed Little Rubin (2002). Conditional mean imputation methods discussed Wolbers et al (2022). Bootstrapped Maximum Likelihood MI described Von Hippel & Bartlett (2021). argument data contains longitudinal data. must least following variables: subjid: factor vector containing subject ids. visit: factor vector containing visit outcome observed . group: factor vector containing group subject belongs . outcome: numeric vector containing outcome variable. might contain missing values. Additional baseline time-varying covariates must included data. data must one row per visit per subject. means incomplete outcome data must set NA instead related row missing. Missing values covariates allowed. data incomplete expand_locf() helper function can used insert missing rows using Last Observation Carried Forward (LOCF) imputation impute covariates values. Note LOCF generally principled imputation method used appropriate specific covariate. Please note special provisioning baseline outcome values. want baseline observations included model part response variable removed advance outcome variable data. time want include baseline outcome covariate model, included separate column data (covariate). Character covariates explicitly cast factors. 
use custom analysis function requires specific reference levels character covariates (example computation least square means computation) advised manually cast character covariates factor advance running draws(). argument data_ice contains information occurrence ICEs. data.frame 3 columns: Subject ID: character vector containing ids subjects experienced ICE. column must named specified vars$subjid. Visit: character vector containing first visit occurrence ICE (.e. first visit affected ICE). visits must equal one levels data[[vars$visit]]. multiple ICEs happen subject, first non-MAR visit used. column must named specified vars$visit. Strategy: character vector specifying imputation strategy address ICE subject. column must named specified vars$strategy. Possible imputation strategies : \"MAR\": Missing Random. \"CIR\": Copy Increments Reference. \"CR\": Copy Reference. \"JR\": Jump Reference. \"LMCF\": Last Mean Carried Forward. explanations imputation strategies, see Carpenter, Roger, Kenward (2013), Cro et al (2021), Wolbers et al (2022). Please note user-defined imputation strategies can also set. data_ice argument necessary stage since (explained Wolbers et al (2022)), model fitted removing observations incompatible imputation model, .e. observed data data_ice[[vars$visit]] addressed imputation strategy different MAR excluded model fit. However observations discarded data imputation phase (performed function (impute()). summarize, stage pre-ICE data post-ICE data ICEs MAR imputation specified used. data_ice argument omitted, subject record within data_ice, assumed relevant subject's data pre-ICE missing visits imputed MAR assumption observed data used fit base imputation model. Please note ICE visit updated via update_strategy argument impute(); means subjects record data_ice always missing data imputed MAR assumption even strategy updated. vars argument named list specifies names key variables within data data_ice. 
list created set_vars() contains following named elements: subjid: name column data data_ice contains subject ids variable. visit: name column data data_ice contains visit variable. group: name column data contains group variable. outcome: name column data contains outcome variable. covariates: vector characters contains covariates included model (including interactions specified \"covariateName1*covariateName2\"). covariates provided default model specification outcome ~ 1 + visit + group used. Please note group*visit interaction included model default. strata: covariates used stratification variables bootstrap sampling. default vars$group set stratification variable. Needed method_condmean(type = \"bootstrap\") method_approxbayes(). strategy: name column data_ice contains subject-specific imputation strategy. experience, Bayesian MI (method = method_bayes()) relatively low number samples (e.g. n_samples 100) frequently triggers STAN warnings R-hat \"largest R-hat X.XX, indicating chains mixed\". many instances, warning might spurious, .e. standard diagnostics analysis MCMC samples indicate issues results look reasonable. Increasing number samples e.g. 150 usually gets rid warning.","code":""},{"path":"/reference/draws.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Fit the base imputation model and get parameter estimates — draws","text":"James R Carpenter, James H Roger, Michael G Kenward. Analysis longitudinal trials protocol deviation: framework relevant, accessible assumptions, inference via multiple imputation. Journal Biopharmaceutical Statistics, 23(6):1352–1371, 2013. Suzie Cro, Tim P Morris, Michael G Kenward, James R Carpenter. Sensitivity analysis clinical trials missing continuous outcome data using controlled multiple imputation: practical guide. Statistics Medicine, 39(21):2815–2842, 2020. Roderick J. . Little Donald B. Rubin. Statistical Analysis Missing Data, Second Edition. 
John Wiley & Sons, Hoboken, New Jersey, 2002. [Section 10.2.3] Marcel Wolbers, Alessandro Noci, Paul Delmar, Craig Gower-Page, Sean Yiu, Jonathan W. Bartlett. Standard reference-based conditional mean imputation. https://arxiv.org/abs/2109.11162, 2022. Von Hippel, Paul T Bartlett, Jonathan W. Maximum likelihood multiple imputation: Faster imputations consistent standard errors without posterior draws. 2021.","code":""},{"path":[]},{"path":"/reference/ensure_rstan.html","id":null,"dir":"Reference","previous_headings":"","what":"Ensure rstan exists — ensure_rstan","title":"Ensure rstan exists — ensure_rstan","text":"Checks see rstan exists throws helpful error message","code":""},{"path":"/reference/ensure_rstan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Ensure rstan exists — ensure_rstan","text":"","code":"ensure_rstan()"},{"path":"/reference/eval_mmrm.html","id":null,"dir":"Reference","previous_headings":"","what":"Evaluate a call to mmrm — eval_mmrm","title":"Evaluate a call to mmrm — eval_mmrm","text":"utility function attempts evaluate call mmrm managing warnings errors thrown. particular function attempts catch warnings errors instead surfacing simply add additional element failed value TRUE. allows multiple calls made without program exiting.","code":""},{"path":"/reference/eval_mmrm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Evaluate a call to mmrm — eval_mmrm","text":"","code":"eval_mmrm(expr)"},{"path":"/reference/eval_mmrm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Evaluate a call to mmrm — eval_mmrm","text":"expr expression evaluated. call mmrm::mmrm().","code":""},{"path":"/reference/eval_mmrm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Evaluate a call to mmrm — eval_mmrm","text":"function originally developed use glmmTMB needed hand-holding dropping false-positive warnings. 
important now kept around in case need catch false-positive warnings future.","code":""},{"path":[]},{"path":"/reference/eval_mmrm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Evaluate a call to mmrm — eval_mmrm","text":"","code":"if (FALSE) { # \\dontrun{ eval_mmrm({ mmrm::mmrm(formula, data) }) } # }"},{"path":"/reference/expand.html","id":null,"dir":"Reference","previous_headings":"","what":"Expand and fill in missing data.frame rows — expand","title":"Expand and fill in missing data.frame rows — expand","text":"functions essentially wrappers around base::expand.grid() ensure missing combinations data inserted data.frame imputation/fill methods updating covariate values newly created rows.","code":""},{"path":"/reference/expand.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Expand and fill in missing data.frame rows — expand","text":"","code":"expand(data, ...) fill_locf(data, vars, group = NULL, order = NULL) expand_locf(data, ..., vars, group, order)"},{"path":"/reference/expand.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Expand and fill in missing data.frame rows — expand","text":"data dataset expand fill . ... variables levels expanded (note duplicate entries levels result multiple rows level). vars character vector containing names variables need filled . group character vector containing names variables group performing LOCF imputation var. 
order character vector containing names additional variables sort data.frame performing LOCF.","code":""},{"path":"/reference/expand.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Expand and fill in missing data.frame rows — expand","text":"draws() function makes assumption subjects visits present data.frame covariate values non missing; expand(), fill_locf() expand_locf() utility functions support users ensuring data.frame's conform assumptions. expand() takes vectors expected levels data.frame expands combinations inserting missing rows data.frame. Note \"expanded\" variables cast factors. fill_locf() applies LOCF imputation named covariates fill NAs created insertion new rows expand() (though note distinction made existing NAs newly created NAs). Note data.frame sorted c(group, order) performing LOCF imputation; data.frame returned original sort order however. expand_locf() simple composition function fill_locf() expand() .e. fill_locf(expand(...)).","code":""},{"path":"/reference/expand.html","id":"missing-first-values","dir":"Reference","previous_headings":"","what":"Missing First Values","title":"Expand and fill in missing data.frame rows — expand","text":"fill_locf() function performs last observation carried forward imputation. natural consequence unable impute missing observations observation first value given subject / grouping. values deliberately imputed risks silent errors case time varying covariates. 
One solution first use expand_locf() just visit variable time varying covariates merge baseline covariates afterwards .e.","code":"library(dplyr) dat_expanded <- expand( data = dat, subject = c(\"pt1\", \"pt2\", \"pt3\", \"pt4\"), visit = c(\"vis1\", \"vis2\", \"vis3\") ) dat_filled <- dat_expanded %>% left_join(baseline_covariates, by = \"subject\")"},{"path":"/reference/expand.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Expand and fill in missing data.frame rows — expand","text":"","code":"if (FALSE) { # \\dontrun{ dat_expanded <- expand( data = dat, subject = c(\"pt1\", \"pt2\", \"pt3\", \"pt4\"), visit = c(\"vis1\", \"vis2\", \"vis3\") ) dat_filled <- fill_locf( data = dat_expanded, vars = c(\"Sex\", \"Age\"), group = \"subject\", order = \"visit\" ) ## Or dat_filled <- expand_locf( data = dat, subject = c(\"pt1\", \"pt2\", \"pt3\", \"pt4\"), visit = c(\"vis1\", \"vis2\", \"vis3\"), vars = c(\"Sex\", \"Age\"), group = \"subject\", order = \"visit\" ) } # }"},{"path":"/reference/extract_covariates.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract Variables from string vector — extract_covariates","title":"Extract Variables from string vector — extract_covariates","text":"Takes string including potentially model terms like * : extracts individual variables","code":""},{"path":"/reference/extract_covariates.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract Variables from string vector — extract_covariates","text":"","code":"extract_covariates(x)"},{"path":"/reference/extract_covariates.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract Variables from string vector — extract_covariates","text":"x string variable names potentially including interaction terms","code":""},{"path":"/reference/extract_covariates.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Extract 
Variables from string vector — extract_covariates","text":".e. c(\"v1\", \"v2\", \"v2*v3\", \"v1:v2\") becomes c(\"v1\", \"v2\", \"v3\")","code":""},{"path":"/reference/extract_data_nmar_as_na.html","id":null,"dir":"Reference","previous_headings":"","what":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"Set NA outcome values MNAR missing (.e. occur ICE handled using reference-based imputation strategy)","code":""},{"path":"/reference/extract_data_nmar_as_na.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"","code":"extract_data_nmar_as_na(longdata)"},{"path":"/reference/extract_data_nmar_as_na.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"longdata R6 longdata object containing relevant input data information.","code":""},{"path":"/reference/extract_data_nmar_as_na.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Set to NA outcome values that would be MNAR if they were missing (i.e. 
which occur after an ICE handled using a reference-based imputation strategy) — extract_data_nmar_as_na","text":"data.frame containing longdata$get_data(longdata$ids), MNAR outcome values set NA.","code":""},{"path":"/reference/extract_draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract draws from a stanfit object — extract_draws","title":"Extract draws from a stanfit object — extract_draws","text":"Extract draws stanfit object convert lists. function rstan::extract() returns draws given parameter array. function calls rstan::extract() extract draws stanfit object convert arrays lists.","code":""},{"path":"/reference/extract_draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract draws from a stanfit object — extract_draws","text":"","code":"extract_draws(stan_fit)"},{"path":"/reference/extract_draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract draws from a stanfit object — extract_draws","text":"stan_fit stanfit object.","code":""},{"path":"/reference/extract_draws.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract draws from a stanfit object — extract_draws","text":"named list length 2 containing: beta: list length equal number draws containing draws posterior distribution regression coefficients. sigma: list length equal number draws containing draws posterior distribution covariance matrices. element list list length equal 1 same_cov = TRUE equal number groups same_cov = FALSE.","code":""},{"path":"/reference/extract_imputed_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract imputed dataset — extract_imputed_df","title":"Extract imputed dataset — extract_imputed_df","text":"Takes imputation object generated imputation_df() uses extract completed dataset longdata object created longDataConstructor(). Also applies delta transformation data.frame provided delta argument. 
See analyse() details structure data.frame. Subject IDs returned data.frame scrambled .e. original values.","code":""},{"path":"/reference/extract_imputed_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract imputed dataset — extract_imputed_df","text":"","code":"extract_imputed_df(imputation, ld, delta = NULL, idmap = FALSE)"},{"path":"/reference/extract_imputed_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract imputed dataset — extract_imputed_df","text":"imputation imputation object generated imputation_df(). ld longdata object generated longDataConstructor(). delta Either NULL data.frame. used offset outcome values imputed dataset. idmap Logical. TRUE attribute called \"idmap\" attached return object contains list maps old subject ids new subject ids.","code":""},{"path":"/reference/extract_imputed_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract imputed dataset — extract_imputed_df","text":"data.frame.","code":""},{"path":"/reference/extract_imputed_dfs.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract imputed datasets — extract_imputed_dfs","title":"Extract imputed datasets — extract_imputed_dfs","text":"Extracts imputed datasets contained within imputations object generated impute().","code":""},{"path":"/reference/extract_imputed_dfs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract imputed datasets — extract_imputed_dfs","text":"","code":"extract_imputed_dfs( imputations, index = seq_along(imputations$imputations), delta = NULL, idmap = FALSE )"},{"path":"/reference/extract_imputed_dfs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract imputed datasets — extract_imputed_dfs","text":"imputations imputations object created impute(). index indexes imputed datasets return. 
default, datasets within imputations object returned. delta data.frame containing delta transformation applied imputed dataset. See analyse() details format specification data.frame. idmap Logical. subject IDs imputed data.frame's replaced new IDs ensure unique. Setting argument TRUE attaches attribute, called idmap, returned data.frame's provide map new subject IDs old subject IDs.","code":""},{"path":"/reference/extract_imputed_dfs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract imputed datasets — extract_imputed_dfs","text":"list data.frames equal length index argument.","code":""},{"path":[]},{"path":"/reference/extract_imputed_dfs.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Extract imputed datasets — extract_imputed_dfs","text":"","code":"if (FALSE) { # \\dontrun{ extract_imputed_dfs(imputeObj) extract_imputed_dfs(imputeObj, c(1:3)) } # }"},{"path":"/reference/extract_params.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract parameters from a MMRM model — extract_params","title":"Extract parameters from a MMRM model — extract_params","text":"Extracts beta sigma coefficients MMRM model created mmrm::mmrm().","code":""},{"path":"/reference/extract_params.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract parameters from a MMRM model — extract_params","text":"","code":"extract_params(fit)"},{"path":"/reference/extract_params.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract parameters from a MMRM model — extract_params","text":"fit object created mmrm::mmrm()","code":""},{"path":"/reference/fit_mcmc.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit the base imputation model using a Bayesian approach — fit_mcmc","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"fit_mcmc() fits base imputation model using 
Bayesian approach. done MCMC method implemented stan run using function rstan::sampling(). function returns draws posterior distribution model parameters stanfit object. Additionally performs multiple diagnostics checks chain returns warnings case detected issues.","code":""},{"path":"/reference/fit_mcmc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"","code":"fit_mcmc(designmat, outcome, group, subjid, visit, method, quiet = FALSE)"},{"path":"/reference/fit_mcmc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"designmat design matrix fixed effects. outcome response variable. Must numeric. group Character vector containing group variable. subjid Character vector containing subjects IDs. visit Character vector containing visit variable. method method object generated method_bayes(). quiet Specify whether stan sampling log printed console.","code":""},{"path":"/reference/fit_mcmc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"named list composed following: samples: named list containing draws parameter. corresponds output extract_draws(). fit: stanfit object.","code":""},{"path":"/reference/fit_mcmc.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Fit the base imputation model using a Bayesian approach — fit_mcmc","text":"Bayesian model assumes multivariate normal likelihood function weakly-informative priors model parameters: particular, uniform priors assumed regression coefficients inverse-Wishart priors covariance matrices. chain initialized using REML parameter estimates MMRM starting values. function performs following steps: Fit MMRM using REML approach. 
Prepare input data MCMC fit described data{} block Stan file. See prepare_stan_data() details. Run MCMC according input arguments using starting values REML parameter estimates estimated point 1. Performs diagnostics checks MCMC. See check_mcmc() details. Extract draws model fit. chains perform method$n_samples draws keeping one every method$burn_between iterations. Additionally first method$burn_in iterations discarded. total number iterations method$burn_in + method$burn_between*method$n_samples. purpose method$burn_in ensure samples drawn stationary distribution Markov Chain. method$burn_between aims keep draws uncorrelated .","code":""},{"path":"/reference/fit_mmrm.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit a MMRM model — fit_mmrm","title":"Fit a MMRM model — fit_mmrm","text":"Fits MMRM model allowing different covariance structures using mmrm::mmrm(). Returns list key model parameters beta, sigma additional element failed indicating whether fit failed converge. fit fail converge beta sigma present.","code":""},{"path":"/reference/fit_mmrm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit a MMRM model — fit_mmrm","text":"","code":"fit_mmrm( designmat, outcome, subjid, visit, group, cov_struct = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), REML = TRUE, same_cov = TRUE )"},{"path":"/reference/fit_mmrm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit a MMRM model — fit_mmrm","text":"designmat data.frame matrix containing covariates use MMRM model. Dummy variables must already expanded , .e. via stats::model.matrix(). contain missing values outcome numeric vector. outcome value regressed MMRM model. subjid character / factor vector. subject identifier used link separate visits belong subject. visit character / factor vector. Indicates visit outcome value occurred . group character / factor vector. 
Indicates treatment group patient belongs . cov_struct character value. Specifies covariance structure use. Must one \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\" REML logical. Specifies whether restricted maximum likelihood used same_cov logical. Used specify shared individual covariance matrix used per group","code":""},{"path":"/reference/generate_data_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate data for a single group — generate_data_single","title":"Generate data for a single group — generate_data_single","text":"Generate data single group","code":""},{"path":"/reference/generate_data_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate data for a single group — generate_data_single","text":"","code":"generate_data_single(pars_group, strategy_fun = NULL, distr_pars_ref = NULL)"},{"path":"/reference/generate_data_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate data for a single group — generate_data_single","text":"pars_group simul_pars object generated set_simul_pars(). specifies simulation parameters given group. strategy_fun Function implementing trajectories intercurrent event (ICE). Must one getStrategies(). See getStrategies() details. NULL post-ICE outcomes untouched. distr_pars_ref Optional. Named list containing simulation parameters reference arm. contains following elements: mu: Numeric vector indicating mean outcome trajectory assuming ICEs. include outcome baseline. sigma Covariance matrix outcome trajectory assuming ICEs. NULL, parameters inherited pars_group.","code":""},{"path":"/reference/generate_data_single.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate data for a single group — generate_data_single","text":"data.frame containing simulated data. 
includes following variables: id: Factor variable specifies id subject. visit: Factor variable specifies visit assessment. Visit 0 denotes baseline visit. group: Factor variable specifies treatment group subject belongs . outcome_bl: Numeric variable specifies baseline outcome. outcome_noICE: Numeric variable specifies longitudinal outcome assuming ICEs. ind_ice1: Binary variable takes value 1 corresponding visit affected ICE1 0 otherwise. dropout_ice1: Binary variable takes value 1 corresponding visit affected drop-following ICE1 0 otherwise. ind_ice2: Binary variable takes value 1 corresponding visit affected ICE2. outcome: Numeric variable specifies longitudinal outcome including ICE1, ICE2 intermittent missing values.","code":""},{"path":[]},{"path":"/reference/getStrategies.html","id":null,"dir":"Reference","previous_headings":"","what":"Get imputation strategies — getStrategies","title":"Get imputation strategies — getStrategies","text":"Returns list defining imputation strategies used create multivariate normal distribution parameters merging source group reference group per patient.","code":""},{"path":"/reference/getStrategies.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get imputation strategies — getStrategies","text":"","code":"getStrategies(...)"},{"path":"/reference/getStrategies.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get imputation strategies — getStrategies","text":"... User defined methods added return list. Input must function.","code":""},{"path":"/reference/getStrategies.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Get imputation strategies — getStrategies","text":"default Jump Reference (JR), Copy Reference (CR), Copy Increments Reference (CIR), Last Mean Carried Forward (LMCF) Missing Random (MAR) defined. user can define strategy functions (overwrite pre-defined ones) specifying named input function .e. 
NEW = function(...) .... exception MAR overwritten. user defined functions must take 3 inputs: pars_group, pars_ref index_mar. pars_group pars_ref lists elements mu sigma representing multivariate normal distribution parameters subject's current group reference group respectively. index_mar logical vector specifying visits subject met MAR assumption . function must return list elements mu sigma. See implementation strategy_JR() example.","code":""},{"path":"/reference/getStrategies.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get imputation strategies — getStrategies","text":"","code":"if (FALSE) { # \\dontrun{ getStrategies() getStrategies( NEW = function(pars_group, pars_ref, index_mar) code , JR = function(pars_group, pars_ref, index_mar) more_code ) } # }"},{"path":"/reference/get_ESS.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"Extract Effective Sample Size (ESS) stanfit object","code":""},{"path":"/reference/get_ESS.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"","code":"get_ESS(stan_fit)"},{"path":"/reference/get_ESS.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"stan_fit stanfit object.","code":""},{"path":"/reference/get_ESS.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract the Effective Sample Size (ESS) from a stanfit object — get_ESS","text":"named vector containing ESS parameter model.","code":""},{"path":"/reference/get_bootstrap_stack.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a stack object populated with 
bootstrapped samples — get_bootstrap_stack","title":"Creates a stack object populated with bootstrapped samples — get_bootstrap_stack","text":"Function creates Stack() object populated stack bootstrap samples based upon method$n_samples","code":""},{"path":"/reference/get_bootstrap_stack.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a stack object populated with bootstrapped samples — get_bootstrap_stack","text":"","code":"get_bootstrap_stack(longdata, method, stack = Stack$new())"},{"path":"/reference/get_bootstrap_stack.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a stack object populated with bootstrapped samples — get_bootstrap_stack","text":"longdata longDataConstructor() object method method object stack Stack() object (exposed unit testing purposes)","code":""},{"path":"/reference/get_conditional_parameters.html","id":null,"dir":"Reference","previous_headings":"","what":"Derive conditional multivariate normal parameters — get_conditional_parameters","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"Takes parameters multivariate normal distribution observed values calculate conditional distribution unobserved values.","code":""},{"path":"/reference/get_conditional_parameters.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"","code":"get_conditional_parameters(pars, values)"},{"path":"/reference/get_conditional_parameters.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"pars list elements mu sigma defining mean vector covariance matrix respectively. values vector observed values condition , must length pars$mu. 
Missing values must represented NA.","code":""},{"path":"/reference/get_conditional_parameters.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Derive conditional multivariate normal parameters — get_conditional_parameters","text":"list conditional distribution parameters: mu - conditional mean vector. sigma - conditional covariance matrix.","code":""},{"path":"/reference/get_delta_template.html","id":null,"dir":"Reference","previous_headings":"","what":"Get delta utility variables — get_delta_template","title":"Get delta utility variables — get_delta_template","text":"function creates default delta template (1 row per subject per visit) extracts utility information users need define logic defining delta. See delta_template() full details.","code":""},{"path":"/reference/get_delta_template.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get delta utility variables — get_delta_template","text":"","code":"get_delta_template(imputations)"},{"path":"/reference/get_delta_template.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get delta utility variables — get_delta_template","text":"imputations imputations object created impute().","code":""},{"path":"/reference/get_draws_mle.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit the base imputation model on bootstrap samples — get_draws_mle","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"Fit base imputation model using ML/REML approach given number bootstrap samples specified method$n_samples. 
Returns parameter estimates model fit.","code":""},{"path":"/reference/get_draws_mle.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"","code":"get_draws_mle( longdata, method, sample_stack, n_target_samples, first_sample_orig, use_samp_ids, failure_limit = 0, ncores = 1, quiet = FALSE )"},{"path":"/reference/get_draws_mle.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"longdata R6 longdata object containing relevant input data information. method method object generated either method_approxbayes() method_condmean() argument type = \"bootstrap\". sample_stack stack object containing subject ids used mmrm iteration. n_target_samples Number samples needed created first_sample_orig Logical. TRUE function returns method$n_samples + 1 samples first sample contains parameter estimates original dataset method$n_samples samples contain parameter estimates bootstrap samples. FALSE function returns method$n_samples samples containing parameter estimates bootstrap samples. use_samp_ids Logical. TRUE, sampled subject ids returned. Otherwise subject ids original dataset returned. values used tell impute() subjects used derive imputed dataset. failure_limit Number failed samples allowed throwing error ncores Number processes parallelise job quiet Logical, TRUE suppress printing progress information printed console.","code":""},{"path":"/reference/get_draws_mle.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"draws object named list containing following: data: R6 longdata object containing relevant input data information. method: method object generated either method_bayes(), method_approxbayes() method_condmean(). 
samples: list containing estimated parameters interest. element samples named list containing following: ids: vector characters containing ids subjects included original dataset. beta: numeric vector estimated regression coefficients. sigma: list estimated covariance matrices (one level vars$group). theta: numeric vector transformed covariances. failed: Logical. TRUE model fit failed. ids_samp: vector characters containing ids subjects included given sample. fit: method_bayes() chosen, returns MCMC Stan fit object. Otherwise NULL. n_failures: absolute number failures model fit. Relevant method_condmean(type = \"bootstrap\"), method_approxbayes() method_bmlmi(). formula: fixed effects formula object used model specification.","code":""},{"path":"/reference/get_draws_mle.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Fit the base imputation model on bootstrap samples — get_draws_mle","text":"function takes Stack object contains multiple lists patient ids. function takes Stack pulls set ids constructs dataset just consisting patients (.e. potentially bootstrap jackknife sample). function fits MMRM model dataset create sample object. function repeats process n_target_samples reached. failure_limit samples fail converge function throws error. 
reaching desired number samples function generates returns draws object.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":null,"dir":"Reference","previous_headings":"","what":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"Compute pooled point estimates, standard error degrees freedom according Von Hippel Bartlett formula Bootstrapped Maximum Likelihood Multiple Imputation (BMLMI).","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"","code":"get_ests_bmlmi(ests, D)"},{"path":"/reference/get_ests_bmlmi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"ests numeric vector containing estimates analysis imputed datasets. D numeric representing number imputations bootstrap sample BMLMI method.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"list containing point estimate, standard error degrees freedom.","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"ests must provided following order: first D elements related analyses random imputation one bootstrap sample. second set D elements (.e. D+1 2*D) related second bootstrap sample .","code":""},{"path":"/reference/get_ests_bmlmi.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Von Hippel and Bartlett pooling of BMLMI method — get_ests_bmlmi","text":"Von Hippel, Paul T Bartlett, Jonathan W. 
Maximum likelihood multiple imputation: Faster imputations consistent standard errors without posterior draws. 2021","code":""},{"path":"/reference/get_example_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate a realistic example dataset — get_example_data","title":"Simulate a realistic example dataset — get_example_data","text":"Simulate realistic example dataset using simulate_data() hard-coded values input arguments.","code":""},{"path":"/reference/get_example_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate a realistic example dataset — get_example_data","text":"","code":"get_example_data()"},{"path":"/reference/get_example_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate a realistic example dataset — get_example_data","text":"get_example_data() simulates 1:1 randomized trial active drug (intervention) versus placebo (control) 100 subjects per group 6 post-baseline assessments (bi-monthly visits 12 months). One intercurrent event corresponding treatment discontinuation also simulated. Specifically, data simulated following assumptions: mean outcome trajectory placebo group increases linearly 50 baseline (visit 0) 60 visit 6, .e. slope 10 points/year. mean outcome trajectory intervention group identical placebo group visit 2. visit 2 onward, slope decreases 50% 5 points/year. covariance structure baseline follow-values groups implied random intercept slope model standard deviation 5 intercept slope, correlation 0.25. addition, independent residual error standard deviation 2.5 added assessment. probability study drug discontinuation visit calculated according logistic model depends observed outcome visit. Specifically, visit-wise discontinuation probability 2% 3% control intervention group, respectively, specified case observed outcome equal 50 (mean value baseline). 
odds discontinuation simulated increase +10% +1 point increase observed outcome. Study drug discontinuation simulated effect mean trajectory placebo group. intervention group, subjects discontinue follow slope mean trajectory placebo group time point onward. compatible copy increments reference (CIR) assumption. Study drop-study drug discontinuation visit occurs probability 50% leading missing outcome data time point onward.","code":""},{"path":[]},{"path":"/reference/get_jackknife_stack.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates a stack object populated with jackknife samples — get_jackknife_stack","title":"Creates a stack object populated with jackknife samples — get_jackknife_stack","text":"Function creates Stack() object populated stack jackknife samples based upon","code":""},{"path":"/reference/get_jackknife_stack.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates a stack object populated with jackknife samples — get_jackknife_stack","text":"","code":"get_jackknife_stack(longdata, method, stack = Stack$new())"},{"path":"/reference/get_jackknife_stack.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates a stack object populated with jackknife samples — get_jackknife_stack","text":"longdata longDataConstructor() object method method object stack Stack() object (exposed unit testing purposes)","code":""},{"path":"/reference/get_mmrm_sample.html","id":null,"dir":"Reference","previous_headings":"","what":"Fit MMRM and returns parameter estimates — get_mmrm_sample","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"get_mmrm_sample fits base imputation model using ML/REML approach. 
Returns parameter estimates fit.","code":""},{"path":"/reference/get_mmrm_sample.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"","code":"get_mmrm_sample(ids, longdata, method)"},{"path":"/reference/get_mmrm_sample.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"ids vector characters containing ids subjects. longdata R6 longdata object containing relevant input data information. method method object generated either method_approxbayes() method_condmean().","code":""},{"path":"/reference/get_mmrm_sample.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Fit MMRM and returns parameter estimates — get_mmrm_sample","text":"named list class sample_single. contains following: ids vector characters containing ids subjects included original dataset. beta numeric vector estimated regression coefficients. sigma list estimated covariance matrices (one level vars$group). theta numeric vector transformed covariances. failed logical. TRUE model fit failed. 
ids_samp vector characters containing ids subjects included given sample.","code":""},{"path":"/reference/get_pattern_groups.html","id":null,"dir":"Reference","previous_headings":"","what":"Determine patients missingness group — get_pattern_groups","title":"Determine patients missingness group — get_pattern_groups","text":"Takes design matrix multiple rows per subject returns dataset 1 row per subject new column pgroup indicating group patient belongs (based upon missingness pattern treatment group)","code":""},{"path":"/reference/get_pattern_groups.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Determine patients missingness group — get_pattern_groups","text":"","code":"get_pattern_groups(ddat)"},{"path":"/reference/get_pattern_groups.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Determine patients missingness group — get_pattern_groups","text":"ddat data.frame columns subjid, visit, group, is_avail","code":""},{"path":"/reference/get_pattern_groups.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Determine patients missingness group — get_pattern_groups","text":"column is_avail must character numeric 0 1","code":""},{"path":"/reference/get_pattern_groups_unique.html","id":null,"dir":"Reference","previous_headings":"","what":"Get Pattern Summary — get_pattern_groups_unique","title":"Get Pattern Summary — get_pattern_groups_unique","text":"Takes dataset pattern information creates summary dataset just 1 row per pattern","code":""},{"path":"/reference/get_pattern_groups_unique.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get Pattern Summary — get_pattern_groups_unique","text":"","code":"get_pattern_groups_unique(patterns)"},{"path":"/reference/get_pattern_groups_unique.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get Pattern Summary — 
get_pattern_groups_unique","text":"patterns data.frame columns pgroup, pattern group","code":""},{"path":"/reference/get_pattern_groups_unique.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Get Pattern Summary — get_pattern_groups_unique","text":"column pgroup must numeric vector indicating pattern group patient belongs column pattern must character string 0's 1's. must identical rows within pgroup column group must character / numeric vector indicating covariance group observation belongs . must identical within pgroup","code":""},{"path":"/reference/get_pool_components.html","id":null,"dir":"Reference","previous_headings":"","what":"Expected Pool Components — get_pool_components","title":"Expected Pool Components — get_pool_components","text":"Returns elements expected contained analyse object depending analysis method specified.","code":""},{"path":"/reference/get_pool_components.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Expected Pool Components — get_pool_components","text":"","code":"get_pool_components(x)"},{"path":"/reference/get_pool_components.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Expected Pool Components — get_pool_components","text":"x Character name analysis method, must one either \"rubin\", \"jackknife\", \"bootstrap\" \"bmlmi\".","code":""},{"path":"/reference/get_session_hash.html","id":null,"dir":"Reference","previous_headings":"","what":"Get session hash — get_session_hash","title":"Get session hash — get_session_hash","text":"Gets unique string based current R version relevant packages.","code":""},{"path":"/reference/get_session_hash.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get session hash — get_session_hash","text":"","code":"get_session_hash()"},{"path":"/reference/get_stan_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Get Compiled 
Stan Object — get_stan_model","title":"Get Compiled Stan Object — get_stan_model","text":"Gets compiled Stan object can used rstan::sampling()","code":""},{"path":"/reference/get_stan_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get Compiled Stan Object — get_stan_model","text":"","code":"get_stan_model()"},{"path":"/reference/get_visit_distribution_parameters.html","id":null,"dir":"Reference","previous_headings":"","what":"Derive visit distribution parameters — get_visit_distribution_parameters","title":"Derive visit distribution parameters — get_visit_distribution_parameters","text":"Takes patient level data beta coefficients expands get patient specific estimate visit distribution parameters mu sigma. Returns values specific format expected downstream functions imputation process (namely list(list(mu = ..., sigma = ...), list(mu = ..., sigma = ...))).","code":""},{"path":"/reference/get_visit_distribution_parameters.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Derive visit distribution parameters — get_visit_distribution_parameters","text":"","code":"get_visit_distribution_parameters(dat, beta, sigma)"},{"path":"/reference/get_visit_distribution_parameters.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Derive visit distribution parameters — get_visit_distribution_parameters","text":"dat Patient level dataset, must 1 row per visit. Column order must order beta. number columns must match length beta beta List model beta coefficients. 1 element sample e.g. 3 samples models 4 beta coefficients argument form list( c(1,2,3,4) , c(5,6,7,8), c(9,10,11,12)). elements beta must length must length order dat. sigma List sigma. Must number entries beta.","code":""},{"path":"/reference/has_class.html","id":null,"dir":"Reference","previous_headings":"","what":"Does object have a class ? — has_class","title":"Does object have a class ? 
— has_class","text":"Utility function see object particular class. Useful know many classes object may .","code":""},{"path":"/reference/has_class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Does object have a class ? — has_class","text":"","code":"has_class(x, cls)"},{"path":"/reference/has_class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Does object have a class ? — has_class","text":"x object want check class . cls class want know .","code":""},{"path":"/reference/has_class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Does object have a class ? — has_class","text":"TRUE object class. FALSE object class.","code":""},{"path":"/reference/ife.html","id":null,"dir":"Reference","previous_headings":"","what":"if else — ife","title":"if else — ife","text":"wrapper around () else() prevent unexpected interactions ifelse() factor variables","code":""},{"path":"/reference/ife.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"if else — ife","text":"","code":"ife(x, a, b)"},{"path":"/reference/ife.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"if else — ife","text":"x True / False value return True b value return False","code":""},{"path":"/reference/ife.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"if else — ife","text":"default ifelse() convert factor variables numeric values often undesirable. 
convenience function avoids problem","code":""},{"path":"/reference/imputation_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a valid imputation_df object — imputation_df","title":"Create a valid imputation_df object — imputation_df","text":"Create valid imputation_df object","code":""},{"path":"/reference/imputation_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a valid imputation_df object — imputation_df","text":"","code":"imputation_df(...)"},{"path":"/reference/imputation_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a valid imputation_df object — imputation_df","text":"... list imputation_single.","code":""},{"path":"/reference/imputation_list_df.html","id":null,"dir":"Reference","previous_headings":"","what":"List of imputations_df — imputation_list_df","title":"List of imputations_df — imputation_list_df","text":"container multiple imputation_df's","code":""},{"path":"/reference/imputation_list_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"List of imputations_df — imputation_list_df","text":"","code":"imputation_list_df(...)"},{"path":"/reference/imputation_list_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"List of imputations_df — imputation_list_df","text":"... 
objects class imputation_df","code":""},{"path":"/reference/imputation_list_single.html","id":null,"dir":"Reference","previous_headings":"","what":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","title":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","text":"collection imputation_singles() grouped single subjid ID","code":""},{"path":"/reference/imputation_list_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","text":"","code":"imputation_list_single(imputations, D = 1)"},{"path":"/reference/imputation_list_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"A collection of imputation_singles() grouped by a single subjid ID — imputation_list_single","text":"imputations list imputation_single() objects ordered repetitions grouped sequentially D number repetitions performed determines many columns imputation matrix constructor function create imputation_list_single object contains matrix imputation_single() objects grouped single id. matrix split D columns (.e. non-bmlmi methods always 1) id attribute determined extracting id attribute contributing imputation_single() objects. 
error throw multiple id detected","code":""},{"path":"/reference/imputation_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a valid imputation_single object — imputation_single","title":"Create a valid imputation_single object — imputation_single","text":"Create valid imputation_single object","code":""},{"path":"/reference/imputation_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a valid imputation_single object — imputation_single","text":"","code":"imputation_single(id, values)"},{"path":"/reference/imputation_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a valid imputation_single object — imputation_single","text":"id character string specifying subject id. values numeric vector indicating imputed values.","code":""},{"path":"/reference/impute.html","id":null,"dir":"Reference","previous_headings":"","what":"Create imputed datasets — impute","title":"Create imputed datasets — impute","text":"impute() creates imputed datasets based upon data options specified call draws(). One imputed dataset created per \"sample\" created draws().","code":""},{"path":"/reference/impute.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create imputed datasets — impute","text":"","code":"impute( draws, references = NULL, update_strategy = NULL, strategies = getStrategies() ) # S3 method for class 'random' impute( draws, references = NULL, update_strategy = NULL, strategies = getStrategies() ) # S3 method for class 'condmean' impute( draws, references = NULL, update_strategy = NULL, strategies = getStrategies() )"},{"path":"/reference/impute.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create imputed datasets — impute","text":"draws draws object created draws(). references named vector. Identifies references used reference-based imputation methods. 
form c(\"Group1\" = \"Reference1\", \"Group2\" = \"Reference2\"). NULL (default), references assumed form c(\"Group1\" = \"Group1\", \"Group2\" = \"Group2\"). argument NULL imputation strategy (defined data_ice[[vars$strategy]] call draws) MAR set. update_strategy optional data.frame. Updates imputation method originally set via data_ice option draws(). See details section information. strategies named list functions. Defines imputation functions used. names list mirror values specified strategy column data_ice. Default = getStrategies(). See getStrategies() details.","code":""},{"path":"/reference/impute.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create imputed datasets — impute","text":"impute() uses imputation model parameter estimates, generated draws(), first calculate marginal (multivariate normal) distribution subject's longitudinal outcome variable depending covariate values. subjects intercurrent events (ICEs) handled using non-MAR methods, marginal distribution updated depending time first visit affected ICE, chosen imputation strategy chosen reference group described Carpenter, Roger, Kenward (2013) . subject's imputation distribution used imputing missing values defined marginal distribution conditional observed outcome values. One dataset generated per set parameter estimates provided draws(). exact manner missing values imputed conditional imputation distribution depends method object provided draws(), particular: Bayes & Approximate Bayes: imputed dataset contains 1 row per subject & visit original dataset missing values imputed taking single random sample conditional imputation distribution. Conditional Mean: imputed dataset contains 1 row per subject & visit bootstrapped jackknife dataset used generate corresponding parameter estimates draws(). Missing values imputed using mean conditional imputation distribution. 
Please note first imputed dataset refers conditional mean imputation original dataset whereas subsequent imputed datasets refer conditional mean imputations bootstrap jackknife samples, respectively, original data. Bootstrapped Maximum Likelihood MI (BMLMI): performs D random imputations bootstrapped dataset used generate corresponding parameter estimates draws(). total number B*D imputed datasets provided, B number bootstrapped datasets. Missing values imputed taking random sample conditional imputation distribution. update_strategy argument can used update imputation strategy originally set via data_ice option draws(). avoids re-run draws() function changing imputation strategy certain circumstances (detailed ). data.frame provided update_strategy argument must contain two columns, one subject ID another imputation strategy, whose names defined vars argument specified call draws(). Please note argument allows update imputation strategy arguments time first visit affected ICE. key limitation functionality one can switch MAR non-MAR strategy (vice versa) subjects without observed post-ICE data. reason change affect whether post-ICE data included base imputation model (explained help draws()). example, subject ICE \"Visit 2\" observed/known values \"Visit 3\" function throw error one tries switch strategy MAR non-MAR strategy. contrast, switching non-MAR MAR strategy, whilst valid, raise warning usable data utilised imputation model.","code":""},{"path":"/reference/impute.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Create imputed datasets — impute","text":"James R Carpenter, James H Roger, Michael G Kenward. Analysis longitudinal trials protocol deviation: framework relevant, accessible assumptions, inference via multiple imputation. Journal Biopharmaceutical Statistics, 23(6):1352–1371, 2013. 
[Section 4.2 4.3]","code":""},{"path":"/reference/impute.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create imputed datasets — impute","text":"","code":"if (FALSE) { # \\dontrun{ impute( draws = drawobj, references = c(\"Trt\" = \"Placebo\", \"Placebo\" = \"Placebo\") ) new_strategy <- data.frame( subjid = c(\"Pt1\", \"Pt2\"), strategy = c(\"MAR\", \"JR\") ) impute( draws = drawobj, references = c(\"Trt\" = \"Placebo\", \"Placebo\" = \"Placebo\"), update_strategy = new_strategy ) } # }"},{"path":"/reference/impute_data_individual.html","id":null,"dir":"Reference","previous_headings":"","what":"Impute data for a single subject — impute_data_individual","title":"Impute data for a single subject — impute_data_individual","text":"function performs imputation single subject time implementing process detailed impute().","code":""},{"path":"/reference/impute_data_individual.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Impute data for a single subject — impute_data_individual","text":"","code":"impute_data_individual( id, index, beta, sigma, data, references, strategies, condmean, n_imputations = 1 )"},{"path":"/reference/impute_data_individual.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Impute data for a single subject — impute_data_individual","text":"id Character string identifying subject. index sample indexes subject belongs e.g c(1,1,1,2,2,4). beta list beta coefficients sample, .e. beta[[1]] set beta coefficients first sample. sigma list sigma coefficients sample split group .e. sigma[[1]][[\"\"]] give sigma coefficients group first sample. data longdata object created longDataConstructor() references named vector. Identifies references used generating imputed values. form c(\"Group\" = \"Reference\", \"Group\" = \"Reference\"). strategies named list functions. Defines imputation functions used. 
names list mirror values specified method column data_ice. Default = getStrategies(). See getStrategies() details. condmean Logical. TRUE impute using conditional mean values, FALSE impute taking random draw multivariate normal distribution. n_imputations condmean = FALSE numeric representing number random imputations performed sample. Default 1 (one random imputation per sample).","code":""},{"path":"/reference/impute_data_individual.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Impute data for a single subject — impute_data_individual","text":"Note function performs required imputations subject time. .e. subject included samples 1,3,5,9 imputations (using sample-dependent imputation model parameters) performed one step order avoid look subjects's covariates expanding design matrix multiple times (computationally expensive). function also supports subject belonging sample multiple times, .e. 1,1,2,3,5,5, typically occur bootstrapped datasets.","code":""},{"path":"/reference/impute_internal.html","id":null,"dir":"Reference","previous_headings":"","what":"Create imputed datasets — impute_internal","title":"Create imputed datasets — impute_internal","text":"work horse function implements functionality impute. See user level function impute() details.","code":""},{"path":"/reference/impute_internal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create imputed datasets — impute_internal","text":"","code":"impute_internal( draws, references = NULL, update_strategy, strategies, condmean )"},{"path":"/reference/impute_internal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create imputed datasets — impute_internal","text":"draws draws object created draws(). references named vector. Identifies references used reference-based imputation methods. form c(\"Group1\" = \"Reference1\", \"Group2\" = \"Reference2\"). 
NULL (default), references assumed form c(\"Group1\" = \"Group1\", \"Group2\" = \"Group2\"). argument NULL imputation strategy (defined data_ice[[vars$strategy]] call draws) MAR set. update_strategy optional data.frame. Updates imputation method originally set via data_ice option draws(). See details section information. strategies named list functions. Defines imputation functions used. names list mirror values specified strategy column data_ice. Default = getStrategies(). See getStrategies() details. condmean logical. TRUE impute using conditional mean values, FALSE impute taking random draw multivariate normal distribution.","code":""},{"path":"/reference/impute_outcome.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample outcome value — impute_outcome","title":"Sample outcome value — impute_outcome","text":"Draws random sample multivariate normal distribution.","code":""},{"path":"/reference/impute_outcome.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample outcome value — impute_outcome","text":"","code":"impute_outcome(conditional_parameters, n_imputations = 1, condmean = FALSE)"},{"path":"/reference/impute_outcome.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample outcome value — impute_outcome","text":"conditional_parameters list elements mu sigma contain mean vector covariance matrix sample . n_imputations numeric representing number random samples multivariate normal distribution performed. Default 1. condmean conditional mean imputation performed (opposed random sampling)","code":""},{"path":"/reference/invert.html","id":null,"dir":"Reference","previous_headings":"","what":"invert — invert","title":"invert — invert","text":"Utility function used replicate purrr::transpose. 
Turns list inside .","code":""},{"path":"/reference/invert.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"invert — invert","text":"","code":"invert(x)"},{"path":"/reference/invert.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"invert — invert","text":"x list","code":""},{"path":"/reference/invert_indexes.html","id":null,"dir":"Reference","previous_headings":"","what":"Invert and derive indexes — invert_indexes","title":"Invert and derive indexes — invert_indexes","text":"Takes list elements creates new list containing 1 entry per unique element value containing indexes original elements occurred .","code":""},{"path":"/reference/invert_indexes.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Invert and derive indexes — invert_indexes","text":"","code":"invert_indexes(x)"},{"path":"/reference/invert_indexes.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Invert and derive indexes — invert_indexes","text":"x list elements invert calculate index (see details).","code":""},{"path":"/reference/invert_indexes.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Invert and derive indexes — invert_indexes","text":"functions purpose best illustrated example: input: becomes:","code":"list( c(\"A\", \"B\", \"C\"), c(\"A\", \"A\", \"B\"))} list( \"A\" = c(1,2,2), \"B\" = c(1,2), \"C\" = 1 )"},{"path":"/reference/is_absent.html","id":null,"dir":"Reference","previous_headings":"","what":"Is value absent — is_absent","title":"Is value absent — is_absent","text":"Returns true value either NULL, NA \"\". 
case vector values must NULL/NA/\"\" x regarded absent.","code":""},{"path":"/reference/is_absent.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is value absent — is_absent","text":"","code":"is_absent(x, na = TRUE, blank = TRUE)"},{"path":"/reference/is_absent.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is value absent — is_absent","text":"x value check absent na NAs count absent blank blanks .e. \"\" count absent","code":""},{"path":"/reference/is_char_fact.html","id":null,"dir":"Reference","previous_headings":"","what":"Is character or factor — is_char_fact","title":"Is character or factor — is_char_fact","text":"returns true x character factor vector","code":""},{"path":"/reference/is_char_fact.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is character or factor — is_char_fact","text":"","code":"is_char_fact(x)"},{"path":"/reference/is_char_fact.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is character or factor — is_char_fact","text":"x character factor vector","code":""},{"path":"/reference/is_char_one.html","id":null,"dir":"Reference","previous_headings":"","what":"Is single character — is_char_one","title":"Is single character — is_char_one","text":"returns true x length 1 character vector","code":""},{"path":"/reference/is_char_one.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is single character — is_char_one","text":"","code":"is_char_one(x)"},{"path":"/reference/is_char_one.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is single character — is_char_one","text":"x character vector","code":""},{"path":"/reference/is_in_rbmi_development.html","id":null,"dir":"Reference","previous_headings":"","what":"Is package in development mode? 
— is_in_rbmi_development","title":"Is package in development mode? — is_in_rbmi_development","text":"Returns TRUE package developed .e. local copy source code actively editing Returns FALSE otherwise","code":""},{"path":"/reference/is_in_rbmi_development.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is package in development mode? — is_in_rbmi_development","text":"","code":"is_in_rbmi_development()"},{"path":"/reference/is_in_rbmi_development.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Is package in development mode? — is_in_rbmi_development","text":"Main use function parallel processing indicate whether sub-processes need load current development version code whether load main installed package system","code":""},{"path":"/reference/is_num_char_fact.html","id":null,"dir":"Reference","previous_headings":"","what":"Is character, factor or numeric — is_num_char_fact","title":"Is character, factor or numeric — is_num_char_fact","text":"returns true x character, numeric factor vector","code":""},{"path":"/reference/is_num_char_fact.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Is character, factor or numeric — is_num_char_fact","text":"","code":"is_num_char_fact(x)"},{"path":"/reference/is_num_char_fact.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Is character, factor or numeric — is_num_char_fact","text":"x character, numeric factor vector","code":""},{"path":"/reference/locf.html","id":null,"dir":"Reference","previous_headings":"","what":"Last Observation Carried Forward — locf","title":"Last Observation Carried Forward — locf","text":"Returns vector applied last observation carried forward imputation.","code":""},{"path":"/reference/locf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Last Observation Carried Forward — 
locf","text":"","code":"locf(x)"},{"path":"/reference/locf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Last Observation Carried Forward — locf","text":"x vector.","code":""},{"path":"/reference/locf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Last Observation Carried Forward — locf","text":"","code":"if (FALSE) { # \\dontrun{ locf(c(NA, 1, 2, 3, NA, 4)) # Returns c(NA, 1, 2, 3, 3, 4) } # }"},{"path":"/reference/longDataConstructor.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"longdata object allows efficient storage recall longitudinal datasets use bootstrap sampling. object works de-constructing data lists based upon subject id thus enabling efficient lookup.","code":""},{"path":"/reference/longDataConstructor.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"object also handles multiple operations specific rbmi defining whether outcome value MAR / Missing well tracking imputation strategy assigned subject. recognised objects functionality fairly overloaded hoped can split area specific objects / functions future. 
additions functionality object avoided possible.","code":""},{"path":"/reference/longDataConstructor.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"data original dataset passed constructor (sorted id visit) vars vars object (list key variables) passed constructor visits character vector containing distinct visit levels ids character vector containing unique ids subject self$data formula formula expressing design matrix data constructed strata numeric vector indicating strata corresponding value self$ids belongs . stratification variable defined default 1 subjects (.e. group). field used part self$sample_ids() function enable stratified bootstrap sampling ice_visit_index list indexed subject storing index number first visit affected ICE. ICE set equal number visits plus 1. values list indexed subject storing numeric vector original (unimputed) outcome values group list indexed subject storing single character indicating imputation group subject belongs defined self$data[id, self$ivars$group] used determine reference group used imputing subjects data. is_mar list indexed subject storing logical values indicating subjects outcome values MAR . list defaulted TRUE subjects & outcomes modified calls self$set_strategies(). Note indicate values missing, variable True outcome values either occurred ICE visit post ICE visit imputation strategy MAR strategies list indexed subject storing single character value indicating imputation strategy assigned subject. list defaulted \"MAR\" subjects modified calls either self$set_strategies() self$update_strategies() strategy_lock list indexed subject storing single logical value indicating whether patients imputation strategy locked . strategy locked means change MAR non-MAR. Strategies can changed non-MAR MAR though trigger warning. 
Strategies locked patient assigned MAR strategy non-missing ICE date. list populated call self$set_strategies(). indexes list indexed subject storing numeric vector indexes specify rows original dataset belong subject .e. recover full data subject \"pt3\" can use self$data[self$indexes[[\"pt3\"]],]. may seem redundant filtering data directly however enables efficient bootstrap sampling data .e. list populated object initialisation. is_missing list indexed subject storing logical vector indicating whether corresponding outcome subject missing. list populated object initialisation. is_post_ice list indexed subject storing logical vector indicating whether corresponding outcome subject post date ICE. ICE data provided defaults False observations. list populated call self$set_strategies().","code":"indexes <- unlist(self$indexes[c(\"pt3\", \"pt3\")]) self$data[indexes,]"},{"path":[]},{"path":"/reference/longDataConstructor.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"longDataConstructor$get_data() longDataConstructor$add_subject() longDataConstructor$validate_ids() longDataConstructor$sample_ids() longDataConstructor$extract_by_id() longDataConstructor$update_strategies() longDataConstructor$set_strategies() longDataConstructor$check_has_data_at_each_visit() longDataConstructor$set_strata() longDataConstructor$new() longDataConstructor$clone()","code":""},{"path":"/reference/longDataConstructor.html","id":"method-get-data-","dir":"Reference","previous_headings":"","what":"Method get_data()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Returns data.frame based upon required subject IDs. 
Replaces missing values new ones provided.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$get_data( obj = NULL, nmar.rm = FALSE, na.rm = FALSE, idmap = FALSE )"},{"path":"/reference/longDataConstructor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"obj Either NULL, character vector subjects IDs imputation list object. See details. nmar.rm Logical value. TRUE remove observations regarded MAR (determined self$is_mar). na.rm Logical value. TRUE remove outcome values missing (determined self$is_missing). idmap Logical value. TRUE add attribute idmap contains mapping new subject ids old subject ids. See details.","code":""},{"path":"/reference/longDataConstructor.html","id":"details-1","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"obj NULL full original dataset returned. obj character vector new dataset consisting just subjects returned; character vector contains duplicate entries subject returned multiple times. obj imputation_df object (created imputation_df()) subject ids specified object returned missing values filled specified imputation list object. .e. return data.frame consisting observations pt1 twice observations pt3 . first set observations pt1 missing values filled c(1,2,3) second set filled c(4,5,6). length values must equal sum(self$is_missing[[id]]). obj NULL subject IDs scrambled order ensure unique .e. pt2 requested twice process guarantees set observations unique subject ID number. 
The idmap attribute (if requested) can be used to map the new ids back to the old ids.","code":"obj <- imputation_df( imputation_single( id = \"pt1\", values = c(1,2,3)), imputation_single( id = \"pt1\", values = c(4,5,6)), imputation_single( id = \"pt3\", values = c(7,8)) ) longdata$get_data(obj)"},{"path":"/reference/longDataConstructor.html","id":"returns","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"A data.frame.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-add-subject-","dir":"Reference","previous_headings":"","what":"Method add_subject()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"This function decomposes a patient's data from self$data and populates the corresponding lists, i.e. self$is_missing, self$values, self$group, etc. This function is called upon the object's initialization.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$add_subject(id)"},{"path":"/reference/longDataConstructor.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"id A character subject id that exists within self$data.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-validate-ids-","dir":"Reference","previous_headings":"","what":"Method validate_ids()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Throws an error if any element of ids is not within the source data self$data.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling 
Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$validate_ids(ids)"},{"path":"/reference/longDataConstructor.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"ids A character vector of ids.","code":""},{"path":"/reference/longDataConstructor.html","id":"returns-1","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"TRUE","code":""},{"path":"/reference/longDataConstructor.html","id":"method-sample-ids-","dir":"Reference","previous_headings":"","what":"Method sample_ids()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Performs random stratified sampling of patient ids (with replacement). Each patient has an equal weight of being picked within their strata (i.e. it is not dependent upon how many non-missing visits they have).","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-3","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$sample_ids()"},{"path":"/reference/longDataConstructor.html","id":"returns-2","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"A character vector of ids.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-extract-by-id-","dir":"Reference","previous_headings":"","what":"Method extract_by_id()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Returns a list of all the key information for a given subject. 
It is a convenience wrapper to save having to manually grab each element.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-4","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$extract_by_id(id)"},{"path":"/reference/longDataConstructor.html","id":"arguments-3","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"id A character subject id that exists within self$data.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-update-strategies-","dir":"Reference","previous_headings":"","what":"Method update_strategies()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"A convenience function to run self$set_strategies(dat_ice, update=TRUE); kept for legacy reasons.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-5","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$update_strategies(dat_ice)"},{"path":"/reference/longDataConstructor.html","id":"arguments-4","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"dat_ice A data.frame containing the ICE information; see impute() for the format of this dataframe.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-set-strategies-","dir":"Reference","previous_headings":"","what":"Method set_strategies()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Updates the self$strategies, self$is_mar and self$is_post_ice variables based upon the provided ICE 
information.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-6","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$set_strategies(dat_ice = NULL, update = FALSE)"},{"path":"/reference/longDataConstructor.html","id":"arguments-5","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"dat_ice A data.frame containing the ICE information. See details. update Logical, indicates whether the ICE data is being used as an update. See details.","code":""},{"path":"/reference/longDataConstructor.html","id":"details-2","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"See draws() for the specification of dat_ice if update=FALSE. See impute() for the format of dat_ice if update=TRUE. If update=TRUE this function ensures that MAR strategies cannot be changed to non-MAR in the presence of post-ICE observations.","code":""},{"path":"/reference/longDataConstructor.html","id":"method-check-has-data-at-each-visit-","dir":"Reference","previous_headings":"","what":"Method check_has_data_at_each_visit()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Ensures that all visits have at least 1 observed \"MAR\" observation. Throws an error if this criteria is not met. 
This is to ensure that the initial MMRM can be resolved.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-7","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$check_has_data_at_each_visit()"},{"path":"/reference/longDataConstructor.html","id":"method-set-strata-","dir":"Reference","previous_headings":"","what":"Method set_strata()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Populates the self$strata variable. If the user has specified stratification variables then their values at the first visit are used to determine each subject's strata. If no stratification variables have been specified then everyone is defined as being in strata 1.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-8","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$set_strata()"},{"path":"/reference/longDataConstructor.html","id":"method-new-","dir":"Reference","previous_headings":"","what":"Method new()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"Constructor function.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-9","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$new(data, vars)"},{"path":"/reference/longDataConstructor.html","id":"arguments-6","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"data A longitudinal dataset. 
vars An ivars object created by set_vars().","code":""},{"path":"/reference/longDataConstructor.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"The objects of this class are cloneable with this method.","code":""},{"path":"/reference/longDataConstructor.html","id":"usage-10","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"","code":"longDataConstructor$clone(deep = FALSE)"},{"path":"/reference/longDataConstructor.html","id":"arguments-7","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for Storing / Accessing & Sampling Longitudinal Data — longDataConstructor","text":"deep Whether to make a deep clone.","code":""},{"path":"/reference/ls_design.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate design vector for the lsmeans — ls_design","title":"Calculate design vector for the lsmeans — ls_design","text":"Calculates the design vector as required to generate the lsmean and its standard error. 
ls_design_equal calculates it by applying an equal weight per covariate combination whilst ls_design_proportional applies weighting proportional to the frequency in which the covariate combination occurred in the actual dataset.","code":""},{"path":"/reference/ls_design.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate design vector for the lsmeans — ls_design","text":"","code":"ls_design_equal(data, frm, fix) ls_design_counterfactual(data, frm, fix) ls_design_proportional(data, frm, fix)"},{"path":"/reference/ls_design.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate design vector for the lsmeans — ls_design","text":"data A data.frame. frm The formula used to fit the original model. fix A named list of variables with fixed values.","code":""},{"path":"/reference/lsmeans.html","id":null,"dir":"Reference","previous_headings":"","what":"Least Square Means — lsmeans","title":"Least Square Means — lsmeans","text":"Estimates the least square means from a linear model. The exact implementation / interpretation depends on the weighting scheme; see the weighting section for more information.","code":""},{"path":"/reference/lsmeans.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Least Square Means — lsmeans","text":"","code":"lsmeans( model, ..., .weights = c(\"counterfactual\", \"equal\", \"proportional_em\", \"proportional\") )"},{"path":"/reference/lsmeans.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Least Square Means — lsmeans","text":"model A model created by lm. ... Fixes specific variables to specific values, i.e. trt = 1 or age = 50. The name of the argument must be the name of a variable within the dataset. .weights Character, either \"counterfactual\" (default), \"equal\", \"proportional_em\" or \"proportional\". Specifies the weighting strategy to be used when calculating the lsmeans. 
See the weighting section for more details.","code":""},{"path":[]},{"path":"/reference/lsmeans.html","id":"counterfactual","dir":"Reference","previous_headings":"","what":"Counterfactual","title":"Least Square Means — lsmeans","text":"If weights = \"counterfactual\" (the default) the lsmeans are obtained by taking the average of the predicted values for each patient after assigning all patients to each arm in turn. This approach is equivalent to standardization or g-computation. In comparison to emmeans this approach is equivalent to: Note that to ensure backwards compatibility with previous versions of rbmi, weights = \"proportional\" is an alias for weights = \"counterfactual\". To get results consistent with emmeans's weights = \"proportional\" please use weights = \"proportional_em\".","code":"emmeans::emmeans(model, specs = \"\", counterfactual = \"\")"},{"path":"/reference/lsmeans.html","id":"equal","dir":"Reference","previous_headings":"","what":"Equal","title":"Least Square Means — lsmeans","text":"If weights = \"equal\" the lsmeans are obtained by taking the model fitted value of a hypothetical patient whose covariates are defined as follows: Continuous covariates are set to mean(X) Dummy categorical variables are set to 1/N where N is the number of levels Continuous * continuous interactions are set to mean(X) * mean(Y) Continuous * categorical interactions are set to mean(X) * 1/N Dummy categorical * categorical interactions are set to 1/N * 1/M In comparison to emmeans this approach is equivalent to:","code":"emmeans::emmeans(model, specs = \"\", weights = \"equal\")"},{"path":"/reference/lsmeans.html","id":"proportional","dir":"Reference","previous_headings":"","what":"Proportional","title":"Least Square Means — lsmeans","text":"If weights = \"proportional_em\" the lsmeans are obtained as per weights = \"equal\" except that instead of weighting each observation equally they are weighted by the proportion in which the given combination of categorical values occurred in the data. 
In comparison to emmeans this approach is equivalent to: Note that this is not to be confused with weights = \"proportional\", which is an alias for weights = \"counterfactual\".","code":"emmeans::emmeans(model, specs = \"\", weights = \"proportional\")"},{"path":"/reference/lsmeans.html","id":"fixing","dir":"Reference","previous_headings":"","what":"Fixing","title":"Least Square Means — lsmeans","text":"Regardless of the weighting scheme, any named arguments passed via ... will fix the value of the covariate to the specified value. For example, lsmeans(model, trt = \"\") would fix the dummy variable trtA to 1 for all patients (real or hypothetical) when calculating the lsmeans. See the references for similar implementations as done in SAS and in R via the emmeans package.","code":""},{"path":"/reference/lsmeans.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Least Square Means — lsmeans","text":"https://CRAN.R-project.org/package=emmeans https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.3/statug/statug_glm_details41.htm","code":""},{"path":"/reference/lsmeans.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Least Square Means — lsmeans","text":"","code":"if (FALSE) { # \\dontrun{ mod <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) lsmeans(mod) lsmeans(mod, Species = \"virginica\") lsmeans(mod, Species = \"versicolor\") lsmeans(mod, Species = \"versicolor\", Petal.Length = 1) } # }"},{"path":"/reference/make_rbmi_cluster.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a rbmi ready cluster — make_rbmi_cluster","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"Create an rbmi ready cluster.","code":""},{"path":"/reference/make_rbmi_cluster.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"","code":"make_rbmi_cluster(ncores = 1, objects = 
NULL)"},{"path":"/reference/make_rbmi_cluster.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"ncores The number of parallel processes to use, or an existing cluster to make use of. objects A named list of objects to export into the sub-processes. packages A character vector of libraries to load in the sub-processes. This function is a wrapper around parallel::makePSOCKcluster() but takes care of configuring rbmi to be used in the sub-processes as well as loading user defined objects and libraries and setting the seed for reproducibility. If ncores is 1 this function will return NULL. If ncores is a cluster created via parallel::makeCluster() then this function just takes care of inserting the relevant rbmi objects into the existing cluster.","code":""},{"path":"/reference/make_rbmi_cluster.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a rbmi ready cluster — make_rbmi_cluster","text":"","code":"if (FALSE) { # \\dontrun{ # Basic usage make_rbmi_cluster(5) # User objects + libraries VALUE <- 5 myfun <- function(x) { x + day(VALUE) # From lubridate::day() } make_rbmi_cluster(5, list(VALUE = VALUE, myfun = myfun), c(\"lubridate\")) # Using an already created cluster cl <- parallel::makeCluster(5) make_rbmi_cluster(cl) } # }"},{"path":"/reference/method.html","id":null,"dir":"Reference","previous_headings":"","what":"Set the multiple imputation methodology — method","title":"Set the multiple imputation methodology — method","text":"These functions determine the methods that rbmi will use when creating the imputation models, generating imputed values and pooling the results.","code":""},{"path":"/reference/method.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set the multiple imputation methodology — method","text":"","code":"method_bayes( burn_in = 200, burn_between = 50, same_cov = TRUE, n_samples = 20, seed = NULL ) method_approxbayes( covariance = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), threshold = 0.01, same_cov = TRUE, 
REML = TRUE, n_samples = 20 ) method_condmean( covariance = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), threshold = 0.01, same_cov = TRUE, REML = TRUE, n_samples = NULL, type = c(\"bootstrap\", \"jackknife\") ) method_bmlmi( covariance = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), threshold = 0.01, same_cov = TRUE, REML = TRUE, B = 20, D = 2 )"},{"path":"/reference/method.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set the multiple imputation methodology — method","text":"burn_in A numeric that specifies how many observations should be discarded prior to extracting actual samples. Note that the sampler is initialized at the maximum likelihood estimates and a weakly informative prior is used, thus in theory this value does not need to be that high. burn_between A numeric that specifies the \"thinning\" rate, i.e. how many observations should be discarded between each sample. This is used to prevent issues associated with autocorrelation between the samples. same_cov A logical; if TRUE the imputation model is fitted using a single shared covariance matrix for all observations. If FALSE a separate covariance matrix is fit for each group as determined by the group argument of set_vars(). n_samples A numeric that determines how many imputed datasets are generated. In the case of method_condmean(type = \"jackknife\") this argument must be set to NULL. See details. seed Deprecated. Please use set.seed() instead. covariance A character string that specifies the structure of the covariance matrix to be used in the imputation model. Must be one of \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\" or \"toeph\". See details. threshold A numeric between 0 and 1 that specifies the proportion of bootstrap datasets that can fail to produce valid samples before an error is thrown. See details. REML A logical indicating whether to use REML estimation rather than maximum likelihood. type A character string that specifies the resampling method used to perform inference when the conditional mean imputation approach (set via method_condmean()) is used. Must be one of \"bootstrap\" or \"jackknife\". B A numeric that determines the number of bootstrap samples for method_bmlmi. 
D A numeric that determines the number of random imputations for each bootstrap sample. Needed for method_bmlmi().","code":""},{"path":"/reference/method.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Set the multiple imputation methodology — method","text":"In the case of method_condmean(type = \"bootstrap\") there will be n_samples + 1 imputation models and datasets generated, as the first sample is based on the original dataset whilst the other n_samples samples are based on bootstrapped datasets. Likewise, for method_condmean(type = \"jackknife\") there will be length(unique(data$subjid)) + 1 imputation models and datasets generated. In both cases this is represented by n + 1 being displayed in the print message. The user is able to specify different covariance structures using the covariance argument. Currently supported structures include: Unstructured (\"us\") (default) Ante-dependence (\"ad\") Heterogeneous ante-dependence (\"adh\") First-order auto-regressive (\"ar1\") Heterogeneous first-order auto-regressive (\"ar1h\") Compound symmetry (\"cs\") Heterogeneous compound symmetry (\"csh\") Toeplitz (\"toep\") Heterogeneous Toeplitz (\"toeph\") For full details please see mmrm::cov_types(). Note that at present the Bayesian methods only support unstructured. In the case of method_condmean(type = \"bootstrap\"), method_approxbayes() and method_bmlmi(), repeated bootstrap samples of the original dataset are taken with an MMRM fitted to each sample. Due to the randomness of the sampled datasets, as well as limitations in the optimisers used to fit the models, it is not uncommon that estimates for a particular dataset cannot be generated. In these instances rbmi is designed to throw out that bootstrapped dataset and try another. However, to ensure that any errors are due to chance and not due to an underlying misspecification in the data and/or model, a tolerance limit is set on how many samples can be discarded. Once the tolerance limit has been reached an error is thrown and the process aborted. The tolerance limit is defined as ceiling(threshold * n_samples). Note that for the jackknife method estimates need to be generated for all leave-one-out datasets, so an error is thrown if any of them fail to fit. Please note that at the time of writing (September 2021) Stan is unable to produce reproducible samples across different operating systems, even if the same seed is used. As such care must be taken when using Stan across different machines. 
For more information on this limitation please consult the Stan documentation https://mc-stan.org/docs/2_27/reference-manual/reproducibility-chapter.html","code":""},{"path":"/reference/par_lapply.html","id":null,"dir":"Reference","previous_headings":"","what":"Parallelise Lapply — par_lapply","title":"Parallelise Lapply — par_lapply","text":"Simple wrapper around lapply and parallel::clusterApplyLB to abstract away the logic of deciding which one to use.","code":""},{"path":"/reference/par_lapply.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Parallelise Lapply — par_lapply","text":"","code":"par_lapply(cl, fun, x, ...)"},{"path":"/reference/par_lapply.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Parallelise Lapply — par_lapply","text":"cl A cluster created by parallel::makeCluster(), or NULL. fun The function to be run. x The object to be looped over. ... Extra arguments passed to fun.","code":""},{"path":"/reference/parametric_ci.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate parametric confidence intervals — parametric_ci","title":"Calculate parametric confidence intervals — parametric_ci","text":"Calculates confidence intervals based upon a parametric distribution.","code":""},{"path":"/reference/parametric_ci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate parametric confidence intervals — parametric_ci","text":"","code":"parametric_ci(point, se, alpha, alternative, qfun, pfun, ...)"},{"path":"/reference/parametric_ci.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate parametric confidence intervals — parametric_ci","text":"point The point estimate. se The standard error of the point estimate. If using a non-\"normal\" distribution this should be set to 1. alpha The type 1 error rate, a value between 0 and 1. alternative A character string specifying the alternative hypothesis, must be one of \"two.sided\" (default), \"greater\" or \"less\". qfun The quantile function for the assumed distribution, i.e. qnorm. 
pfun The CDF function for the assumed distribution, i.e. pnorm. ... Additional arguments passed on to qfun and pfun, i.e. df = 102.","code":""},{"path":"/reference/pool.html","id":null,"dir":"Reference","previous_headings":"","what":"Pool analysis results obtained from the imputed datasets — pool","title":"Pool analysis results obtained from the imputed datasets — pool","text":"Pool analysis results obtained from the imputed datasets.","code":""},{"path":"/reference/pool.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pool analysis results obtained from the imputed datasets — pool","text":"","code":"pool( results, conf.level = 0.95, alternative = c(\"two.sided\", \"less\", \"greater\"), type = c(\"percentile\", \"normal\") ) # S3 method for class 'pool' as.data.frame(x, ...) # S3 method for class 'pool' print(x, ...)"},{"path":"/reference/pool.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pool analysis results obtained from the imputed datasets — pool","text":"results An analysis object created by analyse(). conf.level The confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95. alternative A character string specifying the alternative hypothesis, must be one of \"two.sided\" (default), \"greater\" or \"less\". type A character string of either \"percentile\" (default) or \"normal\". Determines the method used to calculate the bootstrap confidence intervals. See details. Only used if method_condmean(type = \"bootstrap\") was specified in the original call to draws(). x A pool object generated by pool(). ... Not 
used.","code":""},{"path":"/reference/pool.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Pool analysis results obtained from the imputed datasets — pool","text":"The calculation used to generate the point estimate, standard errors and confidence interval depends upon the method specified in the original call to draws(); in particular: method_approxbayes() & method_bayes() use Rubin's rules to pool the estimates and variances across multiple imputed datasets, and the Barnard-Rubin rule to pool the degrees of freedom; see Little & Rubin (2002). method_condmean(type = \"bootstrap\") uses the percentile or normal approximation; see Efron & Tibshirani (1994). Note that for the percentile bootstrap no standard error is calculated, i.e. the standard errors will be NA in the object / data.frame. method_condmean(type = \"jackknife\") uses the standard jackknife variance formula; see Efron & Tibshirani (1994). method_bmlmi uses the pooling procedure for Bootstrapped Maximum Likelihood MI (BMLMI). See Von Hippel & Bartlett (2021).","code":""},{"path":"/reference/pool.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Pool analysis results obtained from the imputed datasets — pool","text":"Bradley Efron and Robert J Tibshirani. An introduction to the bootstrap. CRC press, 1994. [Section 11] Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing Data, Second Edition. John Wiley & Sons, Hoboken, New Jersey, 2002. [Section 5.4] Von Hippel, Paul T and Bartlett, Jonathan W. Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws. 
2021.","code":""},{"path":"/reference/pool_bootstrap_normal.html","id":null,"dir":"Reference","previous_headings":"","what":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"Get the point estimate, confidence interval and p-value using the normal approximation.","code":""},{"path":"/reference/pool_bootstrap_normal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"","code":"pool_bootstrap_normal(est, conf.level, alternative)"},{"path":"/reference/pool_bootstrap_normal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"est A numeric vector of point estimates, one for each bootstrap sample. conf.level The confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95. alternative A character string specifying the alternative hypothesis, must be one of \"two.sided\" (default), \"greater\" or \"less\".","code":""},{"path":"/reference/pool_bootstrap_normal.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bootstrap Pooling via normal approximation — pool_bootstrap_normal","text":"The point estimate is taken to be the first element of est. The remaining n-1 values of est are then used to generate the confidence intervals.","code":""},{"path":"/reference/pool_bootstrap_percentile.html","id":null,"dir":"Reference","previous_headings":"","what":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"Get the point estimate, confidence interval and p-value using percentiles. 
Note that quantile \"type=6\" is used; see stats::quantile() for details.","code":""},{"path":"/reference/pool_bootstrap_percentile.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"","code":"pool_bootstrap_percentile(est, conf.level, alternative)"},{"path":"/reference/pool_bootstrap_percentile.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"est A numeric vector of point estimates, one for each bootstrap sample. conf.level The confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95. alternative A character string specifying the alternative hypothesis, must be one of \"two.sided\" (default), \"greater\" or \"less\".","code":""},{"path":"/reference/pool_bootstrap_percentile.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bootstrap Pooling via Percentiles — pool_bootstrap_percentile","text":"The point estimate is taken to be the first element of est. The remaining n-1 values of est are then used to generate the confidence intervals.","code":""},{"path":"/reference/pool_internal.html","id":null,"dir":"Reference","previous_headings":"","what":"Internal Pool Methods — pool_internal","title":"Internal Pool Methods — pool_internal","text":"Dispatches the pool methods based upon the class of the results object. 
See pool() for details.","code":""},{"path":"/reference/pool_internal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Internal Pool Methods — pool_internal","text":"","code":"pool_internal(results, conf.level, alternative, type, D) # S3 method for class 'jackknife' pool_internal(results, conf.level, alternative, type, D) # S3 method for class 'bootstrap' pool_internal( results, conf.level, alternative, type = c(\"percentile\", \"normal\"), D ) # S3 method for class 'bmlmi' pool_internal(results, conf.level, alternative, type, D) # S3 method for class 'rubin' pool_internal(results, conf.level, alternative, type, D)"},{"path":"/reference/pool_internal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Internal Pool Methods — pool_internal","text":"results A list of results, i.e. the x$results element of an analyse object created by analyse(). conf.level The confidence level of the returned confidence interval. Must be a single number between 0 and 1. Default is 0.95. alternative A character string specifying the alternative hypothesis, must be one of \"two.sided\" (default), \"greater\" or \"less\". type A character string of either \"percentile\" (default) or \"normal\". Determines the method used to calculate the bootstrap confidence intervals. See details. Only used if method_condmean(type = \"bootstrap\") was specified in the original call to draws(). D A numeric representing the number of imputations between each bootstrap sample in the BMLMI method.","code":""},{"path":"/reference/prepare_stan_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Prepare input data to run the Stan model — prepare_stan_data","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"Prepare the input data to run the Stan model. 
Creates / calculates all the inputs required by the data{} block of the MMRM Stan program.","code":""},{"path":"/reference/prepare_stan_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"","code":"prepare_stan_data(ddat, subjid, visit, outcome, group)"},{"path":"/reference/prepare_stan_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"ddat The design matrix. subjid Character vector containing the subject IDs. visit Vector containing the visits. outcome Numeric vector containing the outcome variable. group Vector containing the group variable.","code":""},{"path":"/reference/prepare_stan_data.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"A stan_data object: a named list as per the data{} block of the related Stan file. In particular it returns: N - The number of rows in the design matrix P - The number of columns in the design matrix G - The number of distinct covariance matrix groups (i.e. length(unique(group))) n_visit - The number of unique outcome visits n_pat - The total number of pattern groups (as defined by missingness patterns & covariance group) pat_G - Index of which Sigma each pattern group should use pat_n_pt - The number of patients within each pattern group pat_n_visit - The number of non-missing visits in each pattern group pat_sigma_index - The rows/cols of Sigma to subset for each pattern group (padded by 0's) y - The outcome variable Q - The design matrix (QR decomposition) R - The R matrix of the QR decomposition of the design matrix","code":""},{"path":"/reference/prepare_stan_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Prepare input data to run the Stan model — prepare_stan_data","text":"The group argument determines which covariance matrix group the subject belongs to. 
want subjects use shared covariance matrix set group \"1\" everyone.","code":""},{"path":"/reference/print.analysis.html","id":null,"dir":"Reference","previous_headings":"","what":"Print analysis object — print.analysis","title":"Print analysis object — print.analysis","text":"Print analysis object","code":""},{"path":"/reference/print.analysis.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print analysis object — print.analysis","text":"","code":"# S3 method for class 'analysis' print(x, ...)"},{"path":"/reference/print.analysis.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print analysis object — print.analysis","text":"x analysis object generated analyse(). ... used.","code":""},{"path":"/reference/print.draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Print draws object — print.draws","title":"Print draws object — print.draws","text":"Print draws object","code":""},{"path":"/reference/print.draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print draws object — print.draws","text":"","code":"# S3 method for class 'draws' print(x, ...)"},{"path":"/reference/print.draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print draws object — print.draws","text":"x draws object generated draws(). ... 
used.","code":""},{"path":"/reference/print.imputation.html","id":null,"dir":"Reference","previous_headings":"","what":"Print imputation object — print.imputation","title":"Print imputation object — print.imputation","text":"Print imputation object","code":""},{"path":"/reference/print.imputation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print imputation object — print.imputation","text":"","code":"# S3 method for class 'imputation' print(x, ...)"},{"path":"/reference/print.imputation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print imputation object — print.imputation","text":"x imputation object generated impute(). ... used.","code":""},{"path":"/reference/progressLogger.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for printing current sampling progress — progressLogger","title":"R6 Class for printing current sampling progress — progressLogger","text":"Object initialised total number iterations expected occur. User can update object add method indicate many iterations just occurred. Every time step * 100 % iterations occurred message printed console. 
Use quiet argument prevent object printing anything ","code":""},{"path":"/reference/progressLogger.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for printing current sampling progress — progressLogger","text":"step real, percentage iterations allow printing progress console step_current integer, total number iterations completed since progress last printed console n integer, current number completed iterations n_max integer, total number expected iterations completed acts denominator calculating progress percentages quiet logical holds whether print anything","code":""},{"path":[]},{"path":"/reference/progressLogger.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for printing current sampling progress — progressLogger","text":"progressLogger$new() progressLogger$add() progressLogger$print_progress() progressLogger$clone()","code":""},{"path":"/reference/progressLogger.html","id":"method-new-","dir":"Reference","previous_headings":"","what":"Method new()","title":"R6 Class for printing current sampling progress — progressLogger","text":"Create progressLogger object","code":""},{"path":"/reference/progressLogger.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$new(n_max, quiet = FALSE, step = 0.1)"},{"path":"/reference/progressLogger.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for printing current sampling progress — progressLogger","text":"n_max integer, sets field n_max quiet logical, sets field quiet step real, sets field step","code":""},{"path":"/reference/progressLogger.html","id":"method-add-","dir":"Reference","previous_headings":"","what":"Method add()","title":"R6 Class for printing current sampling progress — progressLogger","text":"Records n 
iterations completed add number current step count (step_current) print progress message log step limit (step) reached. function nothing quiet set TRUE","code":""},{"path":"/reference/progressLogger.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$add(n)"},{"path":"/reference/progressLogger.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for printing current sampling progress — progressLogger","text":"n number successfully complete iterations since add() last called","code":""},{"path":"/reference/progressLogger.html","id":"method-print-progress-","dir":"Reference","previous_headings":"","what":"Method print_progress()","title":"R6 Class for printing current sampling progress — progressLogger","text":"method print current state progress","code":""},{"path":"/reference/progressLogger.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$print_progress()"},{"path":"/reference/progressLogger.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for printing current sampling progress — progressLogger","text":"objects class cloneable method.","code":""},{"path":"/reference/progressLogger.html","id":"usage-3","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for printing current sampling progress — progressLogger","text":"","code":"progressLogger$clone(deep = FALSE)"},{"path":"/reference/progressLogger.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for printing current sampling progress — progressLogger","text":"deep Whether make deep 
clone.","code":""},{"path":"/reference/pval_percentile.html","id":null,"dir":"Reference","previous_headings":"","what":"P-value of percentile bootstrap — pval_percentile","title":"P-value of percentile bootstrap — pval_percentile","text":"Determines (necessarily unique) quantile (type=6) \"est\" gives value 0 , derive p-value corresponding percentile bootstrap via inversion.","code":""},{"path":"/reference/pval_percentile.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"P-value of percentile bootstrap — pval_percentile","text":"","code":"pval_percentile(est)"},{"path":"/reference/pval_percentile.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"P-value of percentile bootstrap — pval_percentile","text":"est numeric vector point estimates bootstrap sample.","code":""},{"path":"/reference/pval_percentile.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"P-value of percentile bootstrap — pval_percentile","text":"named numeric vector length 2 containing p-value H_0: theta=0 vs H_A: theta>0 (\"pval_greater\") p-value H_0: theta=0 vs H_A: theta<0 (\"pval_less\").","code":""},{"path":"/reference/pval_percentile.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"P-value of percentile bootstrap — pval_percentile","text":"p-value H_0: theta=0 vs H_A: theta>0 value alpha q_alpha = 0. least one estimate equal zero returns largest alpha q_alpha = 0. bootstrap estimates > 0 returns 0; bootstrap estimates < 0 returns 1. 
Analogous reasoning applied p-value H_0: theta=0 vs H_A: theta<0.","code":""},{"path":"/reference/random_effects_expr.html","id":null,"dir":"Reference","previous_headings":"","what":"Construct random effects formula — random_effects_expr","title":"Construct random effects formula — random_effects_expr","text":"Constructs character representation random effects formula fitting MMRM subject visit format required mmrm::mmrm().","code":""},{"path":"/reference/random_effects_expr.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Construct random effects formula — random_effects_expr","text":"","code":"random_effects_expr( cov_struct = c(\"us\", \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\"), cov_by_group = FALSE )"},{"path":"/reference/random_effects_expr.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Construct random effects formula — random_effects_expr","text":"cov_struct Character - covariance structure used, must one \"us\" (default), \"ad\", \"adh\", \"ar1\", \"ar1h\", \"cs\", \"csh\", \"toep\", \"toeph\") cov_by_group Boolean - Whenever use separate covariances per group level","code":""},{"path":"/reference/random_effects_expr.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Construct random effects formula — random_effects_expr","text":"example assuming user specified covariance structure \"us\" groups provided return cov_by_group set FALSE indicates separate covariance matrices required per group following returned:","code":"us(visit | subjid) us( visit | group / subjid )"},{"path":"/reference/rbmi-package.html","id":null,"dir":"Reference","previous_headings":"","what":"rbmi: Reference Based Multiple Imputation — rbmi-package","title":"rbmi: Reference Based Multiple Imputation — rbmi-package","text":"rbmi package used perform reference based multiple imputation. 
package provides implementations common, patient-specific imputation strategies whilst allowing user select various standard Bayesian frequentist approaches. package designed around 4 core functions: draws() - Fits multiple imputation models impute() - Imputes multiple datasets analyse() - Analyses multiple datasets pool() - Pools multiple results single statistic learn rbmi, please see quickstart vignette: vignette(topic = \"quickstart\", package = \"rbmi\")","code":""},{"path":[]},{"path":"/reference/rbmi-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"rbmi: Reference Based Multiple Imputation — rbmi-package","text":"Maintainer: Craig Gower-Page craig.gower-page@roche.com Authors: Alessandro Noci alessandro.noci@roche.com Isaac Gravestock isaac.gravestock@roche.com contributors: Marcel Wolbers marcel.wolbers@roche.com [contributor] F. Hoffmann-La Roche AG [copyright holder, funder]","code":""},{"path":"/reference/rbmi-settings.html","id":null,"dir":"Reference","previous_headings":"","what":"rbmi settings — rbmi-settings","title":"rbmi settings — rbmi-settings","text":"Define settings modify behaviour rbmi package following name options can set via:","code":"options(<name> = <value>)"},{"path":"/reference/rbmi-settings.html","id":"rbmi-cache-dir","dir":"Reference","previous_headings":"","what":"rbmi.cache_dir","title":"rbmi settings — rbmi-settings","text":"Default = tools::R_user_dir(\"rbmi\", which = \"cache\") Directory store compiled Stan model . set, temporary directory used given R session. 
Can also set via environment variable RBMI_CACHE_DIR.","code":""},{"path":"/reference/rbmi-settings.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"rbmi settings — rbmi-settings","text":"","code":"set_options()"},{"path":"/reference/rbmi-settings.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"rbmi settings — rbmi-settings","text":"","code":"if (FALSE) { # \\dontrun{ options(rbmi.cache_dir = \"some/directory/path\") } # }"},{"path":"/reference/record.html","id":null,"dir":"Reference","previous_headings":"","what":"Capture all Output — record","title":"Capture all Output — record","text":"function silences warnings, errors & messages instead returns list containing results (error) + warning error messages character vectors.","code":""},{"path":"/reference/record.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Capture all Output — record","text":"","code":"record(expr)"},{"path":"/reference/record.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Capture all Output — record","text":"expr expression executed","code":""},{"path":"/reference/record.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Capture all Output — record","text":"list containing results - object returned expr list() error thrown warnings - NULL character vector warnings thrown errors - NULL string error thrown messages - NULL character vector messages produced","code":""},{"path":"/reference/record.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Capture all Output — record","text":"","code":"if (FALSE) { # \\dontrun{ record({ x <- 1 y <- 2 warning(\"something went wrong\") message(\"O nearly done\") x + y }) } # }"},{"path":"/reference/recursive_reduce.html","id":null,"dir":"Reference","previous_headings":"","what":"recursive_reduce — 
recursive_reduce","title":"recursive_reduce — recursive_reduce","text":"Utility function used replicate purrr::reduce. Recursively applies function list elements 1 element remains","code":""},{"path":"/reference/recursive_reduce.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"recursive_reduce — recursive_reduce","text":"","code":"recursive_reduce(.l, .f)"},{"path":"/reference/recursive_reduce.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"recursive_reduce — recursive_reduce","text":".l list values apply function .f function apply element list turn .e. .l[[1]] <- .f( .l[[1]] , .l[[2]]) ; .l[[1]] <- .f( .l[[1]] , .l[[3]])"},{"path":"/reference/remove_if_all_missing.html","id":null,"dir":"Reference","previous_headings":"","what":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","title":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","text":"function takes data.frame variables visit, outcome & subjid. 
removes rows given subjid non-missing values outcome.","code":""},{"path":"/reference/remove_if_all_missing.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","text":"","code":"remove_if_all_missing(dat)"},{"path":"/reference/remove_if_all_missing.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Remove subjects from dataset if they have no observed values — remove_if_all_missing","text":"dat data.frame","code":""},{"path":"/reference/rubin_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Barnard and Rubin degrees of freedom adjustment — rubin_df","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"Compute degrees freedom according Barnard-Rubin formula.","code":""},{"path":"/reference/rubin_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"","code":"rubin_df(v_com, var_b, var_t, M)"},{"path":"/reference/rubin_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"v_com Positive number representing degrees freedom complete-data analysis. var_b -variance point estimate across multiply imputed datasets. var_t Total-variance point estimate according Rubin's rules. M Number imputations.","code":""},{"path":"/reference/rubin_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"Degrees freedom according Barnard-Rubin formula. 
See Barnard-Rubin (1999).","code":""},{"path":"/reference/rubin_df.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"computation takes account limit cases missing data (.e. -variance var_b zero) complete-data degrees freedom set Inf. Moreover, v_com given NA, function returns Inf.","code":""},{"path":"/reference/rubin_df.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Barnard and Rubin degrees of freedom adjustment — rubin_df","text":"Barnard, J. Rubin, D.B. (1999). Small sample degrees freedom multiple imputation. Biometrika, 86, 948-955.","code":""},{"path":"/reference/rubin_rules.html","id":null,"dir":"Reference","previous_headings":"","what":"Combine estimates using Rubin's rules — rubin_rules","title":"Combine estimates using Rubin's rules — rubin_rules","text":"Pool together results M complete-data analyses according Rubin's rules. See details.","code":""},{"path":"/reference/rubin_rules.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Combine estimates using Rubin's rules — rubin_rules","text":"","code":"rubin_rules(ests, ses, v_com)"},{"path":"/reference/rubin_rules.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Combine estimates using Rubin's rules — rubin_rules","text":"ests Numeric vector containing point estimates complete-data analyses. ses Numeric vector containing standard errors complete-data analyses. v_com Positive number representing degrees freedom complete-data analysis.","code":""},{"path":"/reference/rubin_rules.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Combine estimates using Rubin's rules — rubin_rules","text":"list containing: est_point: pooled point estimate according Little-Rubin (2002). var_t: total variance according Little-Rubin (2002). 
df: degrees freedom according Barnard-Rubin (1999).","code":""},{"path":"/reference/rubin_rules.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Combine estimates using Rubin's rules — rubin_rules","text":"rubin_rules applies Rubin's rules (Rubin, 1987) pooling together results multiple imputation procedure. pooled point estimate est_point average across point estimates complete-data analyses (given input argument ests). total variance var_t sum two terms representing within-variance -variance (see Little-Rubin (2002)). function also returns df, estimated pooled degrees freedom according Barnard-Rubin (1999) can used inference based t-distribution.","code":""},{"path":"/reference/rubin_rules.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Combine estimates using Rubin's rules — rubin_rules","text":"Barnard, J. Rubin, D.B. (1999). Small sample degrees freedom multiple imputation. Biometrika, 86, 948-955 Roderick J. . Little Donald B. Rubin. Statistical Analysis Missing Data, Second Edition. John Wiley & Sons, Hoboken, New Jersey, 2002. 
[Section 5.4]","code":""},{"path":[]},{"path":"/reference/sample_ids.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Patient Ids — sample_ids","title":"Sample Patient Ids — sample_ids","text":"Performs stratified bootstrap sample IDS ensuring return vector length input vector","code":""},{"path":"/reference/sample_ids.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Patient Ids — sample_ids","text":"","code":"sample_ids(ids, strata = rep(1, length(ids)))"},{"path":"/reference/sample_ids.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Patient Ids — sample_ids","text":"ids vector sample strata strata indicator, ids sampled within strata ensuring numbers strata maintained","code":""},{"path":"/reference/sample_ids.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Patient Ids — sample_ids","text":"","code":"if (FALSE) { # \\dontrun{ sample_ids( c(\"a\", \"b\", \"c\", \"d\"), strata = c(1,1,2,2)) } # }"},{"path":"/reference/sample_list.html","id":null,"dir":"Reference","previous_headings":"","what":"Create and validate a sample_list object — sample_list","title":"Create and validate a sample_list object — sample_list","text":"Given list sample_single objects generate sample_single(), creates sample_list objects validate .","code":""},{"path":"/reference/sample_list.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create and validate a sample_list object — sample_list","text":"","code":"sample_list(...)"},{"path":"/reference/sample_list.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create and validate a sample_list object — sample_list","text":"... 
list sample_single objects.","code":""},{"path":"/reference/sample_mvnorm.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample random values from the multivariate normal distribution — sample_mvnorm","title":"Sample random values from the multivariate normal distribution — sample_mvnorm","text":"Sample random values multivariate normal distribution","code":""},{"path":"/reference/sample_mvnorm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample random values from the multivariate normal distribution — sample_mvnorm","text":"","code":"sample_mvnorm(mu, sigma)"},{"path":"/reference/sample_mvnorm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample random values from the multivariate normal distribution — sample_mvnorm","text":"mu mean vector sigma covariance matrix Samples multivariate normal variables multiplying univariate random normal variables cholesky decomposition covariance matrix. mu length 1 just uses rnorm instead.","code":""},{"path":"/reference/sample_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Create object of sample_single class — sample_single","title":"Create object of sample_single class — sample_single","text":"Creates object class sample_single named list containing input parameters validate .","code":""},{"path":"/reference/sample_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create object of sample_single class — sample_single","text":"","code":"sample_single( ids, beta = NA, sigma = NA, theta = NA, failed = any(is.na(beta)), ids_samp = ids )"},{"path":"/reference/sample_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create object of sample_single class — sample_single","text":"ids Vector characters containing ids subjects included original dataset. beta Numeric vector estimated regression coefficients. 
sigma List estimated covariance matrices (one level vars$group). theta Numeric vector transformed covariances. failed Logical. TRUE model fit failed. ids_samp Vector characters containing ids subjects included given sample.","code":""},{"path":"/reference/sample_single.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create object of sample_single class — sample_single","text":"named list class sample_single. contains following: ids vector characters containing ids subjects included original dataset. beta numeric vector estimated regression coefficients. sigma list estimated covariance matrices (one level vars$group). theta numeric vector transformed covariances. failed logical. TRUE model fit failed. ids_samp vector characters containing ids subjects included given sample.","code":""},{"path":"/reference/scalerConstructor.html","id":null,"dir":"Reference","previous_headings":"","what":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Scales design matrix non-categorical columns mean 0 standard deviation 1.","code":""},{"path":"/reference/scalerConstructor.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"object initialisation used determine relevant mean SD's scale scaling (un-scaling) performed relevant object methods. Un-scaling done linear model Beta Sigma coefficients. purpose first column dataset scaled assumed outcome variable variables assumed post-transformation predictor variables (.e. dummy variables already expanded).","code":""},{"path":"/reference/scalerConstructor.html","id":"public-fields","dir":"Reference","previous_headings":"","what":"Public fields","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"centre Vector column means. 
first value outcome variable, variables predictors. scales Vector column standard deviations. first value outcome variable, variables predictors.","code":""},{"path":[]},{"path":"/reference/scalerConstructor.html","id":"public-methods","dir":"Reference","previous_headings":"","what":"Public methods","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"scalerConstructor$new() scalerConstructor$scale() scalerConstructor$unscale_sigma() scalerConstructor$unscale_beta() scalerConstructor$clone()","code":""},{"path":"/reference/scalerConstructor.html","id":"method-new-","dir":"Reference","previous_headings":"","what":"Method new()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Uses dat determine relevant column means standard deviations use scaling un-scaling future datasets. Implicitly assumes new datasets column order dat","code":""},{"path":"/reference/scalerConstructor.html","id":"usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$new(dat)"},{"path":"/reference/scalerConstructor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"dat data.frame matrix. columns must numeric (.e dummy variables, must already expanded ).","code":""},{"path":"/reference/scalerConstructor.html","id":"details-1","dir":"Reference","previous_headings":"","what":"Details","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Categorical columns (determined values entirely 1 0) scaled. 
achieved setting corresponding values centre 0 scale 1.","code":""},{"path":"/reference/scalerConstructor.html","id":"method-scale-","dir":"Reference","previous_headings":"","what":"Method scale()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Scales dataset continuous variables mean 0 standard deviation 1.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-1","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$scale(dat)"},{"path":"/reference/scalerConstructor.html","id":"arguments-1","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"dat data.frame matrix whose columns numeric (.e. dummy variables expanded ) whose columns order dataset used initialization function.","code":""},{"path":"/reference/scalerConstructor.html","id":"method-unscale-sigma-","dir":"Reference","previous_headings":"","what":"Method unscale_sigma()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Unscales sigma value (matrix) estimated linear model using design matrix scaled object. 
function works first column initialisation data.frame outcome variable.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-2","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$unscale_sigma(sigma)"},{"path":"/reference/scalerConstructor.html","id":"arguments-2","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"sigma numeric value matrix.","code":""},{"path":"/reference/scalerConstructor.html","id":"returns","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"numeric value matrix","code":""},{"path":"/reference/scalerConstructor.html","id":"method-unscale-beta-","dir":"Reference","previous_headings":"","what":"Method unscale_beta()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"Unscales beta value (vector) estimated linear model using design matrix scaled object. 
function works first column initialization data.frame outcome variable.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-3","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$unscale_beta(beta)"},{"path":"/reference/scalerConstructor.html","id":"arguments-3","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"beta numeric vector beta coefficients estimated linear model.","code":""},{"path":"/reference/scalerConstructor.html","id":"returns-1","dir":"Reference","previous_headings":"","what":"Returns","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"numeric vector.","code":""},{"path":"/reference/scalerConstructor.html","id":"method-clone-","dir":"Reference","previous_headings":"","what":"Method clone()","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"objects class cloneable method.","code":""},{"path":"/reference/scalerConstructor.html","id":"usage-4","dir":"Reference","previous_headings":"","what":"Usage","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"","code":"scalerConstructor$clone(deep = FALSE)"},{"path":"/reference/scalerConstructor.html","id":"arguments-4","dir":"Reference","previous_headings":"","what":"Arguments","title":"R6 Class for scaling (and un-scaling) design matrices — scalerConstructor","text":"deep Whether make deep clone.","code":""},{"path":"/reference/set_simul_pars.html","id":null,"dir":"Reference","previous_headings":"","what":"Set simulation parameters of a study group. — set_simul_pars","title":"Set simulation parameters of a study group. — set_simul_pars","text":"function provides input arguments study group needed simulate data simulate_data(). 
simulate_data() generates data two-arms clinical trial longitudinal continuous outcomes two intercurrent events (ICEs). ICE1 may thought discontinuation study treatment due study drug condition related (SDCR) reasons. ICE2 may thought discontinuation study treatment due uninformative study drop-, .e. due study drug condition related (NSDRC) reasons outcome data ICE2 always missing.","code":""},{"path":"/reference/set_simul_pars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set simulation parameters of a study group. — set_simul_pars","text":"","code":"set_simul_pars( mu, sigma, n, prob_ice1 = 0, or_outcome_ice1 = 1, prob_post_ice1_dropout = 0, prob_ice2 = 0, prob_miss = 0 )"},{"path":"/reference/set_simul_pars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set simulation parameters of a study group. — set_simul_pars","text":"mu Numeric vector describing mean outcome trajectory visit (including baseline) assuming ICEs. sigma Covariance matrix outcome trajectory assuming ICEs. n Number subjects belonging group. prob_ice1 Numeric vector specifies probability experiencing ICE1 (discontinuation study treatment due SDCR reasons) visit subject observed outcome visit equal mean baseline (mu[1]). single numeric provided, probability applied visit. or_outcome_ice1 Numeric value specifies odds ratio experiencing ICE1 visit corresponding +1 higher value observed outcome visit. prob_post_ice1_dropout Numeric value specifies probability study drop-following ICE1. subject simulated drop-ICE1, outcomes ICE1 set missing. prob_ice2 Numeric specifies additional probability post-baseline visit affected study drop-. Outcome data subject's first simulated visit affected study drop-subsequent visits set missing. generates second intercurrent event ICE2, may thought treatment discontinuation due NSDRC reasons subsequent drop-. subject, ICE1 ICE2 simulated occur, assumed earlier counts. 
case ICEs simulated occur time, assumed ICE1 counts. means single subject can experience either ICE1 ICE2, . prob_miss Numeric value specifies additional probability given post-baseline observation missing. can used produce \"intermittent\" missing values associated ICE.","code":""},{"path":"/reference/set_simul_pars.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Set simulation parameters of a study group. — set_simul_pars","text":"simul_pars object named list containing simulation parameters.","code":""},{"path":"/reference/set_simul_pars.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Set simulation parameters of a study group. — set_simul_pars","text":"details, please see simulate_data().","code":""},{"path":[]},{"path":"/reference/set_vars.html","id":null,"dir":"Reference","previous_headings":"","what":"Set key variables — set_vars","title":"Set key variables — set_vars","text":"function used define names key variables within data.frame's provided input arguments draws() ancova().","code":""},{"path":"/reference/set_vars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set key variables — set_vars","text":"","code":"set_vars( subjid = \"subjid\", visit = \"visit\", outcome = \"outcome\", group = \"group\", covariates = character(0), strata = group, strategy = \"strategy\" )"},{"path":"/reference/set_vars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set key variables — set_vars","text":"subjid name \"Subject ID\" variable. length 1 character vector. visit name \"Visit\" variable. length 1 character vector. outcome name \"Outcome\" variable. length 1 character vector. group name \"Group\" variable. length 1 character vector. covariates name covariates used context modeling. See details. strata name stratification variable used context bootstrap sampling. See details. strategy name \"strategy\" variable. 
length 1 character vector.","code":""},{"path":"/reference/set_vars.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Set key variables — set_vars","text":"draws() ancova() covariates argument can specified indicate variables included imputation analysis models respectively. wish include interaction terms need manually specified .e. covariates = c(\"group*visit\", \"age*sex\"). Please note use () function inhibit interpretation/conversion objects supported. Currently strata used draws() combination method_condmean(type = \"bootstrap\") method_approxbayes() order allow specification stratified bootstrap sampling. default strata set equal value group assumed users want preserve group size samples. See draws() details. Likewise, currently strategy argument used draws() specify name strategy variable within data_ice data.frame. See draws() details.","code":""},{"path":[]},{"path":"/reference/set_vars.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Set key variables — set_vars","text":"","code":"if (FALSE) { # \\dontrun{ # Using CDISC variable names as an example set_vars( subjid = \"usubjid\", visit = \"avisit\", outcome = \"aval\", group = \"arm\", covariates = c(\"bwt\", \"bht\", \"arm * avisit\"), strategy = \"strat\" ) } # }"},{"path":"/reference/simulate_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate data — simulate_data","title":"Generate data — simulate_data","text":"Generate data two-arms clinical trial longitudinal continuous outcome two intercurrent events (ICEs). ICE1 may thought discontinuation study treatment due study drug condition related (SDCR) reasons. ICE2 may thought discontinuation study treatment due uninformative study drop-, .e. 
due study drug condition related (NSDRC) reasons outcome data ICE2 always missing.","code":""},{"path":"/reference/simulate_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate data — simulate_data","text":"","code":"simulate_data(pars_c, pars_t, post_ice1_traj, strategies = getStrategies())"},{"path":"/reference/simulate_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate data — simulate_data","text":"pars_c simul_pars object generated set_simul_pars(). specifies simulation parameters control arm. pars_t simul_pars object generated set_simul_pars(). specifies simulation parameters treatment arm. post_ice1_traj string specifies observed outcomes occurring ICE1 simulated. Must target function included strategies. Possible choices : Missing Random \"MAR\", Jump Reference \"JR\", Copy Reference \"CR\", Copy Increments Reference \"CIR\", Last Mean Carried Forward \"LMCF\". User-defined strategies also added. See getStrategies() details. strategies named list functions. Default equal getStrategies(). See getStrategies() details.","code":""},{"path":"/reference/simulate_data.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate data — simulate_data","text":"data.frame containing simulated data. includes following variables: id: Factor variable specifies id subject. visit: Factor variable specifies visit assessment. Visit 0 denotes baseline visit. group: Factor variable specifies treatment group subject belongs . outcome_bl: Numeric variable specifies baseline outcome. outcome_noICE: Numeric variable specifies longitudinal outcome assuming ICEs. ind_ice1: Binary variable takes value 1 corresponding visit affected ICE1 0 otherwise. dropout_ice1: Binary variable takes value 1 corresponding visit affected drop-following ICE1 0 otherwise. ind_ice2: Binary variable takes value 1 corresponding visit affected ICE2. 
outcome: Numeric variable specifies longitudinal outcome including ICE1, ICE2 intermittent missing values.","code":""},{"path":"/reference/simulate_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Generate data — simulate_data","text":"data generation works follows: Generate outcome data visits (including baseline) multivariate normal distribution parameters pars_c$mu pars_c$sigma control arm parameters pars_t$mu pars_t$sigma treatment arm, respectively. Note randomized trial, outcomes distribution baseline treatment groups, .e. one set pars_c$mu[1]=pars_t$mu[1] pars_c$sigma[1,1]=pars_t$sigma[1,1]. Simulate whether ICE1 (study treatment discontinuation due SDCR reasons) occurs visit according parameters pars_c$prob_ice1 pars_c$or_outcome_ice1 control arm pars_t$prob_ice1 pars_t$or_outcome_ice1 treatment arm, respectively. Simulate drop-following ICE1 according pars_c$prob_post_ice1_dropout pars_t$prob_post_ice1_dropout. Simulate additional uninformative study drop-probabilities pars_c$prob_ice2 pars_t$prob_ice2 visit. generates second intercurrent event ICE2, may thought treatment discontinuation due NSDRC reasons subsequent drop-. simulated time drop-subject's first visit affected drop-data visit subsequent visits consequently set missing. subject, ICE1 ICE2 simulated occur, assumed earlier counts. case ICEs simulated occur time, assumed ICE1 counts. means single subject can experience either ICE1 ICE2, . Adjust trajectories ICE1 according given assumption expressed post_ice1_traj argument. Note post-ICE1 outcomes intervention arm can adjusted. Post-ICE1 outcomes control arm adjusted. Simulate additional intermittent missing outcome data per arguments pars_c$prob_miss pars_t$prob_miss. probability ICE visit modeled according following logistic regression model: ~ 1 + (visit == 0) + ... + (visit == n_visits-1) + ((x-alpha)) : n_visits number visits (including baseline). alpha baseline outcome mean. 
term ((x-alpha)) specifies dependency probability ICE current outcome value. corresponding regression coefficients logistic model defined follows: intercept set 0, coefficients corresponding discontinuation visit subject outcome equal mean baseline set according parameters pars_c$prob_ice1 (pars_t$prob_ice1), regression coefficient associated covariate ((x-alpha)) set log(pars_c$or_outcome_ice1) (log(pars_t$or_outcome_ice1)). Please note baseline outcome missing affected ICEs.","code":""},{"path":"/reference/simulate_dropout.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate drop-out — simulate_dropout","title":"Simulate drop-out — simulate_dropout","text":"Simulate drop-","code":""},{"path":"/reference/simulate_dropout.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate drop-out — simulate_dropout","text":"","code":"simulate_dropout(prob_dropout, ids, subset = rep(1, length(ids)))"},{"path":"/reference/simulate_dropout.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate drop-out — simulate_dropout","text":"prob_dropout Numeric specifies probability post-baseline visit affected study drop-. ids Factor variable specifies id subject. subset Binary variable specifies subset affected drop-. .e. subset binary vector length equal length ids takes value 1 corresponding visit affected drop-0 otherwise.","code":""},{"path":"/reference/simulate_dropout.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate drop-out — simulate_dropout","text":"binary vector length equal length ids takes value 1 corresponding outcome affected study drop-.","code":""},{"path":"/reference/simulate_dropout.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate drop-out — simulate_dropout","text":"subset can used specify outcome values affected drop-. 
default subset set 1 values except values corresponding baseline outcome, since baseline supposed affected drop-. Even subset specified user, values corresponding baseline outcome still hard-coded 0.","code":""},{"path":"/reference/simulate_ice.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate intercurrent event — simulate_ice","title":"Simulate intercurrent event — simulate_ice","text":"Simulate intercurrent event","code":""},{"path":"/reference/simulate_ice.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate intercurrent event — simulate_ice","text":"","code":"simulate_ice(outcome, visits, ids, prob_ice, or_outcome_ice, baseline_mean)"},{"path":"/reference/simulate_ice.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate intercurrent event — simulate_ice","text":"outcome Numeric variable specifies longitudinal outcome single group. visits Factor variable specifies visit assessment. ids Factor variable specifies id subject. prob_ice Numeric vector specifies visit probability experiencing ICE current visit subject outcome equal mean baseline. single numeric provided, probability applied visit. or_outcome_ice Numeric value specifies odds ratio ICE corresponding +1 higher value outcome visit. baseline_mean Mean outcome value baseline.","code":""},{"path":"/reference/simulate_ice.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate intercurrent event — simulate_ice","text":"binary variable takes value 1 corresponding outcome affected ICE 0 otherwise.","code":""},{"path":"/reference/simulate_ice.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate intercurrent event — simulate_ice","text":"probability ICE visit modeled according following logistic regression model: ~ 1 + (visit == 0) + ... 
+ (visit == n_visits-1) + ((x-alpha)) : n_visits number visits (including baseline). alpha baseline outcome mean set via argument baseline_mean. term ((x-alpha)) specifies dependency probability ICE current outcome value. corresponding regression coefficients logistic model defined follows: intercept set 0, coefficients corresponding discontinuation visit subject outcome equal mean baseline set according parameter or_outcome_ice, regression coefficient associated covariate ((x-alpha)) set log(or_outcome_ice).","code":""},{"path":"/reference/simulate_test_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Create simulated datasets — simulate_test_data","title":"Create simulated datasets — simulate_test_data","text":"Creates longitudinal dataset format rbmi designed analyse.","code":""},{"path":"/reference/simulate_test_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create simulated datasets — simulate_test_data","text":"","code":"simulate_test_data( n = 200, sd = c(3, 5, 7), cor = c(0.1, 0.7, 0.4), mu = list(int = 10, age = 3, sex = 2, trt = c(0, 4, 8), visit = c(0, 1, 2)) ) as_vcov(sd, cor)"},{"path":"/reference/simulate_test_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create simulated datasets — simulate_test_data","text":"n number subjects sample. Total number observations returned thus n * length(sd) sd standard deviations outcome visit. .e. square root diagonal covariance matrix outcome cor correlation coefficients outcome values visit. See details. mu coefficients use construct mean outcome value visit. Must named list elements int, age, sex, trt & visit. See details.","code":""},{"path":"/reference/simulate_test_data.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Create simulated datasets — simulate_test_data","text":"number visits determined size variance covariance matrix. .e. 
3 standard deviation values provided 3 visits per patient created. covariates simulated dataset produced follows: Patients age sampled random N(0,1) distribution Patients sex sampled random 50/50 split Patients group sampled random fixed group n/2 patients outcome variable sampled multivariate normal distribution, see details mean outcome variable derived : coefficients intercept, age sex taken mu$int, mu$age mu$sex respectively, must length 1 numeric. Treatment visit coefficients taken mu$trt mu$visit respectively must either length 1 (.e. constant affect across visits) equal number visits (determined length sd). .e. wanted treatment slope 5 visit slope 1 specify: correlation matrix constructed cor follows. Let cor = c(, b, c, d, e, f) correlation matrix :","code":"outcome = Intercept + age + sex + visit + treatment mu = list(..., \"trt\" = c(0,5,10), \"visit\" = c(0,1,2)) 1 a b d a 1 c e b c 1 f d e f 1"},{"path":"/reference/sort_by.html","id":null,"dir":"Reference","previous_headings":"","what":"Sort data.frame — sort_by","title":"Sort data.frame — sort_by","text":"Sorts data.frame (ascending default) based upon variables within dataset","code":""},{"path":"/reference/sort_by.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sort data.frame — sort_by","text":"","code":"sort_by(df, vars = NULL, decreasing = FALSE)"},{"path":"/reference/sort_by.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sort data.frame — sort_by","text":"df data.frame vars character vector variables decreasing logical whether sort order descending ascending (default) order. 
Can either single logical value (case applied variables) vector length vars","code":""},{"path":"/reference/sort_by.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sort data.frame — sort_by","text":"","code":"if (FALSE) { # \\dontrun{ sort_by(iris, c(\"Sepal.Length\", \"Sepal.Width\"), decreasing = c(TRUE, FALSE)) } # }"},{"path":"/reference/split_dim.html","id":null,"dir":"Reference","previous_headings":"","what":"Transform array into list of arrays — split_dim","title":"Transform array into list of arrays — split_dim","text":"Transform array list arrays listing performed given dimension.","code":""},{"path":"/reference/split_dim.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transform array into list of arrays — split_dim","text":"","code":"split_dim(a, n)"},{"path":"/reference/split_dim.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transform array into list of arrays — split_dim","text":"Array number dimensions least 2. n Positive integer. Dimension listed.","code":""},{"path":"/reference/split_dim.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Transform array into list of arrays — split_dim","text":"list length n arrays number dimensions equal number dimensions minus 1.","code":""},{"path":"/reference/split_dim.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Transform array into list of arrays — split_dim","text":"example, 3 dimensional array n = 1, split_dim(,n) returns list 2 dimensional arrays (.e. list matrices) element list [, , ], takes values 1 length first dimension array. 
Example: inputs: <- array( c(1,2,3,4,5,6,7,8,9,10,11,12), dim = c(3,2,2)), means : n <- 1 output res <- split_dim(,n) list 3 elements:","code":"a[1,,] a[2,,] a[3,,] [,1] [,2] [,1] [,2] [,1] [,2] --------- --------- --------- 1 7 2 8 3 9 4 10 5 11 6 12 res[[1]] res[[2]] res[[3]] [,1] [,2] [,1] [,2] [,1] [,2] --------- --------- --------- 1 7 2 8 3 9 4 10 5 11 6 12"},{"path":"/reference/split_imputations.html","id":null,"dir":"Reference","previous_headings":"","what":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"Split flat list imputation_single() multiple imputation_df()'s ID","code":""},{"path":"/reference/split_imputations.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"","code":"split_imputations(list_of_singles, split_ids)"},{"path":"/reference/split_imputations.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"list_of_singles list imputation_single()'s split_ids list 1 element per required split. element must contain vector \"ID\"'s correspond imputation_single() ID's required within sample. total number ID's must equal length list_of_singles","code":""},{"path":"/reference/split_imputations.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Split a flat list of imputation_single() into multiple imputation_df()'s by ID — split_imputations","text":"function converts list imputations structured per patient structured per sample .e. 
converts :","code":"obj <- list( imputation_single(\"Ben\", numeric(0)), imputation_single(\"Ben\", numeric(0)), imputation_single(\"Ben\", numeric(0)), imputation_single(\"Harry\", c(1, 2)), imputation_single(\"Phil\", c(3, 4)), imputation_single(\"Phil\", c(5, 6)), imputation_single(\"Tom\", c(7, 8, 9)) ) index <- list( c(\"Ben\", \"Harry\", \"Phil\", \"Tom\"), c(\"Ben\", \"Ben\", \"Phil\") ) output <- list( imputation_df( imputation_single(id = \"Ben\", values = numeric(0)), imputation_single(id = \"Harry\", values = c(1, 2)), imputation_single(id = \"Phil\", values = c(3, 4)), imputation_single(id = \"Tom\", values = c(7, 8, 9)) ), imputation_df( imputation_single(id = \"Ben\", values = numeric(0)), imputation_single(id = \"Ben\", values = numeric(0)), imputation_single(id = \"Phil\", values = c(5, 6)) ) )"},{"path":"/reference/str_contains.html","id":null,"dir":"Reference","previous_headings":"","what":"Does a string contain a substring — str_contains","title":"Does a string contain a substring — str_contains","text":"Returns vector TRUE/FALSE element x contains element subs .e.","code":"str_contains( c(\"ben\", \"tom\", \"harry\"), c(\"e\", \"y\")) [1] TRUE FALSE TRUE"},{"path":"/reference/str_contains.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Does a string contain a substring — str_contains","text":"","code":"str_contains(x, subs)"},{"path":"/reference/str_contains.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Does a string contain a substring — str_contains","text":"x character vector subs character vector substrings look ","code":""},{"path":"/reference/strategies.html","id":null,"dir":"Reference","previous_headings":"","what":"Strategies — strategies","title":"Strategies — strategies","text":"functions used implement various reference based imputation strategies combining subjects distribution reference distribution based upon visits failed meet Missing--Random 
(MAR) assumption.","code":""},{"path":"/reference/strategies.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Strategies — strategies","text":"","code":"strategy_MAR(pars_group, pars_ref, index_mar) strategy_JR(pars_group, pars_ref, index_mar) strategy_CR(pars_group, pars_ref, index_mar) strategy_CIR(pars_group, pars_ref, index_mar) strategy_LMCF(pars_group, pars_ref, index_mar)"},{"path":"/reference/strategies.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Strategies — strategies","text":"pars_group list parameters subject's group. See details. pars_ref list parameters subject's reference group. See details. index_mar logical vector indicating visits meet MAR assumption subject. .e. identifies observations non-MAR intercurrent event (ICE).","code":""},{"path":"/reference/strategies.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Strategies — strategies","text":"pars_group pars_ref must list containing elements mu sigma. mu must numeric vector sigma must square matrix symmetric covariance matrix dimensions equal length mu index_mar. e.g. Users can define strategy functions include via strategies argument impute() using getStrategies(). said following strategies available \"box\": Missing Random (MAR) Jump Reference (JR) Copy Reference (CR) Copy Increments Reference (CIR) Last Mean Carried Forward (LMCF)","code":"list( mu = c(1,2,3), sigma = matrix(c(4,3,2,3,5,4,2,4,6), nrow = 3, ncol = 3) )"},{"path":"/reference/string_pad.html","id":null,"dir":"Reference","previous_headings":"","what":"string_pad — string_pad","title":"string_pad — string_pad","text":"Utility function used replicate str_pad. 
Adds white space either end string get equal desired length","code":""},{"path":"/reference/string_pad.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"string_pad — string_pad","text":"","code":"string_pad(x, width)"},{"path":"/reference/string_pad.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"string_pad — string_pad","text":"x string width desired length","code":""},{"path":"/reference/transpose_imputations.html","id":null,"dir":"Reference","previous_headings":"","what":"Transpose imputations — transpose_imputations","title":"Transpose imputations — transpose_imputations","text":"Takes imputation_df object transposes e.g.","code":"list( list(id = \"a\", values = c(1,2,3)), list(id = \"b\", values = c(4,5,6) ) )"},{"path":"/reference/transpose_imputations.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transpose imputations — transpose_imputations","text":"","code":"transpose_imputations(imputations)"},{"path":"/reference/transpose_imputations.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transpose imputations — transpose_imputations","text":"imputations imputation_df object created imputation_df()","code":""},{"path":"/reference/transpose_imputations.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Transpose imputations — transpose_imputations","text":"becomes","code":"list( ids = c(\"a\", \"b\"), values = c(1,2,3,4,5,6) )"},{"path":"/reference/transpose_results.html","id":null,"dir":"Reference","previous_headings":"","what":"Transpose results object — transpose_results","title":"Transpose results object — transpose_results","text":"Transposes Results object (created analyse()) order group estimates together 
vectors.","code":""},{"path":"/reference/transpose_results.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transpose results object — transpose_results","text":"","code":"transpose_results(results, components)"},{"path":"/reference/transpose_results.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transpose results object — transpose_results","text":"results list results. components character vector components extract (.e. \"est\", \"se\").","code":""},{"path":"/reference/transpose_results.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Transpose results object — transpose_results","text":"Essentially function takes object format: produces:","code":"x <- list( list( \"trt1\" = list( est = 1, se = 2 ), \"trt2\" = list( est = 3, se = 4 ) ), list( \"trt1\" = list( est = 5, se = 6 ), \"trt2\" = list( est = 7, se = 8 ) ) ) list( trt1 = list( est = c(1,5), se = c(2,6) ), trt2 = list( est = c(3,7), se = c(4,8) ) )"},{"path":"/reference/transpose_samples.html","id":null,"dir":"Reference","previous_headings":"","what":"Transpose samples — transpose_samples","title":"Transpose samples — transpose_samples","text":"Transposes samples generated draws() grouped subjid instead sample number.","code":""},{"path":"/reference/transpose_samples.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Transpose samples — transpose_samples","text":"","code":"transpose_samples(samples)"},{"path":"/reference/transpose_samples.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Transpose samples — transpose_samples","text":"samples list samples generated draws().","code":""},{"path":"/reference/validate.analysis.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate analysis objects — validate.analysis","title":"Validate analysis objects — validate.analysis","text":"Validates 
return object analyse() function.","code":""},{"path":"/reference/validate.analysis.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate analysis objects — validate.analysis","text":"","code":"# S3 method for class 'analysis' validate(x, ...)"},{"path":"/reference/validate.analysis.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate analysis objects — validate.analysis","text":"x analysis results object (class \"jackknife\", \"bootstrap\", \"rubin\"). ... used.","code":""},{"path":"/reference/validate.draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate draws object — validate.draws","title":"Validate draws object — validate.draws","text":"Validate draws object","code":""},{"path":"/reference/validate.draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate draws object — validate.draws","text":"","code":"# S3 method for class 'draws' validate(x, ...)"},{"path":"/reference/validate.draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate draws object — validate.draws","text":"x draws object generated as_draws(). ... used.","code":""},{"path":"/reference/validate.html","id":null,"dir":"Reference","previous_headings":"","what":"Generic validation method — validate","title":"Generic validation method — validate","text":"function used perform assertions object conforms expected structure basic assumptions violated. throw error checks pass.","code":""},{"path":"/reference/validate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generic validation method — validate","text":"","code":"validate(x, ...)"},{"path":"/reference/validate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generic validation method — validate","text":"x object validated. ... 
additional arguments pass specific validation method.","code":""},{"path":"/reference/validate.is_mar.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate is_mar for a given subject — validate.is_mar","title":"Validate is_mar for a given subject — validate.is_mar","text":"Checks longitudinal data patient divided MAR followed non-MAR data; non-MAR observation followed MAR observation allowed.","code":""},{"path":"/reference/validate.is_mar.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate is_mar for a given subject — validate.is_mar","text":"","code":"# S3 method for class 'is_mar' validate(x, ...)"},{"path":"/reference/validate.is_mar.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate is_mar for a given subject — validate.is_mar","text":"x Object class is_mar. Logical vector indicating whether observations MAR. ... used.","code":""},{"path":"/reference/validate.is_mar.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate is_mar for a given subject — validate.is_mar","text":"error issue otherwise return TRUE.","code":""},{"path":"/reference/validate.ivars.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate inputs for vars — validate.ivars","title":"Validate inputs for vars — validate.ivars","text":"Checks required variable names defined within vars appropriate datatypes","code":""},{"path":"/reference/validate.ivars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate inputs for vars — validate.ivars","text":"","code":"# S3 method for class 'ivars' validate(x, ...)"},{"path":"/reference/validate.ivars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate inputs for vars — validate.ivars","text":"x named list indicating names key variables source dataset ... 
used","code":""},{"path":"/reference/validate.references.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate user supplied references — validate.references","title":"Validate user supplied references — validate.references","text":"Checks ensure user specified references expect values (.e. found within source data).","code":""},{"path":"/reference/validate.references.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate user supplied references — validate.references","text":"","code":"# S3 method for class 'references' validate(x, control, ...)"},{"path":"/reference/validate.references.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate user supplied references — validate.references","text":"x named character vector. control factor variable (group variable source dataset). ... used.","code":""},{"path":"/reference/validate.references.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate user supplied references — validate.references","text":"error issue otherwise return TRUE.","code":""},{"path":"/reference/validate.sample_list.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate sample_list object — validate.sample_list","title":"Validate sample_list object — validate.sample_list","text":"Validate sample_list object","code":""},{"path":"/reference/validate.sample_list.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate sample_list object — validate.sample_list","text":"","code":"# S3 method for class 'sample_list' validate(x, ...)"},{"path":"/reference/validate.sample_list.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate sample_list object — validate.sample_list","text":"x sample_list object generated sample_list(). ... 
used.","code":""},{"path":"/reference/validate.sample_single.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate sample_single object — validate.sample_single","title":"Validate sample_single object — validate.sample_single","text":"Validate sample_single object","code":""},{"path":"/reference/validate.sample_single.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate sample_single object — validate.sample_single","text":"","code":"# S3 method for class 'sample_single' validate(x, ...)"},{"path":"/reference/validate.sample_single.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate sample_single object — validate.sample_single","text":"x sample_single object generated sample_single(). ... used.","code":""},{"path":"/reference/validate.simul_pars.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a simul_pars object — validate.simul_pars","title":"Validate a simul_pars object — validate.simul_pars","text":"Validate simul_pars object","code":""},{"path":"/reference/validate.simul_pars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a simul_pars object — validate.simul_pars","text":"","code":"# S3 method for class 'simul_pars' validate(x, ...)"},{"path":"/reference/validate.simul_pars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a simul_pars object — validate.simul_pars","text":"x simul_pars object generated set_simul_pars(). ... 
used.","code":""},{"path":"/reference/validate.stan_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a stan_data object — validate.stan_data","title":"Validate a stan_data object — validate.stan_data","text":"Validate stan_data object","code":""},{"path":"/reference/validate.stan_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a stan_data object — validate.stan_data","text":"","code":"# S3 method for class 'stan_data' validate(x, ...)"},{"path":"/reference/validate.stan_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a stan_data object — validate.stan_data","text":"x stan_data object. ... used.","code":""},{"path":"/reference/validate_analyse_pars.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate analysis results — validate_analyse_pars","title":"Validate analysis results — validate_analyse_pars","text":"Validates analysis results generated analyse().","code":""},{"path":"/reference/validate_analyse_pars.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate analysis results — validate_analyse_pars","text":"","code":"validate_analyse_pars(results, pars)"},{"path":"/reference/validate_analyse_pars.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate analysis results — validate_analyse_pars","text":"results list results generated analysis fun used analyse(). pars list expected parameters analysis. lists .e. 
c(\"est\", \"se\", \"df\").","code":""},{"path":"/reference/validate_datalong.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate a longdata object — validate_datalong","title":"Validate a longdata object — validate_datalong","text":"Validate longdata object","code":""},{"path":"/reference/validate_datalong.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate a longdata object — validate_datalong","text":"","code":"validate_datalong(data, vars) validate_datalong_varExists(data, vars) validate_datalong_types(data, vars) validate_datalong_notMissing(data, vars) validate_datalong_complete(data, vars) validate_datalong_unifromStrata(data, vars) validate_dataice(data, data_ice, vars, update = FALSE)"},{"path":"/reference/validate_datalong.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate a longdata object — validate_datalong","text":"data data.frame containing longitudinal outcome data + covariates multiple subjects vars vars object created set_vars() data_ice data.frame containing subjects ICE data. See draws() details. update logical, indicates ICE data set first time update applied","code":""},{"path":"/reference/validate_datalong.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Validate a longdata object — validate_datalong","text":"functions used validate various different parts longdata object used draws(), impute(), analyse() pool(). particular: validate_datalong_varExists - Checks variable listed vars actually exists data validate_datalong_types - Checks types key variable expected .e. visit factor variable validate_datalong_notMissing - Checks none key variables (except outcome variable) contain missing values validate_datalong_complete - Checks data complete .e. 1 row subject * visit combination. e.g. 
nrow(data) == length(unique(subjects)) * length(unique(visits)) validate_datalong_unifromStrata - Checks make sure variables listed stratification variables vary time. e.g. subjects switch stratification groups.","code":""},{"path":"/reference/validate_strategies.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate user specified strategies — validate_strategies","title":"Validate user specified strategies — validate_strategies","text":"Compares user provided strategies required (reference). throw error values reference defined.","code":""},{"path":"/reference/validate_strategies.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate user specified strategies — validate_strategies","text":"","code":"validate_strategies(strategies, reference)"},{"path":"/reference/validate_strategies.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate user specified strategies — validate_strategies","text":"strategies named list strategies. reference list character vector strategies need defined.","code":""},{"path":"/reference/validate_strategies.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate user specified strategies — validate_strategies","text":"throw error issue otherwise return TRUE.","code":""},{"path":"/news/index.html","id":"rbmi-131","dir":"Changelog","previous_headings":"","what":"rbmi 1.3.1","title":"rbmi 1.3.1","text":"Fixed bug stale caches rstan model correctly cleared (#459)","code":""},{"path":"/news/index.html","id":"rbmi-130","dir":"Changelog","previous_headings":"","what":"rbmi 1.3.0","title":"rbmi 1.3.0","text":"CRAN release: 2024-10-16","code":""},{"path":"/news/index.html","id":"breaking-changes-1-3-0","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"rbmi 1.3.0","text":"Convert rstan suggested package simplify installation process. 
means Bayesian imputation functionality available default. use feature, need install rstan separately (#441) Deprecated seed argument method_bayes() favour using base set.seed() function (#431)","code":""},{"path":"/news/index.html","id":"new-features-1-3-0","dir":"Changelog","previous_headings":"","what":"New Features","title":"rbmi 1.3.0","text":"Added vignette implement retrieved dropout models time-varying intercurrent event (ICE) indicators (#414) Added vignette obtain frequentist information-anchored inference conditional mean imputation using rbmi (#406) Added FAQ vignette including statement validation (#407 #440) Renamed lsmeans(..., weights = \"proportional\") lsmeans(..., weights = \"counterfactual\") accurately reflect weights used calculation. Added lsmeans(..., weights = \"proportional_em\") provides consistent results emmeans(..., weights = \"proportional\") lsmeans(..., weights = \"proportional\") left package backwards compatibility alias lsmeans(..., weights = \"counterfactual\") now gives message prompting users use either “proportional_em” “counterfactual” instead. 
Added support parallel processing analyse() function (#370) Added documentation clarifying potential false-positive warnings rstan (#288) Added support covariance structures supported mmrm package (#437) Updated rbmi citation detail (#423 #425)","code":""},{"path":"/news/index.html","id":"miscellaneous-bug-fixes-1-3-0","dir":"Changelog","previous_headings":"","what":"Miscellaneous Bug Fixes","title":"rbmi 1.3.0","text":"Stopped warning messages accidentally suppressed changing ICE type impute() (#408) Fixed equations rendering properly pkgdown website (#433)","code":""},{"path":"/news/index.html","id":"rbmi-126","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.6","title":"rbmi 1.2.6","text":"CRAN release: 2023-11-24 Updated unit tests fix false-positive error CRAN’s testing servers","code":""},{"path":"/news/index.html","id":"rbmi-125","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.5","title":"rbmi 1.2.5","text":"CRAN release: 2023-09-20 Updated internal Stan code ensure future compatibility (@andrjohns, #390) Updated package description include relevant references (#393) Fixed documentation typos (#393)","code":""},{"path":"/news/index.html","id":"rbmi-123","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.3","title":"rbmi 1.2.3","text":"CRAN release: 2022-11-14 Minor internal tweaks ensure compatibility packages rbmi depends ","code":""},{"path":"/news/index.html","id":"rbmi-121","dir":"Changelog","previous_headings":"","what":"rbmi 1.2.1","title":"rbmi 1.2.1","text":"CRAN release: 2022-10-25 Removed native pipes |> testing code package backwards compatible older servers Replaced glmmTMB dependency mmrm package. 
resulted package stable (less model fitting convergence issues) well speeding run times 3-fold.","code":""},{"path":"/news/index.html","id":"rbmi-114","dir":"Changelog","previous_headings":"","what":"rbmi 1.1.4","title":"rbmi 1.1.4","text":"CRAN release: 2022-05-18 Updated urls references vignettes Fixed bug visit factor levels re-constructed incorrectly delta_template() Fixed bug wrong visit displayed error message specific visit doesn’t data draws() Fixed bug wrong input parameter displayed error message simulate_data()","code":""},{"path":"/news/index.html","id":"rbmi-111--113","dir":"Changelog","previous_headings":"","what":"rbmi 1.1.1 & 1.1.3","title":"rbmi 1.1.1 & 1.1.3","text":"CRAN release: 2022-03-08 change functionality 1.1.0 Various minor tweaks address CRAN checks messages","code":""},{"path":"/news/index.html","id":"rbmi-110","dir":"Changelog","previous_headings":"","what":"rbmi 1.1.0","title":"rbmi 1.1.0","text":"CRAN release: 2022-03-02 Initial public release","code":""}]