Binomial self report #11

Merged
merged 30 commits on Sep 14, 2023
68d10f0
Added binomial model to analyze self-report data
jeremy-haynes Jun 6, 2023
d61a80d
added check to preprocessing
jeremy-haynes Jun 6, 2023
d50a6de
fixed priors for binomial self-report model
jeremy-haynes Jun 7, 2023
d522a63
changed names for several files
jeremy-haynes Jun 10, 2023
1b9fdbf
added joint binomial model
jeremy-haynes Jun 11, 2023
901f2fd
changed names of models and files & started inspecting how the parame…
jeremy-haynes Jun 11, 2023
c897527
Update 1_IGT_PP/Code/R/1_Analyses/fitting_ORL_lineartime.Rmd
jeremy-haynes Jun 14, 2023
35a8fa9
Update 1_IGT_PP/Code/Stan/igt_selfreport_orl_binomial_joint_indMu.stan
jeremy-haynes Jun 14, 2023
1e926f9
updated files to reference correct datafile, added some extra fine-gr…
jeremy-haynes Jun 14, 2023
4cf696d
Merge branch 'Binomial-Self-Report' of https://github.com/hollysully/…
jeremy-haynes Jun 14, 2023
d2ef68d
Updated R files to be more concise and include more analysis of poste…
jeremy-haynes Jun 16, 2023
6f58143
joint orl-binomial model fitting RMD
jeremy-haynes Jun 21, 2023
e844397
added checks for ID orders for joint ORL-Binomial model
jeremy-haynes Jun 21, 2023
aed5339
Update 1_IGT_PP/Code/R/0_Preprocess/IGT_Preprocess.R
jeremy-haynes Jun 26, 2023
ab90ec0
Cleaned up correlations and consolidated analyses within model-fittin…
jeremy-haynes Jun 27, 2023
6fd790e
Updating linear time ORL model, but not finished - need to fix sessio…
jeremy-haynes Jun 28, 2023
d5ed9a5
Worked on calculating session 2 ORL estimates in linear time model
jeremy-haynes Jun 28, 2023
7bedb51
Fixed tabbing and removed whitespace on Joint ORL-Binomial model
jeremy-haynes Jun 28, 2023
d19b6e7
Updated plots for joint model and corrected proportion-scoring
jeremy-haynes Jul 12, 2023
d88df63
added carryover parameter to joint model (cow)
jeremy-haynes Jul 13, 2023
5998d89
added updated fictive-learning models to IGT & SIGT
jeremy-haynes Jul 24, 2023
a05cfc9
Update 1_IGT_PP/Code/R/1_Analyses/fitting_ORL_joint_updatedfic.Rmd
jeremy-haynes Aug 4, 2023
bd1953f
Update 1_IGT_PP/Code/Stan/igt_orl_joint_updatedfic.stan
jeremy-haynes Aug 4, 2023
0b38627
Updated model-comparison code and added function to streamline PPC plots
jeremy-haynes Aug 4, 2023
57f6d29
added model with updating only for plays
jeremy-haynes Aug 4, 2023
28f5ea5
fixed some coding errors in joint playfic playupdate modifiedk model
jeremy-haynes Aug 7, 2023
00e30b2
added log-ratios to Arew and Apun for testretest and increased iterat…
jeremy-haynes Aug 18, 2023
12327d7
add simplified version of model
Nathaniel-Haines Aug 20, 2023
07b2c76
Updated preprocessing scripts and PP ORL model
jeremy-haynes Sep 13, 2023
a702b33
Merge branch 'main' into Binomial-Self-Report
jeremy-haynes Sep 14, 2023
3 changes: 3 additions & 0 deletions .gitignore
@@ -26,13 +26,16 @@ Manuscript/
1_IGT_PP/Code/R/2_Plotting/group_PPCs_joint_indMu.Rmd
1_IGT_PP/Code/Stan/simple_IRT.stan
1_IGT_PP/Data/2_Fitted/IRT/
1_IGT_PP/Figs_Tables/self_report/
1_IGT_PP/Code/R/1_Analyses/Full_PANAS_simple_IRT.Rmd
1_IGT_PP/Code/R/1_Analyses/NA_PANAS_simple_IRT.Rmd
1_IGT_PP/Code/R/1_Analyses/PA_PANAS_simple_IRT.Rmd
1_IGT_PP/Code/R/2_Plotting/posterior_predictive_checks_joint_indMu.Rmd
1_IGT_PP/Code/R/1_Analyses/fitting_orl_joint_indMu.Rmd
1_IGT_PP/Code/Stan/self_report_mus.stan
1_IGT_PP/Code/R/1_Analyses/selfreport_indMu.RMD
1_IGT_PP/Code/R/1_Analyses/selfreport_1mu.RMD
1_IGT_PP/Code/R/1_Analyses/self_report_1mu.stan


#
@@ -104,7 +104,8 @@ AB <- mutate(AB, stim = recode(cardname, 'A'= 1, 'B' = 2, 'C' = 3, 'D' = 4), tim


BD$cardname
BD <- mutate(BD, stim = recode(cardname, 'A'= 3, 'B' = 1, 'C' = 4, 'D' = 2), time = SessionDate)
# BD <- mutate(BD, stim = recode(cardname, 'A'= 3, 'B' = 1, 'C' = 4, 'D' = 2), time = SessionDate)
BD <- mutate(BD, stim = recode(cardname, 'A'= 1, 'B' = 2, 'C' = 3, 'D' = 4), time = SessionDate)



@@ -267,10 +268,10 @@ unique(Sess2_IGT$ExperimentName)
# Save IGT dataframes as .csv files
############################################################################################################

Sess1_IGT_dat = here("Data", "1_Preprocessed", "Sess1_IGT.csv")
Sess1_IGT_dat = here("1_IGT_PP", "Data", "1_Preprocessed", "Sess1_IGT.csv")
write.csv(Sess1_IGT,Sess1_IGT_dat)

Sess2_IGT_dat = here("Data", "1_Preprocessed", "Sess2_IGT.csv")
Sess2_IGT_dat = here("1_IGT_PP", "Data", "1_Preprocessed", "Sess2_IGT.csv")
write.csv(Sess2_IGT,Sess2_IGT_dat)


@@ -337,7 +338,7 @@ for (session in 1:2) {
)

# save Sess 1 or 2 lists as .rds
saveRDS(dataList, file = here("Data", "1_Preprocessed", paste0("Sess", session, ".rds")))
saveRDS(dataList, file = here("1_IGT_PP", "Data", "1_Preprocessed", paste0("Sess", session, ".rds")))
}

# I'm here, this is old code for preprocessing the raw data for running a joint ORL model
@@ -378,7 +379,8 @@ maxTrials <- max(Tsubj)
# Behavioral data arrays
# this creates 3 identical arrays of [1:50, 1:100, 1:2]; the array is filled with -1 values
# array names are RLmatrix, SRLmatrix, choice
RLmatrix <- SRLmatrix <- choice <- card <- array(-1, c(numSubjs, maxTrials,2))
RLmatrix <- SRLmatrix <- choice <- card <- IDs <- array(-1, c(numSubjs, maxTrials,2))


# Loop through and format into 3 arrays: choice, RLmatrix, SRLmatrix
for (i in 1:numSubjs) {
@@ -392,11 +394,14 @@ for (i in 1:numSubjs) {
card[i,,s] <- tmp_dat$stim
RLmatrix[i,,s] <- tmp_dat$rewlos
SRLmatrix[i,,s] <- tmp_dat$Srewlos
IDs[i,,s] <- tmp_dat$Subject
}
}
}


# Check that the for loop above iterates through subject IDs in the same order as subjList
(condensedIDs = apply(IDs, c(1,3), mean)) # Visually check they are the same (-1 means missing data on that session)
ifelse(mean(condensedIDs[,1] == subjList) == 1, "Equal", "Not Equal") # Check if IDs in for loop equal IDs in subjList
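The check above collapses the per-trial ID array to one ID per (subject, session) via `apply` and compares the session-1 column against `subjList`. Re-expressed in Python purely for illustration (function and variable names are hypothetical; the repo's actual code is the R above):

```python
def check_id_order(ids_array, subj_list):
    """Collapse a subject x trial x session ID array to one ID per
    (subject, session) by averaging over trials (every trial of a session
    carries the same ID, or -1 when the session is missing), then verify
    the session-1 column matches the subject list order. A sketch of the
    R apply/ifelse check; not the repo's actual code."""
    condensed = [
        [sum(subj[t][s] for t in range(len(subj))) / len(subj)
         for s in range(len(subj[0]))]
        for subj in ids_array
    ]
    session1 = [row[0] for row in condensed]
    return "Equal" if session1 == list(subj_list) else "Not Equal"

# Two subjects, 3 trials each, 2 sessions; subject 2050 missed session 2
ids = [[[2049, 2049]] * 3, [[2050, -1]] * 3]
check_id_order(ids, [2049, 2050])  # -> "Equal"
```

As in the R version, a `-1` mean simply flags a missing session; only the session-1 ordering is asserted against `subjList`.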

# Put in stan-ready list
stan_dat <- list(
@@ -412,5 +417,5 @@ stan_dat <- list(
subjIDs = subjList
)

saveRDS(stan_dat, file = here("1_IGT_PP", "Data", "1_Preprocessed", "stan_ready_ORL_joint_retest.rds"))
saveRDS(stan_dat, file = here("1_IGT_PP", "Data", "1_Preprocessed", "stan_ready_ORL_IGT.rds"))

94 changes: 0 additions & 94 deletions 1_IGT_PP/Code/R/0_Preprocess/Preprocess_Joint1.Rmd

This file was deleted.

176 changes: 176 additions & 0 deletions 1_IGT_PP/Code/R/0_Preprocess/SelfReport_Preprocess.R
@@ -0,0 +1,176 @@



library(dplyr)
library(tidyverse)
library(haven)
library(lubridate)
library(readxl)
library(here)



# -----------------------------------------------
# MEASURES
# * Demographics
# + ID, sex, race, Ethnicity
#
# * Behavioral Activation Scale
# + bastot = ?
# + basdrive = Drive subscale: measures persistent pursuit of goals
# + basfunsk = Fun Seeking subscale: measures desire for and approach towards new rewards
# + basrewres = Reward Responsiveness subscale: measures positive responses to the anticipation or consummation of rewards
#
# * Behavioral Inhibition Scale: measures sensitivity to negatively-valenced events
# + bis
#
# * Positive Affect/Negative Affect Schedules: measures state-level mood
# + panas_pa = Positive Affect
# + panas_na = Negative Affect
#
# * Mood and Anxiety Symptom Questionnaire (MASQ): assesses internalizing symptoms experienced in past week
# + masqGDA = General Distress Anxiety subscale: measures general anxious mood
# + masqAA = Anxious Arousal subscale: measures somatic hyperarousal
# + masqGDD = General Distress Depression subscale: measures general depressed mood
# + masqAD = Anhedonic Depression subscale: measures low positive affect
#
# * Snaith-Hamilton Pleasure Scale: measures ability to experience pleasure in last few days
# + shaps_tot
#
# * Patient-Reported Outcomes Measurement Information System: assesses symptoms of depression experienced in past week
# + prdep_tot



# -----------------------------------------------
## IMPORT DATA
session1 = read_sav(here("1_IGT_PP", "Data", "0_Raw", "MergedQuest_3.21.16-Session1.sav"),
col_select = c("ID", "sex", "race", "Ethnicity",
"bastot", "basdrive", "basfunsk",
"basrewres", "bis", "panas_pa",
"panas_na", "masqGDA", "masqAA", "masqGDD",
# "masqAD", # Removed
"shaps_tot", "prdep_tot")) %>%
mutate(session = 1) %>% # Create session variable
filter(ID >= 2049, # Subset PP participants
ID != 2059, # Remove participant that played old IFT
ID != 2083) # Remove participant that had no task data

session2 = read_sav(here("1_IGT_PP", "Data", "0_Raw", "MergedQuest_3.21.16-Session2.sav"),
col_select = c("ID", "bastot", "basdrive", "basfunsk",
"basrewres", "bis", "panas_pa",
"panas_na", "shaps_tot")) %>%
mutate(session = 2) %>% # Create session variable
filter(ID >= 2049, # Subset PP participants
ID != 2059, # Remove participant that played old IFT
ID != 2083, # Remove participant that had no task data
!ID %in% c(2057, 2063, 2064, # Remove participants that didn't do session 2
2067, 2086, 2090,
2093, 2094, 2096, 2098))

both_sessions = bind_rows(session1, session2) %>%
select(ID, session,
# Session 1 only
prdep_tot, masqGDA, masqAA, masqGDD,
# masqAD, # Removed
# Both sessions
bastot, basdrive, basfunsk, basrewres,
bis, panas_pa, panas_na, shaps_tot,
# Remove demographics
-c(sex, race, Ethnicity)) %>%
pivot_longer(cols = -c("ID", "session"), values_to = "score", names_to = "scale")



# -----------------------------------------------
# PREP DATA
IDs = unique(both_sessions$ID)
N = length(IDs)

# Make dataframe describing each scale
scales = data.frame(scale = c(# Session 1 only
"prdep_tot", "masqGDA", "masqAA", "masqGDD",
# "masqAD", # Removed
# Both Sessions
"bastot", "basdrive", "basfunsk", "basrewres",
"bis", "panas_pa", "panas_na", "shaps_tot"),
n_items = c(28, 11, 17, 12, 13, 4, 4, 5, 7, 10, 10, 14),
min_item_score = c(0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0),
max_item_score = c(4, 5, 5, 5, 4, 4, 4, 4, 4, 5, 5, 1),
n_sessions = c(1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2)) %>%
mutate(min = n_items * min_item_score,
max = n_items * max_item_score,
scaled_max = max - min)

# Make scaled scores and check out which scores are below the minimums - for those scores, we'll truncate them to 0
full_data = both_sessions %>%
left_join(scales) %>%
mutate(scaled_score = case_when(score-min < 0 ~ 0, is.numeric(score) ~ as.integer(round(score - min))))
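The rescaling above shifts each raw scale total so its floor is 0 (subtracting `n_items * min_item_score`) and truncates any below-minimum score to 0, giving a count out of `scaled_max` suitable for a binomial model. A minimal Python sketch of that `case_when` logic (illustrative names only):

```python
def rescale_score(score, n_items, min_item_score, max_item_score):
    """Shift a raw scale total to a 0-based count and truncate values that
    fall below the theoretical minimum, mirroring the scaled_score mutate
    above. Returns (scaled score, scaled maximum). Sketch only."""
    lo = n_items * min_item_score        # minimum possible raw total
    hi = n_items * max_item_score        # maximum possible raw total
    scaled = round(score - lo)           # shift so the floor is 0
    return max(0, scaled), hi - lo       # truncate below-floor scores to 0

# e.g. the BIS row of the scales dataframe: 7 items scored 1-4,
# so raw totals span 7..28 and scaled scores span 0..21
rescale_score(20, 7, 1, 4)  # -> (13, 21)
```

The truncation-to-0 branch corresponds to the `score - min < 0 ~ 0` case; everything else is rounded and shifted exactly as in the R pipeline.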

# To check for range problems, uncomment & run below
# full_data %>%
# mutate(raw_pbx = case_when(score > max ~ "too high", score < min ~ "too low", T ~ ""),
# scaled_pbx = case_when(scaled_score > scaled_max ~ "too high", scaled_score < 0 ~ "too low", T ~ "")) %>%
# View(title = "Check_Pbxs")


# Create lists to store data in for each scale
participants = list()
scores = list()
missingness = list()
maxes = list()

for(l in 1:length(scales$scale)){
# Create matrices for the current scale in each list
participants[[scales$scale[l]]] = matrix(-999, nrow = N, ncol = scales$n_sessions[l])
scores[[scales$scale[l]]] = matrix(data = -999, nrow = N, ncol = scales$n_sessions[l])
missingness[[scales$scale[l]]] = matrix(data = 1, nrow = N, ncol = scales$n_sessions[l])

maxes[[scales$scale[l]]] = filter(scales, scale == scales$scale[l])[,"scaled_max"] # Save max score for current scale

cur_scale = filter(full_data, scale == scales$scale[l]) # Subset scores from scale of interest

for(s in 1:scales$n_sessions[l]){
cur_session = filter(cur_scale, session == s) # Subset scores from session of interest

for(i in 1:N){
cur_obs = filter(cur_session, ID == IDs[i]) # Subset score from person of interest
participants[[scales$scale[l]]][i,s] = IDs[i]

if(length(cur_obs$scaled_score)){ # Check if data is missing, if not, then get data
scores[[scales$scale[l]]][i,s] = cur_obs$scaled_score # Save score in element of matrix within the scale's score matrix
missingness[[scales$scale[l]]][i,s] = 0 # Save data as not missing in element of matrix within the scale's missingness matrix
}
}
}
}
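The triple loop above turns long-format (ID, session, score) rows into an N x S score matrix with a `-999` sentinel for missing cells, plus a parallel 0/1 missingness matrix (1 = missing) for the Stan model. A compact Python sketch of the same structure, assuming hypothetical names (the repo's actual code is the R loop above):

```python
def build_score_matrices(long_rows, ids, n_sessions):
    """Fill an N x S score matrix (-999 = missing) and a matching
    missingness matrix (1 = missing, 0 = observed) from long-format
    (ID, session, score) tuples. Sessions are 1-indexed as in the R code."""
    n = len(ids)
    scores = [[-999] * n_sessions for _ in range(n)]
    missing = [[1] * n_sessions for _ in range(n)]
    lookup = {(pid, sess): sc for pid, sess, sc in long_rows}
    for i, pid in enumerate(ids):
        for s in range(1, n_sessions + 1):
            if (pid, s) in lookup:             # observation exists
                scores[i][s - 1] = lookup[(pid, s)]
                missing[i][s - 1] = 0          # mark as observed
    return scores, missing

rows = [(2049, 1, 13), (2049, 2, 15), (2050, 1, 9)]  # 2050 missed session 2
scores, missing = build_score_matrices(rows, [2049, 2050], 2)
# scores  -> [[13, 15], [9, -999]]
# missing -> [[0, 0],  [0, 1]]
```

Pre-filling with sentinels and flipping a flag on observation is what lets the Stan model skip missing cells via the `missing` matrix rather than relying on the `-999` placeholder itself.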


stan_datas = list()

for(scale in unique(scales$scale)){
stan_datas[[scale]] = list(IDs = participants[[scale]],
N = nrow(scores[[scale]]), # Number of participants for current scale
S = ncol(scores[[scale]]), # Number of sessions for current scale
missing = missingness[[scale]], # Matrix of missingness for current scale
score = scores[[scale]], # Matrix of scores for current scale
M = maxes[[scale]]) # Max score for current scale
}


saveRDS(stan_datas, file = here("1_IGT_PP", "Data", "1_Preprocessed", "stan_ready_binomial_selfreport.rds"))














