
Using all prior admissions data to project future admissions #81

Open
samclifford opened this issue May 12, 2020 · 3 comments
Labels
enhancement New feature or request

@samclifford
Collaborator

Currently only the final day's cases are used to simulate the branching process outbreak. We can use the whole data set, convert to an incidence object (after filling in the unreported values) and simulate the branching process from there.
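The gap-filling step can be sketched as follows — an illustrative Python sketch (not the R `incidence` package itself), using hypothetical counts, that fills unreported days by linear interpolation before the series is used to seed the branching process:

```python
from datetime import date, timedelta

# Hypothetical admissions data: some days are unreported (3 May is missing)
admissions = {
    date(2020, 5, 1): 4,
    date(2020, 5, 2): 6,
    date(2020, 5, 4): 10,
    date(2020, 5, 5): 13,
}

def fill_unreported(counts):
    """Fill missing days by linear interpolation between reported days."""
    days = sorted(counts)
    filled = {}
    for i in range(len(days) - 1):
        d0, d1 = days[i], days[i + 1]
        gap = (d1 - d0).days
        for k in range(gap):
            day = d0 + timedelta(days=k)
            # Interpolate and round to a whole number of admissions
            filled[day] = round(counts[d0] + (counts[d1] - counts[d0]) * k / gap)
    filled[days[-1]] = counts[days[-1]]
    return filled

incidence = fill_unreported(admissions)  # one count per calendar day
```

The resulting daily series plays the role of the incidence object: a complete run of dates with one count each.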

For the exponential change in cases under the doubling/halving time model, it may be worth estimating the constant term of a Poisson GLM with log(λ) = a + rt and then predicting the expected number of admissions on the final date of admissions, so that a large drop in recent cases doesn't produce an artificially low baseline for future projections. If we are going to look at modelling, we might need to go down the road of estimating the doubling time from a time series of cases.
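A rough sketch of that baseline idea, with hypothetical counts; here a least-squares fit to the log counts stands in for the Poisson GLM fit, which is enough to show why the fitted final-day value is a steadier baseline than the raw final-day count:

```python
import math

# Hypothetical daily admissions with a dip on the final day
counts = [5, 6, 8, 10, 13, 7]
t = list(range(len(counts)))

# Fit log(lambda) = a + r*t by least squares on log counts
# (an approximation to the Poisson GLM, used only for illustration)
logs = [math.log(c) for c in counts]
n = len(t)
tbar = sum(t) / n
ybar = sum(logs) / n
r = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, logs))
     / sum((ti - tbar) ** 2 for ti in t))
a = ybar - r * tbar

# Baseline for projections: the fitted value on the final date,
# rather than the (noisy) observed final-day count
baseline = math.exp(a + r * t[-1])
doubling_time = math.log(2) / r  # days, meaningful when r > 0
```

Despite the final-day dip to 7 admissions, the fitted baseline reflects the overall growth trend, so projections don't start from an artificially low point.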

For the time being we can probably just read in all the data, check whether we're using the branching process, and pass in only the most recent row if using doubling/halving.
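That dispatch is one small function — a sketch under assumed names (`data_for_model` and the model labels are hypothetical, not the package's API):

```python
def data_for_model(rows, model):
    """Select which admissions rows feed the projection model.

    rows: list of (date, count) pairs, oldest first.
    model: "branching" or "doubling" (hypothetical labels).
    """
    if model == "branching":
        return rows      # the whole history seeds the branching process
    return rows[-1:]     # doubling/halving uses only the most recent row

rows = [("2020-05-10", 12), ("2020-05-11", 15), ("2020-05-12", 9)]
```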

@samclifford samclifford added the enhancement New feature or request label May 12, 2020
@samclifford samclifford self-assigned this May 12, 2020
@samclifford
Collaborator Author

Incidence object made for the branching process as of cf651c4

@samclifford
Collaborator Author

Model of exponential growth implemented in 48122c7

@samclifford
Collaborator Author

This is probably fine for now, until someone tries the doubling time approach on data that don't fit neatly into exponential growth or decay. In such a case it might be worth using only the last serial interval or two of data, or considering some sort of weighting scheme that down-weights data from far in the past (e.g. exponential decay with a half-life of one week).
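The suggested weighting scheme is straightforward: with a one-week half-life, an observation's weight halves for every seven days of age. A minimal sketch:

```python
HALF_LIFE_DAYS = 7  # one-week half-life, as suggested above

def decay_weight(age_days, half_life=HALF_LIFE_DAYS):
    """Weight for an observation age_days old: halves every half_life days."""
    return 0.5 ** (age_days / half_life)

# Today's data gets full weight; week-old data half; fortnight-old a quarter
weights = [decay_weight(age) for age in (0, 7, 14)]
```

These weights could then be passed to a weighted regression fit, so that old data that no longer follows the current growth phase has little influence on the estimated doubling time.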
