Now that we've successfully run TEPs-I and mapped and documented its processes and functionality, we need to plan out the structure of bdit_traffic_prophet, the Python successor to TEPs-I. This plan is a work-in-progress and will be periodically updated to reflect discussions between @aharpalaniTO and me.
Traffic Prophet
Countfill
Pipeline
Fit a FARIMA model to counts with sufficient data, varying model weights using the iterative technique described in Ganji et al. 2001.
Fill data gaps (and potentially extrapolate forward in time) for counts with sufficient data using the best-fit FARIMA.
Manually select spatial associations between "aggregated" counts (with more data) and "key" counts (with less data).
Fill data gaps and predict hourly traffic counts for key locations using Ganji et al. Eqn. 9.
Countmatch
Pipeline
ETL process to load data from 15-minute count zip files or from Postgres. Separate count data into blocks by centreline ID and year, and identify whether each block is a permanent count (data on more than 3/4 of the days of the year, with all 12 months represented) or a short-term count.
Determine candidate MADT, AADT, day-of-week-of-month (DoM) ADT, and scaling factors for permanent counts. Determine growth factors for permanent counts between years.
Determine nearest permanent count neighbours for each short-term count.
Estimate MADT, AADT, DoMADT, etc. for short-term counts using their nearest permanent count.
Determine closest DoMADT pattern between short-term and permanent count, and re-estimate AADT for short-term counts.
Validate by estimating values at permanent stations using the other permanent stations.
Consider unifying directional data in this module rather than after the next two.
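The permanent/short-term classification step could look something like this sketch. The 3/4-of-days and 12-month thresholds come from the plan above; the function name and the assumption that a block arrives as a daily pandas Series are illustrative.

```python
import pandas as pd


def is_permanent_count(daily_counts: pd.Series) -> bool:
    """Classify a one-year block of daily counts as permanent or short-term.

    Permanent: counts on more than 3/4 of the days in the year, with all
    12 months represented. Assumes a DatetimeIndex within a single year.
    """
    year = daily_counts.index[0].year
    days_in_year = pd.Timestamp(year, 12, 31).dayofyear  # 365 or 366
    enough_days = daily_counts.notna().sum() > 0.75 * days_in_year
    all_months = daily_counts.dropna().index.month.nunique() == 12
    return bool(enough_days and all_months)
```

Treating both thresholds as tunable hyperparameters (see Issues below) would mean lifting 0.75 and 12 into arguments.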
Issues
We may have to make significant changes to the ETL code to read from Postgres rather than Arman's zip files.
The number of permanent count locations is very small, and biased toward locations like the Gardiner Expressway. We can boost the number of permanent counts by augmenting our data using PECOUNT, or introducing additional data from Miovision or SCOOT.
Arman's code splits permanent and short-term counts into individual years. It is not obvious whether this is necessary to compare DoMADT patterns, or whether it was meant as a RAM-saving measure.
How growth factors are calculated is contentious, and we may need to explore alternative methods.
Investigate tuning permanent count and (KDTree-based) nearest neighbour search criteria as hyperparameters.
Investigate whether we can significantly relax the criteria for data to be included in validation, and isolate a portion of this data as a holdout set. We technically don't need to consider only permanent stations, just any station with sufficient data across multiple years.
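The KDTree-based nearest-neighbour search mentioned above is a one-liner with SciPy. A minimal sketch, assuming projected planar coordinates (e.g. UTM metres); the function name and array layout are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree


def nearest_permanent(perm_xy: np.ndarray, short_xy: np.ndarray, k: int = 1):
    """For each short-term count location, find the k nearest permanent
    count locations.

    perm_xy, short_xy: (n, 2) arrays of planar (x, y) coordinates.
    Returns (distances, indices into perm_xy).
    """
    tree = cKDTree(perm_xy)
    return tree.query(short_xy, k=k)
```

Making `k` a hyperparameter (along with any maximum-distance cutoff via `query`'s `distance_upper_bound`) fits the tuning idea raised above.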
RoadKridge
Pipeline
Prepare land use and road class data for arterial roads.
Associate AADT estimates with arterial road segments. Calculate distances between road segments and AADT estimate locations.
Estimate variogram for kriging using OLS.
Iterative kriging regression to estimate AADT on arterials. Could use PyKrige or SciKit-GStat.
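To make the kriging step concrete, here is a minimal dense-matrix ordinary-kriging sketch in plain NumPy. PyKrige or SciKit-GStat would replace this in practice (and would also fit the variogram, here hard-coded as an exponential model with illustrative parameters rather than fitted by OLS as the plan describes):

```python
import numpy as np


def exp_variogram(h, sill=1.0, rng=500.0, nugget=0.0):
    """Exponential semivariogram model (parameters are illustrative)."""
    return nugget + sill * (1.0 - np.exp(-h / rng))


def ordinary_krige(xy, z, xy0, sill=1.0, rng=500.0, nugget=0.0):
    """Ordinary kriging estimate of z at point xy0 from observations (xy, z).

    Solves the standard OK system: semivariogram matrix augmented with a
    Lagrange-multiplier row/column enforcing that weights sum to 1.
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = exp_variogram(d, sill, rng, nugget)
    K[n, :n] = K[:n, n] = 1.0
    K[n, n] = 0.0
    rhs = np.empty(n + 1)
    rhs[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1), sill, rng, nugget)
    rhs[n] = 1.0
    w = np.linalg.solve(K, rhs)[:n]  # drop the Lagrange multiplier
    return float(w @ z)
```

With a zero nugget this is an exact interpolator at the observation points, which gives a cheap sanity check before swapping in a library implementation.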
Issues
Much of the input data to KCOUNT was manually prepared, using code outside of TEPs. We'll need to reproduce these inputs from our own data, which might require disaggregating or interpolating neighbourhood- or census-tract-level data.
LocalSVR
Pipeline
Snap Countmatch outputs, land use, and road class data onto a grid.