feat(StopPageRedesign): "no service" via route patterns #1785
Conversation
and add more useful names to some variables
- keep more headsigns in the response, thus providing a mechanism for seeing headsigns that are not being served on a route today
- also add direction_id to output, which will be used in the frontend
- show "No service today" in departure card
- show "No service today, (date)" in departure list
Excited for this change! 🎉 Added a few questions & comments. Looks like some checks are failing too.
Seeing 282 Elixir unit test failures, which is 1 more than the last merge to main, but I didn't dig into which one is the difference.
```elixir
  required(:direction_id) => 0 | 1,
  required(:route_patterns) => [RoutePattern.t()]
}
@type by_route_and_headsign :: %{Route.id_t() => %{String.t() => headsign_info}}
```
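For illustration, a `by_route_and_headsign` value might look like this (toy data; the headsigns and direction values are made up, and the pattern lists are left empty where real data would hold `RoutePattern.t()` structs):

```elixir
%{
  "66" => %{
    "Harvard" => %{direction_id: 0, route_patterns: []},
    "Nubian" => %{direction_id: 1, route_patterns: []}
  }
}
```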
question: Do these backend changes need to be deployed separately from the frontend changes in a backwards-compatible way? What would a user on the stop page see during a deploy if they have the new frontend and the old backend (or the old frontend and the new backend)?
we don't do separately versioned backends and frontends here; they'll be deployed together 😉
I may be missing a nuance here, but my previous understanding on Skate was that this would be considered a breaking change. Even though the frontend and backend are deployed together, there will be a period of time where some ECS instances have the old backend and some instances have the new backend, during which someone using the stop page has no control over whether the new or old instances are hit, so there could be a mismatch between the data that is returned and what the frontend expects. Is that situation avoided with how dotcom is set up?
we've never thought to mitigate this before! but when you explain it here it makes sense...aah 🫠 🫠 🫠 🫠
I do think that problem might be less frequently encountered on Dotcom, because folks don't have it open very long per session (just a few minutes -- I presume folks working in Skate maintain a much longer working session during typical use). I'll have to figure out how to handle that more systematically, but for the sake of this PR I'll consider it out of scope. But thanks so much for raising this point!!
I wonder if, because sessions are shorter on Dotcom, this could be more of an issue, because users might be hopping between things more in a short duration. Here is a search in Splunk that shows how many IPs hit this endpoint more than once in the last X minutes. I'm seeing 24/85 IPs that hit this endpoint in the last 5 minutes hit it more than once. If that is representative of traffic during a deploy, then ~25% of users on the stop page could receive errors on the page during a deploy.
A way we typically handled this issue on Skate is to split this work over multiple PRs, potentially by adding a new backend endpoint in one PR, then removing the old endpoint once the frontend has been updated to stop calling it. Would that kind of approach be feasible here?
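For illustration, a minimal sketch of that two-PR shape in a Phoenix router (the module, paths, and action names here are hypothetical, not taken from this PR):

```elixir
defmodule DotcomWeb.Router do
  use Phoenix.Router

  # PR 1: ship the new endpoint alongside the old one, so old and new
  # frontends both keep working while mixed instances are live.
  get "/api/stops/:stop_id/route-patterns-v2", DotcomWeb.StopController, :route_patterns_v2

  # PR 2, once the frontend only calls the new endpoint: delete this route.
  get "/api/stops/:stop_id/route-patterns", DotcomWeb.StopController, :route_patterns
end
```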
```elixir
|> Enum.map(&add_polyline/1)
|> Enum.group_by(& &1.route_id)
|> Enum.map(fn {k, v} -> {k, Enum.group_by(v, & &1.headsign)} end)
|> Stream.reject(&ends_at?(&1, stop_id))
```
question: Did switching to stream improve performance? I find the use of `Enum.group_by` more readable if it performs roughly the same. Potentially `Enum.group_by/3` would help, where the 3rd param is a function to transform the value, which could cut down on some of the passes through the list.
related non-blocking thought: Would it help simplify this logic if `by_route_and_headsign` was something like `%{Route.id_t() => [headsign_info]}`, where `headsign_info` also contains a headsign field? I'm not sure if that would have negative effects on the frontend side of things though.
ooh I forgot about `Enum.group_by/3`, I'll give this a try
```elixir
  |> Map.new()
end

@spec with_headsign_info([RoutePattern.t()]) :: headsign_info
```
suggestion: Consider renaming this - `with_headsign_info` reads to me like it is only adding headsign information to the route patterns, but this is also filtering & adding polylines
no longer needed with the copy change
Sorry about the failing tests -- those particular failures have been popping up more here; I think it's from a race condition with how the workflow steps build the application before running the tests 😵‍💫 Anyway, thanks @KaylaBrady for the many great points, I've added some revisions here.
```elixir
|> Enum.group_by(& &1.route_id)
|> Enum.map(&with_headsign_groups/1)
|> Map.new()
```
suggestion (non-blocking): this could save a couple of passes through the list of routes

```diff
-|> Enum.group_by(& &1.route_id)
-|> Enum.map(&with_headsign_groups/1)
-|> Map.new()
+|> Enum.group_by(& &1.route_id, &with_headsign_groups/1)
```
```elixir
  do: {headsign, with_annotation(route_patterns)}

@spec with_annotation([RoutePattern.t()]) :: headsign_info
defp with_annotation(headsign_route_patterns) do
```
suggestion (non-blocking): `Enum.reject(&not_serving_today?/1)` still seems a bit like it doesn't match the name of this function. Maybe this filtering could be done as a separate step in `with_headsign_groups`?
```elixir
defp with_headsign_groups(route_patterns) do
  route_patterns
  |> Enum.group_by(& &1.headsign)
  |> Enum.reject(&not_serving_today?/1)
  |> Enum.map(&with_annotation/1)
  |> Map.new()
end
```
Approved with offline discussion about preventing breaking changes in the future!
Summary of changes
Asana Ticket: No ticket
This gave me flashbacks to earlier in this page's development when we were struggling to distinguish "doesn't serve this stop" vs "no schedules or predictions for this stop." But I think I figured out an okay approach.
Background
The changes from #1762 went too far and ended up leaving us unable to display any route card when no service is running. It turns out that produces a really jarring display on certain bus stop pages.
For example, there are several bus routes which only operate on weekdays -- visiting the stop pages serving these routes on a weekend will greet you with a cryptic blank display:
(screens from a technically unrelated bug report, but that's also the blank display you get in the scenario described above – I failed to take a screenshot of a relevant stop over the weekend 😅)
So I wanted to make another attempt at showing route cards in the case where transit service isn't available today.
It was somewhat straightforward – we needed to keep the list of route patterns intact, instead of prematurely removing them before grouping them by route and headsign. A rough pseudocode sketch of the idea follows.
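In this sketch, `not_serving_today?/1` is the existing check, while `annotate_service/1` is a hypothetical stand-in for the new step that marks a headsign group's service status instead of dropping it:

```elixir
# Before (#1762): patterns without service today were dropped up front,
# so whole routes and headsigns vanished from the response.
route_patterns
|> Enum.reject(&not_serving_today?/1)
|> Enum.group_by(& &1.route_id)

# After: keep every pattern, group by route and headsign first, and
# annotate each headsign group with whether it is served today.
route_patterns
|> Enum.group_by(& &1.route_id)
|> Map.new(fn {route_id, patterns} ->
  groups =
    patterns
    |> Enum.group_by(& &1.headsign)
    |> Map.new(fn {headsign, ps} -> {headsign, annotate_service(ps)} end)

  {route_id, groups}
end)
```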
Highlights
I tried to split this work into smaller commits that each describe an incremental change, but that goal inspired modest changes on both the backend and frontend.
Backend
- Moved the `Enum.reject(&not_serving_today?/1)` operation in the grouped route patterns endpoint to operate over a list of route patterns for each route, instead of removing route patterns before we group the list by route+headsign. That way we could keep all routes+headsigns as part of the response, even if they're not associated with today's service.

Less relevant changes:
Frontend
Most changes are adapting to the changed backend, e.g. rendering the "No service today" messages in the `<DepartureList />`.
Reviewing the result
To preview the no-service display, set the `hasService` prop to `false`.
General checks