Mixed concerns in metrics package #44
Comments
Ping @Particular/metrics-maintainers
Thanks @Scooletz for the great write-up of my initial weird thoughts!
Why would we still need an attribute? The attribute was previously only used to iterate over all probes to generate the script for the perfcounters. |
You still need the attribute for the compile-time inspection.
@danielmarbach I think that, with all the ordering/recreation/checking problems with
@Scooletz I think it would be easy to reflect over the new static const file and generate counters according to that. But maybe you are right and each counter requires manual mapping anyway. Though it would be a shame to drop and deprecate the build packages again. Maybe there is a good mapping we can come up with that works for most counters in a generic way, and then keep the generation on only for the rest.
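For what it's worth, a tiny sketch of the "reflect over a static const file" idea; the `CounterNames` class and its entries here are hypothetical stand-ins, not the actual package contents:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical: stands in for whatever static const file the Metrics package would ship.
static class CounterNames
{
    public const string ProcessingSuccessful = "# of msgs successfully processed / sec";
    public const string CriticalTime = "Critical Time"; // second entry, purely illustrative
}

static class CounterScriptGenerator
{
    static void Main()
    {
        // Enumerate the public const string fields and emit one counter per field.
        var names = typeof(CounterNames)
            .GetFields(BindingFlags.Public | BindingFlags.Static)
            .Where(f => f.IsLiteral && f.FieldType == typeof(string))
            .Select(f => (string)f.GetRawConstantValue());

        foreach (var name in names)
        {
            Console.WriteLine($"counter: {name}");
        }
    }
}
```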
After proposing Particular/NServiceBus.Metrics.PerformanceCounters#27, I think we can combine these approaches, @danielmarbach. Let me provide a quick sketch:
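Roughly (all type and member names below are placeholders, not the real API): the probe exposes only a stable id via an attribute, and the PerformanceCounters package owns everything display-related.

```csharp
using System;

// Placeholder names only, not the actual NServiceBus.Metrics API.
[AttributeUsage(AttributeTargets.Class)]
sealed class ProbeIdAttribute : Attribute
{
    public ProbeIdAttribute(string id) => Id = id;
    public string Id { get; }
}

interface IProbe
{
    string Id { get; }
}

// The probe carries only a stable id; names, descriptions and aggregation live
// in the PerformanceCounters package, which can still generate its counters
// from a static id-to-name mapping.
[ProbeId("processing-successful")]
class ProcessingSuccessfulProbe : IProbe
{
    public string Id => "processing-successful";
}
```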
This, combined with the new approach for recreation, looks to me like the best option for both: keeping the packages truly separated and not putting too much effort into providing the next counters.
I vote 👎 to keeping the generation and 👍 for a simple static script. As I understand it, the original idea was that we generate the counters so that we could define new metrics in I always thought that was a bad idea because now we need to make the end-user aware of the fact that if Besides that
I agree with @Scooletz that this is the best approach to move forward as of now, but I leave it up to the judgement of the maintainer group.
@Scooletz Nice issue about the responsibility of who defines the probe and who defines the metric, as I noted prior to my holiday. I wanted to add that we could even define a new metric in the performance counters package based on the current probes. For example, currently we have the metric stating the successful rate, but we could also add a total successful processed messages counter to the package. A new metric isn't bound specifically to a newly exposed probe. We can also still maintain a human-readable name that could be used in the downstream metric provider. It's just that an ID would help in having a better mapping and makes clear that the ID is more or less used as a key:
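For instance (the types and literals below are invented for this sketch, not taken from the actual packages), a single probe id can back more than one counter, each keeping its own human-readable name for the downstream provider:

```csharp
using System.Collections.Generic;

// Invented for this sketch: a single probe id acts as the key, and the
// performance counters package decides which counters to derive from it.
static class CounterCatalog
{
    public static readonly IReadOnlyDictionary<string, string[]> CountersByProbeId =
        new Dictionary<string, string[]>
        {
            ["processing-successful"] = new[]
            {
                "# of msgs successfully processed / sec", // the existing rate counter
                "Total msgs successfully processed"       // a hypothetical total counter
            }
        };
}
```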
The mapping remains, since a metric provider can have its own naming convention, as Prometheus has.
@dvdstelt Maybe this belongs in a separate issue?
@ramonsmits No, they need to be tested thoroughly by someone who knows how the counters worked and can verify if they still work that way. Or have two versions side by side to check if they produce the same results. I'm not blaming anyone, because mistakes can be made and should be learned from; that's why I mention it. But the counters used to be in seconds, then changed to milliseconds, and were changed back to seconds. That's what I meant by testing properly. We're now just making a bigger issue of it than necessary. :-)
This issue is a result of a discussion with @danielmarbach.
Problem
The `NServiceBus.Metrics` package mixes concerns, providing information that is not required by the package itself. The information is required by the `PerformanceCounters` package, but this could be addressed in another way, rather than by polluting `Metrics` with unneeded information. Additionally, reporting to `SC.Monitoring` is now based on the transformed `.Name` of a probe, which doesn't seem to be the best way to identify a probe.
Description
Please find below the details of this problem.
ProbeProperties
Below you can find a definition of a probe signaling the successful processing of a message.
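A hedged sketch of such a definition follows; the attribute shape and the literal strings are assumptions made for this illustration, not copied from the package:

```csharp
using System;

// The attribute shape below is an assumption made for this illustration.
[AttributeUsage(AttributeTargets.Class)]
sealed class ProbePropertiesAttribute : Attribute
{
    public ProbePropertiesAttribute(string name, string description)
    {
        Name = name;
        Description = description;
    }

    public string Name { get; }
    public string Description { get; }
}

// The name already encodes the aggregation ("/ sec") and the description talks
// about reporting, even though the probe itself only signals that a single
// message was processed successfully.
[ProbeProperties(
    "# of msgs successfully processed / sec",
    "The number of messages successfully processed per second.")]
class ProcessingSuccessfulProbe
{
}
```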
The current schema of probe properties delivers information not about the probe itself, but about the aggregation method and the reporting period. These values clearly do not belong to this package, as it knows nothing about the way the signal will be used. It might, for example, be aggregated, or have its date of occurrence reported. What if, instead of aggregating per second, we'd like to provide a total?
Reporting
The reporters for `SC.Monitoring` are created with the `MetricType` header constructed on the basis of the name with spaces removed.
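For illustration (a sketch, not the actual reporter code), that transformation and its result:

```csharp
using System;

// Sketch, not the actual reporter code: the MetricType header ends up being the
// probe's display name with the spaces stripped out.
var name = "# of msgs successfully processed / sec";
var metricType = name.Replace(" ", "");

Console.WriteLine(metricType); // prints "#ofmsgssuccessfullyprocessed/sec"
```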
Possible solutions
Below you can find the possible solutions.
Breaking changes
Remove the name and the description from `IProbe`, leaving just `Id` in there. This would require the following things to happen:

- `NSB.Metrics` v2 would need to be released with new interfaces: `IProbe.Id`, and `ProbeProperties` would turn into `ProbeIdAttribute`
- `NSB.Metrics.PerformanceCounters` would need to include all the mappings from the newly assigned ids to names and descriptions. The identity map would need to be created and would possibly break the build if no mapping exists for a probe that needs to be exposed (see the sketch after this list)
- `SC.Monitoring` recognition of new `MetricType` values should be added. Instead of `# of msgs successfully processed / sec` for the processing success, it should be `processing-successful`, which clearly states what the signal is about, leaving the aggregation, etc., to the consumer.
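A minimal sketch of the "break the build if no mapping exists" part, using hypothetical probe ids and a hypothetical mapping dictionary; this would run as a build-time or test-time check, not as production code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class CounterMappingCheck
{
    // Hypothetical: every probe id exposed by the Metrics package...
    // ("processing-failed" is deliberately left unmapped to show the failure.)
    static readonly string[] ExposedProbeIds = { "processing-successful", "processing-failed" };

    // ...must have a name/description mapping in the PerformanceCounters package.
    static readonly Dictionary<string, (string Name, string Description)> Mappings = new()
    {
        ["processing-successful"] =
            ("# of msgs successfully processed / sec", "Illustrative description.")
    };

    static void Main()
    {
        var unmapped = ExposedProbeIds.Where(id => !Mappings.ContainsKey(id)).ToList();
        if (unmapped.Any())
        {
            // Throwing here fails the build/test run: the "identity map would
            // possibly break the build" part of the proposal.
            throw new Exception($"No counter mapping for probe id(s): {string.Join(", ", unmapped)}");
        }
    }
}
```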
Not so breaking changes
We could use `.Name` as the `Id`.

@Scooletz personal opinion: being explicit about the id (yes, it would require a re-release) is my preferred way of doing it.