This repository has been archived by the owner on May 16, 2023. It is now read-only.

Why is the attenuation weight for Medium range higher than for Close range? #775

Open
mthiop opened this issue Dec 10, 2021 · 14 comments
Labels
question Further information is requested

Comments

@mthiop

mthiop commented Dec 10, 2021

Hi,

I have a question regarding the attenuation and the associated weights.

My understanding is that the exposure time is weighted based on attenuation.
Isn't the assumption that a smaller attenuation (i.e., a closer contact) poses a higher risk than a larger attenuation?

According to the risk calculation parameters and Figure 16 of the solution architecture, the weight for "Close" range is 0.8 and the weight for "Medium" range is 1.0.
In a previous commit dated March 1, 2021, the "Close" range had a higher weight than the "Medium" range. This made sense.

Shouldn't the weight for "Close" range be higher than the "Medium" range, as it was before?

Is there a reason behind this change?
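For context, the weighting described above can be sketched as follows. This is a minimal illustration, not the actual CWA implementation: the attenuation thresholds below are made up, and only the 0.8 ("Close") and 1.0 ("Medium") weights come from Figure 16.

```python
# Illustrative sketch only (not the real CWA code): exposure time in each
# attenuation range is multiplied by that range's weight and summed.
# Bucket bounds are hypothetical; the weights are those quoted above.
ATTENUATION_BUCKETS = [
    # (upper attenuation bound in dB, weight)
    (55, 1.0),   # "Immediate" (hypothetical bound)
    (63, 0.8),   # "Close"  -- weight from Figure 16
    (73, 1.0),   # "Medium" -- higher than "Close", which prompted this issue
]

def weighted_exposure_minutes(scans):
    """scans: list of (attenuation_db, duration_minutes) measurements."""
    total = 0.0
    for attenuation_db, minutes in scans:
        for upper_bound, weight in ATTENUATION_BUCKETS:
            if attenuation_db <= upper_bound:
                total += minutes * weight
                break
        # attenuations above the last bound contribute nothing (weight 0)
    return total

# 10 min at "Close" range counts for less than 10 min at "Medium" range:
print(weighted_exposure_minutes([(60, 10)]))  # 10 * 0.8 = 8.0
print(weighted_exposure_minutes([(70, 10)]))  # 10 * 1.0 = 10.0
```

This makes the oddity concrete: with these weights, the same duration accumulates less risk at close range than at medium range.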

@mthiop mthiop added the question Further information is requested label Dec 10, 2021
@mlenkeit
Member

@mthiop this was the result of the analysis/optimization that Fraunhofer Institut did, see:

@Ein-Tim
Copy link
Contributor

Ein-Tim commented Feb 3, 2022

@mthiop Are you satisfied with @mlenkeit's answer, so that this issue can be closed?

@OlympianRevolution

I would actually like somebody from Fraunhofer to confirm or publish the analysis data.

@MikeMcC399
Contributor

@OlympianRevolution

I would actually like somebody from Fraunhofer to confirm or publish the analysis data.

The Science blog article: About the Effectiveness and Benefits of the Corona-Warn-App in the section What is the purpose of evaluating the CWA and what aspects play a role in the evaluation? includes a video showing how the calibration was carried out. It also says: "We also intend to provide more information about the results of these tests." I don't remember seeing any further data published, however.

@OlympianRevolution

For such a patently illogical conclusion, it would be good to publish the data, or at least a paper, to make sure this is not a typo.

@vaubaehn

vaubaehn commented Feb 4, 2022

I couldn't find any explanation in a quick internet search for the special case in question here. However, there are statistical considerations that could explain why the attenuation weights were adapted in this way:
When analysis of the former attenuation weight model reveals a high sensitivity (correct detection of high-risk contacts) but a poor specificity (i.e., incorrect detection of no-/low-risk contacts, resulting in too many false-positive high-risk contacts), you would refine the model (e.g., via a receiver operating characteristic curve) to better balance sensitivity and specificity. That could mean reducing the weights for all attenuation buckets to enhance specificity, which in turn lowers sensitivity too much. But by then increasing the weight of the medium-range attenuation bucket again, the result could be a recovered sensitivity with only a marginal loss in specificity, hence a better-balanced model.
Imagine a radio with an equalizer connected to loudspeakers that have only a large bass membrane and a powerful tweeter. If you turn all equalizer frequencies up to maximum, the sound may be distorted at the low and high frequencies. If you then pull down the low and high frequencies on the equalizer but leave the middle frequencies untouched, the sound becomes balanced.
That is what may have happened here.
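To put rough numbers on that trade-off (all figures below are hypothetical, purely to illustrate the definitions):

```python
# sensitivity = TP / (TP + FN): share of true high-risk contacts detected.
# specificity = TN / (TN + FP): share of true low-risk contacts correctly ignored.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Model A: high sensitivity, poor specificity (many false alarms)
print(sensitivity(tp=90, fn=10), specificity(tn=60, fp=40))  # 0.9 0.6
# Model B (re-weighted): slightly lower sensitivity, much better specificity
print(sensitivity(tp=85, fn=15), specificity(tn=85, fp=15))  # 0.85 0.85
```

Model B gives up a little sensitivity for a large gain in specificity, which is the kind of rebalancing described above.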

But I agree with you: it would be good to have an official publication about the study.

@mlenkeit
Member

mlenkeit commented Feb 4, 2022

@OlympianRevolution @vaubaehn thank you for your interest in the science behind this! It looks like there is indeed no scientific publication about this available yet. I'll let you know once something is published.

I suggest keeping the issue open until then.

@mlenkeit
Member

mlenkeit commented Feb 9, 2022

@OlympianRevolution @vaubaehn in the meantime, you might find this one here interesting (if you have access): https://ieeexplore.ieee.org/document/9662591

Let's still keep the issue open. There might be something else I can refer you to, soon 😉

@OlympianRevolution

Here is an open-access link:
https://www.researchgate.net/publication/357270880_Contact_Tracing_with_the_Exposure_Notification_Framework_in_the_German_Corona-Warn-App

@OlympianRevolution

Sadly, the optimized-parameters section does not go into detail about how the optimization was performed, or about how increasing the close-range weight would decrease the F2 score. It also does not discuss the counterintuitive short-range weight. I suspect there may have been relatively little data at short range, and thus the short-range weight may be due to chance. But without the data or evaluation code it is impossible to know.
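For reference, the F2 score is the F-beta measure with beta = 2, i.e. recall (sensitivity) weighted four times as heavily as precision, which fits contact tracing, where missing a risky contact is costlier than a false alarm. A minimal sketch, with hypothetical numbers:

```python
# F-beta score: weighted harmonic mean of precision and recall.
# beta = 2 (the "F2 score") counts recall beta^2 = 4 times as much as precision.
def f_beta(precision, recall, beta=2.0):
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical model with precision 0.6 and recall 0.9:
print(round(f_beta(precision=0.6, recall=0.9), 3))  # 0.818
```

Because of the heavy recall weighting, two parameter sets with similar F2 scores can still differ a lot in precision, so the score alone doesn't reveal why one bucket weight beat another.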

@Ein-Tim
Contributor

Ein-Tim commented Apr 19, 2022

@mlenkeit

Are you able to share an update with us here?

@mlenkeit
Member

mlenkeit commented Apr 22, 2022

@Ein-Tim no, the document that I was referring to at the time is not yet finalized. I'll check again with the author...

@AnonymousUserUse

@mlenkeit

Are you able to share an update with us here now?

@mlenkeit
Member

@AnonymousUserUse yes and no. I have checked with the author and the document is still being finalized. I don't have an ETA though.
