The problem you're addressing (if any)
Currently, test cases that rely on a TPM automatically FAIL when the TPM is missing. This causes problems for unsuspecting testers and when posting results on the dashboard. When no TPM module is plugged in, these tests are expected to fail, so such failures are quasi false negatives: the tests would obviously pass (or fail on their own merits) if the proper module were inserted.
Why keep elaborate error messages in the logs when the origin of the problem is already known?
Describe the solution you'd like
If there is a way to detect a missing module, we could telegraph it:

- add prompts that inform the operator a TPM is possibly missing
- ideally, make the affected tests return SKIP instead of FAIL in that case
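One way the detection step could look, sketched in shell. This is a minimal sketch, not the suite's implementation: the helper name `detect_tpm_version` is made up, and the only assumptions are standard Linux behavior, namely that an enumerated TPM appears under `/sys/class/tpm` and that recent kernels expose `tpm_version_major` there.

```shell
#!/bin/bash
# Hypothetical helper: report the detected TPM version ("1.2", "2.0",
# "unknown"), or "0" when no module is present. The sysfs directory is
# a parameter so the logic can be exercised against a fake tree.
detect_tpm_version() {
    local tpm_dir="${1:-/sys/class/tpm/tpm0}"

    # No TPM enumerated by the kernel -> report "0" (tests should SKIP).
    if [ ! -d "$tpm_dir" ]; then
        echo "0"
        return
    fi

    # Recent kernels expose the major version (1 or 2) in this attribute.
    if [ -r "$tpm_dir/tpm_version_major" ]; then
        case "$(cat "$tpm_dir/tpm_version_major")" in
            1) echo "1.2" ;;
            2) echo "2.0" ;;
            *) echo "unknown" ;;
        esac
    else
        echo "unknown"
    fi
}
```

A setup keyword could run such a probe over the DUT connection and turn a "0" result into a SKIP before any TPM command is attempted.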
Where is the value to a user, and who might that user be?
As a result, these changes would eliminate those quasi false negatives and:

- give space for actual issues to emerge
- speed up result analysis and log reading
- reduce the "immortal fails" phenomenon: failures posted in the XML file on the results dashboard with no way of removing them on your own
The PR mentioned by @BeataZdunczyk addresses this problem via the EXPECTED_TPM_VERSION variable.
If no TPM is present, get-robot-variables.sh sets that variable to 0, which causes the tests to be skipped.
If the config is set to expect TPM 2.0 or 1.2 and no module is present, the tests fail on a keyword, which should clearly indicate the lack of an installed module.
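The decision described above reduces to a small gate. The sketch below is illustrative only: the function name `tpm_test_gate` and the exact messages are assumptions, while the semantics (0 means skip, an expected-but-missing module is a clear fail) follow the comment above.

```shell
#!/bin/bash
# Hypothetical sketch: combine the configured EXPECTED_TPM_VERSION with
# the version actually detected on the DUT and decide what to do.
tpm_test_gate() {
    local expected="$1"   # from config: "0", "1.2" or "2.0"
    local detected="$2"   # from the DUT: "0" when no module is present

    if [ "$expected" = "0" ]; then
        # Config declares no TPM -> skip instead of producing noise.
        echo "SKIP"
    elif [ "$detected" = "0" ]; then
        # Config expects a module but none is installed -> explicit FAIL.
        echo "FAIL: expected TPM $expected, but no module is installed"
    elif [ "$expected" = "$detected" ]; then
        echo "RUN"
    else
        echo "FAIL: expected TPM $expected, found TPM $detected"
    fi
}
```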
I wonder if TPMCMD should even be included in regression testing. These tests were made for TwPM, and they test whether the TPM supports given commands. This has nothing to do with Dasharo, and Dasharo can't fix any potential issues caused by a poor TPM implementation.
Describe alternatives you've considered
No response
Additional context
Affected test cases:
MBO001.001
TPM001.001
TPM001.002
TPM001.003
TPM002.001
TPM002.002
TPM002.003
TPM003.001
TPM003.002
TPM003.003
TPMCMD001.001
TPMCMD002.001
TPMCMD003.001
TPMCMD003.002
TPMCMD004.001
TPMCMD005.001
TPMCMD006.001
TPMCMD007.001
TPMCMD007.002
TPMCMD008.001
TPMCMD009.001
TPMCMD010.001
TPMCMD011.001