Currently, a counter for each test scenario is tracked and exported during fuzz testing.
For example, some random initialisations may not allow one to initialise a valid dim contract, even though one may want to test something for which a valid dim contract is required. One can overcome this in multiple ways:

1. Write an expectRevert or validity assertion for each configuration of random values.
2. Check whether the random test parameters can yield a valid configuration, and if so, test the case you want to test.
3. ... other ways.
Option 1 would lead to large, complicated test files.
Option 2 is chosen, yet it could lead to false positives: one may assume a fuzz test covers some (edge) case even though the actual test is never reached, because the random initialisations never satisfy the conditions required to reach that case.
To manage these potential false positives, a tracker is built that logs to an output file, per test run, how often each case is reached, so that one can see how often the fuzz test actually exercised the case it is meant to test.
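The guard-plus-counter pattern could look like the following sketch, assuming Foundry's `vm.writeLine` cheatcode and `fs_permissions` in `foundry.toml`; the contract, path, and helper names (`FuzzWithTracker`, `_LOG_PATH`, `_isValidDimConfig`) are illustrative, not the actual implementation:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Test} from "forge-std/Test.sol";

// Hypothetical sketch of the per-case hit tracker.
contract FuzzWithTracker is Test {
    // Requires fs_permissions = [{ access = "read-write", path = "./test_logs" }]
    // in foundry.toml.
    string constant _LOG_PATH = "test_logs/case_hits.txt";

    function testFuzz_something(uint256 a, uint256 b) public {
        if (!_isValidDimConfig(a, b)) {
            // The random inputs cannot form a valid dim contract;
            // return without asserting anything (this is Option 2).
            return;
        }
        // Record that the guarded case was actually reached.
        vm.writeLine(_LOG_PATH, "testFuzz_something:valid_dim_case");

        // ... the actual assertions for the valid-dim case go here ...
    }

    function _isValidDimConfig(uint256 a, uint256 b) internal pure returns (bool) {
        return a > 0 && b > a; // placeholder validity condition
    }
}
```

Appending a line per hit (rather than keeping an in-memory counter) sidesteps the fact that contract state is rolled back between fuzz runs.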
Current state
Currently, one can manually inspect the logs to verify that the fuzz test has reached the actual test cases.
Risky state
One could add a requirement for the fuzz tests to run until each (relevant) test case is hit n times. However, that is risky because a test developer may erroneously expect the test cases will be hit, leading to an infinite wait without any signal that there is a problem.
Ideal state
After the fuzz tests are completed, an additional check is performed on the output logs, which throws a failure if the required test cases are not hit (often enough).
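Such a post-run check could be a small script over the counter log. A minimal sketch, assuming the log contains one `testName:caseName` line per hit; the log format, case names, and thresholds are assumptions, not part of the existing tracker:

```python
import sys
from collections import Counter

# Minimum number of hits required per tracked case (assumed thresholds).
REQUIRED_HITS = {
    "testFuzz_something:valid_dim_case": 10,
}

def check_case_hits(log_lines, required=REQUIRED_HITS):
    """Count 'test:case' lines and return the cases hit fewer times than required."""
    hits = Counter(line.strip() for line in log_lines if line.strip())
    return {case: hits[case] for case, n in required.items() if hits[case] < n}

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        missing = check_case_hits(f)
    for case, count in missing.items():
        print(f"FAIL: {case} hit {count}x, required {REQUIRED_HITS[case]}x")
    sys.exit(1 if missing else 0)
```

Running this as a separate CI step after `forge test` avoids the need for an `afterAll` hook inside the test suite itself.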
Difficulties
I did not yet find an afterAll (fuzz runs/test files) method in Solidity Foundry.
(Fuzz) tests should always be runnable in random order, but if something (such as an additional check) needs to run at the end, that breaks the random order.