
Axe results violations save #498

Open · wants to merge 31 commits into base: master
Conversation

mohanqxf2 (Collaborator)

  • Split the "get_new_violations" method in the snapshot_util file.
  • Display the message "no existing snapshots were found, we created a new snapshot. Please check the snapshot of violations and use it" when a new snapshot is created.
  • Added a new txt file to save all the new violations caught while comparing with the old snapshot.
  • Updated the code to save the snapshot JSON file under ./conf/snapshot, as previously it was being saved under ./utils/snapshot.
  • Moved the save and load snapshot JSON file functions from the test script to the snapshot_utils.py file.
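A rough sketch of what the moved helpers in snapshot_utils.py could look like (the function names and exact snapshot path are assumptions based on this description, not the actual implementation):

```python
import json
import os

# Assumed snapshot location per this PR's description
SNAPSHOT_DIR = os.path.join("conf", "snapshot")

def save_snapshot(page_name, violations):
    """Save the axe violations for a page as a JSON snapshot under ./conf/snapshot."""
    os.makedirs(SNAPSHOT_DIR, exist_ok=True)
    snapshot_path = os.path.join(SNAPSHOT_DIR, f"snapshot_output_{page_name}.json")
    with open(snapshot_path, "w", encoding="utf-8") as snapshot_file:
        json.dump(violations, snapshot_file, indent=2)
    return snapshot_path

def load_snapshot(page_name):
    """Load an existing snapshot, or return None when no snapshot was saved yet."""
    snapshot_path = os.path.join(SNAPSHOT_DIR, f"snapshot_output_{page_name}.json")
    if not os.path.exists(snapshot_path):
        return None
    with open(snapshot_path, encoding="utf-8") as snapshot_file:
        return json.load(snapshot_file)
```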

To run the pytest from root dir:
python -m pytest tests/test_accessibility.py

Expected output when running for the first time:
| PASS: No existing snapshot was found for page. A new snapshot has been created. Please review the snapshot for violations before running the test again.

@mohanqxf2 mohanqxf2 self-assigned this Nov 28, 2024
@mohanqxf2 mohanqxf2 requested a review from avinash010 November 28, 2024 13:43

@avinash010 avinash010 left a comment


@mohanqxf2

  1. I don't think the failed violations are getting saved in the log file. Can you double-check it?
  2. Would we be updating the same violation file every time?
  3. I remember discussing that we wanted a few lines of the violations printed in the log and the whole violations in the text file. Currently I just see a message:
+------------------------------------------------+
| Tests Failed                                   |
+------------------------------------------------+
| test_accessibility:                            |
| ✗ Accessibility checks for contact page failed |
|                                                |
+------------------------------------------------+

How would the user know what the error is and where they can look for the violation?

This may not be required for this PR, but we need to figure out an easy way for the user to accept a new violation and add it to the snapshot in case they are OK with it.

conf/snapshot_dir_conf.py (review thread resolved)

@avinash010 avinash010 left a comment


Also, I feel a lot of the complexity is in the test.
The snapshot initialization can be a separate method and moved to a utility.
The violation comparison can also move to the utility.

@mohanqxf2

mohanqxf2 commented Dec 6, 2024

  1. I don't think the failed violations are getting saved in the log file. Can you double-check it?

@avinash010 looking at your output below, it looks like there were no new violations found to save in the log file.

  2. Would we be updating the same violation file every time?

Yes, as of now, if the violation txt file exists in conf then it will update the same file.

  3. I remember discussing that we wanted a few lines of the violations printed in the log and the whole violations in the text file. Currently I just see a message:
+------------------------------------------------+
| Tests Failed                                   |
+------------------------------------------------+
| test_accessibility:                            |
| ✗ Accessibility checks for contact page failed |
|                                                |
+------------------------------------------------+

Looking at this output, no new violations were caught when compared with the snapshot file. If new violations are caught, then it should print the first 50 characters from each of the violations.

How would the user know what's the error and where he can look for the violation?

Along with the first 50 characters of each violation, it will display the conf path in the output.
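The reporting behaviour described here could be sketched roughly as below (the function name, the record-file path, and the preview length are assumptions for illustration, not the PR's actual code):

```python
def report_new_violations(new_violations, record_path="conf/new_violations_record.txt"):
    """Write the full text of each new violation to a record file and return
    short previews suitable for the test log."""
    previews = []
    with open(record_path, "w", encoding="utf-8") as record_file:
        for violation in new_violations:
            message = str(violation)
            record_file.write(message + "\n")  # full violation kept for later review
            previews.append(
                f"{message[:50]}... Complete violation output is saved in {record_path}"
            )
    return previews
```

Each preview line then points the user at the record file, which is how they would locate the complete violation details.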

Could you please retry running the tests, with a pre-requisite step of changing the HTML element contents in the existing snapshot_output_contact_page.json file, which is saved under ./conf/snapshot/snapshot_output_contact_page.json?

@mohanqxf2

The initialize snapshot can be a separate method and moved to utility
Violation comparison can also move to the utility

I have moved these methods to snapshot_utils.py @avinash010

@mohanqxf2

@avinash010 I haven't pushed the latest commit yet; I will push by tomorrow. Please hold off the review process. I will post here as soon as the PR is ready.

@avinash010

@mohanqxf2 Is this PR ready for review?

@mohanqxf2

mohanqxf2 commented Jan 2, 2025

Looks like there are Codacy issues. I will fix them, and you can review it by tomorrow @avinash010.

@mohanqxf2

@avinash010 The axe accessibility check is working well. I used the DeepDiff library.
Test cases tested:

  1. Removed the complete violations dictionary and checked that it is caught.
  2. Removed one of the nodes within one violation dictionary and checked that it is caught.
  3. Updated the existing violations and checked that the changed violations are shown.

CircleCI is 👎 failing because of weather shopper payment.

@avinash010

If there is no snapshot file, the current code doesn't create a new snapshot. I see the below error:

Python says: 'NoneType' object is not iterable
FAILED

=================================================================================== Failure Summary ====================================================================================
+---------------------------------------------------------------------------------------------------------------------+
| Tests Failed                                                                                                        |
+---------------------------------------------------------------------------------------------------------------------+
| test_accessibility:                                                                                                 |
| ✗ Exception when trying to run test: /home/avinash/qxf2/qxf2_pom/qxf2-page-object-model/tests/test_accessibility.py |
| Python says: 'NoneType' object is not iterable                                                                      |
|                                                                                                                     |

@avinash010

If there is an accessibility violation with multiple nodes, then the missing node is repeated multiple times in the violations and error logs.

@avinash010

The violation details aren't displayed in the error logs properly when some violation with a single node is missing. I don't see 80 characters in this case (`test_obj.write(f"{violation_message[:80]}")`).

Description: Ensures every HTML... Complete violation output is saved in../conf/new_violations_record.txt
2025-01-03 14:55:37.129 | ERROR    | /tests/test_accessibility.py::test_accessibility | FAIL: Accessibility checks for contact failed

@mohanqxf2

@avinash010 I have fixed the following

If there is no snapshot file, the current code doesn't create a new snapshot. I see the below error.

I have reverted the code to save the snapshot file if it's not present in the existing_snapshot.
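The fallback being described, i.e. saving a new snapshot instead of handing None to the comparison step, could be sketched like this (the helper names are assumptions, and load/save are passed in so the sketch stays self-contained):

```python
def get_or_create_snapshot(page_name, current_violations, load_snapshot, save_snapshot):
    """Return the existing snapshot for a page, or save the current violations
    as a new snapshot when none exists.

    Guards against the "'NoneType' object is not iterable" failure by never
    returning None to the violation-comparison step.
    """
    existing_snapshot = load_snapshot(page_name)
    if existing_snapshot is None:
        save_snapshot(page_name, current_violations)
        return current_violations, True  # True: a fresh snapshot was created
    return existing_snapshot, False
```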

If there is an accessibility violation with multiple nodes, then the missing node is repeated multiple times in the violations and error logs.

Updated the DeepDiff call with ignore_order to get rid of repeated nodes when any nested nodes get updated:
violation_diff = DeepDiff(existing_snapshot_dict, current_violations_dict, ignore_order=True, verbose_level=2)

@mohanqxf2

CircleCI is failing with the following:

******Verifying registered car count********
2025-01-10 08:17:09.572 | INFO     | /endpoints/api_player.py::get_registered_cars | Fetched registered cars list: {'url': 'https://cars-app.qxf2.com/register/', 'response': {'registered': [{'car': {'brand': 'Maruti', 'car_type': 'hatchback', 'name': 'Swift', 'price_range': '3-5 lacs'}, 'customer_details': {'city': 'Ponda', 'customer_name': 'Rohan'}, 'registration_token': 3, 'successful': True}, {'car': {'brand': 'Maruti', 'car_type': 'hatchback', 'name': 'Swift', 'price_range': '3-5 lacs'}, 'customer_details': {'city': 'Ponda', 'customer_name': 'Rohan'}, 'registration_token': 1, 'successful': True}, {'car': {'brand': 'Maruti', 'car_type': 'hatchback', 'name': 'Swift', 'price_range': '3-5 lacs'}, 'customer_details': {'city': 'Ponda', 'customer_name': 'Rohan'}, 'registration_token': 1, 'successful': True}, {'car': {'brand': 'Maruti', 'car_type': 'hatchback', 'name': 'Swift', 'price_range': '3-5 lacs'}, 'customer_details': {'city': 'Ponda', 'customer_name': 'Rohan'}, 'registration_token': 3, 'successful': True}, {'car': {'brand': 'Maruti', 'car_type': 'hatchback', 'name': 'Swift', 'price_range': '3-5 lacs'}, 'customer_details': {'city': 'Ponda', 'customer_name': 'Rohan'}, 'registration_token': 3, 'successful': True}, {'car': {'brand': 'Maruti', 'car_type': 'hatchback', 'name': 'Swift', 'price_range': '3-5 lacs'}, 'customer_details': {'city': 'Ponda', 'customer_name': 'Rohan'}, 'registration_token': 2, 'successful': True}], 'successful': True}}
2025-01-10 08:17:09.578 | INFO     | /utils/results.py::conditional_write |   - Successfully fetched registered cars list
2025-01-10 08:17:09.584 | INFO     | /utils/results.py::log_result | FAIL: Registered car count doesnt match expected value
2025-01-10 08:17:09.590 | INFO     | /tests/test_api_example.py::test_api_example | Exception when trying to run test:/home/tox/project/tests/test_api_example.py
2025-01-10 08:17:09.597 | INFO     | /tests/test_api_example.py::test_api_example | Python says:

[gw1] FAILED tests/test_api_example.py::test_api_example 
