
edge_orchestrator CI on branch >> 61/merge << #372

GitHub Actions / Unit tests report in Python 3.9 failed Dec 27, 2024 in 1s

64 passed, 3 failed and 0 skipped

Tests failed

❌ edge_orchestrator/reports/pytest/unit-tests-report.xml

67 tests were completed in 3s with 64 passed, 3 failed and 0 skipped.

Test suite  Passed  Failed  Skipped  Time
pytest      64 ✅    3 ❌     0        3s
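The counts in this summary come from the JUnit-style XML that pytest writes (e.g. via `--junitxml`), the file referenced below as `edge_orchestrator/reports/pytest/unit-tests-report.xml`. A minimal sketch of extracting those summary attributes with the standard library; the sample payload here is illustrative, not the actual report:

```python
import xml.etree.ElementTree as ET

# Illustrative JUnit-style payload mirroring the totals in this report;
# the real file is edge_orchestrator/reports/pytest/unit-tests-report.xml.
sample = """<testsuites>
  <testsuite name="pytest" tests="67" failures="3" errors="0" skipped="0" time="3.0"/>
</testsuites>"""

suite = ET.fromstring(sample).find("testsuite")
tests = int(suite.get("tests"))
failures = int(suite.get("failures"))
errors = int(suite.get("errors"))
skipped = int(suite.get("skipped"))
# "Passed" is not stored directly in JUnit XML; it is derived.
passed = tests - failures - errors - skipped
```
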

❌ pytest

tests.unit_tests.domain.test_supervisor.TestSupervisor
  ✅ test_2_models_in_parallel
  ✅ test_2_models_in_serie
  ✅ test_get_prediction_for_camera_should_return_2_predicted_objects_by_one_object_detection_model[camera_id1]
  ✅ test_get_prediction_for_camera_should_return_2_predicted_objects_by_one_object_detection_model[camera_id2]
  ✅ test_get_prediction_for_camera_should_return_2_predicted_objects_by_one_object_detection_model[camera_id3]
  ✅ test_get_prediction_for_camera_should_return_2_predicted_objects_by_one_object_detection_model[camera_id4]
  ✅ test_get_prediction_for_camera_should_return_1_predicted_object_by_one_classification_model[camera_id1]
  ✅ test_get_prediction_for_camera_should_return_1_predicted_object_by_one_classification_model[camera_id2]
  ✅ test_get_prediction_for_camera_should_return_1_predicted_object_by_one_classification_model[camera_id3]
  ✅ test_get_prediction_for_camera_should_return_1_predicted_object_by_one_classification_model[camera_id4]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_followed_by_classif[camera_id1]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_followed_by_classif[camera_id2]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_followed_by_classif[camera_id3]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_followed_by_classif[camera_id4]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_with_classif_model[camera_id1]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_with_classif_model[camera_id2]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_with_classif_model[camera_id3]
  ✅ test_get_prediction_for_camera_returns_2_objects_with_label_for_object_detection_with_classif_model[camera_id4]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_1_is_classification[camera_id1]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_1_is_classification[camera_id2]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_1_is_classification[camera_id3]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_1_is_classification[camera_id4]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_2_are_classification[camera_id1]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_2_are_classification[camera_id2]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_2_are_classification[camera_id3]
  ✅ test_get_prediction_for_camera_should_return_1_output_by_model_and_among_them_2_are_classification[camera_id4]
  ✅ test_apply_crop_function_with_correct_box_should_resize_the_picture
  ✅ test_apply_crop_function_with_incorrect_box_should_log_an_error_and_return_the_same_picture
  ❌ test_set_decision_should_send_final_decision_to_telemetry_sink
	self = <test_supervisor.TestSupervisor object at 0x7f2bb0bd0430>
  ❌ test_inspect_should_log_information_about_item_processing
	self = <test_supervisor.TestSupervisor object at 0x7f2bb0bc3820>
tests.unit_tests.domain.models.test_edge_station.TestEdgeStation
  ❌ test_register_cameras_raises_exception_when_no_active_configuration_is_set
	self = <test_edge_station.TestEdgeStation object at 0x7f2bb0be9040>
  ✅ test_capture_should_raise_exception_when_cameras_are_not_registered
  ✅ test_capture_should_instantiate_item_with_1_binary
tests.unit_tests.domain.models.test_item.TestItem
  ✅ test_item_from_nothing_should_instantiate_empty_item_with_serial_number_and_category_hardcoded
tests.unit_tests.domain.models.business_rule.test_camera_business_rules.TestCameraBusinessRule
  ✅ test_camera_decision_should_return_KO_when_expected_label_is_OK
  ✅ test_camera_decision_should_return_OK_when_minimum_one_person_is_detected
  ✅ test_camera_decision_should_return_OK_when_minimum_one_face_is_detected_with_two_object_detection_models
  ✅ test_camera_decision_should_return_OK_when_minimum_one_connected_cellphone_is_detected_with_one_object_detection_and_one_classification_model
  ✅ test_camera_decision_should_return_no_decision_without_inference_results
tests.unit_tests.domain.models.business_rule.test_item_business_rules.TestItemBusinessRule
  ✅ test_item_decision_should_return_decision_ko_when_one_or_more_than_one_camera_decision_is_ko
  ✅ test_item_decision_should_return_decision_ok_when_more_than_50_pct_of_camera_decisions_are_ok
  ✅ test_item_decision_should_return_no_decision_ko_with_no_camera_decision
tests.unit_tests.infrastructure.binary_storage.test_filesystem_binary_storage.TestFileSystemBinaryStorage
  ✅ test_save_item_binaries_should_write_image_on_filesystem
  ✅ test_get_item_binary_should_return_requested_item_binary
  ✅ test_get_item_binaries_should_return_all_item_binaries_names
tests.unit_tests.infrastructure.binary_storage.test_gcp_binary_storage.TestGCPBinaryStorage
  ✅ test_save_item_binaries_should_write_image_in_gcp
  ✅ test_get_item_binary_should_return_image
tests.unit_tests.infrastructure.binary_storage.test_memory_binary_storage.TestMemoryBinaryStorage
  ✅ test_save_item_binaries_should_write_image_in_memory
  ✅ test_get_item_binary_should_return_requested_item_binary
  ✅ test_get_item_binaries_should_return_all_item_binaries_names
tests.unit_tests.infrastructure.camera.test_fake_camera.TestFakeCamera
  ✅ test_select_random_image_should_return_random_image_from_input_images_folder
tests.unit_tests.infrastructure.metadata_storage.test_filesystem_metadata_storage.TestFileSystemMetadataStorage
  ✅ test_save_item_metadata_should_write_metadata_on_filesystem
  ✅ test_get_item_metadata_should_return_requested_item_metadata
  ✅ test_get_item_state_should_return_expected_state
  ✅ test_get_all_items_metadata_should_return_expected_metadata_list
tests.unit_tests.infrastructure.metadata_storage.test_memory_item_storage.TestMemoryItemStorage
  ✅ test_save_item_metadata_should_write_item_in_memory
  ✅ test_get_item_metadata_should_return_requested_item_metadata
  ✅ test_get_all_items_metadata_should_return_all_items
tests.unit_tests.infrastructure.model_forward.test_fake_model_forwarder.TestFakeModelForwarder
  ✅ test_perform_inference_should_return_classification_results
  ✅ test_perform_inference_should_return_object_detection_results
  ✅ test_perform_inference_should_return_object_detection_with_classification_results
tests.unit_tests.infrastructure.model_forward.test_tf_serving_classification_wrapper.TestClassifModelHelper
  ✅ test_perform_pre_processing_should_return_an_image_as_an_array_with_the_expected_format
  ✅ test_perform_post_processing_should_transform_the_standard_output_from_the_model_into_the_expected_format
tests.unit_tests.infrastructure.model_forward.test_tf_serving_detection_and_classification_wrapper.TestDetectionClassificationHelper
  ✅ test_perform_pre_processing_should_return_an_image_as_an_array_with_the_expected_format
  ✅ test_perform_post_processing_should_transform_the_standard_output_from_the_model_into_the_expected_format
tests.unit_tests.infrastructure.model_forward.test_tf_serving_detection_wrapper.TestDetectionWrapperHelper
  ✅ test_perform_pre_processing_should_return_an_image_as_an_array_with_the_expected_format
  ✅ test_perform_post_processing_should_transform_the_standard_output_from_the_model_into_the_expected_format

Annotations

Check failure on line 0 in edge_orchestrator/reports/pytest/unit-tests-report.xml

github-actions / Unit tests report in Python 3.9

pytest ► tests.unit_tests.domain.test_supervisor.TestSupervisor ► test_set_decision_should_send_final_decision_to_telemetry_sink

Failed test found in:
  edge_orchestrator/reports/pytest/unit-tests-report.xml
Error:
  self = <test_supervisor.TestSupervisor object at 0x7f2bb0bd0430>
Raw output
self = <test_supervisor.TestSupervisor object at 0x7f2bb0bd0430>
mock_send = <AsyncMock name='send' id='139825618674736'>

    @patch.object(PostgresTelemetrySink, "send")
    async def test_set_decision_should_send_final_decision_to_telemetry_sink(self, mock_send):
        # Given
        item = Item(serial_number="", category="", cameras_metadata={}, binaries={}, dimensions=[])
        item.id = "item_id"
        inventory = JsonInventory(TEST_INVENTORY_PATH)
        station_config = JsonStationConfig(TEST_STATION_CONFIGS_FOLDER_PATH, inventory, TEST_DATA_FOLDER_PATH)
        station_config.set_station_config("station_config_TEST")
        supervisor = Supervisor(
            station_config=station_config,
            metadata_storage=MemoryMetadataStorage(),
            model_forward=FakeModelForward(),
            binary_storage=MemoryBinaryStorage(),
        )
    
        # When
        await supervisor.inspect(item)
    
        # Then
        msg_dict = {
            "item_id": "item_id",
            "config": "station_config_TEST",
            "decision": "NO_DECISION",
        }
>       mock_send.assert_called_once_with(msg_dict)

tests/unit_tests/domain/test_supervisor.py:548: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/unittest/mock.py:919: in assert_called_once_with
    return self.assert_called_with(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <AsyncMock name='send' id='139825618674736'>
args = ({'config': 'station_config_TEST', 'decision': 'NO_DECISION', 'item_id': 'item_id'},)
kwargs = {}
expected = call({'item_id': 'item_id', 'config': 'station_config_TEST', 'decision': 'NO_DECISION'})
actual = call({'item_id': 'item_id', 'config': 'station_config_TEST', 'decision': 'OK'})
_error_message = <function NonCallableMock.assert_called_with.<locals>._error_message at 0x7f2bb055c940>
cause = None

    def assert_called_with(self, /, *args, **kwargs):
        """assert that the last call was made with the specified arguments.
    
        Raises an AssertionError if the args and keyword args passed in are
        different to the last call to the mock."""
        if self.call_args is None:
            expected = self._format_mock_call_signature(args, kwargs)
            actual = 'not called.'
            error_message = ('expected call not found.\nExpected: %s\nActual: %s'
                    % (expected, actual))
            raise AssertionError(error_message)
    
        def _error_message():
            msg = self._format_mock_failure_message(args, kwargs)
            return msg
        expected = self._call_matcher(_Call((args, kwargs), two=True))
        actual = self._call_matcher(self.call_args)
        if actual != expected:
            cause = expected if isinstance(expected, Exception) else None
>           raise AssertionError(_error_message()) from cause
E           AssertionError: expected call not found.
E           Expected: send({'item_id': 'item_id', 'config': 'station_config_TEST', 'decision': 'NO_DECISION'})
E           Actual: send({'item_id': 'item_id', 'config': 'station_config_TEST', 'decision': 'OK'})

/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/unittest/mock.py:907: AssertionError
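The diff at the bottom shows the actual payload carried `"decision": "OK"` while the test expected `"NO_DECISION"`: `assert_called_once_with` compares the recorded call arguments against the expected ones by equality, so a single differing dict value fails the whole assertion. A minimal sketch of this failure mode, using a plain `Mock` as a hypothetical stand-in for the patched `PostgresTelemetrySink.send`:

```python
from unittest.mock import Mock

# Hypothetical stand-in for the telemetry sink whose send() is patched.
sink = Mock()

# The code under test sends the decision it actually computed...
sink.send({"item_id": "item_id", "config": "station_config_TEST", "decision": "OK"})

# ...while the test asserts a different payload. Dicts are compared by
# equality, so the "OK" vs "NO_DECISION" mismatch raises AssertionError.
try:
    sink.send.assert_called_once_with(
        {"item_id": "item_id", "config": "station_config_TEST", "decision": "NO_DECISION"}
    )
    mismatch = False
except AssertionError:
    mismatch = True
```
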

pytest ► tests.unit_tests.domain.test_supervisor.TestSupervisor ► test_inspect_should_log_information_about_item_processing

Failed test found in:
  edge_orchestrator/reports/pytest/unit-tests-report.xml
Error:
  self = <test_supervisor.TestSupervisor object at 0x7f2bb0bc3820>
Raw output
self = <test_supervisor.TestSupervisor object at 0x7f2bb0bc3820>
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f2bb05881c0>
my_fake_item = <edge_orchestrator.domain.models.item.Item object at 0x7f2bb0591df0>

    async def test_inspect_should_log_information_about_item_processing(self, caplog, my_fake_item):
        # Given
        expected_messages = [
            "Activated the configuration station_config_TEST",
            "Starting Capture",
            "Entering try Capture",
            "End of Capture",
            "Starting Save Binaries",
            "Entering try Save Binaries",
            "End of Save Binaries",
            "Starting Inference",
            "Entering try Inference",
            "Getting inference for model model_id4",
            "End of Inference",
            "Starting Decision",
            "Entering try Decision",
            "End of Decision",
        ]
        inventory = JsonInventory(TEST_INVENTORY_PATH)
        station_config = JsonStationConfig(TEST_STATION_CONFIGS_FOLDER_PATH, inventory, TEST_DATA_FOLDER_PATH)
        station_config.set_station_config("station_config_TEST")
        supervisor = Supervisor(
            station_config=station_config,
            metadata_storage=MemoryMetadataStorage(),
            model_forward=FakeModelForward(),
            binary_storage=MemoryBinaryStorage(),
            telemetry_sink=FakeTelemetrySink(),
        )
    
        # When
        with caplog.at_level(logging.INFO, logger="edge_orchestrator"):
            await supervisor.inspect(my_fake_item)
    
        # Then
        actual_messages = [
            logger_msg
            for logger_name, logger_level, logger_msg in caplog.record_tuples
            if logger_name == "edge_orchestrator"
        ]
    
>       assert expected_messages == actual_messages
E       AssertionError: assert ['Activated t...inaries', ...] == ['Activated t...inaries', ...]
E         At index 1 diff: 'Starting Capture' != 'Activated the configuration station_config_TEST'
E         Right contains one more item: 'End of Decision'
E         Full diff:
E           [
E         -  'Activated the configuration station_config_TEST',
E            'Activated the configuration station_config_TEST',
E            'Starting Capture',
E            'Entering try Capture',
E            'End of Capture',
E            'Starting Save Binaries',
E            'Entering try Save Binaries',
E            'End of Save Binaries',
E            'Starting Inference',
E            'Entering try Inference',
E            'Getting inference for model model_id4',
E            'End of Inference',
E            'Starting Decision',
E            'Entering try Decision',
E            'End of Decision',
E           ]

tests/unit_tests/domain/test_supervisor.py:590: AssertionError
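The diff shows the captured records contain "Activated the configuration station_config_TEST" twice, one more than expected: consistent with the activation path running twice, e.g. once in a fixture (here plausibly `my_fake_item` setup, though that is an assumption) and once via the explicit `set_station_config` call in the test body. A minimal sketch of how a doubled activation produces the extra captured record (names hypothetical):

```python
import logging

logger = logging.getLogger("edge_orchestrator")
logger.setLevel(logging.INFO)

# Capture messages the way caplog.record_tuples does, into a plain list.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

logger.addHandler(ListHandler())

def set_station_config(name):
    # Hypothetical stand-in: logs once per activation, as the diff suggests.
    logger.info("Activated the configuration %s", name)

# Activating twice (fixture + test body) yields the duplicated first message.
set_station_config("station_config_TEST")
set_station_config("station_config_TEST")
```
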

pytest ► tests.unit_tests.domain.models.test_edge_station.TestEdgeStation ► test_register_cameras_raises_exception_when_no_active_configuration_is_set

Failed test found in:
  edge_orchestrator/reports/pytest/unit-tests-report.xml
Error:
  self = <test_edge_station.TestEdgeStation object at 0x7f2bb0be9040>
Raw output
self = <test_edge_station.TestEdgeStation object at 0x7f2bb0be9040>

    def test_register_cameras_raises_exception_when_no_active_configuration_is_set(
        self,
    ):
        # Given
        station_config: StationConfig = get_station_config()
    
        edge_station = EdgeStation(station_config)
    
        # Then
        with pytest.raises(TypeError) as error:
>           edge_station.register_cameras(station_config)
E           Failed: DID NOT RAISE <class 'TypeError'>

tests/unit_tests/domain/models/test_edge_station.py:24: Failed
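`Failed: DID NOT RAISE` means the body of the `pytest.raises(TypeError)` block completed without raising, so the context manager itself fails the test: typically because `register_cameras` now tolerates the missing active configuration instead of raising. A minimal sketch of that mechanism, with a stand-in context manager mimicking `pytest.raises` semantics and a hypothetical `register_cameras` that no longer raises:

```python
class Raises:
    """Minimal stand-in for pytest.raises: the body must raise the expected
    exception type, otherwise the context manager fails, which is exactly
    what pytest reports as 'DID NOT RAISE'."""

    def __init__(self, exc_type):
        self.exc_type = exc_type

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            raise AssertionError(f"DID NOT RAISE {self.exc_type}")
        # Swallow the exception only if it matches the expected type.
        return issubclass(exc_type, self.exc_type)

def register_cameras(station_config):
    # Hypothetical: the implementation tolerates the configuration
    # (e.g. falls back to a default) instead of raising TypeError.
    return []

try:
    with Raises(TypeError):
        register_cameras(None)
    did_not_raise = False
except AssertionError:
    did_not_raise = True
```
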