
Commit

release 1.0.1 (#15)
durandv authored Nov 15, 2024
1 parent 5003a10 commit 831fa91
Showing 48 changed files with 1,651 additions and 1,418 deletions.
3 changes: 3 additions & 0 deletions .gitattributes
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
models/*.tar.gz filter=lfs diff=lfs merge=lfs -text
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.log filter=lfs diff=lfs merge=lfs -text
9 changes: 3 additions & 6 deletions .gitignore
@@ -2,7 +2,6 @@
__pycache__/
*.py[cod]
*$py.class
pavimentados/models/artifacts/

# C extensions
*.so
@@ -131,13 +130,11 @@ dmypy.json

.idea/
.DS_Store
.tmp/

# Models
models/artifacts
models/*.tar.gz
docs/

*.csv
# Results files
*/outputs/
*.mp4
*.log
*.jpg
16 changes: 12 additions & 4 deletions .pre-commit-config.yaml
@@ -1,5 +1,5 @@
default_language_version:
python: python3.9
python: python3.10

repos:
- repo: https://github.com/pycqa/isort
@@ -9,21 +9,21 @@ repos:
args: ["--profile", "black", "--filter-files"]

- repo: https://github.com/psf/black
rev: 24.3.0
rev: 24.10.0
hooks:
- id: black
args: ["--config", ".code_quality/pyproject_black.toml"]

- repo: https://github.com/PyCQA/bandit
rev: 1.7.8
rev: 1.7.10
hooks:
- id: bandit
args:
- -c
- .code_quality/bandit.yaml

- repo: https://github.com/pycqa/flake8
rev: 7.0.0
rev: 7.1.1
hooks:
- id: flake8
args:
@@ -35,3 +35,11 @@ repos:
hooks:
- id: prettier
types: [yaml]

- repo: https://github.com/PyCQA/docformatter
rev: v1.7.5
hooks:
- id: docformatter
additional_dependencies: [tomli]
args: [--in-place, --config]
exclude: ^(tests|notebooks|docs|models)/
6 changes: 1 addition & 5 deletions MANIFEST.in
@@ -1,6 +1,2 @@
include pavimentados/configs/images_processor.json
include pavimentados/configs/models_general.json
include pavimentados/configs/processor.json
include pavimentados/configs/siamese_config.json
include pavimentados/configs/state_signal_config.json
include pavimentados/configs/yolo_config.json
include pavimentados/configs/workflows_general.json
59 changes: 0 additions & 59 deletions MODELS.md

This file was deleted.

182 changes: 106 additions & 76 deletions README.md
@@ -1,142 +1,172 @@
<div align="center">
<h1>Pavimentados</h1>

[User Manual](docs/manual_202410.pdf)

![analytics image (flat)](https://raw.githubusercontent.com/vitr/google-analytics-beacon/master/static/badge-flat.gif)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=EL-BID_pavimentados&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=EL-BID_pavimentados)
[![Downloads](https://pepy.tech/badge/pavimentados)](https://pepy.tech/project/pavimentados)

![video_results.gif](docs/assets/video_results.gif)
</div>

## Description
---
Pavimentados is a tool that allows the identification of pavement faults located on highways or roads.
This library provides an environment to use computer vision models developed to detect different elements.
The detected elements are then used to generate metrics that aid the process of planning road maintenance.

The model files can be downloaded from this [link](https://github.com/EL-BID/pavimentados/raw/feature/v1.0.0/models/model_20240818.tar.gz?download=).

> **Important changes**: Unlike the previous version, this new version does not include traffic sign detection. We hope to be able to include it again in future versions.

## Main Features
---

Some of the features available are:

- Scoring using the models already developed.
- Workflows for image acquisition and assessment.
- Evaluation of GPS information to determine the location of faults and elements.
- Image or video support.
- Support for GPS data in different formats (GPRRA, csv, embedded in image).

## Installation
---

Install the library using the following command:

```
pip install pavimentados
```

Next:

* [Download the model](https://github.com/EL-BID/pavimentados/raw/feature/v1.0.0/models/model_20240818.tar.gz?download=) from this link.
* Decompress it using the following command:
```bash
tar -xzvf model_20240818.tar.gz
```
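
If you prefer to stay inside Python (for example in a notebook), the extraction can also be done with the standard library's `tarfile` module. A minimal sketch — the file names in the demo below are placeholders, not the real model archive:

```python
import tarfile
from pathlib import Path

def extract_model(archive: str, dest: str = ".") -> list[str]:
    """Extract a .tar.gz model archive and return the extracted member names."""
    with tarfile.open(archive, "r:gz") as tar:
        # The model archive comes from a trusted source; for untrusted
        # input, validate member paths before extracting.
        tar.extractall(path=dest)
        return [m.name for m in tar.getmembers()]

# Demo with a synthetic archive standing in for model_20240818.tar.gz:
Path("weights_placeholder.txt").write_text("not real weights")
with tarfile.open("demo_model.tar.gz", "w:gz") as tar:
    tar.add("weights_placeholder.txt")

names = extract_model("demo_model.tar.gz")
print(names)  # ['weights_placeholder.txt']
```

With the real archive, calling `extract_model("model_20240818.tar.gz")` would leave the model artifacts next to the script, ready to be passed as the artifacts path in the next section.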

## Quick Start
---

In the `notebooks` folder there is a complete example of how to process the images and videos present in `notebooks/road_videos` and `notebooks/road_images`. The results are saved to `notebooks/outputs`.

The first step is to import the components that create a workflow with images:
```python
from pavimentados.processing.processors import MultiImage_Processor
from pavimentados.processing.workflows import Workflow_Processor
```

In this example, there is the image processor object `MultiImage_Processor`, which is in charge of taking the images and analyzing them individually using the models. In addition, there is the `Workflow_Processor` object that is in charge of the image processing workflow.

Internally, the Workflow_Processor has objects that can interpret different image sources or GPS information.

Among the allowed image sources are:

- image_routes: A list of image routes.
- image_folder: A folder with all images.
- video: The path to a video file.

Among the allowed GPS data sources are:

- image_routes: A list of paths to images that have the GPS data embedded in them.
- image_folder: A path to a folder with all the images that have the GPS data embedded in them.
- loc: A file in [NMEA format](docs/gps_data_formats.md).
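
As context for the `loc` source: NMEA sentences encode latitude and longitude as `ddmm.mmmm` / `dddmm.mmmm` fields plus a hemisphere letter. A small illustrative converter — the helper name is ours, not part of the pavimentados API:

```python
def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert an NMEA ddmm.mmmm / dddmm.mmmm coordinate to decimal degrees."""
    dot = value.index(".")
    degrees = float(value[: dot - 2])   # digits before the two minute digits
    minutes = float(value[dot - 2 :])   # mm.mmmm
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

# Fields as they would appear in a $GPGGA sentence: 4807.038,N and 01131.000,E
lat = nmea_to_decimal("4807.038", "N")
lon = nmea_to_decimal("01131.000", "E")
print(round(lat, 4), round(lon, 4))  # 48.1173 11.5167
```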

Once these elements are imported, the processor is instantiated as follows:
```python
from pathlib import Path
models_path = Path("./artifacts") # Path to downloaded model
ml_processor = MultiImage_Processor(artifacts_path=str(models_path))
```

Alternatively, an additional JSON file can be specified to set or overwrite certain configuration parameters of the models.

```python
ml_processor = MultiImage_Processor(artifacts_path=str(models_path), config_file="./models_config.json")
```
These parameters allow the specification of values such as the confidence, IoU threshold, or maximum number of detections per frame.

Example of the configuration file:
```json
{
"paviment_model": {
"yolo_threshold": 0.20,
"yolo_iou": 0.45,
"yolo_max_detections": 100
}
}
```
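
Conceptually, such a JSON file overlays the library's defaults key by key. A toy sketch of that merge — the default values below are illustrative, not the library's actual ones:

```python
import json

# Illustrative defaults -- not the library's real values.
defaults = {
    "paviment_model": {
        "yolo_threshold": 0.35,
        "yolo_iou": 0.45,
        "yolo_max_detections": 50,
    }
}

user_config = json.loads("""
{
    "paviment_model": {
        "yolo_threshold": 0.20,
        "yolo_max_detections": 100
    }
}
""")

def merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base` without mutating either."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

merged = merge(defaults, user_config)
print(merged["paviment_model"])
# {'yolo_threshold': 0.2, 'yolo_iou': 0.45, 'yolo_max_detections': 100}
```

Keys absent from the user file (here `yolo_iou`) keep their default values.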

The workflow object receives the instantiated processor; without it, it is not able to execute the workflow.

```python
input_video_file = "sample.mp4"
input_gps_file = "sample.log"

# Create a workflow for videos
workflow = Workflow_Processor(
input_video_file, image_source_type="video", gps_source_type="loc", gps_input=input_gps_file, adjust_gps=True
)
```

The last step is to execute the workflow:

```python
results = workflow.execute(
    ml_processor,
    batch_size=16,
    video_output_file="processed_video.mp4",
)
```

> * `video_output_file` and `image_folder_output` are optional and are only used to save the output video or image files along with the detections.

The results can be saved in csv format or used for further processing.

```python
# Save each result table to a CSV file
import pandas as pd

for result_name in results.keys():
    pd.DataFrame(results[result_name]).to_csv(f"{result_name}.csv")
```

In the `results` object you will find the following:

1. table_summary_sections: DataFrame with summary table by sections.
2. data_resulting: DataFrame with results per frame.
3. data_resulting_fails: DataFrame with results by unique faults encountered.
4. signals_summary: DataFrame with all the information about the signals.
5. raw_results: Raw result totals.

To see more details about the results please refer to [this page](docs/results.md).


## Project structure
---
* `docs`: Documentation files.
* `models`: Reference path where the downloaded model artifact should be placed.
* `notebooks`: Examples of how to process images and videos.
* `pavimentados/analyzers`: Modules for image/video processing and generation of the final output.
* `pavimentados/configs`: General configuration and parameters of the models.
* `pavimentados/models`: Modules for YoloV8 and Siamese models.
* `pavimentados/processing`: Workflows for processing.


## Changelog
---
For information regarding the latest changes/updates in the library please refer to the [changes document](docs/CHANGELOG.md).


## Authors
---

This package has been developed by:

<a href="https://github.com/J0s3M4rqu3z" target="blank">Jose Maria Marquez Blanco</a>
<br/>
<a href="https://www.linkedin.com/in/joancerretani/" target="blank">Joan Alberto Cerretani</a>
<br/>
<a href="https://www.linkedin.com/in/ingvictordurand/" target="blank">Victor Durand</a>