In this project, a Fully Convolutional Network (FCN) is used to label the road pixels in images and video.
Since we want to maintain the spatial dimensions of the image, we use an FCN instead of a conventional deep convolutional classifier; the latter excels at extracting meaningful features from the input but does not preserve the original dimensions.
The FCN replaces the fully connected layers of VGG16 with 1x1 convolutions and up-samples the resulting feature maps with transposed convolutions. Skip connections make up for the information lost during the encoder's down-sampling, so the network can draw on information from multiple resolutions (see the sketch after the list below).
Overall, the FCN uses three techniques:
- 1 x 1 Convolutions
- Transposed Convolutions
- Skip connections
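A minimal sketch of such a decoder, assuming the TensorFlow 1.x layers API; the function and parameter names (`decoder`, `vgg_layer3_out`, etc.) are illustrative and not taken from `main.py`:

```python
import tensorflow as tf

def decoder(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
    """Sketch of an FCN-8-style decoder on top of VGG16 encoder outputs."""
    # 1x1 convolutions: project each encoder output down to num_classes
    # channels while keeping its spatial dimensions.
    l7_1x1 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding='same')
    l4_1x1 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding='same')
    l3_1x1 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding='same')

    # Transposed convolution: up-sample layer 7 by 2x, then fuse with
    # layer 4 through a skip connection (element-wise addition).
    up1 = tf.layers.conv2d_transpose(l7_1x1, num_classes, 4, strides=2, padding='same')
    skip1 = tf.add(up1, l4_1x1)

    # Up-sample by 2x again and fuse with layer 3.
    up2 = tf.layers.conv2d_transpose(skip1, num_classes, 4, strides=2, padding='same')
    skip2 = tf.add(up2, l3_1x1)

    # Final 8x up-sampling back to the input resolution.
    return tf.layers.conv2d_transpose(skip2, num_classes, 16, strides=8, padding='same')
```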
The results can be seen in the video above.
After tuning, the following parameters gave the best results (a sketch of how they might feed the training setup follows the list):
- Epochs : 15
- Batch Size : 8
- Learning Rate : 0.00005
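A hedged sketch of how these values could plug into a per-pixel softmax cross-entropy loss with the Adam optimizer; `build_optimizer`, `nn_last_layer`, and `correct_label` are illustrative names, not taken from the project code:

```python
import tensorflow as tf

# Hyperparameters found by tuning (values listed above).
EPOCHS = 15
BATCH_SIZE = 8
LEARNING_RATE = 0.00005

def build_optimizer(nn_last_layer, correct_label, num_classes):
    """Sketch: flatten the decoder output and labels, then train with Adam."""
    logits = tf.reshape(nn_last_layer, (-1, num_classes))
    labels = tf.reshape(correct_label, (-1, num_classes))
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    train_op = tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss)
    return logits, train_op, loss
```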
The following packages are required:
Download the KITTI Road dataset from here. Extract the dataset in the data
folder. This will create the folder data_road
with all the training and test images.
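To confirm the extraction worked, a small check of the expected folder layout; the sub-folder names (image_2, gt_image_2) follow the standard KITTI Road structure and are an assumption to verify against your extracted archive:

```python
import os

# Expected layout after extraction (assumed standard KITTI Road structure).
expected = [
    'data/data_road/training/image_2',
    'data/data_road/training/gt_image_2',
    'data/data_road/testing/image_2',
]
for path in expected:
    print(path, 'OK' if os.path.isdir(path) else 'MISSING')
```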
Run the project with the following command:

```
python main.py
```
Note: If running this in a Jupyter Notebook, system messages, such as those regarding test status, may appear in the terminal rather than in the notebook.
From the Self-Driving Car Engineer Nanodegree Program; starter code provided by Udacity.