# Creating an image classification search engine using CNNs

This project's main purpose is to mimic Google's image search engine: given a query image, it retrieves the most similar images from the training set. Working with a reasonably large dataset, we try to achieve the best results we can.


First, download the dataset from Darknet: Here

Second, place it in the root of your project following this structure:

```
. project
+-- your_notebook.ipynb
+-- dataset
|   +-- train       ==> contains the 50,000 training images
|   +-- test        ==> contains the 10,000 test images
|   +-- labels.txt  ==> contains the class names
```
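The structure above can be read with a couple of small helpers. This is a minimal sketch; `load_labels` and `list_images` are names we invented for illustration, not functions from the project code:

```python
from pathlib import Path

def load_labels(labels_path):
    """Read one class name per line from labels.txt."""
    text = Path(labels_path).read_text()
    return [line.strip() for line in text.splitlines() if line.strip()]

def list_images(split_dir):
    """Collect image file paths from a dataset split directory (train/ or test/)."""
    return sorted(p for p in Path(split_dir).iterdir()
                  if p.suffix.lower() in {".png", ".jpg"})

if __name__ == "__main__":
    classes = load_labels("dataset/labels.txt")
    train_paths = list_images("dataset/train")
    print(len(classes), "classes,", len(train_paths), "training images")
```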


Metrics used to compare the query vector to the training vectors:

1. Cosine similarity
2. Hamming distance
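The two metrics can be sketched with NumPy as follows (the function names are ours, not from the project code). Cosine similarity compares real-valued feature vectors, while Hamming distance compares binarized codes, here obtained by thresholding each vector at its own mean:

```python
import numpy as np

def cosine_similarity(q, x):
    """Cosine of the angle between query vector q and training vector x."""
    return float(np.dot(q, x) / (np.linalg.norm(q) * np.linalg.norm(x)))

def hamming_distance(q_bits, x_bits):
    """Number of positions where two binary codes differ."""
    return int(np.sum(q_bits != x_bits))

# Example: compare a query feature vector to one training vector.
q = np.array([0.9, 0.1, 0.4, 0.8])
x = np.array([0.8, 0.2, 0.5, 0.7])
print(cosine_similarity(q, x))
print(hamming_distance(q > q.mean(), x > x.mean()))
```

A higher cosine similarity means a closer match; a lower Hamming distance means a closer match, so the two rank results in opposite directions.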

## Model summary

The model consists of 14 layers in total. The list below shows each layer, along with the techniques applied after it.

1. Convolution with 64 filters of size 3×3
2. Max pooling by 2
   - ReLU activation
   - Batch normalization
3. Convolution with 128 filters of size 3×3
4. Max pooling by 2
   - ReLU activation
   - Batch normalization
5. Convolution with 256 filters of size 3×3
6. Max pooling by 2
   - ReLU activation
   - Batch normalization
7. Convolution with 512 filters of size 3×3
8. Max pooling by 2
   - ReLU activation
   - Batch normalization
9. Flattening the 3-D output of the last convolutional block
10. Fully connected layer with 128 units
    - Dropout
    - Batch normalization
11. Fully connected layer with 256 units
    - Dropout
    - Batch normalization
12. Fully connected layer with 512 units
    - Dropout
    - Batch normalization
13. Fully connected layer with 1024 units
    - Dropout
    - Batch normalization
14. Fully connected layer with 10 units (one per image class)
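Assuming 32×32×3 CIFAR-10 inputs, "same" padding on the convolutions, and 2×2 max pooling (assumptions on our part; the list above does not state them), the layer stack implies the following shape progression, traced here in plain Python:

```python
def conv_same(shape, filters):
    """3x3 convolution with 'same' padding: height/width unchanged, channels set."""
    h, w, _ = shape
    return (h, w, filters)

def max_pool2(shape):
    """2x2 max pooling: height and width are halved."""
    h, w, c = shape
    return (h // 2, w // 2, c)

shape = (32, 32, 3)  # assumed CIFAR-10 input shape
for filters in (64, 128, 256, 512):
    shape = max_pool2(conv_same(shape, filters))
    print("after conv+pool block:", shape)

flat = shape[0] * shape[1] * shape[2]  # flatten the 3-D feature map
print("flattened length:", flat)       # 2 * 2 * 512 = 2048
for units in (128, 256, 512, 1024, 10):
    print("fully connected layer:", units, "units")
```

Each conv+pool block halves the spatial size (32 → 16 → 8 → 4 → 2) while the channel count grows, so the flattened vector fed to the first fully connected layer has 2048 elements.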

## Deploying using Flask
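A minimal sketch of what the Flask service could look like. The endpoint name, `extract_features`, and `search` are placeholders we invented to illustrate the flow (extract a feature vector from the uploaded image, then rank the indexed training vectors by cosine similarity); the actual deployment code lives in the repository:

```python
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def extract_features(image_bytes):
    # Placeholder: in the real app, the CNN's penultimate layer would
    # produce this feature vector from the decoded image.
    padded = image_bytes[:64].ljust(64, b"\0")
    return np.frombuffer(padded, dtype=np.uint8).astype(float)

def search(query_vec, index_vecs, k=5):
    # Rank indexed training vectors by cosine similarity to the query.
    sims = index_vecs @ query_vec / (
        np.linalg.norm(index_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return np.argsort(-sims)[:k].tolist()

@app.route("/search", methods=["POST"])
def search_endpoint():
    query_vec = extract_features(request.files["image"].read())
    index_vecs = app.config["INDEX"]  # precomputed training features
    return jsonify(results=search(query_vec, index_vecs))
```

The `results` field returns the indices of the top-k most similar training images; a real deployment would map those back to image files or URLs.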

## Reference

- CIFAR10