To use iris, you need to define a project file in JSON or YAML format.
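For orientation, here is a minimal sketch of what a complete project file could look like, combining the options described below. The values are illustrative, and the grouping of the output options under a segmentation key is an assumption based on the structure of this document, not a verbatim template.
{
    "name": "cloud-segmentation",
    "authentication_required": true,
    "images": {
        "path": "images/{id}/image.tif",
        "shape": [512, 512],
        "thumbnails": "images/{id}/thumbnail.png",
        "metadata": "images/{id}/metadata.json"
    },
    "classes": [
        {
            "name": "Clear",
            "description": "All clear pixels.",
            "colour": [255, 255, 255, 0],
            "user_colour": [0, 255, 255, 70]
        },
        {
            "name": "Cloud",
            "description": "All cloudy pixels.",
            "colour": [255, 255, 0, 70]
        }
    ],
    "views": [
        {
            "name": "RGB",
            "description": "Normal RGB image.",
            "content": ["B5", "B3", "B2"]
        }
    ],
    "segmentation": {
        "path": "masks/{id}.png",
        "mask_encoding": "rgb",
        "mask_area": [100, 100, 400, 400],
        "score": "f1"
    }
}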
Optional name for this project.
Example:
"name": "cloud-segmentation"
Defines whether you want to use IRIS with or without user authentication (the latter is not yet implemented).
Example:
"authentication_required": true
A dictionary which defines the inputs.
Example:
"images": {
"path": "images/{id}/image.tif",
"shape": [512, 512],
"thumbnails": "images/{id}/thumbnail.png",
"metadata": "images/{id}/metadata.json"
}
The input path to the images. Must be an existing path containing the placeholder {id}, which will be replaced by the unique id of the current image. IRIS can load standard image formats (like png or tif) and numpy files (npy). Arrays in numpy files should have the shape HxWxC.
Example:
"path": "images/{id}.tif"
The shape of the images. Must be a list of width and height.
Example:
"shape": [512, 512]
Optional thumbnail files for the images. Path must contain the placeholder {id}.
Example:
"thumbnails": "thumbnails/{id}.png"
Optional metadata for the images. Path must contain the placeholder {id}. Metadata files can be in json, yaml or another text file format. json and yaml files will be parsed and made accessible via the GUI. If the metadata contains the key `location` with a list of two floats (longitude and latitude), it can be used for a bingmap view.
Example:
"metadata": "metadata/{id}.json"
This is a list of classes that you want to allow the user to label. Each class is represented as a dictionary with the following keys:
- *name:* Name of the class
- *description:* Further description that gives the user more information about the class (e.g. how it differs from another class).
- *colour:* Colour for this class. Must be a list of 4 integers (RGBA) from 0 to 255.
- *user_colour (optional)*: Colour for this class when the user mask is activated in the interface. Useful for background classes which are normally transparent.
Example:
"classes": [
{
"name": "Clear",
"description": "All clear pixels.",
"colour": [255,255,255,0],
"user_colour": [0,255,255,70]
},
{
"name": "Cloud",
"description": "All cloudy pixels.",
"colour": [255,255,0,70]
}
]
Since this app was developed for multi-spectral satellite data (i.e. images with more than just three channels), you can decide how to present the images to the user. This option must be a list of dictionaries where each dictionary represents a view and has the following keys:
- *name*: Name of the view
- *description:* Further description which explains what the user can see in this view.
- *content*: Can be either `bingmap` or a list of three strings (one per RGB channel of the view), each of which can be a mathematical expression using the image bands.
IRIS can display up to 4 views.
Example:
"views": [
{
"name": "RGB",
"description": "Normal RGB image.",
"content": ["B5", "B3", "B2"]
},
{
"name": "NDVI",
"description": "Normalized Difference Vegetation Index (NDVI).",
"content": ["B4", "(B8 - B4) / (B8 + B0)", "B2"]
},
{
"name": "Aerial imagery from bing",
"content": "bingmap"
}
]
A dictionary which defines the parameters for the segmentation mode.
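Analogous to the images example above, the keys described in the rest of this section can be combined into one dictionary. The enclosing key name segmentation is assumed here; the values are copied from the individual examples below.
"segmentation": {
    "path": "masks/{id}.png",
    "mask_encoding": "rgb",
    "mask_area": [100, 100, 400, 400],
    "score": "f1"
}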
The output path for your project's masks. Must contain the placeholder {id}. The mask files (from the segmentation) and the user configurations will be stored here.
Example:
"path": "masks/{id}.png"
The encoding of the final masks. Can be `integer`, `binary`, `rgb` or `rgba`.
Example:
"mask_encoding": "rgb"
In case you don't want to allow the user to label the complete image, you can limit the segmentation area.
Example:
"mask_area": [100, 100, 400, 400]
Defines how to measure the score achieved by the user for each mask. Can be `f1`, `jaccard` or `accuracy`. Default is `f1`.
Example:
"score": "f1"