<div id="description">
<h2>DESCRIPTION</h2>
<p>
This module uses the SVM classifier from the <em>scikit-learn</em> Python package to classify regions of raster maps. These regions can be the output of
<em><a href="i.segment.html">i.segment</a></em>
or
<em><a href="r.clump.html">r.clump</a></em>.
</p>
<p>
The module enables learning from only a small initial labeled data set via <em>active learning</em>. This semi-supervised learning algorithm interactively queries the user to label the regions that are most useful for improving the overall classification score.
With this technique, the number of labeled examples needed to learn the classification is often much lower than in fully supervised algorithms. You should start the classification with a small training set and run the module multiple times, labeling the new informative samples at each run to improve the classification score.
The score metric is the number of correctly predicted labels over the total number of samples in the test set.
</p>
<p>
The samples chosen for labeling are the ones whose class prediction is the most uncertain [2]. Moreover, among these uncertain samples, only the most dissimilar ones are kept [1]. For each uncertain sample, this diversity heuristic takes into account both the distance to its closest neighbour and the average distance to all other samples. This ensures that newly labeled samples are not redundant with each other.
</p>
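The selection strategy described above can be sketched as follows. This is an illustrative reimplementation, not the module's actual code: the margin-based uncertainty measure, the candidate counts, and which distance term receives the lambda weight are assumptions based on the description and on [1, 2].

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy stand-ins for the labeled training set and the unlabeled set.
X, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                           n_classes=3, random_state=0)
X_train, y_train, X_unlabeled = X[:50], y[:50], X[50:]

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# Uncertainty [2]: a small gap between the two highest one-vs-rest decision
# values means the classifier hesitates between two classes.
dec = clf.decision_function(X_unlabeled)          # shape (n_samples, n_classes)
sorted_dec = np.sort(dec, axis=1)
margin = sorted_dec[:, -1] - sorted_dec[:, -2]    # small margin = uncertain
uncertain_idx = np.argsort(margin)[:20]           # nbr_uncertainty candidates

# Diversity [1]: among the candidates, prefer samples that are far both from
# their closest neighbour and, on average, from all other candidates.
lam = 0.25                                        # diversity_lambda
cand = X_unlabeled[uncertain_idx]
dist = cdist(cand, cand)
np.fill_diagonal(dist, np.inf)
closest = dist.min(axis=1)                        # distance to closest neighbour
np.fill_diagonal(dist, 0.0)
avg = dist.sum(axis=1) / (len(cand) - 1)          # average distance to the others
diversity = lam * closest + (1 - lam) * avg
chosen = uncertain_idx[np.argsort(diversity)[::-1][:5]]   # learning_steps samples
print(chosen)
```

The indices printed at the end correspond to the sample IDs that the module would ask the user to label.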
<p>
The learning data should be composed of features extracted from the regions, for example with the
<em><a href="i.segment.stats.html">i.segment.stats</a></em>
module.
The features of the training set, the test set, and the unlabeled set should be in three separate files in CSV format. The first line of each file must be a header containing the feature names. Every region must be uniquely identified by the first attribute, and the class of each training and test example must be the second attribute.
</p>
Example of training and test files:
<div class="code">
<pre>
cat,Class_num,attr1,attr2,attr3
167485,4,3.546,456.76,6.76
183234,6,5.76,1285.54,9.45
173457,2,5.65,468.76,6.78
</pre>
</div>
Example of an unlabeled file:
<div class="code">
<pre>
cat,attr1,attr2,attr3
167485,3.546,456.76,6.76
183234,5.76,1285.54,9.45
173457,5.65,468.76,6.78
</pre>
</div>
<p>
The training set can easily be updated once you have labeled new samples. Create a file specifying which label you assign to which sample.
This CSV file should have a header and two attributes per line: the ID of the sample you have labeled and the label itself. The module transfers the newly labeled samples from the unlabeled set to the training set, adding the class you have provided. This is done internally and does not modify your original files.
If you want to save these updates to new files, specify their paths in the <em>training_updated</em> and <em>unlabeled_updated</em> parameters; the module then writes the newly labeled samples added to the training file and removed from the unlabeled file.
</p>
Example of an update file:
<div class="code">
<pre>
cat,Class_num
194762,2
153659,6
178350,2
</pre>
</div>
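The internal bookkeeping done with the update file amounts to moving rows between the two sets while inserting the new class label as the second column. A minimal sketch of that idea, with hypothetical in-memory rows rather than the module's actual file handling:

```python
def apply_update(training_rows, unlabeled_rows, updates):
    """Move updated samples from the unlabeled set to the training set.

    updates: dict mapping a sample ID (cat) to its new class label.
    Rows are lists of strings as read from the CSV files (header excluded).
    """
    still_unlabeled = []
    for row in unlabeled_rows:
        cat = row[0]
        if cat in updates:
            # Training rows are [cat, Class_num, attr1, attr2, ...]:
            # insert the user-provided label as the second attribute.
            training_rows.append([cat, updates[cat]] + row[1:])
        else:
            still_unlabeled.append(row)
    return training_rows, still_unlabeled

training = [["167485", "4", "3.546", "456.76", "6.76"]]
unlabeled = [["194762", "1.2", "3.4", "5.6"],
             ["200000", "7.8", "9.0", "1.2"]]
updates = {"194762": "2"}

training, unlabeled = apply_update(training, unlabeled, updates)
print(len(training), len(unlabeled))  # 2 1
```

With <em>training_updated</em> and <em>unlabeled_updated</em> set, the two resulting row lists are what the module writes out; otherwise they exist only for the current run.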
More details on a few parameters:
<ul>
<li><strong>learning_steps</strong>: The number of samples that the module asks the user to label at each run.</li>
<li><strong>nbr_uncertainty</strong>: Number of uncertain samples to select before applying the diversity filter. This number should be higher than <em>learning_steps</em>.</li>
<li><strong>diversity_lambda</strong>: Weighting parameter of the diversity heuristic. Close to 0, only the average distance to all other samples is taken into account; close to 1, only the distance to the closest neighbour.</li>
<li><strong>c_SVM</strong>: Penalty parameter C of the error term. If it is too large there is a risk of overfitting the training data; if it is too small, of underfitting.</li>
<li><strong>gamma_SVM</strong>: Kernel coefficient. 1/#features is often a good value to start with. If C, gamma, or both are left empty, a good value based on the data distribution is found by cross-validation (at least 3 samples per class are needed in the training set, more is better). This automatic parameter tuning requires more computation time as the training set grows.</li>
<li><strong>search_iter</strong>: Number of parameter settings sampled in the automatic parameter search for C and gamma; it trades off runtime against the quality of the solution.</li>
</ul>
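The automatic tuning of C and gamma can be sketched with scikit-learn's randomized cross-validated search. The parameter ranges below are illustrative assumptions, not the module's actual search space:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Toy training set: 3 classes, well above the 3-samples-per-class minimum.
X, y = make_classification(n_samples=90, n_features=5, n_informative=3,
                           n_classes=3, random_state=0)

# Log-uniform ranges for C and gamma (illustrative bounds).
param_dist = {"C": loguniform(1e-1, 1e3),
              "gamma": loguniform(1e-4, 1e0)}

# n_iter plays the role of search_iter; cv=3 is why at least
# 3 samples per class are required in the training set.
search = RandomizedSearchCV(SVC(kernel="rbf"), param_dist,
                            n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```

Raising <code>n_iter</code> explores more C/gamma combinations at proportionally higher runtime, which is the trade-off <em>search_iter</em> controls.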
</div>
<div id="examples">
<h2>EXAMPLES</h2>
The following examples are based on the data files found in this module's repository.
<h3>Simple run without an update file</h3>
<div class="code">
<pre>
r.objects.activelearning training_set=/path/to/training_set.csv test_set=/path/to/test_set.csv unlabeled_set=/path/to/unlabeled_set.csv
Parameters used : C=146.398423284, gamma=0.0645567086567, lambda=0.25
12527959
9892568
13731120
15445003
13767630
Class predictions written to predictions.csv
Training set : 70
Test set : 585
Unlabeled set : 792
Score : 0.321367521368
</pre>
</div>
<h3>With an update file</h3>
The five samples output by the previous run have been labeled and added to the update file.
<div class="code">
<pre>
r.objects.activelearning training_set=/path/to/training_set.csv test_set=/path/to/test_set.csv unlabeled_set=/path/to/unlabeled_set.csv update=/path/to/update.csv
Parameters used : C=101.580687073, gamma=0.00075388337475, lambda=0.25
Class predictions written to predictions.csv
Training set : 75
Test set : 585
Unlabeled set : 787
Score : 0.454700854701
8691475
9321017
14254774
14954255
15838185
</pre>
</div>
</div>
<div id="notes">
<h2>NOTES</h2>
<p>
This module requires the <em>scikit-learn</em> Python package, which needs to be installed in your GRASS GIS Python environment. Please refer to the Notes section of <em><a href="r.learn.ml.html">r.learn.ml</a></em> for how to install this package.
</p>
<p>
The memory usage for ~1450 samples of 52 features each is around 650 KB; this number can vary due to the unpredictability of the garbage collector's behaviour. Everything is computed in memory, so the size of the data is limited by the amount of available RAM.
</p>
</div>
<div id="references">
<h2>REFERENCES</h2>
<p>[1] Bruzzone, L., & Persello, C. (2009). Active learning for classification of remote sensing images. 2009 IEEE International Geoscience and Remote Sensing Symposium. doi:10.1109/igarss.2009.5417857</p>
<p>[2] Tuia, D., Volpi, M., Copa, L., Kanevski, M., & Munoz-Mari, J. (2011). A Survey of Active Learning Algorithms for Supervised Remote Sensing Image Classification. IEEE Journal of Selected Topics in Signal Processing, 5(3), 606-617. doi:10.1109/jstsp.2011.2139193</p>
</div>
<h2>AUTHOR</h2>
Lucas Lefèvre