hiperespectral:sae-cd, last edited 2018/01/16 17:53 by javier.lopez.fandino
  
  
==== Input datasets ====
  
//All the images are available in Matlab (.mat) format, among others. For further information, see the readme in the files.//
  * [[https://citius.usc.es/investigacion/datasets/hyperspectral-change-detection-dataset|Hermiston]]
  
==== Experimental setup ====
  
  * The codes were run on Ubuntu 14.04.
  
  * The Caffe framework 1.0.0-rc3 is used to perform the feature extraction by means of the SAE.
  * The SAE is configured to obtain 12 features.
  * Two consecutive layers reduce the dimensionality of the data from 242 to 100 and from 100 to 12 features, respectively.
  * The SAE is trained with 20% of the available pixels, randomly chosen.
  * A batch of 64 pixels per iteration is used.
  * The iteration limit is fixed to 300000 iterations.
  * The back-propagation process uses Stochastic Gradient Descent (SGD) and the 'inv' learning rate policy [lr = base_lr * (1 + γ * i)^(-power)], where i is the iteration number, with a base learning rate (base_lr) of 0.01 and values for the parameters γ and power of 0.0001 and 0.75, respectively.
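As a minimal sketch (not the authors' code), the 'inv' learning rate decay used for the SGD training can be written as a small Python function, with the parameter values listed above:

```python
def inv_lr(base_lr, gamma, power, i):
    """Caffe 'inv' learning rate policy: base_lr * (1 + gamma * i)^(-power)."""
    return base_lr * (1.0 + gamma * i) ** (-power)

# Values from the setup: base_lr = 0.01, gamma = 0.0001, power = 0.75.
lr_start = inv_lr(0.01, 0.0001, 0.75, 0)        # 0.01 at the first iteration
lr_end = inv_lr(0.01, 0.0001, 0.75, 300000)     # decayed rate at the iteration limit
```

With these parameters the rate decays smoothly from 0.01 toward zero over the 300000 iterations.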
  
NWFE and PCA are used for comparison purposes, retaining 12 features.
  
ELM and SVM are trained with 5% of the reference data available for each class.
  * Training samples are randomly chosen in each run.
  * 10 independent runs are performed for each classifier.
  * SVM classification is carried out using the LIBSVM library and the Gaussian radial basis function (RBF) kernel.
  * ELM is configured with a sigmoidal activation function.
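For reference, the Gaussian RBF kernel used by the SVM is K(x, y) = exp(-γ·||x − y||²), where γ is the kernel parameter tuned per feature-extraction method in the results. A minimal pure-Python sketch (the function name is ours, not LIBSVM's API):

```python
import math

def rbf_kernel(x, y, gamma):
    """Gaussian RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical feature vectors give similarity 1; it decays with squared distance.
k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0], gamma=0.0625)   # -> 1.0
k_far = rbf_kernel([1.0, 2.0], [3.0, 5.0], gamma=0.0625)
```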
  
==== Outputs ====
  
=== Image files ===
|Reference data of changes |Binary CD map |Multiclass CD map|
|{{:hiperespectral:referencedatacolorhermiston5.png?200|}}|{{:hiperespectral:binarycd.png?200|}}|{{:hiperespectral:svmcolorhermiston.png?200|}}|
  
=== Accuracy results ===
== Binary CD accuracies ==
|Correct |Missed Alarms |False Alarms |Total Error|
|77020 (98.74%) |509 |471 |980 (1.25%) |
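The binary totals can be checked directly from the counts (assuming, as the table suggests, that every pixel is either correct, a missed alarm, or a false alarm):

```python
# Counts from the binary CD table above.
correct, missed, false_alarms = 77020, 509, 471
errors = missed + false_alarms
total = correct + errors          # assumption: these three counts cover all pixels
overall_accuracy = 100.0 * correct / total
print(errors, round(overall_accuracy, 2))   # 980 98.74
```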
  
  
== Multiclass CD accuracies ==
|**Classifier**   | **Parameters**     |**FE**   | **OA (%)**  | **AA (%)**  | **Kappa**  |
| ELM             | N=120              | PCA     | 91.73       | 76.06       | 86.83      |
| ELM             | N=120              | NWFE    | 91.76       | 76.75       | 86.83      |
| ELM             | N=60               | SAE     | 95.19       | 90.45       | 92.31      |
| SVM             | C: 64.0 γ: 32.0    | PCA     | 91.46       | 71.16       | 86.46      |
| SVM             | C: 32.0 γ: 16.0    | NWFE    | 91.29       | 90.61       | 86.05      |
| SVM             | C: 32.0 γ: 0.0625  | SAE     | 95.52       | 92.56       | 92.90      |

C: penalty term in the training of the SVM. γ: width parameter of the Gaussian RBF kernel of the SVM. N: number of neurons in the hidden layer of the ELM. FE: feature extraction method.
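OA, AA and kappa are standard confusion-matrix metrics (the table appears to report kappa scaled by 100). A minimal sketch of their definitions, using an illustrative 2-class matrix rather than the experiment's data:

```python
def accuracy_metrics(cm):
    """OA, AA and Cohen's kappa from a confusion matrix cm,
    where cm[i][j] counts reference-class-i pixels labelled as class j."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    diag = [cm[i][i] for i in range(k)]
    oa = sum(diag) / n                                       # overall accuracy
    aa = sum(d / sum(row) for d, row in zip(diag, cm)) / k   # mean per-class accuracy
    # Chance agreement from row and column marginals.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Illustrative example: 80% agreement on two balanced classes gives kappa = 0.6.
oa, aa, kappa = accuracy_metrics([[40, 10], [10, 40]])
```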
===== License =====
  
:cc-by-nc-nd: