====== ComposIT: Validation ======
**Authors:** Pablo Rodriguez-Mier //et al.//
//... through automatic composition techniques. However, although the inclusion of semantics allows a greater precision in the description of their functionality, the adoption of Semantic Web Service (SWS) composition techniques still remains limited, due in part to both the lack of publicly available, robust, and scalable software composition engines and the heterogeneity that exists among the different SWS description languages. In this paper we introduce ComposIT, a fast and scalable composition engine which is able to automatically compose multiple heterogeneous services from the point of view of semantic input-output matching, thanks to the use of a minimal service model. We also present a complete analysis of two publicly available state-of-the-art composition planners and a comparison between ComposIT and these planners. To carry out this task, we developed a benchmarking tool that automates the evaluation process of the different composition algorithms using an adapted version of the datasets from the Web Service Challenge 2008. The results obtained demonstrate that ComposIT outperforms classical planners both in terms of scalability and performance.//

===== Purpose of this Web Page =====

In previous works((P. Rodriguez-Mier //et al.//)), ComposIT was analyzed and compared with the winners of the Web Service Challenge 2008 (WSC'08). The experimentation is focused on the generation of semantically valid composite services, taking into account the semantic information regarding the inputs and outputs of the services in the absence of preconditions and effects. To carry out this comparison, we developed a benchmarking tool that automates the evaluation process of the different composition algorithms using an adapted version of the WSC'08 datasets. The purpose of this page is to extend that analysis and to provide the resources (datasets, binaries and validation logs) needed to reproduce the experiments.
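As a rough illustration of how such a benchmark can be automated, the sketch below runs every engine (each one packaged as a runnable jar, see the Usage section) over a list of datasets and records the wall-clock time of each run. The dataset paths, the single-argument invocation and the class itself are hypothetical; this is not the actual benchmarking tool.

<code java>
import java.io.File;
import java.util.Arrays;
import java.util.List;

/** Illustrative benchmark driver (hypothetical; not the actual tool used for the experiments). */
public class BenchmarkSketch {
    public static void main(String[] args) throws Exception {
        List<String> jars = Arrays.asList(
                "CompositAlgorithm.jar", "PorsceAlgorithm.jar", "OWLSXplanAlgorithm.jar");
        // Hypothetical dataset locations; the adapted WSC'08 datasets would go here.
        List<String> datasets = Arrays.asList("datasets/wsc08-01", "datasets/wsc08-02");

        for (String jar : jars) {
            for (String dataset : datasets) {
                long start = System.currentTimeMillis();
                // Run the engine in a separate JVM and capture its output in a log file.
                Process p = new ProcessBuilder("java", "-Xmx1024M", "-jar", jar, dataset)
                        .redirectErrorStream(true)
                        .redirectOutput(new File(jar + "_" + new File(dataset).getName() + ".log"))
                        .start();
                p.waitFor();
                System.out.printf("%s on %s: %d ms%n", jar, dataset,
                        System.currentTimeMillis() - start);
            }
        }
    }
}
</code>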
===== Datasets =====
Exact-Matching datasets were calculated by extending the outputs of each web service, including all superclasses of each output as outputs of the service itself (semantic expansion). Thus, the average number of outputs is larger than in the other datasets. The semantic expansion transforms a semantic matching problem into an exact matching problem when exact and plug-in matches are used to perform the semantic matchmaking. This allows us to test composition algorithms that do not use semantic reasoners with the WSC'08 datasets. For example, suppose that a service S1 provides an output instance whose superclasses are the concepts C1 and C2: after the semantic expansion, S1 will return the instance together with C1 and C2 as outputs, so any service that requires C1 or C2 as an input can be matched with a simple exact comparison.

The next figure represents an example of the semantic expansion. In the first case, a semantic reasoner is required to match the output of one service with the semantically compatible input of the next one; after the expansion, the same match is resolved with an exact comparison.
//(Figure: example of the semantic expansion)//
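The expansion itself is easy to implement on top of any concept hierarchy. The following sketch uses a plain map from each concept to its direct superclasses as a stand-in for the ontology; the types and names are hypothetical, and this is not ComposIT's actual code.

<code java>
import java.util.*;

/** Sketch of the semantic expansion of service outputs (hypothetical types; not ComposIT's code). */
public class SemanticExpansionSketch {

    /** Direct superclasses of every concept: a minimal stand-in for the ontology. */
    static Map<String, Set<String>> superclasses = new HashMap<>();

    /** Returns a concept together with all of its (transitive) superclasses. */
    static Set<String> expand(String concept) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> pending = new ArrayDeque<>();
        pending.push(concept);
        while (!pending.isEmpty()) {
            String c = pending.pop();
            if (result.add(c)) {
                pending.addAll(superclasses.getOrDefault(c, Collections.<String>emptySet()));
            }
        }
        return result;
    }

    /** Replaces the outputs of a service by their semantic expansion. */
    static Set<String> expandOutputs(Set<String> outputs) {
        Set<String> expanded = new LinkedHashSet<>();
        for (String out : outputs) {
            expanded.addAll(expand(out));
        }
        return expanded;
    }

    /** After the expansion, a required input is matched by a simple membership (exact) test. */
    static boolean exactMatch(Set<String> expandedOutputs, String requiredInput) {
        return expandedOutputs.contains(requiredInput);
    }
}
</code>

With the expanded outputs, the subsumption reasoning needed for an exact or plug-in match disappears: a required input is satisfied whenever it appears literally among the outputs of some available service.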
=== 2. Semantic-Matching evaluation results ===
In this experiment, we evaluate the performance of the algorithms with the Semantic-Matching datasets. PORSCE-II uses different threshold values that control how much the semantic (plug-in) matchmaking is relaxed. The values of these thresholds are directly related to the performance of the algorithm: the higher the threshold, the more problems can be solved, but the higher the computational cost.
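To make the role of this threshold concrete, the sketch below accepts a plug-in match only when the provided concept can reach the required concept by climbing at most //threshold// superclass links. The ontology representation and the distance criterion are hypothetical simplifications, not PORSCE-II's actual matchmaker.

<code java>
import java.util.*;

/** Sketch of a threshold-limited plug-in match (hypothetical; not PORSCE-II's matchmaker). */
public class PluginMatchSketch {

    /** Direct superclasses of each concept: a minimal stand-in for the ontology. */
    static Map<String, Set<String>> superclasses = new HashMap<>();

    /**
     * True if 'provided' equals 'required' (exact match) or reaches it by following
     * at most 'threshold' subsumption links (threshold-limited plug-in match).
     */
    static boolean matches(String provided, String required, int threshold) {
        Set<String> frontier = Collections.singleton(provided);
        for (int depth = 0; depth <= threshold; depth++) {
            if (frontier.contains(required)) {
                return true;
            }
            Set<String> next = new HashSet<>();
            for (String c : frontier) {
                next.addAll(superclasses.getOrDefault(c, Collections.<String>emptySet()));
            }
            frontier = next;
        }
        return false;
    }
}
</code>

Raising the threshold allows more service pairs to be chained, but each candidate match requires a deeper traversal of the ontology, which is why the composition time grows with the threshold.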
The table below shows the time taken by PORSCE-II for different values of the plug-in threshold:

^ Plug-in threshold ^ Time (ms) ^
| 19 | 58981.02 |
The computational cost increases when the threshold is incremented: with a threshold of 10, PORSCE-II obtains a performance close to 20%, which means that the algorithm is about 1/0.20 = 5 times slower than with a threshold of 1. Based on these results, we selected the following thresholds for each dataset, indicated in parentheses next to PORSCE-II in the results table below. ComposIT does not require any special configuration for the semantic datasets, as it calculates matches at unlimited depth.
^ Dataset ^
| ::: | PORSCE-II (17) | - | - | - | - | - | - | - | - | - | - |
You can download the validation logs for each algorithm here:
  * ComposIT Exact-Matching/Semantic-Matching logs
  * PORSCE-II Exact-Matching/Semantic-Matching logs
  * OWLS-Xplan Exact-Matching logs
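For reference, checking that a returned composition is semantically valid amounts to a simple forward pass: every service must find all of its inputs among the initially provided concepts or among the outputs of services executed before it, and the goal concepts must be available at the end. The sketch below illustrates this check with hypothetical types and an exact-match criterion; it is not the validator used by the test platform.

<code java>
import java.util.*;

/** Sketch of a composition validity check (hypothetical types; not the platform's validator). */
public class ValidationSketch {

    /** Minimal service model: a name plus required inputs and produced outputs. */
    static class Service {
        final String name;
        final Set<String> inputs;
        final Set<String> outputs;
        Service(String name, Set<String> inputs, Set<String> outputs) {
            this.name = name;
            this.inputs = inputs;
            this.outputs = outputs;
        }
    }

    /**
     * A composition, given as a list of services in execution order, is valid when every
     * service finds all of its inputs among the concepts available so far and all goal
     * concepts are available at the end. Matching is exact here; a semantic matchmaker
     * could be plugged in instead.
     */
    static boolean isValid(List<Service> composition, Set<String> initialInputs, Set<String> goals) {
        Set<String> available = new HashSet<>(initialInputs);
        for (Service s : composition) {
            if (!available.containsAll(s.inputs)) {
                return false;            // some input of s cannot be resolved yet
            }
            available.addAll(s.outputs); // outputs become available to later services
        }
        return available.containsAll(goals);
    }
}
</code>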
===== Usage =====
We provide three different Java binaries, one for each composition algorithm. In order to launch a test, you must have the Java JDK 6+ installed; the latest Java JDK can be downloaded from the official Java website.
<code>
java -jar algorithm.jar
</code>
Where ''algorithm.jar'' is one of the available algorithms:
  * ComposIT: CompositAlgorithm.jar
  * PORSCE-II: PorsceAlgorithm.jar
  * OWLS-Xplan: OWLSXplanAlgorithm.jar
<note important>
These versions of OWLS-Xplan and PORSCE-II were modified to support the integration with the test platform. The original, unmodified versions of PORSCE-II and OWLS-Xplan 2.0 can be downloaded from their respective project websites.
</note>
You can also launch a background test from the command line, with the following syntax:
PORSCE-II:
<code>
java -Xmx1024M -Xms512M -jar PorsceAlgorithm.jar "..."
</code>