Ranking and optimization of web service compositions represent challenging areas of research with significant implications for the realization of the “Web of Services” vision. “Semantic web services” use formal semantic descriptions of web service functionality and interface to enable automated reasoning over web service compositions. To judge the quality of the overall composition, for example, we can start by calculating the semantic similarities between outputs and inputs of connected constituent services, and aggregate these values into a measure of semantic quality for the composition.
This paper takes a specific interest in combining semantic and nonfunctional criteria such as quality of service (QoS) to evaluate quality in web service composition. It proposes a novel and extensible model balancing the new dimension of semantic quality (as a functional quality metric) with a QoS metric, and using them together as ranking and optimization criteria. It also demonstrates the utility of Genetic Algorithms for optimization in the context of the large number of services foreseen by the “Web of Services” vision. We test the performance of the overall approach using a set of simulation experiments, and discuss its advantages and weaknesses.
The following databases have been fed as input to the existing work’s implementation:
This database consists of two tables, Task1 and Task2, representing the individual tasks of the composite service. Each table holds the values of five QoS parameters for the three hundred candidate web services. The attributes of each table are Service number, Response time, Price, Reputation, Successful completion and Availability.
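A minimal sketch of this schema, using an in-memory SQLite database. The table and column names follow the description above; the column types, units, and sample values are assumptions for illustration only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for task in ("Task1", "Task2"):
    conn.execute(f"""
        CREATE TABLE {task} (
            service_number        INTEGER PRIMARY KEY,
            response_time         REAL,  -- assumed milliseconds
            price                 REAL,
            reputation            REAL,  -- assumed rating scale
            successful_completion REAL,  -- assumed rate in [0, 1]
            availability          REAL   -- assumed fraction of uptime
        )""")

# One of the three hundred candidate services per task (values illustrative).
conn.execute("INSERT INTO Task1 VALUES (1, 120.0, 3.5, 4.2, 0.97, 0.99)")
row = conn.execute("SELECT * FROM Task1 WHERE service_number = 1").fetchone()
print(row)
```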
The resource consumption values of the web services along the dimensions of Response time and Price are given in the tables Task1 and Task2 of this database. The attributes of each table are Response time and Price.
The attributes are service number, QoS score and resource score. The QoS score is computed from the values of the QoS parameters in the ‘services’ table mentioned above; the resource score is computed similarly from the ‘Resource’ table.
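The text does not specify how the QoS score is aggregated; a common choice is Simple Additive Weighting (SAW) with min-max normalization, inverting “smaller is better” criteria such as response time and price. The sketch below illustrates that assumed aggregation; all bounds and weights are hypothetical.

```python
def normalize(value, lo, hi, negative=False):
    """Min-max normalize to [0, 1]; invert for 'smaller is better' criteria."""
    if hi == lo:
        return 1.0
    x = (value - lo) / (hi - lo)
    return 1.0 - x if negative else x

def qos_score(service, bounds, weights):
    """Weighted sum of normalized QoS parameters (SAW, an assumption)."""
    score = 0.0
    for param, w in weights.items():
        lo, hi = bounds[param]
        negative = param in ("response_time", "price")
        score += w * normalize(service[param], lo, hi, negative)
    return score

service = {"response_time": 120.0, "price": 3.5, "reputation": 4.2,
           "successful_completion": 0.97, "availability": 0.99}
bounds = {"response_time": (50.0, 500.0), "price": (1.0, 10.0),
          "reputation": (0.0, 5.0), "successful_completion": (0.0, 1.0),
          "availability": (0.0, 1.0)}
weights = {p: 0.2 for p in bounds}  # equal weights as an assumption

print(round(qos_score(service, bounds, weights), 3))  # → 0.873
```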
The following databases have been fed as input to our project:
This database is the same as that used for the existing work.
This database is the same as the resource database used for the existing work.
This database consists of the individual ratings given for three QoS parameters by five users, along with their aggregate values. These aggregate ratings are retrieved and used in the selection process after the user specifies the QoS dimension for which they have the highest expectation. The attributes are Service number, User1, User2, User3, User4, User5 and Aggregate. The dimensions may be price and response time.
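A small sketch of this selection step. The service numbers and ratings are invented, and the aggregation (a plain mean of the five users' ratings) is an assumption, since the text does not state how the Aggregate column is derived.

```python
# Hypothetical ratings rows: service_number -> [User1..User5] per dimension.
ratings = {
    "price":         {101: [4, 5, 3, 4, 4], 102: [2, 3, 3, 2, 3]},
    "response_time": {101: [3, 3, 2, 3, 3], 102: [5, 4, 5, 4, 5]},
}

def aggregate(user_scores):
    """Aggregate the five users' ratings; the mean is an assumption."""
    return sum(user_scores) / len(user_scores)

def best_service(dimension):
    """Return the service with the highest aggregate rating on `dimension`."""
    table = ratings[dimension]
    return max(table, key=lambda svc: aggregate(table[svc]))

print(best_service("price"))          # service favoured on price
print(best_service("response_time"))  # service favoured on response time
```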
The attributes are service number, aging, last updated date, QoS score and resource score. The QoS score is computed from the values of the QoS parameters in the ‘services’ table mentioned above; the resource score is computed similarly from the ‘Resource’ table. The last updated date field contains the date on which the service was last modified, and is used to check whether the service is outdated: if the difference between the current date and the last updated date exceeds a specific threshold (say, 100 days), the updated version needs to be brought in from the UDDI registry into the WSDB.
- Convex hull algorithm
- Algorithm to check the aging factor and update the service
- Algorithm to select services based on quality rating
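The text does not say which convex hull algorithm is used; as one concrete possibility, the sketch below uses Andrew's monotone chain on candidate services plotted as hypothetical (price, response time) points, so that interior (dominated-in-spread) points are excluded from the hull.

```python
def cross(o, a, b):
    """Cross product of OA x OB; positive means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Candidate services as illustrative (price, response_time) points.
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2), (3, 3)]
print(convex_hull(candidates))  # interior points (2, 3) and (3, 3) drop out
```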