
Continually Learning Optimal Allocations of Services to Tasks

Open service-oriented systems, which autonomously and continually satisfy users' service requests to optimal levels, are an appropriate response to the need for increased automation of information systems. Given a service request, an open service-oriented system interprets the functional and nonfunctional requirements laid out in the request and identifies the optimal selection of services, that is, the services whose coordinated execution optimally satisfies the requirements in the request. When selecting services, it is relevant to: (1) revise selections as new services appear and others become unavailable; (2) use multiple criteria, including nonfunctional ones, to choose among competing services; (3) base the comparisons of services on observed, rather than advertised, performance; and (4) allow for uncertainty in the outcome of service executions. To address issues (1)-(4), we propose the multi-criteria randomized reinforcement learning (MCRRL) service selection approach. MCRRL learns and revises service selections using a novel multi-criteria-driven (including quality of service parameters, deadline, reputation, cost, and preferences) reinforcement learning algorithm, which integrates the exploitation of data about individual services' past performance with optimal, undirected, continual exploration of new selections that involve services whose behavior has not been observed. The experiments indicate that the algorithm behaves as expected and outperforms two standard approaches.
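To give a concrete feel for the kind of exploitation/exploration trade-off the abstract describes, the sketch below uses a softmax (Boltzmann) exploration policy over per-service value estimates, with a weighted scalarization of several criteria as the reward. This is only an illustrative assumption, not the MCRRL algorithm as specified in the paper; all class names, weights, and the update rule are hypothetical.

```python
# Illustrative sketch only: softmax exploration over learned service values,
# one common way to combine exploitation of observed performance with
# undirected exploration of services whose behavior has not been observed.
# Names, weights, and the update rule are assumptions, not the paper's MCRRL.

import math
import random


class ServiceSelector:
    def __init__(self, temperature=1.0, learning_rate=0.1):
        self.temperature = temperature      # higher => more exploration
        self.learning_rate = learning_rate  # step size for value updates
        self.values = {}                    # estimated value per service id

    def register(self, service_id, optimistic_init=1.0):
        # Newly appearing services start with an optimistic value,
        # so they are explored before being trusted or discarded.
        self.values.setdefault(service_id, optimistic_init)

    def deregister(self, service_id):
        # Services that become unavailable are simply dropped.
        self.values.pop(service_id, None)

    def select(self):
        # Boltzmann/softmax choice: better-performing services are more
        # likely, but every registered service keeps nonzero probability.
        ids = list(self.values)
        weights = [math.exp(self.values[s] / self.temperature) for s in ids]
        return random.choices(ids, weights=weights, k=1)[0]

    def observe(self, service_id, reward):
        # Incremental update toward the observed multi-criteria reward.
        v = self.values[service_id]
        self.values[service_id] = v + self.learning_rate * (reward - v)


def multi_criteria_reward(qos, cost, reputation, weights=(0.5, 0.3, 0.2)):
    # Hypothetical scalarization: weighted sum of normalized criteria,
    # with cost entering negatively; weights reflect user preferences.
    w_qos, w_cost, w_rep = weights
    return w_qos * qos - w_cost * cost + w_rep * reputation


if __name__ == "__main__":
    selector = ServiceSelector(temperature=0.5)
    for sid in ("service_a", "service_b"):
        selector.register(sid)
    for _ in range(100):
        sid = selector.select()
        # Simulated, stochastic outcome of one service execution.
        qos = random.uniform(0.6, 1.0) if sid == "service_a" else random.uniform(0.3, 0.8)
        selector.observe(sid, multi_criteria_reward(qos, cost=0.2, reputation=0.7))
    print(selector.values)
```

Because selection is randomized rather than greedy, a service that temporarily underperforms is not permanently excluded, which matches the abstract's point about continual exploration under uncertain execution outcomes.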

Achbany, Y., Jureta, I.J., Faulkner, S. and Fouss, F., 2008. Continually learning optimal allocations of services to tasks. IEEE Transactions on Services Computing, 1(3), pp. 141-154.
