AIMS 2012) took place from June 6 to 8 in Luxembourg. SESERV’s coordinator, the University of Zurich, was involved in organizing the conference through the participation of Burkhard Stiller (Publication Chair and TPC member) and Martin Waldburger (Ph.D. Student Workshop Co-chair).
The relevance of SESERV’s work on high-speed accounting was reinforced in particular by the paper “Hardware Acceleration for Measurements in 100 Gb/s Networks” by Viktor Puš (Czech Republic).
Viktor Puš’s work centered on high-speed hardware support for filtering, metering, and monitoring. It made clear that at speeds above 10 Gbit/s no technical solution is currently available that can perform any accounting at all (e.g., at 100 Gbit/s line speed), since existing hardware is barely able to collect all data. Consequently, flow-specific or incentive-based per-user traffic management, as used in practical solutions at higher Internet levels, cannot currently be supported in the backbones of large ISPs.
Andrei Vancea, Ph.D. student at the University of Zurich, presented his paper on “Cooperative Database Caching within Cloud Environments” (co-authored by Guilherme Sperb Machado, Laurent d’Orazio, and Burkhard Stiller) and won one of the two best paper awards at AIMS 2012. Databases typically work in such a way that clients pose queries (in SQL) and servers deliver result sets. Leaving this concept, in principle, untouched, the paper introduces the notion of semantic regions, with database caching in clouds as the application environment. More specifically, a cooperative semantic caching mechanism is developed, implemented, and deployed in a cloud environment. The implemented system, called CoopSC (Cooperative Semantic Caching), supports n-dimensional queries and employs local and remote query rewriting. It features a distributed index for the organization of semantic regions and queries; an organizational unit in this index is called a quad, and regions are indexed in the smallest quads possible. CoopSC addresses the challenge of updates (caching might be confronted with invalid data due to old entries, and inconsistencies might appear when different snapshots are combined) by means of a dedicated algorithm that compares specific “before” and “after” parameters.

CoopSC was deployed in a cloud environment and evaluated in two scenarios, which differ in whether servers or clients were placed in the cloud or run locally. Response time, the amount of data transferred, and the resulting payments (for data transfer) were measured. Results for the scenario in which clients ran in the Rackspace cloud and the server in EmanicsLab show that CoopSC reduces the amount of data transferred considerably and consistently through its semantic caching mechanism. With regard to performance (response time), however, no significant change was found, even though one was expected; these results suggest that Rackspace runs unstably.
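The core idea of semantic caching, i.e., splitting an incoming query into a part answerable from previously cached regions and a remainder that must still be fetched, can be illustrated with a toy sketch. This is a hypothetical simplification: it uses 1-D range queries and local rewriting only, whereas CoopSC handles n-dimensional queries, remote rewriting among cooperating clients, and a distributed index.

```python
class SemanticCache:
    """Toy semantic cache storing results of 1-D range queries as regions."""

    def __init__(self):
        self.regions = []  # list of (lo, hi, rows) already answered

    def rewrite(self, lo, hi):
        """Split the query [lo, hi) into a 'probe' part answered from
        cached regions and 'remainder' ranges to fetch remotely."""
        probe, remainder = [], [(lo, hi)]
        for rlo, rhi, rows in self.regions:
            next_remainder = []
            for qlo, qhi in remainder:
                o_lo, o_hi = max(qlo, rlo), min(qhi, rhi)
                if o_lo >= o_hi:                      # no overlap with region
                    next_remainder.append((qlo, qhi))
                    continue
                probe.append([r for r in rows if o_lo <= r < o_hi])
                if qlo < o_lo:                        # uncovered left part
                    next_remainder.append((qlo, o_lo))
                if o_hi < qhi:                        # uncovered right part
                    next_remainder.append((o_hi, qhi))
            remainder = next_remainder
        return probe, remainder

    def insert(self, lo, hi, rows):
        """Store a freshly fetched region for future queries."""
        self.regions.append((lo, hi, rows))


# Cache [0, 50), then query [30, 70): [30, 50) is served locally,
# only [50, 70) still has to be transferred from the server.
cache = SemanticCache()
cache.insert(0, 50, list(range(0, 50)))
probe, remainder = cache.rewrite(30, 70)
```

The example makes the data-transfer reduction tangible: the overlapping portion of the query never leaves the client, which is exactly the effect observed in the Rackspace measurements.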
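The rule that regions are indexed in the smallest quads possible can also be sketched. The following hypothetical 2-D version descends a quadtree over the unit square, halving the current quad as long as the region still fits entirely into one child; the actual CoopSC index is distributed and not reproduced here.

```python
def smallest_quad(region, max_depth=8):
    """Return (path, quad) for the smallest quad fully containing the
    rectangular region (x_lo, y_lo, x_hi, y_hi) within the unit square;
    'path' lists the (ix, iy) child choices taken from the root."""
    x_lo, y_lo, x_hi, y_hi = region
    qx, qy, size = 0.0, 0.0, 1.0      # current quad: origin and side length
    path = []
    for _ in range(max_depth):
        half = size / 2
        # Does the region fit entirely into a left/right (ix) and
        # bottom/top (iy) child quad?
        ix = 0 if x_hi <= qx + half else (1 if x_lo >= qx + half else None)
        iy = 0 if y_hi <= qy + half else (1 if y_lo >= qy + half else None)
        if ix is None or iy is None:
            break                      # region straddles a split: stop here
        qx, qy, size = qx + ix * half, qy + iy * half, half
        path.append((ix, iy))
    return path, (qx, qy, size)

# A small region near the origin sinks into a small quad, while a region
# crossing the centre of the square stays at the root quad.
```

Indexing each region in its smallest enclosing quad keeps lookups cheap: a query only needs to inspect the quads along its own descent path.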
In the other scenario, using Amazon’s EC2 cloud infrastructure for the server, results were positive with respect to both the amount of data transferred (decreased) and response time. Overall, cooperative semantic caching reduces the amount of data transferred, while performance benefits may only be obtained if the cloud provider runs stably.