FG2 report

The second focus group at the SESERV Athens Workshop involved a discussion of content and service delivery architectures, with an emphasis on information-centric technologies. There were fourteen participants in all, of whom eight took an active part in the focus group discussion; they took the following stakeholder roles:

Stakeholder role (term used):
Technology maker (FG2A)
Access Network Provider (FG2B)
End-user (FG2C)
Access Network Provider (FG2D)
Inter-connectivity Provider (FG2E)
Content Provider (FG2F)
Inter-connectivity Provider (FG2G)
Inter-connectivity Provider (FG2H)

The session was moderated by a SESERV partner, who was briefed to adopt a middle ground: encouraging discussion and maintaining focus without too much interruption. The focus group began with two presentations: "Value Network Configurations (VNC) in Information-Centric Networking (ICN)" by Mr. Tapio Leva (Aalto University), representing the FP7 SAIL project, and "Caching and Mobility Support in Publish/Subscribe Internet (PSI)" by Prof. George Xylomenos (Athens University of Economics and Business), representing the FP7 PURSUIT project. The first presentation described the VNC methodology and its application to FI ICN scenarios, preceded by a brief overview of the SAIL project and its architecture. The second presentation described the publish/subscribe architecture specified within the FP7 PURSUIT project, its support for caching and replication (three use cases demonstrating the benefits of their deployment were discussed: off-path caching, on-path caching and content replication), and its support for mobility in the context of ICN. The discussions covered a range of issues associated with how content and services can be delivered while attempting to maintain, or at least consider, QoS and QoE. From the start, it was clear that pricing for the transmission of data is an issue, and one which is typically resolved by the ISP with little regard to QoS or QoE for the end-user (the consumer of the content):

FG2F: […] even if the ISP doesn't want to restrict, suppose he has two different providers for (the) CNN (channel), the local rendezvous (owned by the ISP) can choose the one that is cheaper for him, who at some point would not be the best for the end-user. So, there you have a tussle between what is best for the user and what is best for the ISP.


In fact, the ISP would appear to have the upper hand in all of this, and to be at liberty to set prices at whatever level they wish. Ultimately, it is a question of maximising profit, not of improving service; there is no consideration for the end-user at all.

FG2Mod: […] So, let us try to think the tussles that may arise between the access network provider and the interconnectivity provider. I mean, the access network provider does all that in order to reduce his inter-domain traffic, and reduce respectively the interconnection costs. How would an interconnectivity provider respond to that? Because he loses revenues. Could he enter the market and deploy a CDN of his own? What would you do if you were an interconnectivity provider?

FG2F: Maybe increase the price per gigabyte!

FG2H: The way this is setup, this is the ISPs dream. The ISPs control everything and you (the end-user or the content provider) have a huge switching cost, if you are not multi-homed. In this specific setup, the interconnectivity provider is much weaker.. this is an ISP's dream

FG2Mod: An access ISP's dream, because for transit provider..  he only transfers the bytes , and he might go in a price war with another transit provider..


Note in this extract how the moderator attempts to encourage discussion with the suggestion that the interconnectivity provider might resolve a potential tussle with the access network provider by deploying a CDN of their own. This is not taken up by the participants representing the Content Provider (FG2F) and the Inter-connectivity Provider (FG2H), who instead opt for pricing action only. Indeed, they recognise the advantageous position of the ISP; in effect, the tussle is unbalanced, with more power resting with the ISP.

Pricing is therefore a significant issue for the ISPs involved in content and service delivery. Their concern, though, is not for the consumers of the content and services, but rather to maintain their own competitive position. What is more, and in contrast with the keynote speeches at the workshop, there was no consideration of the amount of traffic, whether it could be deemed a problem, or whether traffic shaping might be used. The ISP may therefore choose to adjust their prices in response to other players' actions.

A different issue, though, concerns what actions different stakeholders might take if they feel threatened in respect of their own business and markets.

FG2H: Some CDNs are also doing [this]. They deploy their own cable and fiber.

FG2B: The latest reports show also that there is also much traffic crossing tier-1s which is not created by the content providers.

FG2H: (Interconnection providers will probably offer) CDN interconnection, (they will comprise) higher level CDN providers.

FG2B: You (the interconnectivity providers) have to build your own CDNs.


Faced with quasi-arbitrary pricing levels from carriers within the network, which might be understood in terms of the pricing actions described above, a CDN may decide to go it alone, install its own infrastructure, and effectively become carrier as well as content provider. Under those circumstances, according to FG2B (in the Access Network Provider role), such a move could be countered by the carrier developing its own CDN capabilities. The original control tussle is thus answered by developing competing capabilities on either side: the CDN "deploy[s] […] fiber", or the network develops CDN services of its own, and the contention shifts to a purely economic issue of competition. Interestingly, the comment that recent reports suggest much of the traffic crossing Tier-1 carriers is not created by content providers is not picked up or developed. The impact of increasing traffic was discussed several times during the keynote speeches; the stakeholders here, however, do not seem to regard it as a major issue, even in relation to content.


The contentions between the content provider and the carrier network discussed so far are confined to issues within the network itself. Solutions are based on punitive pricing or, as just explored, on the CDN and the carrier encroaching on each other's core business to develop the capabilities each lacks vis-à-vis the other player. The focus group moderator tried to explore other ramifications, specifically for the end-user.


FG2Mod: This situation can be lead to a 'walled garden', because the ANP controls the infrastructure, the caches, the rendezvous network, everything.. so he can redirect any request of the end-user, any subscription to his own caches.


If network players start to impose control over content management and not just delivery, the moderator suggests, this leads to a walled garden for the end-user: they are restricted in how they consume content. But the implications run further, especially in respect of the overall architecture.


FG2F: Can this lead to a restriction for the end-user regarding the content accessibility.. or does he control the Quality of Experience, or both?

FG2C: I think also that the copies (of content) might not updated by the ISP on a regular basis, or too late, and the end-user might see outdated content and be unsatisfied with that.


The location of caches becomes significant to the consumer in terms of QoE: if the carrier networks (the ANP or ISP) deploy their own caches, then surely this is a good idea both for the carrier (who then assumes control of how and when content is transmitted) and for the consumer (who receives content more quickly, assuming the cache is located close to them). As the stakeholders (FG2F and FG2C) point out, however, this is not necessarily the case: if the ISP or ANP does not update content at appropriate intervals – and why would they? – the end-user could find themselves with outdated content and a sense of dissatisfaction.


So in the context of cache location, the management of the content in that cache becomes significant: implementing a local cache not only affects traffic going through an interconnectivity provider

FG2D: I think […] the idea is the local optimization; the idea is not the restriction of the end-user to get some content. He can get the content from the local cache, if it is available there; if not, he gets it from the original source. I am referring to your (FG2A) comment here.. In one of the configurations, we show that the access network provider takes all of the roles; he takes also the content provider role and provides IP-TV from local servers and optimizes a local network only for this content. Then the interconnectivity role is not taken, there is no interconnectivity provider here; this takes place only inside an ISP.


but this can potentially lead to other tussles, with the ISP doing what is best for them rather than the end-user who expects to receive the best QoE and QoS (as for IP-television):


FG2F: Yes, but even if the ISP doesn't want to restrict, suppose he has two different providers for (the) CNN (channel), the local rendezvous (owned by the ISP) can choose the one that is cheaper for him, who at some point would not be the best for the end-user. So, there you have a tussle between what is best for the user and what is best for the ISP.


Ultimately, and thinking solely of what is best for the consumer of the content, a local cache in the ISP may well not optimise performance unless it is properly managed and kept up to date. In trying to reduce the costs it pays to the interconnectivity provider, the ISP may not be serving its customers as well as it should.


FG2D: So, you mean that the local cache may not be the best for the end-user; maybe the CDN cache is the best.

FG2F: Or, the ISP may have in his local cache a version other than that of the CDN cache, simpler, with less features, or video not in fine coding.


In summarising the different aspects of cache location, the FG2 moderator identifies the main contention: if, to keep transit costs down, the ISP chooses to cache content locally, then they should also maintain the currency of that cache if the consumer's interests are to be served as well.

FG2F: Of course, you have an advantage if you have your caches close to the end-user.

FG2B: This is the last asset of the ISPs; the last assets of them are that they know the location of the users, they have DNS, and they can ... This is the last frontier.

FG2D: This is also why we have there as a separate role the location of the caches; because it has some value.

FG2Mod: […] FG2D has identified some tussles. One that I remember is the freshness of the content; the content provider has an incentive to always provide the most updated content to the end-users, but that implies cost increase for the ISP, because of the transit traffic.


Whether the content is properly managed and kept current is only one part of the story, however. As mentioned in passing already, there is another important factor: name resolution.

FG2D: […] So, the name resolution role is a central role; all of the stakeholders have interests related to that.

It is one thing for an ISP to have a local content cache, but they need to know where the consumer is located as well. Depending on where the user is, there may be no benefit to a local cache anyway; the CDN might just as well offer its own delivery network. But even then, the ISP retains an important source of control: they can locate their users.


FG2B: There are explicit Internet exchange points; Google can find everybody everywhere, so you do not need the interconnectivity provider.

FG2H: Some CDNs are also doing that. They deploy their own cable and fiber.

FG2B: The latest reports show also that there is also much traffic crossing tier-1s which is not created by the content providers.


FG2B: This is the last asset of the ISPs; the last assets of them are that they know the location of the users, they have DNS, and they can ...

Discussion in FG2 was therefore lively, and in general the stakeholders were prepared to engage, at least in exploring the issues. There was clear evidence that the various carriers will often seek to use pricing to their own advantage. The contention between carrier and CDN, however, may spill over in ways that do not lead to a wholly satisfactory resolution. On the one hand, the CDN and the ISP may go head to head and seek to encroach on each other's business, the CDN by installing its own delivery network, the ISP by caching content locally. This is not necessarily the optimal solution for the content consumer, though, unless the local content cache is properly managed and kept current. In the final analysis, whether the cache is local or the content comes directly from the CDN, considerable power resides with whoever controls DNS: it identifies where the consumers are located, and this needs to be taken into account.

At the same time, however, there is a fundamental issue. In trying to accommodate different content and service delivery scenarios, with CDNs and ISPs alternately competing as well as co-operating to deliver the best solutions either for themselves or their end-user customers, the stakeholders focus only on maintaining quality of content and reducing transit costs, respectively; what they miss is how the big interconnectivity providers respond. Moving traffic is now just a commodity:

FG2D: That's true, but currently in the Internet ... tier-1 providers, interconnectivity providers have business cases going down.  Pure connectivity is a commodity service, and what actually they pay is some kind of Quality of Service on content delivery, and then, interconnectivity providers have attacked this, they have started to provide CDN services.


In response, the interconnectivity providers start to develop their own services:


[FG2D continues] So, you (an interconnectivity provider) can provide Akamai CDN services and also have the interconnectivity network. For example, AT&T have their own CDN service. This is one way. But in just a basic interconnectivity network there isn't so much business.


This move may be spurred on by the caching architectures themselves, inasmuch as they reduce the already constrained revenue from interconnectivity services. But there seems to be a more challenging issue: what is the incentive for Tier-1 providers to engage at all?


[FG2D continues] All these caching architectures, they probably decrease the value of pure interconnectivity. This is nothing new for interconnectivity providers, probably they are not willing to oppose this, because they can't do much. They just have to find a way to also move to this scenario.

FG2H: They cannot block it. They can[not] avoid deploying anything; this is the kind of problem we have today. There are no incentives for tier-1s to participate in this content exchange activity. They are in sleeping mode.. it is not obvious why they should upgrade the routers. There is no business case for them to do that, so maybe they will lose out again.


They appear to be significant, if not essential, stakeholders, especially for long-distance connectivity and particularly in light of the problems of cache location and its dependence on DNS services. At the same time, however, there seems to be no incentive for them to engage. Involving all stakeholders is fundamentally important; incentivising them to participate fully, however, is another issue. It may mean that tussle analysis needs an additional step: not only to identify stakeholders but also to motivate them.