Author archive: tmb

tmb.nginet.de

The Future Internet Week in Ghent – From the G16 via the Industry Group, FIRA, and EFII to EFIA

This year closes with the Future Internet Assembly as part of the Ghent Future Internet Week. Once more the event gathered the majority of the European research community and its projects for numerous sessions, workshops, exhibitions, and much more.

The presence of the *usual suspects* was harnessed to run an ad-hoc meeting between FIRA (Future Internet Research Alliance) and EFII (European Future Internet Initiative) members. As chairman of FIRA I had the pleasure of presenting FIRA's vision and ambition and of engaging in discussions with EFII members in order to evaluate a potential collaboration between the two organizations. Bottom line: this was and still is the objective of both communities, despite a number of differences to be sorted out. This is good news.

The meeting reinforced the conviction that the complexity of such a joint venture must not be underestimated, and that, after a poorly managed first attempt, only a sound framework with clearly defined rules, roles, competencies, objectives, etc. will create the trust needed for moving forward. This was proposed, and it now remains to be seen whether a reasonable agreement can be reached that represents the very different stakeholders and their objectives.

In the end, the European research community seeks a "Future Internet" that is fundamentally different and enables Europe to take the lead in future Internet-based economies. It is commonly agreed that such a Future Internet will differ from the current one through consolidation and extensive collaboration across the different sectors, ICT and non-ICT, and from a technology, business, and policy perspective.

Such a vision implies not only a technology transformation but a transformation of mindsets as well. The Internet keeps proving to favor those who accept its openness and global nature over those trying to sustain protective approaches. Successful will be those who endorse and assimilate the Internet as an opportunity to collaborate – isn't that what the Internet is all about?

The so-called G16, then the Industry Group, nowadays EFII, as well as FIRA are contemporary witnesses of this change and its pains. For the past two years the two organizations have struggled to find common ground, for a multitude of reasons. Still there is a perspective, not least through the implementation of the FI-PPP, which was the original motivation for the G16 and then EFII, as well as later FIRA. The lesson learned is that many players were finally able to put their heads together and work for the greater good by setting "local" agendas aside (to some extent). Will it eventually be successful? Spring 2011 will tell; as usual, writing/submitting proposals is a tough job – but at best the origin of a real challenge.

One must not ignore the human dimension in all of this. Sincere consideration of the above leads to a change that has to happen at the individual level. FIRA and EFII have proven this. Today I asked to stand down as chairman of FIRA, given that the original expectations of the founding members have been reasonably met. Commitment to these was always a high personal priority for me. This was confirmed, and the implementation is already under way. If this exercise is repeated, I do believe it could eventually mark the birth of EFIA, the European Future Internet Alliance.

IEEE Globecom 2010

The IEEE GLOBAL COMMUNICATIONS CONFERENCE (IEEE GLOBECOM) is one of the flagship conferences of the IEEE Communications Society and ranks high on my list of annual conference must-attends.

This year's edition, IEEE Globecom 2010, is held in Miami, Florida, and fits seamlessly into the hall of fame of this conference series. With 2,500 attendees on-site it sets a new record, and as usual it features a very comprehensive program with a good number of high-profile speakers from business as well as academia.

Keynote by Yoshihiro Obata, CTO of eAccess Ltd in Japan
A very interesting talk, excellent presentation with a very good mix of industry/company background/insight and technological/research challenges. This is the style of talks you look for at IEEE Globecom.

Here is what Mr Obata had to tell:

– Traditionally, Telco services were controlled by operators (e.g. SMS). With IP services, control moves towards devices/applications
– And terminals are no longer provided by the operator; there is huge variety in devices and competition is high (e.g. Apple vs. Google)

– Smartphones turn signaling (traffic) into a huge issue for operators. As control has moved from the network to the devices, operators can't control/police users effectively. This essentially prevents the introduction of M2M

– The highest expenses are still in the backbone; eAccess's flat-rate offers were only possible because they own a backbone. Especially in wireless networks it is the backbone cost that matters; base stations are comparatively inexpensive

– The volume and characteristics of traffic from corporate users do not cause trouble, i.e. corporate users behave as if they follow a certain (manageable) pattern (e.g. peak traffic).

– Mobile vs. fixed: The peak (busy hour) in mobile networks is broad (TMB: statistically stationary), whereas traffic in fixed networks (i.e. DSL) shows very sharp/short peaks (non-stationary) -> TMB: this has consequences for admission control!

– Reasoning: mobile terminals/services are simpler to use, by potentially more single and younger users, who stay attached to their terminals for longer periods. In contrast, Internet services over fixed access (cable, DSL, etc.) require a greater effort to start, in particular the terminal (PC, laptop, etc.), and hence users start, use, and shut down.

– On traffic patterns: 300K (2-5%) users take 50% of the capacity with peer-to-peer traffic; still no issue for state-of-the-art technology – annoying, though, but the network needs to be sized for full capacity anyway.

– On business in general: telcos need to adapt to change, as meanwhile nearly 30% of user spending goes to the terminal, which takes a major part of the overall budget

– A new service in Japan is "Pocket WiFi", which allows terminals to concurrently access the network with one subscription. This leaves mobile operators with three options – hotspots, micro cells, pocket WiFi – and it is still unclear which will predominate

Kevin Fall (Intel) WSN Forum
– Observations on WSNs: mostly concerned with power consumption; they use essentially the same network architecture as any other devices; people mostly use them for trivial scenarios (room-temperature monitoring)

– Programming WSNs as ensembles instead can be a basis for innovative scenarios

– Issues: disconnection, addressing (location/ID, address space)

– Some ideas/solutions: DTN (storage/caching), use URIs for addressing/naming anything

– Info-networking (content-centric or data-centric networking) puts data/information at the center of design, architecture, and operations instead of hosts

Edward Knightly (Rice Uni) WSN Forum
Edward, who gave a keynote at my BWA workshop in 2008, talked about "sensing" in general and took WSNs into the vehicular, smart-grid, and eHealth domains. Nothing really new; some of the slides have indeed been known for a while (eHealth). What was new, though, is that he is promoting "Visible Light Communication" as a technology for vehicular communications.

H. Atarashi (NTT DOCOMO) 4G Operator Perspectives
– DOCOMO to deploy LTE commercially in Dec 2010, initially over legacy 3G infrastructure; terminals will support dual mode

– 3 deployment scenarios, remote-radio-head, cabinet-type, indoor

– Remote radio head: base stations (eNodeB) are deployed somewhere and connect over fiber to the …

– ~1000BS by end of 2010, 5000 by end of 2011, 15000 by end of 2012 (40% POP coverage)

China Mobile
– 564m subscribers, ~500,000 GSM base stations
– LTE deployment in 2011; several trials conducted with several manufacturers involved (terminal + network); TD-LTE meets all expectations

COMCAST IPv6 Forum
– CDNs are starting migration strategies this year (2011)
– Mind that this involves many aspects, way beyond the network, e.g. OS, Apps, OSS tools, CRM, Accounting, BSS in general
– Waiting is a risk: v6 introduction takes time; Google needed 3 years
– And there will be more NAT to come in the meantime
– But 90% of v6-readiness can be achieved without turning v6 on!
– How to save cost? Put v6-readiness into your product strategy (TMB: that's easily said ..) and mind that a customer may need to turn NAT on in order to access your content
– But isn't v6 broken? No, that's mostly an issue on the consumer side; and mind, ISP-NAT does not scale and adds complexity/unwanted control
– Today's challenge with v6 is not so much technology; it's training of field personnel, sales, support, etc.
– Comcast is virtually v6 ready
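Some of this readiness can actually be probed from a shell without turning v6 on for one's own service. A minimal sketch for a Linux host – my illustration, not Comcast's checklist, and the hostname in the example is a placeholder:

```shell
# Quick IPv6 plumbing checks; they only probe the local stack, DNS,
# and an outbound v6 path, so they work before your own service has v6.
has_global_v6() { ip -6 addr show scope global 2>/dev/null | grep -q inet6; }
resolves_aaaa() { getent ahostsv6 "$1" >/dev/null 2>&1; }                 # AAAA lookup works?
reaches_v6()    { curl -6 -sf -o /dev/null --max-time 5 "https://$1/"; }  # end-to-end v6 path?

# Example (uncomment to run against a real host):
# has_global_v6 && resolves_aaaa www.example.com && reaches_v6 www.example.com \
#   && echo "v6 path looks good"
```

The point matches the slide: most of the checklist (addressing, DNS, tooling) can be exercised long before the production service itself speaks v6.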

Nokia IPv6 Forum
– Symbian has been v6-ready for quite a while
– NAT versus v6: keep-alive versus idle-but-connected -> NAT drains your mobile's battery
– Operators will not switch on Voice over LTE in the near future
– More details on NAT: keep-alives commonly at 40 s – 5 min intervals can decrease your standby time from days to hours; many different/incompatible tunneling mechanisms; very different NATs (home, office, hotspots, ISP-NAT, etc.) in terms of traversal mechanisms; frequently poor-quality code; mind multi-level NAT (cascades)
– T-Mobile and Nokia run a v6 trial in the USA; Nokia supports dual-stack v6 over cellular+WiFi on the N900.
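The "days to hours" keep-alive claim can be made concrete with a back-of-the-envelope sketch; all constants below are illustrative assumptions of mine, not Nokia's figures:

```shell
# Rough standby-time estimate under NAT keep-alives.
# All constants are illustrative assumptions, not measured values.
standby_hours() {
  # $1 = keep-alive interval in seconds
  awk -v interval="$1" 'BEGIN {
    battery_mah = 1200   # battery capacity (assumption)
    idle_ma     = 3      # average current, radio dormant (assumption)
    wake_ma     = 200    # current while the radio is up (assumption)
    wake_s      = 5      # radio-on time per keep-alive (assumption)
    avg_ma = idle_ma + wake_ma * wake_s / interval
    printf "%.0f", battery_mah / avg_ma
  }'
}

echo "rare keep-alives (1/day): $(standby_hours 86400) h"   # several days of standby
echo "5 min interval:           $(standby_hours 300) h"
echo "40 s interval:            $(standby_hours 40) h"      # days shrink to hours
```

Even with generous assumptions, a 40-second keep-alive keeps the radio duty-cycled often enough that the average current is dominated by wake-ups, which is exactly the effect described above.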

Some random notes
– JND theory, "just noticeable distortion", a widely used theory for (subjective) picture-quality evaluation

– Wireless network usage is not uniform; one practical example shows 15% of the cells generating 50% of total traffic

– Most of the traffic in the future is expected to come from indoor environments

Comments on the Future Internet Public-Private Partnership (FI PPP)

Roughly two years of preparation – from the idea, via program design, to the call for proposals for the "Future Internet Public-Private Partnership". The deadline passed last Thursday, 2nd of December 2010. It remains to be seen whether the enormous investments eventually pay off.

I believe in the European definition of a "Holistic Future Internet". It is a rather broad concept: "Networks of the Future", "The Internet of Things", "The Internet of Services", "Security", "Cloud Computing", and "Media & Content". But it is not the range of topics alone; what makes the difference is that these domains are not considered in isolation. Instead, this vision of the "Future Internet" is a consolidation of these domains into one global Internet-scale platform. The objective is to turn the Internet into an open eco-system with low entry barriers and support for innovation in infrastructure as well as application domains. And this far beyond the ICT sector.

This is very different from US-style "Future Internet" research, which is primarily focused on Internet communication architectures (cf. NETS, FIND, GENI). While the European vision may appear more complete and more universal, the US definition is more concise and warrants potentially more streamlined progress towards the ultimate objective, the next Internet. It is hard to quantify in terms of investment (social and monetary capital), but one predominant obstacle in European Future Internet research is definitely the "prerogative of interpretation".

The FI PPP is a proper tool with a reasonable vision and capable provisions. The ultimate challenge, however, is to get the idea of an "open platform" to penetrate beyond technology in order to gain the support of business strategists. The past has shown that technology alone does not suffice: 25,583 IEEE papers on QoS versus XXX deployments?

In any case, the FI PPP preparations have already achieved one significant result, namely the ICT sector entering a (painful) process of collaboration towards this idea of an open Internet-scale eco-system. This process is still at the very beginning and who knows whether a beneficial continuation will result. Yet an ambitious platform is there, and enough evidence for significant economic potential should be a good motivation.

In the end, "gain is frequently related to risks taken", and Europe is commonly perceived as being too conservative, especially when compared to the US. The FI PPP's €300M investment proves otherwise.

NETWORKING INNOVATIONS OVER VIRTUALIZED INFRASTRUCTURES (NOVI)

NOVI is a new research project that aims at developing a federation of "Experimental Facilities". This is indeed needed, looking at the very many "test beds, pilot sites, experimental infrastructures and facilities" that have been built with public funds and co-innovation research all over Europe. Without knowing the numbers, there must be hundreds if not thousands of such sites that allow experimenting with network architectures and, more recently, also with compute and storage systems.

What remains to be seen is whether a cohesive/coherent federation will lead to an upsurge in usage, especially by industrial research organisations. From this perspective, the issue is less scale and simplicity than the legal aspects of running code on nodes/hosts operated by someone else in the "experimental cloud".

Some time ago I had a closer look at PlanetLab, perhaps the pioneer in this domain, and the concepts behind it truly appeal. Technically, the conceptual proximity to Infrastructure-as-a-Service offerings, like those by Amazon or Akamai, is intriguing. And usage statistics indicate significant interest by the community. The biggest disadvantage in my view was/is the lack of support for reproducibility of experiments, as resources are not explicitly granted and isolated. Essentially, one can conduct an experiment at large scale and under realistic conditions, but each and every run remains unique and hence (strictly speaking) incomparable.

This is supposed to be different for GENI, which is a pretty large-scale infrastructure for a "Future Internet" (communication architecture). Some claim it will sooner or later – once the Future Internet architecture is identified – turn into the Future Internet, just like the ARPA/DARPA net did originally.

So this is something worth keeping an eye on.

But along with Cloud Computing (Infrastructure-as-a-Service) offerings, the IT domain is also looking more into this area, and OpenCirrus is a perfect example. It remains a somewhat semi-public resource but allows experimenting with IaaS down to the virtualization layers (and slightly beyond).

It's hard to keep track of this domain, indeed. In Europe there is a whole research theme/community dedicated to it – Future Internet Research & Experimentation – and the need for experimentation seems to be there. Actually, I know too little about actual usage statistics, and that is something that has called experimental facilities into question in the past.

But now I have been invited to join the advisory board of the NOVI project, and I am truly looking forward to gaining deeper insight into this domain.

The GEYSERS Project

The GEYSERS project met in Zurich for its first General Assembly. One year on track, good progress, first results.

GEYSERS is a project worth keeping an eye on. A strong technical vision aims at bringing together the IT and Telco worlds with a clean architecture for "Telco+IT" fulfillment (Connectivity, Storage, Computing in a Service-oriented Design).

At first sight this is not fundamentally new. Yet the GEYSERS definition goes beyond pure Telco-based (GMPLS) provisioning with some (grid-like) IT resources at the edges plus some Web-Service interfaces.

By defining an SML layer based on Service-oriented Infrastructure concepts, IT standards, and SLA-based service composition and orchestration, the GEYSERS reference architecture turns into a complete Infrastructure-as-a-Service framework for public/private Cloud Computing with an IT northbound / Telco southbound interface. All this is based on accepted concepts and the respective open standards in the IT and Telco domains.

For more details, check the architecture reference model on the FP7 GEYSERS website and the technical specs at GEYSERS Tech-Specs.

Do not miss the GEYSERS Video!

Mobile Cloud Computing

Is there something like "Mobile Cloud Computing"? (A question I have been after since early 2009.)

A quick Internet search provides evidence that MCC might indeed get away from being a "bwc" (buzzword combination) and turn into something substantial.

Fundamental Reflections on MCC
– MCC basic elements: Mobile Device, Mobile Network, Cloud Computing (IaaS/PaaS), Cloud Service (SaaS)
– Mobile Devices: The dominant share will remain low-end, with very limited resources
– Mobile Networks: Shannon/Nyquist, channel characteristics/impairments
– Cloud Computing: Depends on communications, provides virtually unlimited resources on-demand, …
– Users look for something that may be called „on-service, on-demand, on-(any)-device“

A Few Resources
Mobile Cloud Computing Demo
ABI Research : Enterprise Mobile Cloud Computing
ABI Research: Mobile Cloud Applications

In Quest of Online / Cloud Storage

[Update 26th of December]
Today I found the solution: Use Dropbox without Gnome. Special thanks to the author of that post.

The tool is trivial to install and very straightforward to use – just excellent. Here is a copy of the instructions:

1. Download the closed source Dropbox Linux client from http://www.getdropbox.com/download?plat=lnx.x86 (x86_64 for 64 bit)
2. Extract the contents and you should get a .dropbox-dist folder out of the archive. Move the folder to $HOME
3. Run ~/.dropbox-dist/dropboxd.
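The three steps above can also be scripted; a minimal sketch assuming a POSIX shell, wget, and tar, using the download URL from the instructions:

```shell
# Headless Dropbox install: download, unpack to $HOME, start the daemon.
# pick_platform maps `uname -m` output to the download suffix used above.
pick_platform() {
  case "$1" in
    x86_64) echo "lnx.x86_64" ;;   # 64-bit build
    *)      echo "lnx.x86"    ;;   # 32-bit default
  esac
}

install_dropbox() {
  url="http://www.getdropbox.com/download?plat=$(pick_platform "$(uname -m)")"
  wget -O - "$url" | tar xzf - -C "$HOME"   # step 2: leaves ~/.dropbox-dist
  "$HOME/.dropbox-dist/dropboxd" &          # step 3: run the daemon
}

# install_dropbox   # uncomment to actually run
```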

In the meantime I got SuSE+WebDAV+Dolphin working for MyDrive.ch. Only rather small files are supported, though. I didn't figure out the exact size limit, mostly because I prefer a truly "public" folder and MyDrive.ch supports only a "guest user" (you would need to share this login).
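For reference, WebDAV uploads can also be driven straight from the shell with curl, independent of Dolphin. A sketch – the server URL and login below are placeholders, not MyDrive's actual endpoint:

```shell
# Upload a local file into a WebDAV collection via HTTP PUT.
# remote_url appends the file's basename to the collection URL.
remote_url() { printf '%s%s\n' "$2" "$(basename "$1")"; }

webdav_put() {
  # $1 = local file, $2 = WebDAV collection URL (with trailing slash), $3 = user:pass
  curl -f -u "$3" -T "$1" "$(remote_url "$1" "$2")"
}

# Example (placeholder host and credentials):
# webdav_put notes.txt https://webdav.example.ch/public/ guest:secret
```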

[Initial post, 31st of October]
My web host asks for more money for better service. Fine, business as usual.

But I just don't need a whole bundle of services. What I need is more online storage. These are the results of some web research.

A few requirements
1 – Publicly accessible folder
2 – WebDav, FTP, or whatever tool for convenient access (upload) from Linux (KDE, Suse)
3 – 1GB+ storage
4 – Free

Dropbox
OK: 1, 3, 4, Partially OK: 2 (requires the Nautilus file manager but I prefer Dolphin, which has native WebDAV and split-view support), popular service based on Amazon AWS

Google Docs
OK: 1, 3, 4, Not OK: 2, Well-known integrated services, only one GB free

Gmail Drive
OK: 3, 4, Not OK: 1, 2, Well-known integrated service, no public access though and subject to all the mail analysis by Google (I guess that's the same for Google Docs)

Box Net
OK: 1, 3, 4, Not OK: 2, Online storage pioneer

MyDrive
OK: 2, 3, 4, Not OK: 1, Nice Swiss-native service

SMEStorage
OK: 1, 4, Not OK: 2, 3, No closer look as the storage is tiny

MS SkyDrive
OK: 1, 3, 4, Partially OK: 2, Cryptic configuration under Linux, requires an MS-compatible tool to extract WebDAV addresses

Intel European Research and Innovation Conference (ERIC) 2010 (Braunschweig)

I had the pleasure of speaking about „European Future Internet Research“ at the ERIC 2010 in Braunschweig.

With much regret I arrived late and had to leave early, so I could only attend the session I was speaking in, "Digital Europe / Open Innovation". But the session was once more a great experience.

Dr. Eddie O'Connor, Founder and Chief Executive of Mainstream Renewable Power and an industry pioneer and veteran with 30 years of experience, delivered an eye-opening speech about renewable energy (offshore wind farming) in Europe and elsewhere. The sheer numbers were stunning to me (yes, we know that we are running out of energy / fossil fuel), but one has to see some figures to really grasp how fast. A few takeaways:
+ Onshore windfarms approach their capacity limits simply because appropriate locations are getting scarce.
+ Compared with the growth of the global population, traditional energy generation will fall short in the very near future – Eddie promotes 50% wind energy by 2050
+ But for that a "super grid" is needed to connect the many offshore farms required, which implies many new technologies for transporting electricity over large distances (this can only happen in DC mode)

For more on that, see Eddie's blog and Mainstream Renewable Power's site (check the company's fund-raising track record!)