Net Neutrality

Strange Famous Forum > Social stuff. Political stuff. KNOWMORE

redball



Joined: 12 May 2006
Posts: 6871
Location: Northern New Jersey

The same groups that lobby against net neutrality lobby against public wifi. The reason why we need a net neutrality law is because without it we'll get a million instances where a non-neutral net will negatively affect the little guy's content for every one instance that it negatively affects a legitimate service. Which becomes easier to codify and control, then? The neutral net with specific exceptions or the non-neutral net with few legitimate instances? I'm betting the neutral net.

If tier-one is hurting so badly then they could simply raise prices for connectivity, which would trickle down across the board. Or we could put more money into their research. Make it a DARPA challenge or something.
Post Tue May 27, 2008 10:41 pm
jakethesnake
guy who cried about wrestling being real


Joined: 03 Feb 2006
Posts: 6311
Location: airstrip one

http://dontstayvirgin.movielol.org/main3.php
Post Wed May 28, 2008 7:14 am
Jascha



Joined: 31 Mar 2005
Posts: 3936
Location: Seoul, SK

jakethesnake wrote:
http://dontstayvirgin.movielol.org/main3.php


haha, that's great.
Post Wed May 28, 2008 7:24 am
barlow



Joined: 30 Jun 2002
Posts: 1100
Location: Leeds, UK

redball wrote:
Eh, that graphic is fairly backward. The current threat to net neutrality is that service providers want to charge websites, not the end users, for preferred routing and bandwidth allocation.



Wrong!! (Well, in the UK at least, and possibly worldwide)
The service providers are looking at charging both.

The UK broadband market based its economics on the "all you can eat buffet" model.

Most customers come in and have what is considered to be a small amount of food.
One in a hundred customers comes in and eats 14 platefuls.
One in a thousand comes in, eats 14 platefuls, and steals the furniture.
The company makes its money from the majority of customers, and has enough food left over to feed the greedy ones.

This has worked fine so far, with only a relatively small number of users running bandwidth intensive applications.
However, there has been a recent surge in TV streaming, led by BBC iPlayer, and now people who previously just checked their email and eBay are regularly downloading 600MB programmes.

This leaves the providers with a choice.
They can deal with the content providers, either to pay the ISP or to invest in content delivery technologies.
Even this doesn't solve the problem. Delivery to the ISP's network will reduce transit/peering, but will not reduce traffic over the core network.
Without going to the DLE (the digital local exchange), the bulk of the cost will still exist.
There is currently a stand off between the ISPs and the service providers, and this doesn't look like it is going anywhere soon.

The second choice is to charge the customer for the products/services/technologies or the bandwidth they use.

As people no longer believe they should pay for their internet access (in the UK you get it free with your TV/landline/mobile phone/bag of crisps), there seems to be no way of getting away from a low/no cost entry level product.

There will inevitably be two models: you have full access to everything and pay for what you consume (à la carte), or you pay a fixed fee and have unlimited access to a subset of the menu (buffet).

The chances of that graph becoming reality are more probable than possible.

When these products launched there was no YouTube, MySpace, Facebook, etc.
Until there are new networks (in most cases years away) the service providers will have to look at how they make the most out of their existing networks whilst pissing off the minimum number of customers.
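The buffet economics above can be put into a toy back-of-envelope calculation. Every number here is hypothetical and purely for illustration, not taken from any ISP's actual figures:

```python
# Toy model of the "all you can eat buffet" broadband economics:
# flat monthly fee, usage-based bandwidth cost, and a small share of
# heavy users who eat most of the food. All numbers are made up.
def monthly_margin(subscribers, fee, cost_per_gb,
                   light_gb, heavy_gb, heavy_share):
    """Revenue minus bandwidth cost when a few users dominate usage."""
    heavy = int(subscribers * heavy_share)
    light = subscribers - heavy
    revenue = subscribers * fee
    usage_gb = light * light_gb + heavy * heavy_gb
    return revenue - usage_gb * cost_per_gb

# 1-in-100 heavy users pulling 50x typical usage
# (email-and-eBay users vs. nightly 600MB streamers)
print(monthly_margin(100_000, fee=20.0, cost_per_gb=0.05,
                     light_gb=5, heavy_gb=250, heavy_share=0.01))
```

Raising `heavy_share` or `heavy_gb` (the iPlayer effect) erodes the margin without any change in revenue, which is exactly the squeeze described.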
Post Wed May 28, 2008 9:12 am
redball



Joined: 12 May 2006
Posts: 6871
Location: Northern New Jersey

barlow wrote:
redball wrote:
Eh, that graphic is fairly backward. The current threat to net neutrality, in the U.S., is that service providers want to charge websites, not the end users, for preferred routing and bandwidth allocation.




There, I fixed it for you. I was taking a U.S.-centered stance on the issue. There are plenty of countries that have far worse neutrality problems than either the U.S. or the U.K. Once you start talking about maintaining neutrality throughout the world, the topic changes drastically, and most countries will have their own problems.

It should be noted that the graphic I was talking about seems targeted at the U.S. I stand behind what I said: this country is not currently facing a threat like that for consumers.
Post Wed May 28, 2008 9:41 am
Mark in Minnesota



Joined: 02 Jan 2004
Posts: 2019
Location: Saint Louis Park, MN

The Register has an interesting article right now covering what the Wall Street Journal is characterizing as a tectonic shift in support/opposition to the net neutrality movement. The Register piece, which I quote below, breaks down Google's approach from a technical perspective.

http://www.theregister.co.uk/2008/12/15/richard_bennett_no_new_neutrality/print.html


Quote:

Network Neutrality, the public policy unicorn that's been the rallying cry for so many on the American left for the last three years, took a body blow on Sunday with the Wall Street Journal's disclosure (http://online.wsj.com/article/SB122929270127905065.html) that the movement's sugar-daddy has been playing both sides of the fence.

The Journal reports that Google "has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content."

Google claims that it’s doing nothing wrong, and predictably accuses the Journal of writing a hyperbolic piece that has the facts all wrong. It's essentially correct. Google is doing nothing that Akamai doesn’t already do, and nothing that the ISPs and carriers don't plan to do to reduce the load that P2P puts on their transit connections.

Caching data close to consumers is sound network engineering practice, beneficial to users and network operators alike because it increases network efficiency. More people are downloading HDTV files from Internet sources these days, and these transactions are highly repetitive. While broadcast TV can deliver a single copy of “Survivor” to millions of viewers at a time, Internet delivery requires millions of distinct file transfers across crowded pipes to accomplish the same end: this is the vaunted end-to-end principle at work.

There’s nothing wrong with Google's proposed arrangement, and quite a lot right with it. The main beneficiary is YouTube, which accounts for some 20 per cent of the Internet’s video traffic and was recently upgraded to a quasi-HD level of service. Taking YouTube off the public Internet and moving it directly to each ISP’s private network frees up bandwidth on the public Internet. Google’s not the only one doing this, and in fact so many companies are escaping the public Internet that researchers who measure Internet traffic at public peering points, such as Andrew Odlyzko, are scratching their heads in wonderment that the traffic they can measure only increases at 50 per cent a year. Researchers who study private network behavior see growth rates closer to 100 per cent per year, and caching systems like Google’s and Akamai’s make this kind of traffic distribution possible.

While there’s nothing to see here of a technical nature, the political impact of this revelation is a study in contrasts.

Cache from Chaos

Rick Whitt, Google's chief lobbyist and spin doctor, was pressed into service Sunday night to deflect the Journal’s claim that the search monopoly has abandoned its commitment to the Neutrality cause, which he did by issuing a rebuttal-by-blog:

"All of Google's colocation agreements with ISPs ... are non-exclusive. ... Also, none of them require (or encourage) that Google traffic be treated with higher priority than other traffic. In contrast, if broadband providers were to leverage their unilateral control over consumers' connections and offer colocation or caching services in an anti-competitive fashion, that would threaten the open Internet and the innovation it enables."

Whitt makes some great points, and as a bonus, some of them are even true. But he’s trying to change the subject. Google is making exactly the kind of deal with ISPs that it has consistently tried to ban in law and regulation. One of the blog posts that Whitt cites in defense of Google’s alleged consistency makes this very clear. The post, titled What Do We Mean By 'Net Neutrality'?, advocates a ban on the following ISP practices:

* Levying surcharges on content providers that are not their retail customers;
* Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content; or
* Building a new "fast lane" online that consigns Internet content and applications to a relatively slow, bandwidth-starved portion of the broadband connection

Google’s co-location agreement violates all three principles if any money changes hands - and the latter two in any circumstance. Placing content close to the consumer raises its delivery priority relative to content housed on the public Internet. This is the case simply because each hop that the content has to make from one router to the next is an opportunity for congestion and loss, the result of which is a slowdown in the rate at which TCP will transmit. While the Google system reduces the load on the public Internet, it pushes Google’s traffic to the head of the delivery queue at the last minute, as a consequence of its relative immunity to loss.

If the caching system didn’t have an advantage over public Internet delivery, there would be no reason to deploy it.
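The TCP point in that passage can be made concrete with the standard Mathis et al. throughput approximation, throughput ≈ MSS / (RTT · √p), where p is the loss probability. The numbers below are hypothetical, chosen only to show how each extra hop compounds loss and drags TCP throughput down relative to a nearby cache:

```python
# Sketch of why a cached copy close to the consumer "wins" over content
# delivered across many hops of the public Internet, using the Mathis
# TCP throughput model. All path parameters here are hypothetical.
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Approximate steady-state TCP throughput (Mathis model), bits/s."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_prob))

def path_loss(hops, per_hop_loss):
    """Combined loss probability across independent hops."""
    return 1 - (1 - per_hop_loss) ** hops

mss = 1460  # bytes per TCP segment
# A cached copy one hop away vs. the same content twelve hops away,
# assuming 0.05% loss per hop and a proportionally longer RTT.
for hops, rtt in [(1, 0.005), (12, 0.060)]:
    p = path_loss(hops, 0.0005)
    mbps = tcp_throughput_bps(mss, rtt, p) / 1e6
    print(f"{hops:2d} hops: {mbps:.1f} Mbit/s")
```

Both the lower RTT and the lower cumulative loss work in the cache's favor, which is the "relative immunity to loss" the article describes.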


This reminds me of a conversation I had with an Akamai sales rep earlier this year about a technology they offer that would let me route VPN and application traffic between my data centers and my remote offices over their edge network instead of the public Internet, because their edge network (using Akamai's own proprietary traffic-shaping algorithms) can often outperform the public Internet itself. It's an approach similar to the one described in the paper Wide-Scale Data Stream Management by Dionysios Logothetis and Kenneth Yocum (UCSD Department of Computer Science & Center for Networked Systems), presented at USENIX 2008 earlier this year.

The Register article makes a really telling point with the last line -- Google is only doing this because there's value in doing it. At first I wondered if there wasn't a net neutrality concern there: Are public Internet providers intentionally underinvesting in the core network in order to create these kinds of revenue opportunities at the edge?

But the more I think about it, the more I see this question differently: it's possible that building this kind of edge infrastructure is simply cheaper and more performant than upgrading the core network would be, and that the approach would offer a competitive advantage even against a thoroughly underutilized backbone. If the strategy is only viable because the backbone is underprovisioned or misconfigured, there really is a net neutrality concern; but if it is viable on a fully healthy backbone, this isn't a net neutrality issue, it's just clever engineering.

As always with discussions of this issue, more transparency would help us to better answer these questions.
Post Mon Dec 15, 2008 2:38 pm
Mark in Minnesota



Joined: 02 Jan 2004
Posts: 2019
Location: Saint Louis Park, MN

Google posted a response to the Wall Street Journal article as well:
http://feedproxy.google.com/~r/blogspot/MKuf/~3/e-WRaNEGluU/net-neutrality-and-benefits-of-caching.html

The Register article linked this but I hadn't read it yet. Interesting response, and basically makes it clear that the people behind the WSJ article didn't know what they were talking about.
Post Mon Dec 15, 2008 11:52 pm
albeeyap2



Joined: 10 Jul 2004
Posts: 1258
Location: Inland Empire CA

FCC loses key ruling on Internet 'neutrality'

http://news.yahoo.com/s/ap/20100406/ap_on_hi_te/us_tec_internet_rules_11
Post Wed Apr 07, 2010 12:03 pm
Mark in Minnesota



Joined: 02 Jan 2004
Posts: 2019
Location: Saint Louis Park, MN

This was honestly a ruling they needed to lose. Their intentions were good, but the direction they're heading isn't sustainable unless their policy goals are grounded in a more substantial statute.
Post Wed Apr 07, 2010 12:20 pm

All times are GMT - 6 Hours.