Discussion:
[e2e] Protocols breaking the end-to-end argument
Jaime Mateos
2009-10-23 10:26:54 UTC
Permalink
Hi,
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end principle and
the context where they are used. So far the only one I've found is TCP
PEP that seems to be in use in satellite networks (Internetworking and
computing over satellite networks, Yongguang Zhang -
http://books.google.ie/books?id=3pkI6OWUsRAC&pg=PA170&lpg=PA170&dq=criticisms+of+end+to+end+principle&source=bl&ots=OVbMYc5Iso&sig=Tir1Xi4vxRG5HG2ieGCgl2STWcA&hl=en&ei=vL7TSor9Bs2z4QbW8_H_Ag&sa=X&oi=book_result&ct=result&resnum=5&ved=0CBQQ6AEwBA#v=onepage&q=criticisms%20of%20end%20to%20end%20principle&f=false)


There also seems to be a number of research projects such as Split TCP
and LTP-T that I've come across. I'm also interested in these but not to
the same degree as in protocols that are currently in use today.

Thanks,
Jaime Mateos
Jeroen Massar
2009-10-23 10:58:59 UTC
Permalink
Post by Jaime Mateos
Hi,
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end principle and
the context where they are used.
Everything that needs an NAT helper, thus any protocol that embeds
addresses or ports, thus most games, everything that has a listening
port where the listening port is not on a public IP or firewalled away.

Greets,
Jeroen
rick jones
2009-10-24 01:49:12 UTC
Permalink
Post by Jeroen Massar
Post by Jaime Mateos
Hi,
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end
principle and
the context where they are used.
Everything that needs an NAT helper, thus any protocol that embeds
addresses or ports, thus most games, everything that has a listening
port where the listening port is not on a public IP or firewalled away.
Isn't the sense incorrect there? I always thought it was the NAT
itself, and its need for helpers that was in opposition to the quasi-
mythical end-to-end principle?

rick jones
Wisdom teeth are impacted, people are affected by the effects of events
Richard Bennett
2009-10-24 03:23:21 UTC
Permalink
People who are interested in the evolution, refinement, application, and
re-definition of end-to-end arguments, principles, doctrines, dogmas,
and guidelines may enjoy my paper, "Designed for Change: End-to-End
Arguments, Internet Innovation, and the Net Neutrality Debate",
available at http://www.itif.org/index.php?id=294 along with a video of
a nice discussion of the stagnation of Internet protocol development
with Dave Farber, John Day, Chris Yoo, Bill Lehr, and yours truly.

I think Jaime's usage, "breaking end-to-end", is common in today's IETF,
where people tend to regard end system function placement as a default,
and the caveats of the Arguments are pretty much ignored. This kind of
reduction is to be expected, given the way that complex ideas tend to be
simplified by time.

The best discussion I've seen of function placement in a datagram
network to this day is found in Louis Pouzin's mongraph on the CYCLADES
network, _Cyclades Computer Network: Towards Layered Network
Applications_, Elsevier Science Ltd (September 1982). The book is out of
print, but it's available through interlibrary loan from several
institutions in the US. Pouzin takes a very pragmatic and empirical
approach to function placement, where later engineers tended to come
from first principles. The worst treatment is David Isenberg's second
"stupid network" paper, "Dawn of the Stupid Network"; it's much more
doctrinaire than "Rise of the Stupid Network" by the same dude.

A couple of great critiques of "End-to-End Args" are RFC 1958 and Tim
Moors' "A Critical Review of End-to-End Arguments in System Design",
http://www.ee.unsw.edu.au/~timm/pubs/02icc/published.pdf. Moors shows
that the Saltzer, Reed, and Clark argument for end-to-end placement is
both circular and inconsistent with the FTP example that is supposed to
demonstrate it. But the tres amigos of e2e were writing in 1981 when
network engineering was mostly a matter of intuition, so what do you
expect?

One of the more interesting unresolved questions about "End-to-End Args"
is why it was written in the first place. Some people see it as a salvo
in the ISO protocol wars, others as an attack in BBN's ARPANET, some as
an attempt to criss the divide between engineering and policy, and there
are probably other theories as well.

The Blumenthal and Clark "Brave New World" paper was very influential
because it lit the fire under Larry Lessig that got him storming around
about "protecting the Internet" from all the threats to stagnation and
freedom. There's a fairly clear path from Lessig's reaction to "Brave
New World" and the immoderate regulatory climate in the US today that's
so hostile to Internet progress.

RB
Post by rick jones
Post by Jeroen Massar
Post by Jaime Mateos
Hi,
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end principle and
the context where they are used.
Everything that needs an NAT helper, thus any protocol that embeds
addresses or ports, thus most games, everything that has a listening
port where the listening port is not on a public IP or firewalled away.
Isn't the sense incorrect there? I always thought it was the NAT
itself, and its need for helpers that was in opposition to the
quasi-mythical end-to-end principle?
rick jones
Wisdom teeth are impacted, people are affected by the effects of events
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
David P. Reed
2009-10-24 14:12:24 UTC
Permalink
Since the moderator did not find a problem with Bennett's posting, I
will request his leave to address Bennett's ouvre and in particular this
particular posting in a more direct manner, since he has walked into
this *technical* forum with a variety of outrageous claims directed at
the motives of me and my co-authors.
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End
Args" is why it was written in the first place. Some people see it as
a salvo in the ISO protocol wars, others as an attack in BBN's
ARPANET, some as an attempt to criss the divide between engineering
and policy, and there are probably other theories as well.
Richard Bennett spends a fair amount of his writing imputing motives to
people, and then using those motives to somehow impugn their credibility.
The above paragraph is such an example. (Please note that I am just
stating a fact about his writing style. You can read the paper he
submitted for lots of examples. He has also imputed that Vint Cerf and
Bob Kahn "stole" the ideas for the Internet from Pouzin without proper
credit.

Now I don't know if he can read the minds of Jerry Saltzer, Dave Clark,
or myself in writing the original paper. However the paragraph quoted
above is about the most ridiculous claim I have ever heard. We wrote
the paper as an attempt to contribute to the art of architecting the
Internet, as I believe most of the people on this list would
understand. However, Bennett has no shame. He does, however act as a
troll.
Richard Bennett
2009-10-24 20:01:41 UTC
Permalink
Don't get so emotional David, it doesn't make you look good. I never
said that Cerf and Kahn stole CYCLADES without proper credit; I give
examples of the credit they did give in order to prove the line of
influence from CYCLADES to TCP/IP, and quoted Cerf on the help that
Gerard LeLann provided to the Stanford team. I note that *your* paper
doesn't cite Pouzin, which is something that certainly miffed at least
one of your co-authors; my sentence is something like "the E2E Args
authors didn't seem to have been aware of CYCLADES" which is based on a
failure to reference.

I think it's unfortunate that the Internet community was already trying
to erase Pouzin from history in 1981, when his contribution was so
monumental. Let's give credit where it's due.

And as I've said already, I think the question of motivation and timing
is interesting, and don't claim to know the answer. Seems like this is a
good place to ask the question is all.

RB
Post by David P. Reed
Since the moderator did not find a problem with Bennett's posting, I
will request his leave to address Bennett's ouvre and in particular
this particular posting in a more direct manner, since he has walked
into this *technical* forum with a variety of outrageous claims
directed at the motives of me and my co-authors.
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End
Args" is why it was written in the first place. Some people see it as
a salvo in the ISO protocol wars, others as an attack in BBN's
ARPANET, some as an attempt to criss the divide between engineering
and policy, and there are probably other theories as well.
Richard Bennett spends a fair amount of his writing imputing motives
to people, and then using those motives to somehow impugn their
credibility.
The above paragraph is such an example. (Please note that I am just
stating a fact about his writing style. You can read the paper he
submitted for lots of examples. He has also imputed that Vint Cerf
and Bob Kahn "stole" the ideas for the Internet from Pouzin without
proper credit.
Now I don't know if he can read the minds of Jerry Saltzer, Dave
Clark, or myself in writing the original paper. However the
paragraph quoted above is about the most ridiculous claim I have ever
heard. We wrote the paper as an attempt to contribute to the art of
architecting the Internet, as I believe most of the people on this
list would understand. However, Bennett has no shame. He does,
however act as a troll.
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
L***@surrey.ac.uk
2009-10-23 11:47:00 UTC
Permalink
Hey, I wrote a chapter of that book...

Do look into the Bundle Protocol, which ignore the end-to-end
principle and control loops in its design. See our 'Bundle of
Problems' paper for more on this:
http://www.ee.surrey.ac.uk/Personal/L.Wood/publications/
The Bundle Protocol has similar problems/oversights as LTP-T.

Carlo Caini's group has drawn parallels between
DTN work and TCP PEPs, pointing out that what TCP PEPs do
on the quiet (break the end-to-end control loop into separate
loops) is what things like bundle hops + convergence layers
or http proxy caches do more explicitly and visibly. See e.g.
his IWSSC'09 paper:
"TCP, PEP and DTN Performance on Disruptive Satellite Channels."

L.

<http://www.ee.surrey.ac.uk/Personal/L.Wood/><***@surrey.ac.uk>



-----Original Message-----
From: end2end-interest-***@postel.org on behalf of Jaime Mateos
Sent: Fri 2009-10-23 11:26
To: end2end-***@postel.org
Subject: [e2e] Protocols breaking the end-to-end argument

Hi,
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end principle and
the context where they are used. So far the only one I've found is TCP
PEP that seems to be in use in satellite networks (Internetworking and
computing over satellite networks, Yongguang Zhang -
http://books.google.ie/books?id=3pkI6OWUsRAC&pg=PA170&lpg=PA170&dq=criticisms+of+end+to+end+principle&source=bl&ots=OVbMYc5Iso&sig=Tir1Xi4vxRG5HG2ieGCgl2STWcA&hl=en&ei=vL7TSor9Bs2z4QbW8_H_Ag&sa=X&oi=book_result&ct=result&resnum=5&ved=0CBQQ6AEwBA#v=onepage&q=criticisms%20of%20end%20to%20end%20principle&f=false)


There also seems to be a number of research projects such as Split TCP
and LTP-T that I've come across. I'm also interested in these but not to
the same degree as in protocols that are currently in use today.

Thanks,
Jaime Mateos
David P. Reed
2009-10-23 14:20:47 UTC
Permalink
I'd reframe the statement, just because I would actually like the term
"end-to-end argument" to continue to mean what we defined it to mean,
rather than what some people have extended it to mean.

So I think what you are looking for is a set of examples that
demonstrate functions that are "best done inside the network".

If you read the original paper, there is no claim whatever that says either:

1) that all functions should be done at the edges. (this radical
proposition, however, is one that guides some of my personal
interests in researching how far one can go. But that's a "Reed
research guideline" not an architecture argument.)

2) that one should never include optimizations of functions that
must (to be correct) be done at the edges, in the network.

Yet each interpretation above (and some others) are used occasionally.

Here's an example that challenges 2) and 1) but not the original
argument: where should congestion measurement be done, in order to
support congestion control?

Congestion *exists* only inside the network, by definition. So it must
be measured in the network.

However, where should *control* of congestion happen? That's a very
different story. It can't happen at the places where it is measured...
because congestion is an emergent phenomenon that depends on details at
the edges, AND on routing decisions (and traffic engineering and
investment decisions, as well, at slower rates of change). The answer
would be easy if there were one perfect place to do it. Of course, the
network itself makes that hard.

Today's Internet offers a variety of measures of congestion: measured
changes of RTT end-to-end at each of the hosts that share a bottleneck
subpath for active traffic, packet drops, packet-pair tests, marks such
as ECN, SNMP-if-it-had-a-MIB, ...

It also offers a variety of ways to mitigate congestion: get one or more
senders to slow down, get the sender to recode using more compression,
force some of the traffic to an alternate path, etc.

Choices of how to implement the congestion management function (which
includes traffic engineering as a subroutine) can be informed by the
"end-to-end argument" if you break the function down into subfunctions.

But this is not a problem with the "end-to-end argument". It is a
problem with TCP RTP and other protocols over IP, and routers that we
have today.

We have, for example, ECN as a tool implemented by routers. Turning it
on probably would help a reasonable amount. ECN itself is a solution to
congestion *measurement*, not mitigation. Measurement in the router,
communicated by ECN to all who share the bottleneck path, is clearly a
function "in the network". And yet it satisfies the end-to-end argument!

Lest we think that congestion control is the only area where *careful
thinking* is informed by end-to-end arguments about function placement,
there are many that fit the original argument. Blocking hostile DDOS
attacks is another. It's hard to imagine that anyone could argue that
DDOS against a target could be prevented solely outside the network.

However, *prosecution* of the offenders is clearly not a function that
can be done inside the network. Similarly, it would be silly to burden
a router with the job of collecting evidence for the prosecutor. There
are actually two kinds of DDOS attacks:

1) against the network itself,

2) against a particular end host (or hosts).

The former can be detected reliably by the network elements involved.
The latter must be defined by the host itself... since it is the host
who desires or doesn't desire a lot of traffic aimed at it.

Let's look at the latter, only. It would be silly for the operator of
the network to have to look at packets flowing to a web server to detect
that many SYNs are sent, but the 3rd step of the handshake is
uncompleted. The server is the only reliable place to verify that its
time is being wasted by many open connnections.

Yet responding to the DDOS attack may be helped by disconnecting the
sources. This has to be a network function on a large scale. And
tracing back to the source may be a network function.
Post by L***@surrey.ac.uk
Hey, I wrote a chapter of that book...
Do look into the Bundle Protocol, which ignore the end-to-end
principle and control loops in its design. See our 'Bundle of
http://www.ee.surrey.ac.uk/Personal/L.Wood/publications/
The Bundle Protocol has similar problems/oversights as LTP-T.
Carlo Caini's group has drawn parallels between
DTN work and TCP PEPs, pointing out that what TCP PEPs do
on the quiet (break the end-to-end control loop into separate
loops) is what things like bundle hops + convergence layers
or http proxy caches do more explicitly and visibly. See e.g.
"TCP, PEP and DTN Performance on Disruptive Satellite Channels."
L.
-----Original Message-----
Sent: Fri 2009-10-23 11:26
Subject: [e2e] Protocols breaking the end-to-end argument
Hi,
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end principle and
the context where they are used. So far the only one I've found is TCP
PEP that seems to be in use in satellite networks (Internetworking and
computing over satellite networks, Yongguang Zhang -
http://books.google.ie/books?id=3pkI6OWUsRAC&pg=PA170&lpg=PA170&dq=criticisms+of+end+to+end+principle&source=bl&ots=OVbMYc5Iso&sig=Tir1Xi4vxRG5HG2ieGCgl2STWcA&hl=en&ei=vL7TSor9Bs2z4QbW8_H_Ag&sa=X&oi=book_result&ct=result&resnum=5&ved=0CBQQ6AEwBA#v=onepage&q=criticisms%20of%20end%20to%20end%20principle&f=false)
There also seems to be a number of research projects such as Split TCP
and LTP-T that I've come across. I'm also interested in these but not to
the same degree as in protocols that are currently in use today.
Thanks,
Jaime Mateos
Dave CROCKER
2009-10-23 15:28:22 UTC
Permalink
Post by David P. Reed
I'd reframe the statement, just because I would actually like the term
"end-to-end argument" to continue to mean what we defined it to mean,
rather than what some people have extended it to mean.
Interesting. My sense of things is that the term is not actually defined all
that concretely or consistently and that this has made it difficult to use the
term constructively.

Can you or anyone else point to a definition that

a) gives meaningful technical definition of "end to end", sufficient to make
differential conformance assessments reasonable.

b) provide any basis for believing that that definition has broad use within
the technical community?

Absent the ability to satisfy this query, we ought to consider an effort to move
towards being able to.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
David P. Reed
2009-10-23 15:41:57 UTC
Permalink
I'd suggest reading the paper where it was originally defined. Given
that the three authors AND a crew of peer reviewers touched every word
of the definition, it's pretty darned specific.
Post by Dave CROCKER
Post by David P. Reed
I'd reframe the statement, just because I would actually like the
term "end-to-end argument" to continue to mean what we defined it to
mean, rather than what some people have extended it to mean.
Interesting. My sense of things is that the term is not actually
defined all that concretely or consistently and that this has made it
difficult to use the term constructively.
Can you or anyone else point to a definition that
a) gives meaningful technical definition of "end to end",
sufficient to make differential conformance assessments reasonable.
b) provide any basis for believing that that definition has broad
use within the technical community?
Absent the ability to satisfy this query, we ought to consider an
effort to move towards being able to.
d/
David P. Reed
2009-10-23 17:52:57 UTC
Permalink
Sorry - I figured everyone on this list knew the paper itself, since
it's cited all over the place, so I was being a little bit terse.
Anyway, one place you can get the original paper text is online at
http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf .

We also wrote a followup paper in the "active networks" era that tries
to carefully explain how the same approach can be helpful in thinking
about "active networks":
http://web.mit.edu/Saltzer/www/publications/endtoend/ANe2ecomment.html
(this was published in IEEE Networking, or some other IEEE pub, as I
recall).
Some will remember that "active networking" was viewed as an idea that
made the end-to-end argument "obsolete" - I personally think that that
was a conclusion based on a misunderstanding about what we meant - and
this second paper refines the point we made in the first paper.

Saltzer, Clark, and I have separately extended and adapted the original
ideas. Perhaps the most interesting recent idea is Dave Clark's
unpublished talk and note which focuses on a "Trust-to-Trust principle"
that I have urged him to write up. I don't think it is published yet.

Dave and Marjorie Blumenthal have also written a paper on a range of
areas where policy functions might best be done in the network. I don't
have a link to it, but here's a citation. M. Blumenthal, D.
Clark,/Rethinking the Design of the Internet: The End-to-end Arguments
vs. the Brave New World/, ACM Transactions on Internet Technology,
1(1):70-109, August 2001 .

I can't help adding: Of course there are lots of people who use the word
"end-to-end" when they mean, for example, "TCP is perfect". (I'm not
one of them: I have about 40,000 complaints with TCP and IP, so it's
especially galling to be accused of claiming that TCP is the best of all
possible protocols - often as a straw man. TCP's merely good enough,
IMHO, to apply a different and older argument: if it ain't broke, don't
fix it. But by all means experiment with improvements and alternatives).
David,
I'm asking to explore this carefully and inclusively.
Since you are putting a reference forward, what is the citation to it?
d/
Post by David P. Reed
I'd suggest reading the paper where it was originally defined. Given
that the three authors AND a crew of peer reviewers touched every
word of the definition, it's pretty darned specific.
Lloyd Wood
2009-10-23 19:02:59 UTC
Permalink
Post by David P. Reed
Sorry - I figured everyone on this list knew the paper itself,
since it's cited all over the place, so I was being a little bit
terse. Anyway, one place you can get the original paper text is
online at http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
.
Worth stressing that there are actually multiple revisions of that
paper.

J. Saltzer, D. Reed and D. Clark, ‘End-to-End Arguments in System
Design’, Second International Conference on Distributed Computing
Systems (April 1981) pages 509-512.

J. Saltzer, D. Reed and D. Clark, ‘End-to-End Arguments in System
Design’, ACM Transactions in Computer Systems, pp. 277-288, November
1984.
http://doi.acm.org/10.1145/357401.357402

The version at Saltzer's webpages above is a third version, with page
numbering 1-10, but its footnote
on the first page is helpful at pointing out different versions.

http://en.wikipedia.org/wiki/End-to-end_principle
could be better...

L.

DTN work: http://info.ee.surrey.ac.uk/Personal/L.Wood/saratoga/

<http://info.surrey.ac.uk/Personal/L.Wood/><***@surrey.ac.uk>
David P. Reed
2009-10-23 21:33:43 UTC
Permalink
Wikipedia article is not definitive. In particular, none of the 3
authors wrote the wikipedia article. In general, wikipedia does well
at some things, but I wouldn't trust it to read authors' words more
clearly than the authors themselves.

In particular: there was never an "end-to-end principle". So if you get
the title wrong, why should we trust you to get the details right?

Indeed the original paper was presented at a conference, selected for
ACM TOCS (and revised to their standards), and the last, online version
is slightly different. There was also a version that was circulated
prior to the 1981 conference among peers and friends - as was the
convention in the computer systems community - some of the examples in
the 1981 version were suggested during that phase.
Post by David P. Reed
Sorry - I figured everyone on this list knew the paper itself, since
it's cited all over the place, so I was being a little bit terse.
Anyway, one place you can get the original paper text is online at
http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf .
Worth stressing that there are actually multiple revisions of that paper.
J. Saltzer, D. Reed and D. Clark, ‘End-to-End Arguments in System
Design’, Second International Conference on Distributed Computing
Systems (April 1981) pages 509-512.
J. Saltzer, D. Reed and D. Clark, ‘End-to-End Arguments in System
Design’, ACM Transactions in Computer Systems, pp. 277-288, November
1984.
http://doi.acm.org/10.1145/357401.357402
The version at Saltzer's webpages above is a third version, with page
numbering 1-10, but its footnote
on the first page is helpful at pointing out different versions.
http://en.wikipedia.org/wiki/End-to-end_principle
could be better...
L.
DTN work: http://info.ee.surrey.ac.uk/Personal/L.Wood/saratoga/
Lloyd Wood
2009-10-23 23:20:27 UTC
Permalink
Post by David P. Reed
In particular: there was never an "end-to-end principle". So if you
get the title wrong, why should we trust you to get the details right?
because "end-to-end argument principle" is appalling grammar. The word
"principle" appears multiple times in the paper, including the
abstract and conclusions.

"No one gets angry at a mathematician or a physicist whom he or she
doesn't understand, or at someone who speaks a foreign language, but
rather at someone who tampers with your own language." -- Jacques
Derrida

http://mercury.lcs.mit.edu/~jnc/tech/end_end.html

DTN work: http://info.ee.surrey.ac.uk/Personal/L.Wood/saratoga/

<http://info.surrey.ac.uk/Personal/L.Wood/><***@surrey.ac.uk>
Matthias Bärwolff
2009-10-24 05:46:05 UTC
Permalink
Post by Lloyd Wood
Post by David P. Reed
In particular: there was never an "end-to-end principle". So if you
get the title wrong, why should we trust you to get the details right?
because "end-to-end argument principle" is appalling grammar. The word
"principle" appears multiple times in the paper, including the
abstract and conclusions.
The word "principle" appears *only* in the abstract, plus in the second
and the penultimate sentence of the 1984 paper. The content of the
paper, however, is very much about arguments (as in debate), not
principle (as in strict and not to argue with), maybe not even so much
about argument (as in "one logical conclusion to an irrefutable reasoning").

With all respect for the authors, we all know how abstracts are
typically written: It is often the very last thing on one's mind, even
though it should be the very first (although or possibly just because
written at the end of the process). Often, they are either totally
redundant (just repeating phrases from the content), or they exaggerate
things in a bid to draw the reader to the content. Rarely do they
capture precisely the essence of a paper.

An aside: I'd be interested to see the the 1981 version, and whether it
is much different from the 1984 one. Does anyone have it?

Matthias
--
Matthias Bärwolff
www.bärwolff.de
Richard Bennett
2009-10-24 06:17:03 UTC
Permalink
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
It's the one I reference in my paper:
<a class="moz-txt-link-freetext" href="http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf">http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf</a><br>
<br>
Matthias B&auml;rwolff wrote:
<blockquote cite="mid:***@cs.tu-berlin.de" type="cite">
<pre wrap="">
Lloyd Wood wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On 23 Oct 2009, at 22:33, David P. Reed wrote:
</pre>
<blockquote type="cite">
<pre wrap="">In particular: there was never an "end-to-end principle". So if you
get the title wrong, why should we trust you to get the details right?
</pre>
</blockquote>
<pre wrap="">because "end-to-end argument principle" is appalling grammar. The word
"principle" appears multiple times in the paper, including the
abstract and conclusions.
</pre>
</blockquote>
<pre wrap=""><!---->
The word "principle" appears *only* in the abstract, plus in the second
and the penultimate sentence of the 1984 paper. The content of the
paper, however, is very much about arguments (as in debate), not
principle (as in strict and not to argue with), maybe not even so much
about argument (as in "one logical conclusion to an irrefutable reasoning").

With all respect for the authors, we all know how abstracts are
typically written: It is often the very last thing on one's mind, even
though it should be the very first (although or possibly just because
written at the end of the process). Often, they are either totally
redundant (just repeating phrases from the content), or they exaggerate
things in a bid to draw the reader to the content. Rarely do they
capture precisely the essence of a paper.

An aside: I'd be interested to see the the 1981 version, and whether it
is much different from the 1984 one. Does anyone have it?

Matthias

</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC</pre>
</body>
</html>
William Allen Simpson
2009-10-23 12:39:48 UTC
Permalink
Post by Jaime Mateos
I'm working on a project about the current challenges the Internet is
presenting to the end-to-end argument. I'd be interested to know about
any protocols, currently in use, that break the end-to-end principle and
the context where they are used.
You could add the Broadcom chip sets to your list. Not a protocol per se,
but they inexplicably "handle" TCP segmentation. Usually used in a host
(bad enough in my opinion), but could create utter havoc in a router.

So far, I've noticed:

NetXtreme II 1 Gigabit
Tigon 3

When I recently proposed actually checking for correct TCP option sizes,
the Linux driver's author says:

You're being way too anal here, and adding these checks to
drivers would be just a lot of rediculious bloat. [sic]
David P. Reed
2009-10-23 15:38:09 UTC
Permalink
Post by William Allen Simpson
You could add the Broadcom chip sets to your list. Not a protocol per se,
but they inexplicably "handle" TCP segmentation. Usually used in a host
(bad enough in my opinion), but could create utter havoc in a router.
NetXtreme II 1 Gigabit
Tigon 3
This is an interesting observation, but I don't understand what you mean.

Explain "handling TCP segmentation" please? Exactly what chips do
that? What exactly do they do in the chip?

The chips might do IP fragmentation, but I find it hard to see how they
could do TCP segmentation, unless of course they are acting as a host.
Nothing wrong with a chipset being a host, too (perhaps to present a
web, ssh or SNMP interface).
rick jones
2009-10-24 01:47:52 UTC
Permalink
Post by David P. Reed
Post by William Allen Simpson
You could add the Broadcom chip sets to your list. Not a protocol per se,
but they inexplicably "handle" TCP segmentation. Usually used in a host
(bad enough in my opinion), but could create utter havoc in a router.
NetXtreme II 1 Gigabit
Tigon 3
This is an interesting observation, but I don't understand what you mean.
Explain "handling TCP segmentation" please? Exactly what chips do
that? What exactly do they do in the chip?
The chips might do IP fragmentation, but I find it hard to see how
they could do TCP segmentation, unless of course they are acting as
a host. Nothing wrong with a chipset being a host, too (perhaps to
present a web, ssh or SNMP interface).
Perhaps he is referring to chips which provide TCP/Transport
Segmentation Offload - aka TSO - the functionality that allows the
stack to hand the chip a chunk of data > the MTU, along with the
initial TCP/IP headers and the connection's on the wire MSS, and then
have the chip otherwise statelessly segment that larger chunk of data
into MSS-sized segments for transmission on the wire/fibre/etc.

If that is the functionality of which he speaks, it is in virtually
every contemporary 1GbE card I can think of (but my thoughts cannot
span the entirety of the space I suspect). Also, virtually every 10G
NIC out there offers the same functionality.

And if that upsets him, we better not tell him about the 10G NICs also
doing receive offload... :)

rick jones

BTW, I do not believe that any router actually has TSO happen to TCP
segments contained within the IP datagrams passing through it -
although there have been issues in Linux with LRO (Large Receive
Offload, distinct from General Receive Offload) when the system was
acting as either a router or a bridge - because TSO doesn't happen in
that path :)

http://homepage.mac.com/perfgeek
William Allen Simpson
2009-10-24 12:06:08 UTC
Permalink
Post by rick jones
Perhaps he is referring to chips which provide TCP/Transport
Segmentation Offload - aka TSO - the functionality that allows the stack
to hand the chip a chunk of data > the MTU, along with the initial
TCP/IP headers and the connection's on the wire MSS, and then have the
chip otherwise statelessly segment that larger chunk of data into
MSS-sized segments for transmission on the wire/fibre/etc.
It is indeed. Since the hardware driver is unaware of many things,
such as path MTU, this is one of its serious impediments.

Sure, there are measurements that show several percentage points less
CPU, but in most cases we're not CPU bound. I'm not sure what problem
it's solving, other than a checkbox to differentiate commodity products.

Worst of all, this stuff is all implemented in unmodifiable,
proprietary firmware.
Post by rick jones
If that is the functionality of which he speaks, it is in virtually
every contemporary 1GbE card I can think of (but my thoughts cannot span
the entirety of the space I suspect). Also, virtually every 10G NIC out
there offers the same functionality.
Gaah! I only knew about the Broadcom chips, as discussed on the NetBSD
lists a few years back, and didn't know this disease had spread.
Post by rick jones
And if that upsets him, we better not tell him about the 10G NICs also
doing receive offload... :)
I'd heard of it, but thought that was pretty uniformly rejected. Heck,
the most basic TCP decision points would be impossible to implement,
revise, or test.
Post by rick jones
BTW, I do not believe that any router actually has TSO happen to TCP
segments contained within the IP datagrams passing through it - although
Only recently trying to decipher the Linux stack, but it all appears to
go through the same queue, routed packets included. If the box receives
a jumbogram on one interface, it can be re-segmented out another, and
I've not found any support for PMTUD or ECN or anything.
Post by rick jones
there have been issues in Linux with LRO (Large Receive Offload,
distinct from General Receive Offload) when the system was acting as
either a router or a bridge - because TSO doesn't happen in that path :)
Again, I'm not as familiar with Linux-only terminology. A quick Google
turns up "Generic Receive Offload", and that appears to be explicitly
designed to merge segments in routers, and re-segment out the other side:

http://lwn.net/Articles/311357/

I'm pretty sure this is contrary to the end-to-end [argument, principle,
what-have-you].... And covered by a passel of patents.
David P. Reed
2009-10-24 14:34:14 UTC
Permalink
Because I sense this thread might be interesting, and should be
separated from the trolling going on in the original thread, I changed
the title.

TCP offload is interesting. We actually addressed this kind of thing is
the "Active Networks" vs. end-to-end paper. Function placement at the
architectural level actually can be discussed with regard to "design
time" and "implementation time" versions of the arguments. For example,
if I am an "end host" but I do some of my functions on "attached support
processors" (excuse the "I" as metaphor for the main OS and CPU), that
may be quite clean architecturally - IF the communication between me and
the attached support processor is one that makes it act as part of
"me". So one could consider it part of the "end", even if it is in a
middlebox: the distinction is that it is under my sole control (so it
acts as a fully controlled module).

The end-to-end argument in the paper says that such acceleration can be
in the network, if indeed it merely accelerates a function that is in
all endpoints. However, the argument asks that we consider whether the
improvement overall is really worth it.

I leave it to the community of architects (not the chip designers, who
have a bias to believe that every "feature" is a differentiating
advantage) to decide whether the benefit of this particular thing is
really worth the potential inflexibility it creates - in particular the
risk that the chip will do the wrong thing on the forwarding path, and
the risk that the TCP spec will change in a way that makes the chip's
popularity a barrier to innovation.

It sounds as if there is a chance that, due to how one of the TOS chips
works, the portion of TCP that it implements is not strictly an "agent"
of the host TCP stack running on the host processor, but instead based
on "pattern recognition" that cannot be turned off. (I haven't read the
spec, so maybe it is more subtle than that).

That would result in a serious bug - if the chip is used by a low-level
forwarding path, perhaps an ethernet bridge or an IP routing layer, the
"optimization" would by accident be applied to TCP segments having
nothing to do with the host. This clearly makes using such chips in
general boxes like Linux boxes, that can perform ethernet bridging, IP
forwarding, etc. QUITE problematic! So perhaps they need to be marked
as *inappropriate* for general use. But that is because they really are
buggy for that use. (again, I haven't read the spec).
rick jones
2009-10-25 18:31:07 UTC
Permalink
Post by David P. Reed
Because I sense this thread might be interesting, and should be
separated from the trolling going on in the original thread, I
changed the title.
TCP offload is interesting. We actually addressed this kind of
thing is the "Active Networks" vs. end-to-end paper. Function
placement at the architectural level actually can be discussed with
regard to "design time" and "implementation time" versions of the
arguments. For example, if I am an "end host" but I do some of my
functions on "attached support processors" (excuse the "I" as
metaphor for the main OS and CPU), that may be quite clean
architecturally - IF the communication between me and the attached
support processor is one that makes it act as part of "me". So one
the distinction is that it is under my sole control (so it acts as a
fully controlled module).
The end-to-end argument in the paper says that such acceleration can
be in the network, if indeed it merely accelerates a function that
is in all endpoints. However, the argument asks that we consider
whether the improvement overall is really worth it.
I leave it to the community of architects (not the chip designers,
who have a bias to believe that every "feature" is a differentiating
advantage) to decide whether the benefit of this particular thing is
really worth the potential inflexibility it creates - in particular
the risk that the chip will do the wrong thing on the forwarding
path, and the risk that the TCP spec will change in a way that makes
the chip's popularity a barrier to innovation.
It sounds as if there is a chance that, due to how one of the TOS
chips works, the portion of TCP that it implements is not strictly
an "agent" of the host TCP stack running on the host processor, but
instead based on "pattern recognition" that cannot be turned off.
(I haven't read the spec, so maybe it is more subtle than that).
I don't interact much with "big TOE" functionality (full stateful
protocol offload - TCP Offload Engine) but the "little toe" stateless
stuff - ChecKsum Offload (CKO), TSO, LRO. I have yet to encounter a
card/chip/NIC where those little toes cannot be cut-off (to spite what
I'm not sure :) LRO (distict from GRO perhaps) may be one where it is
either on or off. CKO and TSO are ones where they can be on, off, or
on and ignored by the stack.

rick jones
Wisdom teeth are impacted, people are affected by the effects of events
William Allen Simpson
2009-10-26 12:53:18 UTC
Permalink
Post by rick jones
Post by David P. Reed
I leave it to the community of architects (not the chip designers, who
have a bias to believe that every "feature" is a differentiating
advantage) to decide whether the benefit of this particular thing is
really worth the potential inflexibility it creates - in particular
the risk that the chip will do the wrong thing on the forwarding path,
and the risk that the TCP spec will change in a way that makes the
chip's popularity a barrier to innovation.
I don't interact much with "big TOE" functionality (full stateful
protocol offload - TCP Offload Engine) but the "little toe" stateless
stuff - ChecKsum Offload (CKO), TSO, LRO. I have yet to encounter a
card/chip/NIC where those little toes cannot be cut-off (to spite what
I'm not sure :) LRO (distict from GRO perhaps) may be one where it is
either on or off. CKO and TSO are ones where they can be on, off, or on
and ignored by the stack.
TCP Checksum Offload (shouldn't that be TCO or CSO?) has been done for
years. I've always been a bit leery of it, as my many experiences with
bus problems indicates that the checksum should be calculated as close to
the CPU as possible....

TCP Segment Offload (TSO) -- large TCP segments are broken into smaller
ones -- wouldn't be a problem where the stack always feeds the chip
properly-sized PMTU segments. For routing a LAN jumbogram into a WAN,
that's broken! The drivers had better be smart enough to honor "Don't
Fragment" (DF), even though that technically only applies to IP. Best to
turn it off for all routed packets. Does your implementation?

TCP Large Receive Offload (LRO) -- small TCP segments are combined into
larger ones -- is an unmitigated disaster. The sender has no ability to
turn it off, and no idea that it's happening. Assuming it leaves
SYN-bearing segments untouched, I'd still think that breaks almost every
existing Ack-bearing TCP option.

In either of the latter cases, I don't see how PAWS Timestamps or the MD5
Authentication Option would ever work.
rick jones
2009-10-26 15:12:58 UTC
Permalink
Post by William Allen Simpson
TCP Checksum Offload (shouldn't that be TCO or CSO?)
It wouldn't be the former for it is used for UDP as well. As for CSO
vs CKO, a (stinking?) rose by any other name I suppose. The feature
got named CKO, I'm content to leave it as such.
Post by William Allen Simpson
TCP Segment Offload (TSO) -- large TCP segments are broken into smaller
ones -- wouldn't be a problem where the stack always feeds the chip
properly-sized PMTU segments.
I may have misunderstood your wording, but if TCP has already
segmented, there wouldn't be much if any offloading. The stack hands
the chip everything it needs to know to make properly-sized segments
on each large send. (IIRC Solaris experimented with Multi Data
Transmit (MDT - a joy of a search term...) where they did have TCP do
all the segmentation and all it did was pass a list of multiple
segments in one go (not unlike packet trains in say HP-UX 8.07 IP
fragmentation and elsestacks I suspect) but that "poor man's TSO" I
don't think went very far even though it could give a little boost
over a non-TSO-capable NIC. "All" the NICs can do TSO now. (TSO
itself is sometimes referred to as "poor man's Jumbo Frames" and we
would circle-back to a de jure MTU that has remained unchanged for
decades....)
Post by William Allen Simpson
For routing a LAN jumbogram into a WAN,
that's broken! The drivers had better be smart enough to honor "Don't
Fragment" (DF), even though that technically only applies to IP.
Best to
turn it off for all routed packets. Does your implementation?
I do not have my own TCP/IP stack :) I interact to varying degrees
with the stacks of others. Based on that experience, the decision to
do TSO is on a send by send basis. TCP sets-up the send to be either
TSO or non-TSO, the driver does the appropriate thing to the packet
descriptor(s) to inform the NIC. While I cannot say that I've gone
looking for the code in Linux and elsewhere, unless IP tries to set it
up on a routed datagram, and I do not believe it does, TSO will not be
applied as the datagram leaves via the egress interface.
Post by William Allen Simpson
TCP Large Receive Offload (LRO) -- small TCP segments are combined into
larger ones -- is an unmitigated disaster. The sender has no
ability to
turn it off, and no idea that it's happening. Assuming it leaves
SYN-bearing segments untouched, I'd still think that breaks almost every
existing Ack-bearing TCP option.
You must really like the HP-UX and Solaris (and any other Mentat-
derived stack's) ACK avoidance heuristics :) Another example of
customer LAN/MAN needs/desires coming-up against what is felt to be
necessary for the big-I Internet. IIRC the Solaris stack does attempt
to make a distinction between local and remote when deciding to (not)
apply the ACK avoidance heuristic. Both have mechanisms to evolve up
to their levels of avoidance and devolve back to the chapter-and-verse
ack-every-other behaviour suggested by the RFCs in the presence of
anomalies. Both can be controlled completely (on, off, degree) by the
system administrator.
Post by William Allen Simpson
In either of the latter cases, I don't see how PAWS Timestamps or the MD5
Authentication Option would ever work.
PAWS Timestamps need-not (should not?) be unique from segment to
segment, only from window to window or transmission to retransmission
yes? So, on the sending side, since the host TCP is very much in
control, if a sequence of N segments would have a PAWS increment in
the middle, TCP can split the large send into two at that point.

I do not know if GRO (or the card-based LRO) does the opposite on the
way in, but I could easily see (and not actually) them checking and
asking "is this timestamp the same as the previous" when making
coalescing decisions.

rick jones
Wisdom teeth are impacted, people are affected by the effects of events
rick jones
2009-10-24 16:24:23 UTC
Permalink
Post by William Allen Simpson
Post by rick jones
Perhaps he is referring to chips which provide TCP/Transport
Segmentation Offload - aka TSO - the functionality that allows the
stack to hand the chip a chunk of data > the MTU, along with the
initial TCP/IP headers and the connection's on the wire MSS, and
then have the chip otherwise statelessly segment that larger chunk
of data into MSS-sized segments for transmission on the wire/fibre/
etc.
It is indeed. Since the hardware driver is unaware of many things,
such as path MTU, this is one of its serious impediments.
WRT PathMTU, the implementations with which I am familiar have the
stack telling the NIC the on-the-wire size (what I tend to call the
effective MSS) to use on each "large send" where that effective MSS is
updated based on PathMTU information as/if it arrives.
Post by William Allen Simpson
Sure, there are measurements that show several percentage points less
CPU, but in most cases we're not CPU bound. I'm not sure what problem
it's solving, other than a checkbox to differentiate commodity
products.
When the functionality was introduced in the 1GbE NICs it was to allow
them to be driven at link-rate with the then-contemporary CPUs, not
only for easily dismissed (well, not IMO :) things like netperf
TCP_STREAM, but also for items customers actually did like file
transfers, or clustered database traffic, etc. (ie if you can't get
there with netperf, you ain't going to get there with FTP)

Now, this may be a place where my world starts to diverge from the
rest of the end2end community's - indeed many of my employer's
customers do things across the big-I Internet, but they do far more
across their corporate LANs and intranets. I can see where being CPU
bound talking across the big-I Internet is perhaps rare, but being CPU
bound when talking across the corporate 1 Gig LAN was not rare. And
essentially we have One Protocol to Rule Them All...

Yes, CPUs today are "faster" than at the dawn of 1 Gig Ethernet. We
are also at the dawn (perhaps a little past, depends I suppose on
one's deployment longitude) of 10 Gig Ethernet. Bless their hearts,
when a customer upgrades their network from one speed to the next,
they care little about Amdahl's Law etc and get quite agitated when
one cannot achieve link-rate on the next higher speed. Well, they
might give you a generation's worth of lee-way, but by the time the
second generation of the NIC arrives, their expectations are pretty
firm. If your solution cannot achieve link-rate, your solution is not
selected.

TSO and GRO, like Jumbo Frames, can be thought of as the inevitable
"inter-reaction" between customer expectations and a de jure network
MTU size that has remained unchanged since the dawn of Ethernet. Or,
put another way, we have begun treating the Ethernet MTU as damaged
and routed around it.
Post by William Allen Simpson
Post by rick jones
And if that upsets him, we better not tell him about the 10G NICs
also doing receive offload... :)
I'd heard of it, but thought that was pretty uniformly rejected.
Heck,
the most basic TCP decision points would be impossible to implement,
revise, or test.
"LRO" (multiple segment coalescing done in the chip and an uber frame
hitting the host with the intermediate headers stripped) has been
rejected in Linux-land in favor of GRO, which preserves the arriving
segment boundaries via some clever linking of buffers (and perhaps
some header-data split but I'm fuzzy there).
Post by William Allen Simpson
Post by rick jones
BTW, I do not believe that any router actually has TSO happen to
TCP segments contained within the IP datagrams passing through it -
although
Only recently trying to decipher the Linux stack, but it all appears to
go through the same queue, routed packets included. If the box receives
a jumbogram on one interface, it can be re-segmented out another, and
I've not found any support for PMTUD or ECN or anything.
Post by rick jones
there have been issues in Linux with LRO (Large Receive Offload,
distinct from General Receive Offload) when the system was acting
as either a router or a bridge - because TSO doesn't happen in that
path :)
Again, I'm not as familiar with Linux-only terminology. A quick Google
turns up "Generic Receive Offload", and that appears to be explicitly
http://lwn.net/Articles/311357/
I'm pretty sure this is contrary to the end-to-end [argument,
principle,
what-have-you]....
You are supposed to be ignoring the code-path behind the curtain :)

rick jones

http://homepage.mac.com/perfgeek
Noel Chiappa
2009-10-23 16:58:35 UTC
Permalink
My sense of things is that the term is not actually defined all that
concretely or consistently
Sorry, I disagree. The original Saltzer/Clark/Reed paper does a pretty
good job, I think - as well as one can do with a broad architectural
concept, which is inherently not as susceptible to precise definition as,
say, an algorithm.
this has made it difficult to use the term constructively.
No, people being bozos and not using the term _as it wss originally
defined_ are what has made its use problematic.

Noel
Jon Crowcroft
2009-10-24 08:41:01 UTC
Permalink
one of the problems is language evolution/erosion

for some people
an end-to-end _argument_
is an argument for everything
being in the end point
as opposed to the more
nuanced
meaning of the aforesaid paper(s)
in which it is a
set of dynamic debates
which set a tension
between whether you put something
in the end,
in the end,
or not
(i.e. in the intermediate).

the "argument" then is not a polemic
but a method or process (or dialectic)
that can and should be
dynamically reapplied
as technology and the environment
evolve.
Post by Noel Chiappa
My sense of things is that the term is not actually defined all that
concretely or consistently
Sorry, I disagree. The original Saltzer/Clark/Reed paper does a pretty
good job, I think - as well as one can do with a broad architectural
concept, which is inherently not as susceptible to precise definition as,
say, an algorithm.
this has made it difficult to use the term constructively.
No, people being bozos and not using the term _as it wss originally
defined_ are what has made its use problematic.
Noel
cheers

jon
Noel Chiappa
2009-10-24 18:00:15 UTC
Permalink
Post by Richard Bennett
The best discussion I've seen of function placement in a datagram
network to this day is found in Louis Pouzin's mongraph on the CYCLADES
network, _Cyclades Computer Network: Towards Layered Network
Applications_, Elsevier Science Ltd (September 1982).
I'm not sure I'm totally on board with that "best" attribute, but the work of
Pouzin et al was an _extremely_ important step in the evolution of networking,
and often does't get the credit it deserves (e.g. one of the papers I read in
preparing to respond to this email didn't mention it, in reviewing the
philosophical development of the architecture of TCP/IP).
Post by Richard Bennett
Tim Moors' "A Critical Review of End-to-End Arguments in System
Design", http://www.ee.unsw.edu.au/~timm/pubs/02icc/published.pdf.
A good and interesting paper; thanks for bring it to my attention. I do think
it goes off the beam in a couple of places, though.

For one, NATs became widespread mostly a result of flaws in the original
engineering (too small an address space) and architecture (too few namespaces,
leading to difficulty in supporting things like provider independence). NATs
are not inherently desirable, and would not, I think, have
evolved/proliferated had TCP/IP avoided those (in hindsight, now obvious)
mistakes.

For another, the current routing architecture has been driven much more by
factors such as technical hysteresis (both personnel familiarity with the
existing distributed computation model, as well as 'if it isn't broken, don't
fix it') and 'alligator' syndrome (as in 'when you're up to your
you-know-what in alligators [growing the network, in this case], you don't go
looking for more not-immediately-important fights').

Still, those are nits in the overall sweep of the paper.
Post by Richard Bennett
Moors shows that the Saltzer, Reed, and Clark argument for end-to-end
placement is both circular and inconsistent with the FTP example that
is supposed to demonstrate it.
I didn't see that at all.
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End
Args" is why it was written in the first place. Some people see it as a
salvo in the ISO protocol wars, others as an attack in BBN's ARPANET,
some as an attempt to criss the divide between engineering and policy
I don't know whether to be amused or outraged by this nonsense.

I will settle for observing that you probably haven't interacted much with
Jerry - because had you done so, it would have been utterly obvious to you
that overwhelmingly his most important motivation in writing the paper was
his deep commitment to improving the art of system architecture.

Dave Reed is here to defend himself, and as to Dave Clark, I would be prepared
to bet pretty much any stakes that he'd be in the front rank in acclaiming the
ARPANet as a huge step forward in information networking.

The reference to the "ISO protocol wars" is completely mystifying, as the
architecture of the ISO stack (at least, the CLNP/TP4 flavour, which was the
subset which gave TCP/IP the best 'run for their money') is basically
identical to that of TCP/IP (modulo disagreements on certain arcane points,
such as exactly what kind of abstract entities the names at the various levels
refer to - a subject wholly unrelated to the end-end debate).

Noel
Richard Bennett
2009-10-24 19:54:22 UTC
Permalink
Post by Noel Chiappa
Post by Richard Bennett
Moors shows that the Saltzer, Reed, and Clark argument for end-to-end
placement is both circular and inconsistent with the FTP example that
is supposed to demonstrate it.
I didn't see that at all.
Moors points out that TCP error detection and recovery is an end-system
function, but not really an endpoint function in the file transfer
example. The file transfer *application* is the endpoint, so placing the
error detection and recovery function in TCP is actually putting it in
an intermediate system level. This becomes clear when we recognize that
TCP is often implemented in hardware or in firmware running on a CPU
that lives on an interface card. The paper goes to great lengths to show
that host-based TCP is immune to problem induced at MIT by a bad 1822
interface card, but it was very common engineering practice in the
mid-80s to implement TCP on an interface card that had the same
vulnerability as the 1822 card. Excelan and Ungermann-Bass built these
systems and they were very popular. They designed in a competent level
of data integrity at the bus interface, so it wasn't necessary to rely
on software to detect bus problems. So it's at least ironic that the
end-to-end argument on the data integrity basis was mooted by practice
by the time the 1984 version of the paper was published.

Because the file transfer program doesn't do its own data integrity
checking but relies on TCP to do it, it's not really an example of
endpoint placement at all; in fact, it's a "partial implementation".
Post by Noel Chiappa
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End
Args" is why it was written in the first place. Some people see it as a
salvo in the ISO protocol wars, others as an attack in BBN's ARPANET,
some as an attempt to criss the divide between engineering and policy
I don't know whether to be amused or outraged by this nonsense.
I don't know why this question should get anybody upset, it's just a
question about the context and motivation of the paper in the first
place. None of the authors was part of the inner circle of the Internet
protocol design at the time the paper was written, although Clark was
either the Chief Architect of the Internet or on his way to becoming
same. I would have expected Cerf and Kahn to write something explaining
the architectural decisions they made in adapting the framework to
their system, but their failure to do so meant someone else had to do
it. Why these three people and why this particular time? It's never been
explained.
Post by Noel Chiappa
I will settle for observing that you probably haven't interacted much with
Jerry - because had you done so, it would have been utterly obvious to you
that overwhelmingly his most important motivation in writing the paper was
his deep commitment to improving the art of system architecture.
Dave Reed is here to defend himself, and as to Dave Clark, I would be prepared
to bet pretty much any stakes that he'd be in the front rank in acclaiming the
ARPANet as a huge step forward in information networking.
The reference to the "ISO protocol wars" is completely mystifying, as the
architecture of the ISO stack (at least, the CLNP/TP4 flavour, which was the
subset which gave TCP/IP the best 'run for their money') is basically
identical to that of TCP/IP (modulo disagreements on certain arcane points,
such as exactly what kind of abstract entities the names at the various levels
refer to - a subject wholly unrelated to the end-end debate).
The "ISO protocol wars" I am referring to took place within the context
of the ISO development community between the CLNP that the datagram
people wanted and CONS, the connection-oriented service that was
proposed by the European PTTs. CONS was an extension of the line of
development that went from ARPANET to X.25, which CLNP was on the line
that went from CYCLADES through DECnet. The naming and addressing logic
in CLNP and its predecessors was very different from TCP/IP but
consistent with these others. The reasons that ISO didn't succeed are
well-documented in John Day's "Patterns in Network Architecture."

Like it or not, Noel, there was a lot of friction between the Network
Working Group and BBN over the control BBN had over the ARPANET
protocols inside the IMP. The interesting problems of the day in
protocol design were all behind the curtain to the people who used the
ARPANET, and that's frustrating to engineers. Nobody disagrees that
ARPANET was a huge first step in packet switching; but by 1981, people
were well into the second step, and the closed implementation of the
lower three layers was a problem.
Post by Noel Chiappa
Noel
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
David P. Reed
2009-10-24 22:39:28 UTC
Permalink
I don't know why I waste my time explaining to Richard Bennett what he
Post by Richard Bennett
Post by Noel Chiappa
Post by Richard Bennett
Moors shows that the Saltzer, Reed, and Clark argument for end-to-end
placement is both circular and inconsistent with the FTP example that
is supposed to demonstrate it.
I didn't see that at all.
Moors points out that TCP error detection and recovery is an
end-system function, but not really an endpoint function in the file
transfer example. The file transfer *application* is the endpoint, so
placing the error detection and recovery function in TCP is actually
putting it in an intermediate system level. This becomes clear when we
recognize that TCP is often implemented in hardware or in firmware
running on a CPU that lives on an interface card. The paper goes to
great lengths to show that host-based TCP is immune to problem induced
at MIT by a bad 1822 interface card, but it was very common
engineering practice in the mid-80s to implement TCP on an interface
card that had the same vulnerability as the 1822 card. Excelan and
Ungermann-Bass built these systems and they were very popular. They
designed in a competent level of data integrity at the bus interface,
so it wasn't necessary to rely on software to detect bus problems. So
it's at least ironic that the end-to-end argument on the data
integrity basis was mooted by practice by the time the 1984 version of
the paper was published.
Because the file transfer program doesn't do its own data integrity
checking but relies on TCP to do it, it's not really an example of
endpoint placement at all; in fact, it's a "partial implementation".
OK. This is incredibly simple to understand. In the end-to-end
argument paper, we describe a program called "careful file transfer",
whose goal is to ensure that the file received is a proper copy of the
source. We use this "careful file transfer" example as a pedagogical
device.

The paper carefully does not claim that TCP or FTP over TCP satisfy the
end-to-end argument required for the function "careful file transfer".
There was a reason: FTP/TCP does not do so.

Now, RB claims that Moors's paper somehow says the argument is
inconsistent with the FTP example. Well, no. It is consistent with
the actual example we use, which is not FTP/TCP.

Bennett may have joined late this particular discussion. If so, he
missed my earlier posting that said that the end-to-end argument did not
say "TCP is best". It was not a defense of TCP at all (unless you
accept his mind-reading of the authors' intent to somehow write the
paper to be part of some fight that Bennett imagines was going on).

The end-to-end argument paper was not a paper about TCP or IP or any
particular implementation of any protocol, except insofar as it was
inspired by architectural discussions in the design, and was cited quite
frequently by IETF architects later as they considered designs happening
afterwards. It was about a way to think about architectural questions
- one that was used frequently and heavily in the original TCP and IP
design process, and as noted in the paper, in a number of other
processes we were aware of and had been involved in.
Richard Bennett
2009-10-24 23:50:50 UTC
Permalink
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
There's no doubt about the fact that Saltzer was and still is regarded
as one of the brightest lights in the system architecture firmament,
and that in particular his seminal paper on naming and addressing was
one of the most cogent pieces of its kind ever written. It's
unfortunate that the structure of Saltzer's thinking isn't reflected in
the organization of Internet protocols, naming, and addressing and that
he wasn't able to pass his brilliance along to all of his students.<br>
<br>
RB<br>
<br>
David P. Reed wrote:
<blockquote cite="mid:***@reed.com" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<font face="Helvetica, Arial, sans-serif">I don't know why I waste my
time explaining to Richard Bennett what he misreads, but here goes:</font><br>
<br>
On 10/24/2009 03:54 PM, Richard Bennett wrote:
<blockquote cite="mid:***@bennett.com" type="cite"><br>
<br>
Noel Chiappa wrote: <br> <blockquote type="cite">&nbsp;&nbsp;&nbsp; &gt; From: Richard Bennett
<a moz-do-not-send="true" class="moz-txt-link-rfc2396E"
href="mailto:***@bennett.com">&lt;***@bennett.com&gt;</a> <br>
<br>
&nbsp;&nbsp;&nbsp; &gt; Moors shows that the Saltzer, Reed, and Clark argument for
end-to-end <br>
&nbsp;&nbsp;&nbsp; &gt; placement is both circular and inconsistent with the FTP
example that <br>
&nbsp;&nbsp;&nbsp; &gt; is supposed to demonstrate it. <br>
<br>
I didn't see that at all. <br>
&nbsp; </blockquote>
Moors points out that TCP error detection and recovery is an end-system
function, but not really an endpoint function in the file transfer
example. The file transfer *application* is the endpoint, so placing
the error detection and recovery function in TCP is actually putting it
in an intermediate system level. This becomes clear when we recognize
that TCP is often implemented in hardware or in firmware running on a
CPU that lives on an interface card. The paper goes to great lengths to
show that host-based TCP is immune to problem induced at MIT by a bad
1822 interface card, but it was very common engineering practice in the
mid-80s to implement TCP on an interface card that had the same
vulnerability as the 1822 card. Excelan and Ungermann-Bass built these
systems and they were very popular. They designed in a competent level
of data integrity at the bus interface, so it wasn't necessary to rely
on software to detect bus problems. So it's at least ironic that the
end-to-end argument on the data integrity basis was mooted by practice
by the time the 1984 version of the paper was published. <br>
<br>
Because the file transfer program doesn't do its own data integrity
checking but relies on TCP to do it, it's not really an example of
endpoint placement at all; in fact, it's a "partial implementation". <br>
<br>
</blockquote>
OK.&nbsp; This is incredibly simple to understand.&nbsp;&nbsp; In the end-to-end
argument paper, we describe a program called "careful file transfer",
whose goal is to ensure that the file received is a proper copy of the
source.&nbsp; We use this "careful file transfer" example as a pedagogical
device.<br>
<br>
The paper carefully does not claim that TCP or FTP over TCP satisfy the
end-to-end argument required for the function "careful file transfer".
There was a reason: FTP/TCP does not do so.<br>
<br>
Now, RB claims that Moors's paper somehow says the argument is
inconsistent with the FTP example.&nbsp;&nbsp; Well, no.&nbsp;&nbsp; It is consistent with
the actual example we use, which is not FTP/TCP.<br>
<br>
Bennett may have joined late this particular discussion.&nbsp; If so, he
missed my earlier posting that said that the end-to-end argument did
not say "TCP is best".&nbsp; It was not a defense of TCP at all (unless you
accept his mind-reading of the authors' intent to somehow write the
paper to be part of some fight that Bennett imagines was going on).<br>
<br>
The end-to-end argument paper was not a paper about TCP or IP or any
particular implementation of any protocol, except insofar as it was
inspired by architectural discussions in the design, and was cited
quite frequently by IETF architects later as they considered designs
happening afterwards.&nbsp;&nbsp; It was about a way to think about architectural
questions - one that was used frequently and heavily in the original
TCP and IP design process, and as noted in the paper, in a number of
other processes we were aware of and had been involved in.<br>
<br>
<br>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC</pre>
</body>
</html>
David Andersen
2009-10-25 01:56:13 UTC
Permalink
Hi, Richard, and everyone -

Having read the e2e argument paper a number of times (I, like half the
networking faculty I know, use it as a discussion point in my graduate
networks and distributed systems courses), I think it's worth taking a
step back from this debate a bit, which has clearly extended beyond
the realm of the purely technical.

(The preceding paragraph may be taken, correctly, as a doubtless
unsuccessful plea to abandon the ad hominem attacks on all sides of
this argument in favor of what is, actually, a fun discussion.)

In particular, I believe that your argument is taking a (perhaps
deliberately) overly strong interpretation of what was actually
written in the e2e paper, which makes a somewhat subtle argument that
is devoid of absolute black and white insistence about the placement
of functionality. This seems like a very common misinterpretation --
it even shows up in the subject of this thread: "breaking" the end-to-
end argument, as if it was a law writ in stone by the hand of the
Internet Gods.

DPR's representation of the e2e argument in this discussion is
entirely in keeping with the text of the paper, which included no
mention of TCP providing reliability. In fact, it's become
increasingly clear over time that a careful file transfer system -- or
a careful storage system -- probably has to implement exactly the type
of strong error checking alluded to in the e2e paper. See, for
instance, a lot of recent work on long-term digital data preservation,
or, more commercially, the content-hash based storage systems now
provided by major vendors such as EMC.

TCP offload does not particularly seem to disagree with the point of
the e2e argument, as it was stated.

"It would be too simplistic to conclude that the lower levels
should play no part in obtaining reliability."

and the argument in the paper makes it very clear that there are
legitimate performance reasons for performing functions -- even
duplicating them -- at lower levels; one should simply make such
optimizations with full awareness of the consequences. Offload is a
great example: It's a very useful performance enhancement with
today's 10GE networks, and it may cause you some difficulties if you
want to take advantage of later enhancements to TCP. It's an
engineering tradeoff, pure and simple, which is all the "argument" is
about. So are 802.11 retransmissions, with which you're undoubtedly
familiar. Great, needed performance optimization -- they're an
excellent illustration of the "when you _should_ use the middle"
aspect of the paper.

-Dave Andersen
Post by Richard Bennett
There's no doubt about the fact that Saltzer was and still is
regarded as one of the brightest lights in the system architecture
firmament, and that in particular his seminal paper on naming and
addressing was one of the most cogent pieces of its kind ever
written. It's unfortunate that the structure of Saltzer's thinking
isn't reflected in the organization of Internet protocols, naming,
and addressing and that he wasn't able to pass his brilliance along
to all of his students.
RB
Post by David P. Reed
I don't know why I waste my time explaining to Richard Bennett what
Post by Richard Bennett
Post by Jaime Mateos
Post by Richard Bennett
Moors shows that the Saltzer, Reed, and Clark argument for
end-to-end
Post by Richard Bennett
placement is both circular and inconsistent with the FTP
example that
Post by Richard Bennett
is supposed to demonstrate it.
I didn't see that at all.
Moors points out that TCP error detection and recovery is an end-
system function, but not really an endpoint function in the file
transfer example. The file transfer *application* is the endpoint,
so placing the error detection and recovery function in TCP is
actually putting it in an intermediate system level. This becomes
clear when we recognize that TCP is often implemented in hardware
or in firmware running on a CPU that lives on an interface card.
The paper goes to great lengths to show that host-based TCP is
immune to problem induced at MIT by a bad 1822 interface card, but
it was very common engineering practice in the mid-80s to
implement TCP on an interface card that had the same vulnerability
as the 1822 card. Excelan and Ungermann-Bass built these systems
and they were very popular. They designed in a competent level of
data integrity at the bus interface, so it wasn't necessary to
rely on software to detect bus problems. So it's at least ironic
that the end-to-end argument on the data integrity basis was
mooted by practice by the time the 1984 version of the paper was
published.
Because the file transfer program doesn't do its own data
integrity checking but relies on TCP to do it, it's not really an
example of endpoint placement at all; in fact, it's a "partial
implementation".
OK. This is incredibly simple to understand. In the end-to-end
argument paper, we describe a program called "careful file
transfer", whose goal is to ensure that the file received is a
proper copy of the source. We use this "careful file transfer"
example as a pedagogical device.
The paper carefully does not claim that TCP or FTP over TCP satisfy
the end-to-end argument required for the function "careful file
transfer". There was a reason: FTP/TCP does not do so.
Now, RB claims that Moors's paper somehow says the argument is
inconsistent with the FTP example. Well, no. It is consistent
with the actual example we use, which is not FTP/TCP.
Bennett may have joined late this particular discussion. If so, he
missed my earlier posting that said that the end-to-end argument
did not say "TCP is best". It was not a defense of TCP at all
(unless you accept his mind-reading of the authors' intent to
somehow write the paper to be part of some fight that Bennett
imagines was going on).
The end-to-end argument paper was not a paper about TCP or IP or
any particular implementation of any protocol, except insofar as it
was inspired by architectural discussions in the design, and was
cited quite frequently by IETF architects later as they considered
designs happening afterwards. It was about a way to think about
architectural questions - one that was used frequently and heavily
in the original TCP and IP design process, and as noted in the
paper, in a number of other processes we were aware of and had been
involved in.
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
David P. Reed
2009-10-24 22:45:23 UTC
Permalink
Post by Richard Bennett
Like it or not, Noel, there was a lot of friction between the Network
Working Group and BBN over the control BBN had over the ARPANET
protocols inside the IMP. The interesting problems of the day in
protocol design were all behind the curtain to the people who used the
ARPANET, and that's frustrating to engineers. Nobody disagrees that
ARPANET was a huge first step in packet switching; but by 1981, people
were well into the second step, and the closed implementation of the
lower three layers was a problem.
This is both irrelevant, and bizarre. Again, Bennett focuses on imputed
motivations to impugn people's professional actions. There was no
friction that mattered - protocols were not designed to carry out
"anger". Since Bennett was not there, I can only assume he is talking
to some very angry people who were there.

In any case, lecturing Noel Chiappa, who has more experience with the
Internet and networking by far seems to be an odd thing to try to do.
I'd suggest people look at Bennett's resume at
http://www.bennett.com/resume.pdf. You might find his claims that he
was responsible for some of the most important IEEE protocols a bit
interesting. I take no position on the claims.
Richard Bennett
2009-10-24 23:46:42 UTC
Permalink
As usual, an attempt to discuss ideas in a forum inhabited by David Reed
quickly becomes an exercise in scurrilous personal attack; my role in
shaping IEEE 802 standards from 1984 to the present is a matter of
historical record than be discovered by any conscientious person in a
matter of minutes.

On the subject of BBN's standing in the early Internet community, I'll
simply note that the term "Big Bad Neighbor" was a common usage that I
did not coin myself, and Steve Crocker's comments in RFC 1 had a
well-understood subtext.

RB
Post by David P. Reed
Post by Richard Bennett
Like it or not, Noel, there was a lot of friction between the Network
Working Group and BBN over the control BBN had over the ARPANET
protocols inside the IMP. The interesting problems of the day in
protocol design were all behind the curtain to the people who used
the ARPANET, and that's frustrating to engineers. Nobody disagrees
that ARPANET was a huge first step in packet switching; but by 1981,
people were well into the second step, and the closed implementation
of the lower three layers was a problem.
This is both irrelevant, and bizarre. Again, Bennett focuses on
imputed motivations to impugn people's professional actions. There
was no friction that mattered - protocols were not designed to carry
out "anger". Since Bennett was not there, I can only assume he is
talking to some very angry people who were there.
In any case, lecturing Noel Chiappa, who has more experience with the
Internet and networking by far seems to be an odd thing to try to do.
I'd suggest people look at Bennett's resume at
http://www.bennett.com/resume.pdf. You might find his claims that he
was responsible for some of the most important IEEE protocols a bit
interesting. I take no position on the claims.
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
Durga Prasad Pandey
2009-10-25 16:02:28 UTC
Permalink
Post by Richard Bennett
I don't know why this question should get anybody upset, it's just a
question about the context and motivation of the paper in the first place.
None of the authors was part of the inner circle of the Internet protocol
design at the time the paper was written, although Clark was either the
Chief Architect of the Internet or on his way to becoming same. I would have
expected Cerf and Kahn to write something explaining the architectural
decisions they made in adapting the  framework to their system, but their
failure to do so meant someone else had to do it. Why these three people and
why this particular time? It's never been explained.
RB,

Is your next email going to be about(humor me..) how a secret
underground cult (now called NASA) funded a young clerk called
Einstein to produce theories of relativity so they could "stage" the
spage age and eventually perform a cultish ritual on the moon?
(Actually I could have sold this storyline to Dan Brown for gazillions
of dollars.)

You quite obviously love conspiracy theories + juicy gossip and were
probably looking for some in your first email on this thread. Now you
are going to ridiculous lengths to explain yourself. :)
Post by Richard Bennett
Why these three people and why this particular time? It's never been explained.
This is a flagship question of conspiracy theorists.
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End Args" is why it was written in the first place. Some people see it as a salvo in the ISO protocol wars, others as an attack in BBN's >>ARPANET, some as an attempt to criss the divide between engineering and policy, and there are probably other theories as well.
Conspiracy theories you mean?

One could ask these questions of almost any paper ever published. The
subtle thing they do is gently cast aspersions on the authors'
motivations. That's not a good thing to be trying to do.

The one good thing that your questions did is provoke detailed
responses from different people, some of which were very informative
to me(having been conceived well after TCP/IP, I do not have the
unique historical experience a lot of people on this list do). I liked
Dave Anderson's summary of the e2e paper too.

Durga
Durga Prasad Pandey
2009-10-25 16:18:25 UTC
Permalink
Actually, after having read Noel's latest email, I realize my email
was redundant(he articulated all I meant to say, and more, much more,
umm, articulately..). Though I love the coincidence in reference to
Einstein. (btw, coincidences are fertile grounds for conspiracy
theories!).
Post by Durga Prasad Pandey
Post by Richard Bennett
I don't know why this question should get anybody upset, it's just a
question about the context and motivation of the paper in the first place.
None of the authors was part of the inner circle of the Internet protocol
design at the time the paper was written, although Clark was either the
Chief Architect of the Internet or on his way to becoming same. I would have
expected Cerf and Kahn to write something explaining the architectural
decisions they made in adapting the  framework to their system, but their
failure to do so meant someone else had to do it. Why these three people and
why this particular time? It's never been explained.
RB,
Is your next email going to be about(humor me..) how a secret
underground cult (now called NASA) funded a young clerk called
Einstein to produce theories of relativity so they could "stage" the
spage age and eventually perform a cultish ritual on the moon?
(Actually I could have sold this storyline to Dan Brown for gazillions
of dollars.)
You quite obviously love conspiracy theories + juicy gossip and were
probably looking for some in your first email on this thread. Now you
are going to ridiculous lengths to explain yourself.  :)
Post by Richard Bennett
Why these three people and why this particular time? It's never been explained.
This is a flagship question of conspiracy theorists.
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End Args" is why it was written in the first place. Some people see it as a salvo in the ISO protocol wars, others as an attack in BBN's >>ARPANET, some as an attempt to criss the divide between engineering and policy, and there are probably other theories as well.
Conspiracy theories you mean?
One could ask these questions of almost any paper ever published. The
subtle thing they do is gently cast aspersions on the authors'
motivations. That's not a good thing to be trying to do.
The one good thing that your questions did is provoke detailed
responses from different people, some of which were very informative
to me(having been conceived well after TCP/IP, I do not have the
unique historical experience a lot of people on this list do). I liked
Dave Anderson's summary of the e2e paper too.
Durga
--
“Never doubt that a small group of thoughtful, committed citizens can
change the world. Indeed, it's the only thing that ever has.”
-- Margaret Mead
John Day
2009-10-25 02:16:35 UTC
Permalink
Post by Noel Chiappa
The reference to the "ISO protocol wars" is completely mystifying, as the
architecture of the ISO stack (at least, the CLNP/TP4 flavour, which was the
subset which gave TCP/IP the best 'run for their money') is basically
identical to that of TCP/IP (modulo disagreements on certain arcane points,
such as exactly what kind of abstract entities the names at the various levels
refer to - a subject wholly unrelated to the end-end debate).
Cmon Noel, you know better than that. That was never what the
protocol wars were about.

It was not a war between CLNP/TP4 and TCP/IP, but a war between
(CLNP/TP4; TCP/IP) and X.25. The argument at the time by the PTTs
was that a Transport Protocol was unnecessary. Our argument, of
course, was that it was absolutely necessary. This was the big
argument from about 1976 to 1985. This is primarily what the
end-to-end paper discusses and tries to create a "higher moral
ground" by creating a more general (and hence more fundamental)
principle to base the debate on.

It was only later that the unwashed in the IETF turned it into a CLNP
vs IP war.

Take care,
John
Jon Crowcroft
2009-10-25 09:40:00 UTC
Permalink
This is exactly right -

UCL were very much at the
"centre" of this at the turning point
(turning of NCP and on TCP/IP) and relaying from the new
internet world to/from the X.25 (and other) worlds -

The CLNP/TP4 paer of the ISO thing came to a small
degree from the DEC
(whose ideas show up again later in
TCP congestion control and later still in ECN)
which was a computer science approach to networking,
not telco at all.

The "war" was between telco
(connection oriented, reliable, ordered)
and computer science
(connectionless, best effort)

on the one hand
1. Sheer complicatedness of X.25
(both on paper and in reality)
made it hard to get right,
and the lack of losses on newer links (LANs) +
X.25 interpretations and implmentations'
failure to mask dynamic routing from transport,
and therefore from applications,
meant it was no longer justifiable to build
such complicated networks
which were also getting cheaper, albeit slowly.

on the other
2. Increase in end system capability
(e.g. mini computers like PDPs and LSIs,
and shortly after that, workstations)
meant, contrariwise,
the end system effort in TCP was justifiable.


likewise
3.
A little too late, some smart switch people build
x.25 systems that did VC on the edge,
but datagrams within and went fast,
but missed the curve (e.g. netcom switches).
Some of those people showed up again doing ATM
(e.g. ipsilon), understanding a VC service, but
internal dynamics with pnni etc might work - again
a little too late (and cell switching didn't have the
switch speed up/cost reduction they needed to beat routers)

TP4 (which my colleagues did experiments with)
was a neat piece of design, but has nothing much
to do with any of the main protocol wars ...

The other war story people might be getting confused by
when mentioning CLNP is the IPng NSAP/CLNP fiasco...
again not part of the e2e arguments part of ISO v. DARPA
but its own later skirmish between newer players.

To be strictly fair then,
while the bogey man in many a tee-shirt slogan was ISO,
it was the ITU (or CCITT as was) and the
telco mind set that was connection oriented networking
(with reliable link and network service)
specifically that was the focus of that "war"
or as I prefer to see it, a debate that played out
in markets and in operations -

Everyone has their own particular
turning point tale I'm sure,
but when we built the "shoestring"
IP service for UK academics,
this was a clear point for me that
we were able to make the
particular end2end choice visible

Around that time too,
various governments turned off their default
GOSIP (government OSI procurement) policies...

Much has flowed over many bridges since then:-)

A sad recent error has been
EU statements that everyone doing
next gen internet research should be
trying to converge on IPv6, but
that's a whole other rant...


cheers
j.
Post by John Day
It was not a war between CLNP/TP4 and TCP/IP, but a war between
(CLNP/TP4; TCP/IP) and X.25. The argument at the time by the PTTs
was that a Transport Protocol was unnecessary. Our argument, of
course, was that it was absolutely necessary. This was the big
argument from about 1976 to 1985. This is primarily what the
end-to-end paper discusses and tries to create a "higher moral
ground" by creating a more general (and hence more fundamental)
principle to base the debate on.
It was only later that the unwashed in the IETF turned it into a CLNP
vs IP war.
Take care,
John
cheers

jon
Jaime Mateos
2009-10-25 13:00:12 UTC
Permalink
Post by Jon Crowcroft
A sad recent error has been
EU statements that everyone doing
next gen internet research should be
trying to converge on IPv6, but
that's a whole other rant...
That's a rant I'm interested about. Where in your opinion does IPv6,
specially features such as flow label support, fit in the end to end
argument?
Jon Crowcroft
2009-10-25 13:25:38 UTC
Permalink
ok, as you asked.
but this is euro-centric so others on the list might
not want to read on...

the next gen research programmes should be
about ideas that will postdate the internet -

IPv6 is, errm, around ~15 year old idea -

A few things came along in the last 15, 10, 5 years
that are already stressing out the basics -
of the entire type of networking that the subject line
discussion is about...

Lots of people can make their lists,
but in no particular order, my
problem with the entire approach to
nets predicated on packets and links and nodes
comes out of the pressure from
1. trying to do multihop, multiantennae radios
2. dealing with net coding

3. dealing with a future pretty soon now
where there are 10 billion mobile
devices and because of critical infrastructure,
we want to be organisationally (and not just topo/geo)
multihomed, and multipath (for resiliance of
access and flow resilience)

4. Coping with sub-lambda multiplexing
on 100Gbps optical paths... is another
strange vector (I suppose gMPLS afficionados
believe they have this one under control, but
I don't)


This had already start to creep in with
the internet of things, (sorry, I know thats just buzzword)
social nets, and content centric networking...

But new paradigms for traffic patterns
loosely starting with
content based networking,
now with very large scale rendezvous of
user contributed media and user interest...

The tension between authentic source identification,
(and sink), provenance of content,
and the requirements for privacy,
also puts a lot of pressure on the
net and application architecture
particularly when the matchmaking indixes
for this stuff have to be rebuilt faster and faster (see
what fb, imdb, amazon, etc etc)
have to do every night:)
(so they can get their targetted advertisement revenue
that lets us all use this stuff so cheap)


So where is the action?
That would be telling:-)

Flow labs and end-to-end arguments?
well, I guess what I am trying to say above
is that we are rapdily seeing the sublimation
of the entire naive idea of an "end point"
in network, transport and application terms
so I can't answer your specific query...

cheers
Post by Jaime Mateos
Post by Jon Crowcroft
A sad recent error has been
EU statements that everyone doing
next gen internet research should be
trying to converge on IPv6, but
that's a whole other rant...
That's a rant I'm interested about. Where in your opinion does IPv6,
specially features such as flow label support, fit in the end to end
argument?
cheers

jon
Jaime Mateos
2009-10-25 14:15:54 UTC
Permalink
Post by Jon Crowcroft
Flow labs and end-to-end arguments?
well, I guess what I am trying to say above
is that we are rapdily seeing the sublimation
of the entire naive idea of an "end point"
in network, transport and application terms
so I can't answer your specific query...
Do you mean we need to redefine the "end points" beyond the conventional
meaning of hosts, but keep applying the end to end argument with its
bias towards simpler, more flexible networks; or that we should do away
with end to end, and recognize that the new demands/pressures you list
above can only be met with network based protocols?
Cheers,
Jaime
Jon Crowcroft
2009-10-25 16:39:34 UTC
Permalink
Jaime,

maybe - i'm too staid and stuck in my ways to come up with
new stuff, but it worries me

i) that we use simple graphs and
labeling to describe things (wireless) that isnt

ii) there are ways to do content centric networking
(couple of papers comin up in CoNeXT this year in Rome)
mapped into ipv6 which only slightly fly in the face of the
ipv6 address structuring conventiosn (there are always ways
to hash object id's into a bit number space) but that doesn't
address some of the things one might do with time shifting

iii) I'm aware that there are people workin on "fuzzy end
points" - perhaps this is just the mist around the edge of
clouds...

iv) i'm not sure people take on board what new technology
like multicore do to your OS/protocol stack
or like Terminator does to your code safety/security
enough...

and lots more stuff
Post by Jaime Mateos
Post by Jon Crowcroft
Flow labs and end-to-end arguments?
well, I guess what I am trying to say above
is that we are rapdily seeing the sublimation
of the entire naive idea of an "end point"
in network, transport and application terms
so I can't answer your specific query...
Do you mean we need to redefine the "end points" beyond the conventional
meaning of hosts, but keep applying the end to end argument with its
bias towards simpler, more flexible networks; or that we should do away
with end to end, and recognize that the new demands/pressures you list
above can only be met with network based protocols?
cheers

jon
John Day
2009-10-25 03:31:12 UTC
Permalink
Post by Noel Chiappa
The reference to the "ISO protocol wars" is completely mystifying, as the
architecture of the ISO stack (at least, the CLNP/TP4 flavour, which was the
subset which gave TCP/IP the best 'run for their money') is basically
identical to that of TCP/IP (modulo disagreements on certain arcane points,
such as exactly what kind of abstract entities the names at the various levels
refer to - a subject wholly unrelated to the end-end debate).
And to add to my previous note, CLNP didn't even exist when the e2e
paer was written.
Dave CROCKER
2009-10-26 14:52:33 UTC
Permalink
Post by Noel Chiappa
For one, NATs became widespread mostly a result of flaws in the original
engineering (too small an address space) and architecture (too few namespaces,
leading to difficulty in supporting things like provider independence). NATs
are not inherently desirable, and would not, I think, have
evolved/proliferated had TCP/IP avoided those (in hindsight, now obvious)
mistakes.
The name "NAT" certainly justifies the claim that they were created to resolve
an issue with addressing. And given what they do to an address, they certainly
affect end-to-end behaviors.

But there is pretty strong indication that something like them would have been
needed anyhow. For example... Back when CIDR was being deployed -- and repeated
in the Tussles in Cyberspace paper -- it was observed that the way addresses are
structured locks a user into their provider. NATs fix that (if you don't have
any publicly-visible servers inside your net.) In other words, NATs have
important administrative benefits that need to be acknowledged.

The interface between an organization and the outside world needs a potentially
sophisticated boundary device, no matter how wonderful the Net's addressing
scheme.

Whether the legitimate services of such a boundary device impinges on E2E
principles is a worthy discussion, but we ought to be careful not to dismiss the
topic with the usual, quick wave of the E2E flag over the limited and rotten
corpse of the address-translation-is-bad assertion.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
Noel Chiappa
2009-10-25 03:20:10 UTC
Permalink
Wow, you all have been busy. I'll have to get to some tomorrow, but this one
for now...
Post by Noel Chiappa
The reference to the "ISO protocol wars" is completely mystifying, as
the architecture of the ISO stack (at least, the CLNP/TP4 flavour,
which was the subset which gave TCP/IP the best 'run for their money')
is basically dentical to that of TCP/IP
Cmon Noel, you know better than that. That was never what the protocol
wars were about.
John Day
2009-10-25 03:48:00 UTC
Permalink
Read Abbate; TCP/IP versus CLNP/TP4 was a real set-to. As a TCP/IP backer, I
was far more worried about CLNP/TP4 than I ever was about X.25, which was
clearly a rusty assegai in a world of repeating rifles.
Read Abbate? You must be kidding. Why would I use 3rd hand sources?

You seem to forget, I was there. Starting with getting datagrams in
the first X.25 Recommendations (1976) thru the fight to get
connectionless into OSI. The debates around the IONL, getting CLNP
done, ensuring that it named the node so it would be a true Internet
protocol rather than a subnet protocol masquerading as an Internet
protocol.

As I said before, in 1982 there was no CLNP. In fact, TP4 didn't
exist either accept as a revised draft of INWG 96/ CYCLADES TS. The
connectionless addendum to ISO Reference Model wouldn't even be
approved as a draft for another year.

It seems that you and Abbate have been looking in the wrong end of
the telescope.

It would be a little hard for the e2e paper in 1982 to be about the
CLNP/TP4 vs TCP/IP debate if half the debate didn't exist.
Unwashedly yours, :-)
Noel
Noel Chiappa
2009-10-25 03:59:04 UTC
Permalink
Post by John Day
It would be a little hard for the e2e paper in 1982 to be about the
CLNP/TP4 vs TCP/IP debate if half the debate didn't exist.
Umm, I didn't say that the E2E paper was about the CLNP/TP4 vs TCP/IP 'debate'
(competition would be more accurate, I think). You seem to be conflating my
comments on two entirely separate points.

One was that for _some_ of us, the only 'ISO protocol war' we saw was the
CLNP/TP4 vs TCP/IP competition.

The other was that the E2E paper was not principally intended as firepower in
the TCP/IP vs X.25 debate, but rather was more intended to be exactly what
the passage of time has shown it to be - a contemplation of the underlying
fundamentals of functionality placement, one which would be of lasting value.

Noel
William Allen Simpson
2009-10-25 08:29:30 UTC
Permalink
Post by Noel Chiappa
One was that for _some_ of us, the only 'ISO protocol war' we saw was the
CLNP/TP4 vs TCP/IP competition.
As an implementor during that period, I have to agree with Noel.

Back in the late '70s, we used X.25 for transmission because that's the
only thing that AT&T (via Telenet, with an 'e') would sell us. For
satellite data to move from field stations, as a practical matter we
were constrained by availability.

Just as weather data only came in 5-bit baudot coding. I programmed a
dedicated Alpha Micro (in reality, a minicomputer) to translate to 7-bit
ASCII. Then, to move it from the Alpha Micro to the Perkin-Elmer
Interdata 7/16, I used IBM bisync.

Before I'd ever heard of TCP/IP, I simply rolled my own "higher level"
packet format, to have a commonality over both X.25 and bisync. But it
was clear that had to have its own checksum. Folks seem to forget that
I-O buses were very unreliable. Corrupted data and dropped interrupts
were common. An independent transmission layer was crucial.

As I've mentioned recently, it wasn't until later that Merit decided to
implement TCP/IP. Remember, Merit had its own protocol stack as far
back as the '60s. ARPA was a relative late-comer around here....

It wasn't until CLNP that I ever perceived a 'war' (or competition). To
me, it was always apparent that the war wasn't with the protocols per
se, but rather the corporate entities that wanted to control pricing.

When I worked on Michigan's NSFnet bid, we took the money from state
budget line items that were using ISO protocols. To the politicians,
the potential savings over dedicated telco lines were a big plus. We
were still in the aftermath of the Reagan Recession.

And that was another element that is often overlooked. I know this is
less relevant outside the US, but the ISO corporate proponents were
primarily Republicans. The NSFnet proponents were primarily Democrats,
who were very interested in competition, cost savings, and leveling the
playing field.
Post by Noel Chiappa
The other was that the E2E paper was not principally intended as firepower in
the TCP/IP vs X.25 debate, but rather was more intended to be exactly what
the passage of time has shown it to be - a contemplation of the underlying
fundamentals of functionality placement, one which would be of lasting value.
Admittedly, I didn't read the paper until much later, so it had little
influence on my thinking. I vaguely remember a "well, duh" reaction, as
surely every implementor in the field already knew from experience.

But it has continued as an enduring touchstone to help explain these
fundamentals to successive generations.
John Day
2009-10-25 11:55:57 UTC
Permalink
Post by Noel Chiappa
Post by John Day
It would be a little hard for the e2e paper in 1982 to be about the
CLNP/TP4 vs TCP/IP debate if half the debate didn't exist.
Umm, I didn't say that the E2E paper was about the CLNP/TP4 vs TCP/IP 'debate'
(competition would be more accurate, I think). You seem to be conflating my
comments on two entirely separate points.
No, you are the one who is conflating. The original point that I
made was about the context in which the e2e paper was written.What
was happening leading up to it.
Post by Noel Chiappa
One was that for _some_ of us, the only 'ISO protocol war' we saw was the
CLNP/TP4 vs TCP/IP competition.
This may be how it appeared in your corner of the world and when you
appeared in the discussion. By the time you got to MIT, this had
been going on for some time.

The protocol wars began in 1975 (or thereabouts) when we learned that
CCITT was developing X.25 and we tried to get datagrams put it into
it.

In fact, the TCP vs "TP4", i.e. CYCLADES TS, debate had been over for
4 years at that point. THAT debate had been carried out between 1974
and late 1977 in IFIP WG6.1 (INWG) where several Transport protocols
(not just those two) were discussed and analyzed. WG6.1 was managing
at least a couple of meetings a year at that time. There were many
participants from the US (APRANet/Internet) and Europe. The result
was INWG96 published at Danthine's conference in early 1978. As far
as we were concerned that ended the transport protocol discussion.
(Note that INWG 96 was authored by people from all sides of the
discussion: Cerf (Internet), MacKenzie (Internet), Scantlebury
(NPL), and Zimmermann (INRIA). As far as we were concerned there
never was a TCP/TP4 war, that issue was resolved before the standards
battles started in earnest.

Of course, now we know that neither was the best choice and that
delta-t was far superior to both.

As far as the "IP" discussion, we were pretty happy with that. The
only things we knew we needed to do for a world wide protocol was a
bigger address field and fix the problem that had arisen in 1972 with
Tinker AFB. We needed to name the node rather than the interface.
No one saw this as a big deal. XNS, CYCLADES, DECNet, EUNet had all
done that. (I always contend that the ARPANET didn't so much get it
wrong as we had a lot of other things to worry about and it was
first.)

This is why I was so shocked when the IETF refused to name the node
in 1992. We had known about the issue for 20 years. Everyone else
had by then fixed it. That was when it became clear that the IETF
had become more a craft guild than an engineering group and relied
more stock in tradition. Actually, it had begun before that but as
they say the third time is the charm.

But again here we were still to attached to the beads-on-a-string
model that we thought we had refuted. There was a further
simplification that we couldn't see at that point.
Post by Noel Chiappa
The other was that the E2E paper was not principally intended as firepower in
the TCP/IP vs X.25 debate, but rather was more intended to be exactly what
the passage of time has shown it to be - a contemplation of the underlying
fundamentals of functionality placement, one which would be of lasting value.
It certainly spends a lot of space arguing that point. In fact, in
the period leading up to its publication what other issue of what to
put in the network vs the hosts was being debated? As I said, I have
always seen the e2e paper as an attempt to create a more general
principle to refute that hop-by-hop error control could supplant e2e
error control. And by having a more general principle that when
other things were proposed for "in the network" there would be
something we could point to.

I am afraid that your (and Abbate's) perspective on what game was
afoot was very narrow and taken out of context with the war as a
whole.
David P. Reed
2009-10-25 12:43:24 UTC
Permalink
Post by John Day
Post by Noel Chiappa
The other was that the E2E paper was not principally intended as firepower in
the TCP/IP vs X.25 debate, but rather was more intended to be exactly what
the passage of time has shown it to be - a contemplation of the underlying
fundamentals of functionality placement, one which would be of lasting value.
It certainly spends a lot of space arguing that point. In fact, in
the period leading up to its publication what other issue of what to
put in the network vs the hosts was being debated?
I always find the logic of this sort amazing: in my rhetoric class it
was called "post hoc, ergo propter hoc" argumentation. That is:
because X happened and it appeared to support Y, then Y was the reason X
happened. ("this was the result, therefore this was the reason")

John Day asks the question "what other issue" to replace actual
exploration. Since there have been numerous times in the past when he
could have just asked Jerry, Dave, or me, yet he persists in his belief,
perhaps he has some reason to imply that we would not tell him why we
wrote the paper. In fact, we have made a variety of statements as to
why we wrote it, in informal but public places. Yet he holds onto a belief.

Now in some circles the desire to hold onto a belief about others'
actions and motivations despite overwhelming evidence to the contrary is
understood to come close to that of conspiracy theorists. I don't
understand Day's obsession with this point. Since it has infected
Richard Bennett's writings, and is being repeated to the FCC in policy
debates supported by his "think tank" employer, it probably should be
recognized for what it is: a false belief.
Post by John Day
As I said, I have always seen the e2e paper as an attempt to create a
more general principle to refute that hop-by-hop error control could
supplant e2e error control. And by having a more general principle
that when other things were proposed for "in the network" there would
be something we could point to.
I am afraid that your (and Abbate's) perspective on what game was
afoot was very narrow and taken out of context with the war as a whole.
David P. Reed
2009-10-25 12:16:03 UTC
Permalink
Post by Noel Chiappa
Post by John Day
It would be a little hard for the e2e paper in 1982 to be about the
CLNP/TP4 vs TCP/IP debate if half the debate didn't exist.
Umm, I didn't say that the E2E paper was about the CLNP/TP4 vs TCP/IP 'debate'
The end-to-end paper was not written to be a part of any war
whatsoever. I say that with knowledge of all of the 3 co-authors'
intents and motivations.

I *now* am beginning to understand what mentality seems to be behind
Bennett's informants' views of history.

It's an oddly American cultural thing to identify evolutionary and
economic competitions as "wars". We have the "war on drugs", for
example. Somehow the "wars" become ends in themselves: winnning vs. losing.

But studying the competitions rarely provides insight into real issues.
Did the Battle of Gettysburg really tell us anything about either the
causes: an economy based on human slavery and "free market" ideals (the
American plantation south) vs. an economy based on industrialization,
internal market growth, and protectionist/imperialist approaches (the
American northeast)? (who won that battle didn't even stop the
argument...) Did the Battle of Gettysburg predict the creation of "Jim
Crow laws"? Did it prevent them?

I do remember a wide variety of battles about elements of networking,
including implementations of architectures. I briefly was involved in
the "token ring" vs. "bus" argument about physical infrastructure for
LANs - not as an advocate of either side - and I found it completely
bizarre. The idea on the "token ring" side was that somehow its
puissant "quality of service" would make the end-to-end communications
work, while the "unreliable" CSMA/CD would never be "carrier grade".
Bushwah - and I gently pointed that out in my portion of the original
paper I wrote about LANs with Clark and Pogran for IEEE Proceedings:
LANs would be parts of internets, and the QoS properties of a single LAN
would not transcend that LAN's scope.

That argument was an implied end-to-end argument: if you want to get a
specific service quality goal satisfied, putting the implementation of
that function in (every part of every one of the) subnetworks is not a
good design.

However, one can imagine achieving it that way. The resulting
architecture would be very inflexible, and costly to all of those who
*don't* need that particular extreme service quality goal. It would,
for example, make it hard to migrate the path of any communication while
it is happening.

In any case, the rather silly battle over CSMA/CD vs. token-controlled
access was pretty meaningless in the context of any kind of
internetworking: PUP or TCP or even the Cyclades thing.

But the "QoS" logic of that day pervades the debate even now. It's so
easy for those who don't build networks to wave their hands and say that
(for example) Verizon can provide QoS on the Internet, when what is
meant is that Verizon provides some latency control on *it's small
segment of the path* to all interesting endpoints.

This is a rhetorical device called synecdoche - in which the attributes
of a part are assumed to carry through to the whole. The idea that
Verizon can create QoS for the Internet by creating QoS for its part is
a logical fallacy. But it is one that humans continue to fall prey to.

So to summarize this longer diversion: the war between token ring and
bus-Ethernet teaches us little, and subsequent success of Ethernet as a
term teaches us very little about architecture: in fact, by shrinking
the collision domain to a hub, and then later replacing it with a switch
in the core, the Ethernet has moved towards the deterministic arbitrated
structure of the token ring, along with the manageability of the
"star-shaped" ring concept that Saltzer promoted as the topology.

So let's not study wars and battles. Let's study architectural
principles and their application.
L***@surrey.ac.uk
2009-10-25 16:10:11 UTC
Permalink
Post by David P. Reed
So let's not study wars and battles. Let's study architectural
principles and their application.
So it is a principle after all, then?

(It strikes me that if one is aspiring to be Darwin one shouldn't
also have to be Dawkins.)

<http://www.ee.surrey.ac.uk/Personal/L.Wood/><***@surrey.ac.uk>
David P. Reed
2009-10-25 17:13:48 UTC
Permalink
Post by L***@surrey.ac.uk
Post by David P. Reed
So let's not study wars and battles. Let's study architectural
principles and their application.
So it is a principle after all, then?
(It strikes me that if one is aspiring to be Darwin one shouldn't
also have to be Dawkins.)
To be precise, the end-to-end argument refers to a class of arguments,
with a few free variables. Jerry and I (and perhaps Dave) have long
been students of a thing called "rhetoric". In classical rhetoric (a
la Aristotle), one discusses what kinds of arguments are valid. Logic
is a part of rhetoric, but rhetoric as a whole includes many other
aspects of argumentation.

Linking rhetoric to formal logic, the end-to-end argument would be one
of a set of "rules of inference" - acceptable combinators that take
other factors into account and provide new valid statements.

Substituting for the free variables in the end-to-end argument provides
a large set of implied valid statements.

Now rhetoric includes the mechanisms for argumentation where there is
not one "truth". In fact, in general, rhetoric handles cases where one
line of argumentation supports a particular decision, and another line
supports a different decision. Therefore, rhetoric has been part of the
general set of tools that lawyers use for argumentation - there is no
guarantee that all laws are consistent, nor are their set of valid
arguments complete in the sense of deciding every case.

Architecture is like that: it is why good systems architects need to
understand rhetoric in all of its glory.

Examples of rhetoric:

The much misunderstood "ad hominem argument" - which is an argument that
makes claims based on who is making a particular claim. The term has
been redefined in popular culture to mean "insult", but in fact it was
never that.

The argument "post hoc ergo propter hoc". That is the idea that
"correlation equals causation" - John Day's argument that because of the
date of publication, Jerry Dave and I must have invented the argument as
part of some contemporaneous battle about network adoption.

etc.

I commend study of rhetoric to all. It helps decode the rather strange
logics that pervade discourse today.
Noel Chiappa
2009-10-25 15:03:03 UTC
Permalink
[Apologies to all for dipping backwards a ways into the stream, but this
message contained what I felt was an important point to take on, and I didn't
have time/energy to do so last night.]
Post by Richard Bennett
Post by Noel Chiappa
Post by Richard Bennett
Moors shows that the Saltzer, Reed, and Clark argument for end-to-end
placement is both circular and inconsistent with the FTP example that
is supposed to demonstrate it.
I didn't see that at all.
Moors points out that TCP error detection and recovery is an end-system
function, but not really an endpoint function in the file transfer
example.
This is all true, but I still don't see (for reasons such as that the
real-world FTP isn't the 'reliable FTP' the paper talks about) that it amounts
to "[the] argument for end-to-end placement is both circular and inconsistent
with the .. example that is supposed to demonstrate it", which is what I
disagreed with.

But this is not important, I would prefer to move on to a more important point.
Post by Richard Bennett
Post by Noel Chiappa
Post by Richard Bennett
One of the more interesting unresolved questions about "End-to-End
Args" is why it was written in the first place. Some people see it as
a salvo in the ISO protocol wars, others as an attack in BBN's
ARPANET, some as an attempt to criss the divide between engineering
and policy
I don't know whether to be amused or outraged by this nonsense.
I don't know why this question should get anybody upset, it's just a
question about the context and motivation of the paper in the first
place.
The problem is that it's like asking why Einstein wrote his thermionic
emissions paper. Even in a purely 'history of science' way, this question is
orders of magnitude less important than the question of the correctness of
the technical content itself.

And hoping that knowing exactly (even if one could know such a thing) _why_
it was written will tell you _anything_ about how correct it is, in and of
itself, is utterly misguided. Down that path lies "Transgressing the
Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity".

To even bring the question of 'why was it written' up in a discussion about
the correctness of the technical content is, in a sense, to attack the very
ideals of technical debate, which is _supposed_ to focus on the technical
content, and leave all else (including people, and motivations), out of it.
Post by Richard Bennett
None of the authors was part of the inner circle of the Internet
protocol design at the time the paper was written, although Clark was
either the Chief Architect of the Internet or on his way to becoming
same.
Say what? Reed was still deeply involved in Internet work at that point (he'd
been one of the people making the case to split TCP up into IP and TCP), as
was Clark (who was writing prototype TCP code, and papers based on the
lessons he learned doing so). And the two of them were extremely close
professional colleagues of Saltzer, with offices basically next door to each
other on the 5th floor. And although Jerry didn't go to meetings or write
code, as a key member of that research group, he was definitely thinking
about the things the rest were working on - as can be seen by his other
papers from that time period.
Post by Richard Bennett
I would have expected Cerf and Kahn to write something explaining the
architectural decisions they made in adapting the framework to their
system
I don't know Vint well enough to say with any confidence, but to hazard a
guess, but that kind of deep design philosophy thing just doesn't seem like
the kind of thing he'd go for - I think his focus was at a different level.
Post by Richard Bennett
Why these three people and why this particular time?
Why them? Because they were very close professional colleagues who habitually
thought at that level (i.e. design philosophy). Why then? Because at that
point they'd been thinking about networking for a while, and had gotten to
the point where they could usefully apply the kind of systems architecture
analysis that group was known for.
Post by Richard Bennett
It's never been explained.
The fact that you could even bother to ask that question, or think that the
answer is of any interest other than in a 'history of science' way, is very
illuminating.
Post by Richard Bennett
there was a lot of friction between the Network Working Group and BBN
over the control BBN had over the ARPANET protocols inside the IMP.
Sure, but that was long before the period when 'End-End Arguments' was being
turned out.
Post by Richard Bennett
The interesting problems of the day in protocol design were all behind
the curtain to the people who used the ARPANET, and that's frustrating
to engineers. ... by 1981, people were well into the second step, and
the closed implementation of the lower three layers was a problem.
Huh? Even by 1978 the people active in the Internet project treated the
ARPANet as a black box which we didn't really have much interest in, and
didn't really concern ourselves with it.

Frankly, for most of us, we were up to our asses in alligators getting all
these various new technologies (LANs, etc) up and running, and didn't have a
lot of time to worry about anything else anyway. What little time we did have
for deep thinking went to things like how to better organize OS software to
deal with networking, etc, etc.

Noel
Noel Chiappa
2009-10-25 15:24:34 UTC
Permalink
This is why I was so shocked when the IETF refused to name the node in
1992. ... That was when it became clear that the IETF had become more a
craft guild than an engineering group and relied more stock in
tradition.
Some of us still have the scars on our foreheads from that one... :-)
There was a further simplification that we couldn't see at that point.
To get back to actual technical content, that would be...? (I'm sure it's
obvious, but it's not coming up for me...)

Noel
Noel Chiappa
2009-10-25 15:44:14 UTC
Permalink
Post by Richard Bennett
On the subject of BBN's standing in the early Internet community, I'll
simply note that the term "Big Bad Neighbor" was a common usage that I
did not coin myself, and Steve Crocker's comments in RFC 1 had a
well-understood subtext.
I think you're confusing the "early Internet community" with the 'early
ARPANet community'.

The view of the various divisions at BBN (since at one point the Internet work
was being done in a different division of BBN from that responsible for the
ARPANet) by other workers in the early Internet community was a complex, and
hence lengthy, one - and also off-scope for this list (may I suggest the
'Internet-history' list if you really want to explore the topic).


It seems to me that the 'end-end design ideas' have gotten mixed up in what
is, at the bottom, a fight over how to divide up the economic pie of
communication networks.

This is not an unknown occurrence - scientific work on things like the size
of the Artic ice-sheet, and discovery of new fossil species, has equally
become wound up in disputes which are far larger.

I don't have any pithy response to that, and any longer comment would also be
off-topic, so I will simply make the observation and leave it there.

Noel
Richard Bennett
2009-10-26 02:04:49 UTC
Permalink
Post by Noel Chiappa
It seems to me that the 'end-end design ideas' have gotten mixed up in what
is, at the bottom, a fight over how to divide up the economic pie of
communication networks.
You mean the end-to-end design ideas have gotten mixed up in a fight
over not changing how the economic pie is currently divided. 37 years of
networking history boils down to this:

1. Pouzin designs CYCLADES as a layered system of protocols in order to
experiment with some interesting ideas about reliability, performance,
and routing; it's all based on datagrams.

2. Pouzin and Kahn share some ideas and Internet ends up following the
same design as CYCLADES, modulo addressing. DECNet, XNS, and TP4/CLNP
follow.

3. End-to-End Args proposes applying the notion of smart, reliable
endpoints communicating over unreliable comms system to all sorts of
other things as a rhetorical trick.

4. Internet eventually becomes an open (public) system.

5. RFC 1958 says "let's not descend into dogma."

6. Clark and Blumenthal's Brave New World says "end to end still has value."

7. Lessig reads Brave New World as saying "capitalism is corrupting the
Internet; Save End-to-End!"

8. Moors points out that E2E Args never did describe the Internet.

9. AT&T admits to being a capitalist entity.

10. Google's MCI vets worry that telcos will put them out of business
like they did MCI unless end-to-end is law.

11. Public interest groups push for end-to-end law.

12. FCC asks: "What's wrong with descending into dogma? That's what we do."

13. Angry old hippies go "Right on, FCC, your daddy's Internet is good
enough for you!"

And that's where we are today.

RB
L***@surrey.ac.uk
2009-10-26 06:19:29 UTC
Permalink
Post by Richard Bennett
3. End-to-End Args proposes applying the notion of smart, reliable
endpoints communicating over unreliable comms system to all sorts of
other things as a rhetorical trick.
Reed's comment on rhetoric bringing structure to logical argument
has nothing to do with a paper doing engineering analysis and
drawing insights from commonalities.

(I would submit that to fully understand its arguments and internalize
its concepts, an engineering mindset is required.)

<http://www.ee.surrey.ac.uk/Personal/L.Wood/><***@surrey.ac.uk>
Richard Bennett
2009-10-26 09:07:20 UTC
Permalink
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
If the word "trick" brings offense, we can just as easily substitute
the alternative "exercise" with no loss of meaning, old sport. If one
cared to understand the nature of rhetoric within the sphere of
philosophy, one could do worse than read "Zen and the Art of Motorcycle
Maintenance" of course; taking on Aristotle without the help of
professional guidance might easily lead one astray.<br>
<br>
Logical argument isn't particularly in need of structure by external
means when it's done correctly, of course.<br>
<br>
RB<br>
<br>
<a class="moz-txt-link-abbreviated" href="mailto:***@surrey.ac.uk">***@surrey.ac.uk</a> wrote:
<blockquote
cite="mid:***@EVS-EC1-NODE4.surrey.ac.uk"
type="cite">
<meta http-equiv="Content-Type" content="text/html; ">
<meta name="Generator"
content="MS Exchange Server version 6.5.7653.38"> <title>RE: [e2e] Protocols breaking the end-to-end argument</title> <!-- Converted from text/plain format --> <p><font size="2">&gt; 3. End-to-End Args proposes applying the
notion of smart, reliable<br>
&gt; endpoints communicating over unreliable comms system to all sorts
of<br>
&gt; other things as a rhetorical trick.<br>
<br>
Reed's comment on rhetoric bringing structure to logical argument<br>
has nothing to do with a paper doing engineering analysis and<br>
drawing insights from commonalities.<br>
<br>
(I would submit that to fully understand its arguments and internalize<br>
its concepts, an engineering mindset is required.)<br>
<br>
&lt;<a moz-do-not-send="true"
href="http://www.ee.surrey.ac.uk/Personal/L.Wood/">http://www.ee.surrey.ac.uk/Personal/L.Wood/</a>&gt;<a class="moz-txt-link-rfc2396E" href="mailto:***@surrey.ac.uk">&lt;***@surrey.ac.uk&gt;</a><br>
<br>
</font>
</p>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC</pre>
</body>
</html>
David P. Reed
2009-10-26 14:23:35 UTC
Permalink
...
Post by Richard Bennett
13. Angry old hippies go "Right on, FCC, your daddy's Internet is good
enough for you!"
What a warped interpretation of history... it discredits itself, in my
opinion only, of course. If this were a forum for discussing policy
matters, I'd engage in debunking it. It is not such a forum, however.
This is a research forum, loosely associated with the IRTF, and
Bennett's comments (true or not) have not contributed to this forum.

With regard to historical analysis, Bennett (and his informants) are
welcome to write an article for ACMs Annals in the History of Computing,
where actual historians apply peer review to such claims and submissions.

The use of the phrase "rhetorical trick" is offensive to me personally.
Bennett persists in this claim, and John Day surprisingly (to me) joins
him in this warped idea that our paper was written as a move in a battle
(war) that some would claim was relevant to today. That it offends me
personally doesn't matter that much in the scheme of things - certainly
Bennett's strange historical analysis of "causation" wouldn't stand a
test against facts.

However, it is clear we must take Bennett seriously: Jon Peha (FCC Chief
Technologist), Robert Pepper (former FCC senior exec and Cisco senior
exec), Rob Atkinson (progressive political activist and friend of Blair
Levin), and several Congress members (including Darryl Issa) support the
organization that employs Bennett as a Senior Fellow where he makes this
set of claims (ITIF) *as part of his job*. That organization claims to
be a "non-partisan think tank" devoted to research and analysis. So I'd
suggest that this analysis be subjected to rigorous review - but NOT on
an IRTF list. Perhaps Bennett's claim (made in his online resume) that
he was "responsible" for major networking standards, "including ... WiFi
and UWB" will also be reviewed rigorously, again, not on this list, but
elsewhere, perhaps by the FCC. Many people on this list know some of
the people who are given credit for 802.11 in the community -- you're
welcome to ask them about Bennett.
Richard Bennett
2009-10-26 17:20:13 UTC
Permalink
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Excellent response, David Reed. Don't forget that the FCC's Notice of
Proposed Rulemaking on
Internet regulation quotes me re: the fact that Pouzin invented the
framework that we find in the Internet protocols and four other systems
of that era.<br>
<br>
BTW, I'm not participating on this e-mail list as part of my job, but
I'm sure David Reed will go ahead and try to get me fired again for
having the nerve to question his reasoning; it's been about a week
since he did that, so it's probably time again. <br>
<br>
Meanwhile, I'm revising my "Designed for Change" paper for publication.
The discussion about rhetoric and all has been very illuminating
regarding the motivation for the paper what has been claimed to fuel
the debate on Internet regulation, but to me the paper seems to be a
more a creature of some of the fashions of its age (RISC and all that
sort of thing.) Some applications have worked out well, others not; do
we know why?<br>
<br>
RB<br>
<br>
David P. Reed wrote:
<blockquote cite="mid:***@reed.com" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<title></title>
On 10/25/2009 10:04 PM, Richard Bennett wrote:
<blockquote cite="mid:***@bennett.com" type="cite">37
years of networking history boils down to this: <br>
</blockquote>
...<br>
<blockquote type="cite">13. Angry old hippies go "Right on, FCC, your
daddy's Internet is good enough for you!" <br>
<br>
</blockquote>
<br>
<br>
<font face="Helvetica, Arial, sans-serif">What a warped
interpretation
of history... it discredits itself, in my opinion only, of course.&nbsp; If
this were a forum for discussing policy matters, I'd engage in
debunking it.&nbsp; It is not such a forum, however.&nbsp; This is a research
forum, loosely associated with the IRTF, and Bennett's comments (true
or not) have not contributed to this forum.</font><br>
<br>
With regard to historical analysis, Bennett (and his informants) are
welcome to write an article for ACMs Annals in the History of
Computing, where actual historians apply peer review to such claims and
submissions.<br>
<br>
The use of the phrase "rhetorical trick" is offensive to me
personally.&nbsp; Bennett persists in this claim, and John Day surprisingly
(to me) joins him in this warped idea that our paper was written as a
move in a battle (war) that some would claim was relevant to today.&nbsp;
That it offends me personally doesn't matter that much in the scheme of
things - certainly Bennett's strange historical analysis of "causation"
wouldn't stand a test against facts.<br>
<br>
However, it is clear we must take Bennett seriously: Jon Peha (FCC
Chief Technologist), Robert Pepper (former FCC senior exec and Cisco
senior exec), Rob Atkinson (progressive political activist and friend
of Blair Levin), and several Congress members (including Darryl Issa)
support the organization that employs Bennett as a Senior Fellow where
he makes this set of claims (ITIF) *as part of his job*.&nbsp; That
organization claims to be a "non-partisan think tank" devoted to
research and analysis.&nbsp; So I'd suggest that this analysis be subjected
to rigorous review - but NOT on an IRTF list.&nbsp; Perhaps Bennett's claim
(made in his online resume) that he was "responsible" for major
networking standards, "including ... WiFi and UWB" will also be
reviewed rigorously, again, not on this list, but elsewhere, perhaps by
the FCC.&nbsp; Many people on this list know some of the people who are
given credit for 802.11 in the community -- you're welcome to ask them
about Bennett.<br>
<br>
</blockquote>
</body>
</html>
David P. Reed
2009-10-26 17:44:43 UTC
Permalink
Note: I did not and have never tried to get Bennett fired. I will say
that it is my opinion (openly held) that ITIF gains little from
employing him as a spokesperson, and that Bennett has several times
expressed the idea that my employment at MIT is somehow not to his liking.
Post by Richard Bennett
Excellent response, David Reed. Don't forget that the FCC's Notice of
Proposed Rulemaking on Internet regulation quotes me re: the fact that
Pouzin invented the framework that we find in the Internet protocols
and four other systems of that era.
BTW, I'm not participating on this e-mail list as part of my job, but
I'm sure David Reed will go ahead and try to get me fired again for
having the nerve to question his reasoning; it's been about a week
since he did that, so it's probably time again.
Meanwhile, I'm revising my "Designed for Change" paper for
publication. The discussion about rhetoric and all has been very
illuminating regarding the motivation for the paper what has been
claimed to fuel the debate on Internet regulation, but to me the paper
seems to be a more a creature of some of the fashions of its age (RISC
and all that sort of thing.) Some applications have worked out well,
others not; do we know why?
RB
Post by David P. Reed
...
Post by Richard Bennett
13. Angry old hippies go "Right on, FCC, your daddy's Internet is
good enough for you!"
What a warped interpretation of history... it discredits itself, in
my opinion only, of course. If this were a forum for discussing
policy matters, I'd engage in debunking it. It is not such a forum,
however. This is a research forum, loosely associated with the IRTF,
and Bennett's comments (true or not) have not contributed to this forum.
With regard to historical analysis, Bennett (and his informants) are
welcome to write an article for ACMs Annals in the History of
Computing, where actual historians apply peer review to such claims
and submissions.
The use of the phrase "rhetorical trick" is offensive to me
personally. Bennett persists in this claim, and John Day
surprisingly (to me) joins him in this warped idea that our paper was
written as a move in a battle (war) that some would claim was
relevant to today. That it offends me personally doesn't matter that
much in the scheme of things - certainly Bennett's strange historical
analysis of "causation" wouldn't stand a test against facts.
However, it is clear we must take Bennett seriously: Jon Peha (FCC
Chief Technologist), Robert Pepper (former FCC senior exec and Cisco
senior exec), Rob Atkinson (progressive political activist and friend
of Blair Levin), and several Congress members (including Darryl Issa)
support the organization that employs Bennett as a Senior Fellow
where he makes this set of claims (ITIF) *as part of his job*. That
organization claims to be a "non-partisan think tank" devoted to
research and analysis. So I'd suggest that this analysis be
subjected to rigorous review - but NOT on an IRTF list. Perhaps
Bennett's claim (made in his online resume) that he was "responsible"
for major networking standards, "including ... WiFi and UWB" will
also be reviewed rigorously, again, not on this list, but elsewhere,
perhaps by the FCC. Many people on this list know some of the people
who are given credit for 802.11 in the community -- you're welcome to
ask them about Bennett.
Craig Partridge
2009-10-25 19:51:17 UTC
Permalink
Post by William Allen Simpson
It wasn't until CLNP that I ever perceived a 'war' (or competition). To
me, it was always apparent that the war wasn't with the protocols per
se, but rather the corporate entities that wanted to control pricing.
I find this historical discussion interesting because it suggests some
very different vantage points.

My vantage point was close to John Day's. I was at BBN (arrived in 1983)
and there were internal debates of TCP/IP vs. TP0/X.25 with the additional
twist that folks like John and Ross Callon were working to develop TP4/CLNP
to compete with TP0/X.25.

We had cheerful debates over CLNP vs. IP. (I read a draft or two of the
CLNP specs for Ross -- which led to an odd statement later during the
protocol wars where I was asked if I'd read the CLNP spec and the answer,
honestly, was "no" -- I'd read the earlier drafts... and I got pilloried for
commenting on CLNP without reading it that bit of stupidity on my part).

But the big fight was TCP/IP vs. TP0/X.25 (or, more truthfully, my recollection
was the fight was TCP vs. X.25 -- where to put the smarts...)

Thanks!

Craig
William Allen Simpson
2009-10-26 13:54:09 UTC
Permalink
Post by Craig Partridge
Post by William Allen Simpson
It wasn't until CLNP that I ever perceived a 'war' (or competition). To
me, it was always apparent that the war wasn't with the protocols per
se, but rather the corporate entities that wanted to control pricing.
I find this historical discussion interesting because it suggests some
very different vantage points.
Yes. Mine was much closer to the view of Noel Chiappa, where he says:

# Frankly, for most of us, we were up to our asses in alligators getting all
# these various new technologies (LANs, etc) up and running, and didn't have a
# lot of time to worry about anything else anyway. What little time we did have
# for deep thinking went to things like how to better organize OS software to
# deal with networking, etc, etc.
#
But I'd also the experience of actually trying to order links. It was
quite difficult. Then, AT&T required putting these little gray boxes on
every data line, and charging double (or more) the usual voice rate. We
took the box apart, and discovered that we could build the same thing for
less than 35 cents retail, but they were charging hundreds of dollars per
year (forever).

So, for me, the Green decision couldn't have come soon enough. And we
did everything under the sun to avoid AT&T links. Maybe that's the
underlying reason that X.25 wasn't a competitor, as we were using it as
little as possible.
Post by Craig Partridge
My vantage point was close to John Day's. I was at BBN (arrived in 1983)
and there were internal debates of TCP/IP vs. TP0/X.25 with the additional
twist that folks like John and Ross Callon were working to develop TP4/CLNP
to compete with TP0/X.25.
By 1981, I'd left the University for a small startup that did front-end data
concentrators (using the HP 21MX with nicely re-programmable microcode for
handling data). Over a couple of years, we did a couple of dozen protocols,
none of which were X.25.

By 1983, I'd become a full-time consultant. Auto companies and suppliers,
political campaigns -- none of them ever expressed any interest in X.25.
Lots of serial multi-point cabling connected to front-ends talking to
back-ends over various proprietary data channels or (thick) ethernet in
electronically noisy, chaotic environments.
Post by Craig Partridge
But the big fight was TCP/IP vs. TP0/X.25 (or, more truthfully, my recollection
was the fight was TCP vs. X.25 -- where to put the smarts...)
Believe me, there is nothing that can better reinforce the absolute
necessity for end-to-end transmission control than heterogeneous networking
on a factory floor over multiple hops, or with a satellite link in the
middle. Nothing else actually works! More important, nothing else is
testable by a factory electrician or (usually mechanical) engineer that
has to debug and fix the link. The smarts has to be located in the CPE.

If folks at BBN were still talking about X.25, they were way behind the
curve, or were badly infected with severe standard-committee-itis.
Craig Partridge
2009-10-26 14:10:33 UTC
Permalink
Post by William Allen Simpson
If folks at BBN were still talking about X.25, they were way behind the
curve, or were badly infected with severe standard-committee-itis.
The latter.

Recall that part of BBN's networking team was paid by NIST to represent
the US at ISO/CCITT meetings. So BBN was simultaneously pushing ahead
on TCP/IP work AND working with ISO/CCITT on ISO standards.

Add to this that it was official US policy at the time that they'd
transition from TCP/IP to the OSI standards, and so there was much
effort inside BBN to try to get the OSI standards to a good enough
state to be transitioned to, and there was pushback from other
countries represented at ISO, and you have the internal debates.

Thanks!

Craig
Noel Chiappa
2009-10-27 01:13:10 UTC
Permalink
Post by Richard Bennett
Post by Noel Chiappa
It seems to me that the 'end-end design ideas' have gotten mixed up in
what is, at the bottom, a fight over how to divide up the economic pie
of communication networks.
You mean the end-to-end design ideas have gotten mixed up in a fight
over not changing how the economic pie is currently divided.
Err, 'the current division pattern' is one of the possible answers to the
question 'how should the economic pie of communication networks be divided
up', I would have thought. But this is not important, let me move on...
Post by Richard Bennett
End-to-End Args proposes applying the notion of smart, reliable
endpoints communicating over unreliable comms system to all sorts of
other things as a rhetorical trick.
'Distributed systems' is a long-standing area of interest in computing
'science' (but that's a different rathole), and it covers a far larger field
than simply communication networks (which is the subject area in which the
end-end analysis originated).

The design philosophy framework discussed in "End-to-End Arguments", while
originating in an analysis of communication networks, can viably be applied
to a wide variety of distributed systems (say, for instance, a distributed
file system which uses replication for robustness and performance). Calling
the application to such a system, in the larger problem domain, "a rhetorical
trick" seems considerably 'over the top' to me.

Moreover, your phraseology above ("End-to-End Args proposes applying the
notion ... to all sorts of other things as a rhetorical trick") seems to
imply that a principal feature of the paper are attempts to apply the
end-to-end argument to places outside the computerized information systems
domain. In fact, there is only paragraph, in the section entitled "History,
and application to other system areas" which deals with such examples. That
hardly qualifies as what your wording implies - which is that such
expansionary applications are a major thrust of the paper.

You're also incorrect to characterize even that paragraph as "applying the
notion of smart, reliable endpoints communicating over unreliable comms
system". The banking example certainly doesn't fit that rubric. It would be
more apt to describe the examples in that paragraph as examples of "functions
placed at low levels of a system [which] may be redundant or of little value
when compared with the cost of providing them at that low level" - that
phrase, of course, being from the paper's own abstract.
Post by Richard Bennett
Moors points out that E2E Args never did describe the Internet.
I think this is again a misreprentation of what the Moors paper says - but
we've been down that rabbit hole before.
Post by Richard Bennett
10. Google's MCI vets worry that telcos will put them out of business
like they did MCI unless end-to-end is law.
11. Public interest groups push for end-to-end law.
12. FCC asks: "What's wrong with descending into dogma? That's what we do."
Policy and engineering are two very different problem domains, and the former
is free to ignore the latter, if other external (i.e. non-engineering) factors
make that the preferable choice. (Within limits, of course; I treat that point
last.)

Just because an engineering discipline says 'X is the way to go', that
doesn't mean policy has to follow. As a made-up example, country Y might
decide that for non-engineering reasons, they prefer to ban gas turbines as
aircraft engines, in favour of piston engines (perhaps because they don't
like the high-pitched whine of turbines, say). However, clearly, if country Y
communally makes the decision to ban gas turbines, that's their call - even
at the same time that it remains a non-optimal decision from an _engineering_
perspective.

The interesting question, of course, is whether it's good public policy to
make policy choices that are a non-optimal decision from an engineering
perspective. Clearly in cases where that is done there will be a price to pay
(e.g. in the example above, slower planes, and higher fuel consumption), but
only the community in question can decide if those costs are worth the
benefits (e.g. getting rid of turbine whine). There is no general principle
one can apply in such cases, to decide whether or not to follow the optimal
engineering path; each will have to be decided on the overall merits.

So if some factions are calling for an 'end-to-end law' (whatever that might
be - my opinion would be that that is a poor name, although I can see where
it came from), that's a legitimate policy position, on a legitimate policy
decision. Engineering can only provide data to that debate, as to whether
it's the most efficient choice, or not - as in the gas turbine example.

The one place policy _cannot_ go is to _ignore hard constraints_. As Feynman
so elegantly put it (in his appendix to the 'Challenger' crash report'):

"For a successful technology, reality must take precedence over public
relations, for nature cannot be fooled."
Post by Richard Bennett
13. Angry old hippies go "Right on, FCC, your daddy's Internet is good
enough for you!"
The implication here seems to be that people who advocate a certain position
on how to divide up the network pie are also associated with a certain place
on the political spectrum - at least, in terms of their views on economics?

(Those who know me will no doubt be as amused as I am by the seeming
supposition that I might fall into this imaginary category - but I digress.)

I haven't actually stated here anything about my views on how to divide up the
network money pie. Actually, I don't really care about that issue: although I
do have views on what kind of service model the network should offer, they are
driven by other factors, such as engineering.


All of which should serve to make several points that everyone should retain -
that discussing whether or not the End-to-End princple is a good/valid
engineering point is separate from the question of what service model the
network should offer to its customers - and that people can have positions on
that latter question which are utterly not a result of their views (if any) on
the issue of how to divide up the network pie.

Some consequences in the latter will obviously be a _result_ of their position
on the service model, of course, but it should be noted that the effects on
the pie division are a _result_ of their position on the service model issue,
not the _cause_ of the latter - seemingly unlike that for some other people.

Noel
Richard Bennett
2009-10-27 03:33:51 UTC
Permalink
I agree with about all of this, and would simply note that I don't know
most of the people on this list well enough to say who's an angry old
hippie and who isn't, not that it matters. An awful lot of the net neut
issue is rehashing grievances against the defunct monopoly phone
companies, which is all fine if one is into that sort of thing, just not
quite relevant to the world we live in today.
Post by Noel Chiappa
Post by Richard Bennett
Post by Noel Chiappa
It seems to me that the 'end-end design ideas' have gotten mixed up in
what is, at the bottom, a fight over how to divide up the economic pie
of communication networks.
You mean the end-to-end design ideas have gotten mixed up in a fight
over not changing how the economic pie is currently divided.
Err, 'the current division pattern' is one of the possible answers to the
question 'how should the economic pie of communication networks be divided
up', I would have thought. But this is not important, let me move on...
Post by Richard Bennett
End-to-End Args proposes applying the notion of smart, reliable
endpoints communicating over unreliable comms system to all sorts of
other things as a rhetorical trick.
'Distributed systems' is a long-standing area of interest in computing
'science' (but that's a different rathole), and it covers a far larger field
than simply communication networks (which is the subject area in which the
end-end analysis originated).
The design philosophy framework discussed in "End-to-End Arguments", while
originating in an analysis of communication networks, can viably be applied
to a wide variety of distributed systems (say, for instance, a distributed
file system which uses replication for robustness and performance). Calling
the application to such a system, in the larger problem domain, "a rhetorical
trick" seems considerably 'over the top' to me.
Moreover, your phraseology above ("End-to-End Args proposes applying the
notion ... to all sorts of other things as a rhetorical trick") seems to
imply that a principal feature of the paper are attempts to apply the
end-to-end argument to places outside the computerized information systems
domain. In fact, there is only paragraph, in the section entitled "History,
and application to other system areas" which deals with such examples. That
hardly qualifies as what your wording implies - which is that such
expansionary applications are a major thrust of the paper.
You're also incorrect to characterize even that paragraph as "applying the
notion of smart, reliable endpoints communicating over unreliable comms
system". The banking example certainly doesn't fit that rubric. It would be
more apt to describe the examples in that paragraph as examples of "functions
placed at low levels of a system [which] may be redundant or of little value
when compared with the cost of providing them at that low level" - that
phrase, of course, being from the paper's own abstract.
Post by Richard Bennett
Moors points out that E2E Args never did describe the Internet.
I think this is again a misreprentation of what the Moors paper says - but
we've been down that rabbit hole before.
Post by Richard Bennett
10. Google's MCI vets worry that telcos will put them out of business
like they did MCI unless end-to-end is law.
11. Public interest groups push for end-to-end law.
12. FCC asks: "What's wrong with descending into dogma? That's what we do."
Policy and engineering are two very different problem domains, and the former
is free to ignore the latter, if other external (i.e. non-engineering) factors
make that the preferable choice. (Within limits, of course; I treat that point
last.)
Just because an engineering discipline says 'X is the way to go', that
doesn't mean policy has to follow. As a made-up example, country Y might
decide that for non-engineering reasons, they prefer to ban gas turbines as
aircraft engines, in favour of piston engines (perhaps because they don't
like the high-pitched whine of turbines, say). However, clearly, if country Y
communally makes the decision to ban gas turbines, that's their call - even
at the same time that it remains a non-optimal decision from an _engineering_
perspective.
The interesting question, of course, is whether it's good public policy to
make policy choices that are a non-optimal decision from an engineering
perspective. Clearly in cases where that is done there will be a price to pay
(e.g. in the example above, slower planes, and higher fuel consumption), but
only the community in question can decide if those costs are worth the
benefits (e.g. getting rid of turbine whine). There is no general principle
one can apply in such cases, to decide whether or not to follow the optimal
engineering path; each will have to be decided on the overall merits.
So if some factions are calling for an 'end-to-end law' (whatever that might
be - my opinion would be that that is a poor name, although I can see where
it came from), that's a legitimate policy position, on a legitimate policy
decision. Engineering can only provide data to that debate, as to whether
it's the most efficient choice, or not - as in the gas turbine example.
The one place policy _cannot_ go is to _ignore hard constraints_. As Feynman
"For a successful technology, reality must take precedence over public
relations, for nature cannot be fooled."
Post by Richard Bennett
13. Angry old hippies go "Right on, FCC, your daddy's Internet is good
enough for you!"
The implication here seems to be that people who advocate a certain position
on how to divide up the network pie are also associated with a certain place
on the political spectrum - at least, in terms of their views on economics?
(Those who know me will no doubt be as amused as I am by the seeming
supposition that I might fall into this imaginary category - but I digress.)
I haven't actually stated here anything about my views on how to divide up the
network money pie. Actually, I don't really care about that issue: although I
do have views on what kind of service model the network should offer, they are
driven by other factors, such as engineering.
All of which should serve to make several points that everyone should retain -
that discussing whether or not the End-to-End princple is a good/valid
engineering point is separate from the question of what service model the
network should offer to its customers - and that people can have positions on
that latter question which are utterly not a result of their views (if any) on
the issue of how to divide up the network pie.
Some consequences in the latter will obviously be a _result_ of their position
on the service model, of course, but it should be noted that the effects on
the pie division are a _result_ of their position on the service model issue,
not the _cause_ of the latter - seemingly unlike that for some other people.
Noel
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
Jon Crowcroft
2009-10-27 07:35:36 UTC
Permalink
Post by Richard Bennett
hippie and who isn't, not that it matters. An awful lot of the net neut
issue is rehashing grievances against the defunct monopoly phone
companies, which is all fine if one is into that sort of thing, just not
quite relevant to the world we live in today.
yes but...

it was relevant _again_ for a while because of cellular phone
companies reluctance to do anything interesting in the
internet-stylee, but after the iPhone did an end-run on
them, and android and half-dozen other smart phone
systems moved to an end-system app market independent
of provider, that problem went away...

for a while, i thought that ISPs might be evolving into
telcos too (its part of the territory)- but this
recent nanog talk encouraged me:

http://www.nanog.org/meetings/nanog47/presentations/Monday/Labovitz_Ob
serveReport_N47_Mon.pdf

btw, the "old hippie" tag is a bit off in terms of age and
location for some of us....for example, I'm a card carrying
london ex punk:)

(read "memoirs of a geezer" by jah wobble,
and you'll see that we are
closer in ethos to yippees, seein as we don't subscribe to
liberalism)

j.
David P. Reed
2009-10-27 13:32:35 UTC
Permalink
Is it the "aging" part or the "hippie" part that makes a person suspect?

I could give you a list of labels that have been applied to me, in
discussions of network technology, to conjure with as you like:

corporate-type, libertarian, communist, hippie, net-nutter, past-it,
ivory-tower, propellerhead, mere engineer, non-economist, naive,
fuzzy-minded, ...

I personally don't think such labels make a whit of difference.

In fact, I am a member of AAAS, AARP, ACLU, ACM, and that's only the
first four A's on the list of memberships. Does that make a difference?

It turns out that one of my ancestors (John Reed) arrived in Rhode
Island in 1615, and another came to the US on Ellis Island. One of my
ancestors was at the battle of Lexington and Concord, and another was a
Gibson Girl on 42nd street, having immigrated into the country. My
father can be seen in documentary footage shot on the USS Missouri
during the Korean Conflict, and was in charge of designing many of the
modern US Navy ships now in service.

Do those things make a difference?

I guess the fellow who is responsible for WiFi and UWB knows how to
judge ideas by the person's lifestyle.

And by the way, I'm neither aging (I'm 57, and can beat most people in
arm-wrestling, if nothing else), nor a classic "hippie" (my views in
those days tended toward a very different direction - a mixture of
systems design, AI, and math, mixed with stopping the Vietnam War and
creating a free market of ideas).

But so what if I were?
Jon Crowcroft
2009-10-27 14:43:15 UTC
Permalink
I may fall down on relevance
(c.f. Deirdre Wilson & Dan Sperberr Relevance Theory, 1985)
but I think I am going to have to refer you to youtube:



but anyhow, I suspect I am just as suspect as you
for what its worth...

this has to be set against the rabid free market libertarianism
fairly common in Internet Research circles from 1992 until the present
day, which is also extremely unrepresentative of the world...
Post by David P. Reed
Is it the "aging" part or the "hippie" part that makes a person suspect?
I could give you a list of labels that have been applied to me, in
corporate-type, libertarian, communist, hippie, net-nutter, past-it,
ivory-tower, propellerhead, mere engineer, non-economist, naive,
fuzzy-minded, ...
I personally don't think such labels make a whit of difference.
In fact, I am a member of AAAS, AARP, ACLU, ACM, and that's only the
first four A's on the list of memberships. Does that make a difference?
It turns out that one of my ancestors (John Reed) arrived in Rhode
Island in 1615, and another came to the US on Ellis Island. One of my
ancestors was at the battle of Lexington and Concord, and another was a
Gibson Girl on 42nd street, having immigrated into the country. My
father can be seen in documentary footage shot on the USS Missouri
during the Korean Conflict, and was in charge of designing many of the
modern US Navy ships now in service.
Do those things make a difference?
I guess the fellow who is responsible for WiFi and UWB knows how to
judge ideas by the person's lifestyle.
And by the way, I'm neither aging (I'm 57, and can beat most people in
arm-wrestling, if nothing else), nor a classic "hippie" (my views in
those days tended toward a very different direction - a mixture of
systems design, AI, and math, mixed with stopping the Vietnam War and
creating a free market of ideas).
But so what if I were?
cheers

jon
Richard Bennett
2009-10-27 21:27:31 UTC
Permalink
Seems to me that the real hippies were pretty much libertarians who
didn't trust The Man's Government any more than The Man's Corporations.
Internet Research circles are interesting, in that they consist of
people taking money from The Man's War Machine to spread peace, love,
and understanding, which may be the best form of national defense anyhow.

And in plain engineering circles, there seems to be a divide between
libertarians who don't trust authority period and socialists who view
society as a machine to be optimized and markets as too messy an
inefficient for the purpose. Of course, socialism is inconsistent with
net neutrality in principle, but nobody really understands that.

In the course of reading the 1984 version of E2E Args, I was struck by
the mention of RISC. It's interesting because back in the 1970s and 80s,
there was this general train of thought about building reliable, high
quality systems on cheap and plentiful unreliable parts. It was
interesting because it seemed to resolve the "good, fast, cheap: pick
any two" dilemma that I used to see above programmers' desks from about
1980 on. RAID, RISC, datagrams, old-fashioned CSMA/CD Ethernet, and
massively parallel microprocessor-based supercomputers were all
explorations of that idea, which worked out well in some contexts and
less well in others. RISC was a bust, for example, and while datagrams
are good for content-oriented network applications, they're obviously
less good for real-time network apps, and Ethernet only became dominant
when we dumped CSMA/CD for the collision-free, flow controlled, full
duplex switches that we use today. So why is it that you can build a
nice system using crappy parts in some cases and not in others?

Perhaps the constraint is time, and these systems didn't get all three
of "good, fast, cheap" but only good and cheap. If that's the case, it
places a boundary on how far you can go with an E2E model in large-scale
networks. People want to use the Internet as more than a content network
these days, because interpersonal communication is the real killer app
for networking, and that takes QoS, and you can't do QoS E2E.

Public policy needs to be constrained by engineering, as Noel said, but
the engineering needs to be good, not ideological.

RB
Post by Jon Crowcroft
I may fall down on relevance
(c.f. Deirdre Wilson & Dan Sperberr Relevance Theory, 1985)
http://youtu.be/2SNHtKfDxlg
but anyhow, I suspect I am just as suspect as you
for what its worth...
this has to be set against the rabid free market libertarianism
fairly common in Internet Research circles from 1992 until the present
day, which is also extremely unrepresentative of the world...
Post by David P. Reed
Is it the "aging" part or the "hippie" part that makes a person suspect?
I could give you a list of labels that have been applied to me, in
corporate-type, libertarian, communist, hippie, net-nutter, past-it,
ivory-tower, propellerhead, mere engineer, non-economist, naive,
fuzzy-minded, ...
I personally don't think such labels make a whit of difference.
In fact, I am a member of AAAS, AARP, ACLU, ACM, and that's only the
first four A's on the list of memberships. Does that make a difference?
It turns out that one of my ancestors (John Reed) arrived in Rhode
Island in 1615, and another came to the US on Ellis Island. One of my
ancestors was at the battle of Lexington and Concord, and another was a
Gibson Girl on 42nd street, having immigrated into the country. My
father can be seen in documentary footage shot on the USS Missouri
during the Korean Conflict, and was in charge of designing many of the
modern US Navy ships now in service.
Do those things make a difference?
I guess the fellow who is responsible for WiFi and UWB knows how to
judge ideas by the person's lifestyle.
And by the way, I'm neither aging (I'm 57, and can beat most people in
arm-wrestling, if nothing else), nor a classic "hippie" (my views in
those days tended toward a very different direction - a mixture of
systems design, AI, and math, mixed with stopping the Vietnam War and
creating a free market of ideas).
But so what if I were?
cheers
jon
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
Dave Eckhardt
2009-10-28 02:57:38 UTC
Permalink
[...] Ethernet only became dominant when we dumped CSMA/CD for
the collision-free, flow controlled, full duplex switches that
we use today.
In the two environments I'm familiar with, Ethernet had firmly
crowded out everything else (and there were other things: IBM
Token Ring, Corvus OmniNet, AppleTalk over PhoneNet, LattisNet,
etc.) when it was still half-duplex thin-net, which was replaced
by 10-megabit twisted-pair into hubs, *then* 100-megabit
twisted-pair into switches.

I think there were a lot of places where Ethernet was dominant
before switches... though maybe we're using different definitions
of "dominant"? I think I mean something like "more than 90% of
desktops".

Dave Eckhardt
Richard Bennett
2009-10-28 06:34:45 UTC
Permalink
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
CSMA/CD Ethernet was simply a best-efforts LAN, but switched Ethernet
is a QoS-capable WAN and facilities fabric as well; ever seen an
Internet Exchange Point?&nbsp; Big fat Ethernet switches sit at the heart of
them, passing multiple packets at a time in parallel. It's a whole
different concept of networking than the fat dumb pipe. <br>
<br>
RB<br>
<br>
Dave Eckhardt wrote:
<blockquote cite="mid:***@lunacy.ugrad.cs.cmu.edu"
type="cite">
<blockquote type="cite">
<pre wrap="">[...] Ethernet only became dominant when we dumped CSMA/CD for
the collision-free, flow controlled, full duplex switches that
we use today.
</pre>
</blockquote>
<pre wrap=""><!---->
In the two environments I'm familiar with, Ethernet had firmly
crowded out everything else (and there were other things: IBM
Token Ring, Corvus OmniNet, AppleTalk over PhoneNet, LattisNet,
etc.) when it was still half-duplex thin-net, which was replaced
by 10-megabit twisted-pair into hubs, *then* 100-megabit
twisted-pair into switches.

I think there were a lot of places where Ethernet was dominant
before switches... though maybe we're using different definitions
of "dominant"? I think I mean something like "more than 90% of
desktops".

Dave Eckhardt
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC</pre>
</body>
</html>
David P. Reed
2009-10-28 12:13:40 UTC
Permalink
Post by Richard Bennett
CSMA/CD Ethernet was simply a best-efforts LAN, but switched Ethernet
is a QoS-capable WAN and facilities fabric as well; ever seen an
Internet Exchange Point? Big fat Ethernet switches sit at the heart
of them, passing multiple packets at a time in parallel. It's a whole
different concept of networking than the fat dumb pipe.
If I had seen an Internet Exchange point, what would looking at racks,
cables, power supplies, etc. tell me exactly?

And since the word "dumb pipe" is a construct largely used by operators
who use it to describe a "business model", and to explain the Internet
to Ted Stevens as a "series of tubes" but "not a dump-truck," what
contribution to Internet architecture is being made by this idiotic
thread now that Bennett has hooked people into his trolling rig?

"whole different kind of networking" is a really useful description -
it's up there with "carrier-grade" as a marketing concept that has no
content.

Bennett is *paid* by ITIF to throw out these kinds of statements,
calculated to get someone to extend a conversational diversion. He is
not contributing technically on this list which is about technology and
architectural *research*.
Dave CROCKER
2009-10-28 10:44:23 UTC
Permalink
Post by Dave Eckhardt
[...] Ethernet only became dominant when we dumped CSMA/CD for
the collision-free, flow controlled, full duplex switches that
we use today.
In the two environments I'm familiar with, Ethernet had firmly
crowded out everything else (and there were other things: IBM
Token Ring, Corvus OmniNet, AppleTalk over PhoneNet, LattisNet,
etc.) when it was still half-duplex thin-net, which was replaced
by 10-megabit twisted-pair into hubs, *then* 100-megabit
twisted-pair into switches.
Yup. "Ethernet" collision-free switches came quite a bit after real ethernet
dominated LANs.

(The quotations are because the former presents an Ethernet interface but not
Ethernet over the wire. Thicknet, thin-net, and I believe the original versions
of twisted-pair, all used had the shared-access, collision-capable ethernet over
the wire.)

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
John Kristoff
2009-10-28 12:21:20 UTC
Permalink
Post by Dave CROCKER
Post by Dave Eckhardt
etc.) when it was still half-duplex thin-net, which was replaced
by 10-megabit twisted-pair into hubs, *then* 100-megabit
twisted-pair into switches.
Yup. "Ethernet" collision-free switches came quite a bit after real
ethernet dominated LANs.
10 Mb/s Etherhet LAN switches arrived even before 100 Mb/s Ethernet
was available, largely thanks to Kalpana (eventually bought by Cisco).
In addition to hubs and other reasons, switches really helped answer
criticisms of competing technologies such as Token Ring, eventually
leading to Ethernet becoming the RS-232 of LAN technology.

John
William Allen Simpson
2009-10-29 08:29:31 UTC
Permalink
Post by Dave CROCKER
Post by Dave Eckhardt
[...] Ethernet only became dominant when we dumped CSMA/CD for
the collision-free, flow controlled, full duplex switches that
we use today.
In the two environments I'm familiar with, Ethernet had firmly
crowded out everything else (and there were other things: IBM
Token Ring, Corvus OmniNet, AppleTalk over PhoneNet, LattisNet,
etc.) when it was still half-duplex thin-net, which was replaced
by 10-megabit twisted-pair into hubs, *then* 100-megabit
twisted-pair into switches.
Yup. "Ethernet" collision-free switches came quite a bit after real
ethernet dominated LANs.
Agreed, as to multi-point LAN technology, but only after circa 1990. One
of the environments that I wrestled with during the '80s was considered
the largest thicknet installation in the world. But even then, terminals,
computers, and entire facilities were primarily connected with one or
more variants of a poll-select protocol over RS-232. There were usually
two serial connectors on every machine (with no ethernet at all).

Even today, there are *far* more point-to-point WAN links than ethernet.

I have the advantage of working on far more than 2 facilities. Token
ring and related were never more than fragile and overpriced disasters.
The market spoke, even when big industry was trying to force them down
our throats.

The pedant that interrupted this thread appears to be fairly clueless
about real deployment. And his economic analysis is ... ill-informed.

Perhaps that's a reason that IEEE 802 development in general was so
conservative and poorly done.
Noel Chiappa
2009-10-28 14:44:34 UTC
Permalink
Post by Dave CROCKER
The quotations are because the former presents an Ethernet interface
but not Ethernet over the wire.
Thereby becoming a perfect illustration of the systems architecture truism
that _interfaces_ between subsystems are far more persistent (lifetime-wise)
than the internals of the subsystem.

Other examples of this phenomenon include the RJ-11 phone jack (both
physically and electrically), the standard screw-in light bulb socket (now
hosting those fluorescent spiral tube bulbs), etc, etc.

Noel
Dave CROCKER
2009-10-28 16:23:47 UTC
Permalink
Post by Noel Chiappa
Thereby becoming a perfect illustration of the systems architecture truism
that _interfaces_ between subsystems are far more persistent (lifetime-wise)
than the internals of the subsystem.
Other examples of this phenomenon include the RJ-11 phone jack (both
physically and electrically), the standard screw-in light bulb socket (now
hosting those fluorescent spiral tube bulbs), etc, etc.
A particularly fun example of this is RFC 1001/1002, which re-implemented
NetBIOS. Preserve the IBM API, but use TCP protocols. (There had been a couple
of proprietary non-IBM TCP versions earlier; this created a standard.)

Had IBM published the underlying protocols, the TCP version would no doubt have
been quite different.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
Richard Bennett
2009-10-29 03:15:48 UTC
Permalink
I had personally seen the source code to the IBM/Sytek protocols, under
NDA, for NETBIOS/SMB before RFC 1001/2 were written, which is one reason
I could only be a cheerleader for 1001/2. At Tandem we implemented an
SMB server under the T16's Guardian OS. SMB wasn't too hard to reverse
engineer, as the SAMBA guys found out; the IBM PC Network was harder, so
we used a PC with both PC Network and Ethernet cards as a bridge.
Post by Dave CROCKER
Post by Noel Chiappa
Thereby becoming a perfect illustration of the systems architecture truism
that _interfaces_ between subsystems are far more persistent
(lifetime-wise)
than the internals of the subsystem.
Other examples of this phenomenon include the RJ-11 phone jack (both
physically and electrically), the standard screw-in light bulb socket (now
hosting those fluorescent spiral tube bulbs), etc, etc.
A particularly fun example of this is RFC 1001/1002, which
re-implemented NetBIOS. Preserve the IBM API, but use TCP protocols.
(There had been a couple of proprietary non-IBM TCP versions earlier;
this created a standard.)
Had IBM published the underlying protocols, the TCP version would no
doubt have been quite different.
d/
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
Dave Eckhardt
2009-10-28 15:09:05 UTC
Permalink
Post by John Kristoff
10 Mb/s Etherhet LAN switches arrived even before 100 Mb/s
Ethernet was available, largely thanks to Kalpana (eventually
bought by Cisco).
The original claim used (without definition) the word "dominant".
I proposed (without objection so far) the definition "90% of
desktops". My counter-claim is that Ethernet was dominant (90%
of desktops) without the arrival of switches having been the
cause. The existence of a small number of 10-megabit switches
doesn't force acceptance of one causal claim over the other.

So how about this update: "Ethernet was serving 90% of desktops
before 50% of those desktops were talking to switches"?

Dave Eckhardt
Richard Bennett
2009-10-29 03:09:11 UTC
Permalink
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
It strikes me that your 90% claim is a bit of an exaggeration, and more
importantly that it misses the point. Define "the market" as all the
places where switched Ethernet is used today, crank in some realistic
shares, and tell me what you get; by guess is that coax Ethernet was
deployed in around 10-20% of the places where twisted pair and optical
Ethernet LANs, MANs, and WANs are used today<br>
<br>
ARCNet was very big in desktop connections, as far as that goes,
especially in IBM shops because it used the 3270 PHY.<br>
<br>
RB<br>
<br>
Dave Eckhardt wrote:
<blockquote cite="mid:***@lunacy.ugrad.cs.cmu.edu"
type="cite">
<blockquote type="cite">
<pre wrap="">10 Mb/s Etherhet LAN switches arrived even before 100 Mb/s
Ethernet was available, largely thanks to Kalpana (eventually
bought by Cisco).
</pre>
</blockquote>
<pre wrap=""><!---->
The original claim used (without definition) the word "dominant".
I proposed (without objection so far) the definition "90% of
desktops". My counter-claim is that Ethernet was dominant (90%
of desktops) without the arrival of switches having been the
cause. The existence of a small number of 10-megabit switches
doesn't force acceptance of one causal claim over the other.

So how about this update: "Ethernet was serving 90% of desktops
before 50% of those desktops were talking to switches"?

Dave Eckhardt
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC</pre>
</body>
</html>
Dave Eckhardt
2009-10-28 15:12:35 UTC
Permalink
Post by Richard Bennett
Post by Dave Eckhardt
I think there were a lot of places where Ethernet was dominant
before switches... though maybe we're using different definitions
of "dominant"? I think I mean something like "more than 90% of
desktops".
CSMA/CD Ethernet was simply a best-efforts LAN, but switched
Ethernet is a QoS-capable WAN and facilities fabric as well;
ever seen an Internet Exchange Point? Big fat Ethernet switches
sit at the heart of them, passing multiple packets at a time
in parallel. It's a whole different concept of networking than
the fat dumb pipe.
I don't understand how your comment is a response to the testable
claim I made in response to your incompletely specified claim.

Dave Eckhardt
Dave Eckhardt
2009-10-29 18:05:58 UTC
Permalink
Post by Richard Bennett
It strikes me that your 90% claim is a bit of an exaggeration,
and more importantly that it misses the point.
Actually, I was trying to get you to define what you meant by
"Ethernet became dominant" by proposing a definition, since
you hadn't.
Post by Richard Bennett
Define "the market" as all the places where switched Ethernet
is used today, crank in some realistic shares, and tell me
what you get; by guess is that coax Ethernet was deployed in
around 10-20% of the places where twisted pair and optical
Ethernet LANs, MANs, and WANs are used today
Now I get it: when you wrote "Ethernet only became dominant
when we dumped CSMA/CD for the collision-free, flow controlled,
full duplex switches that we use today" you meant something
like "Switches were a necessary addition to Ethernet before
it could grow from a single-building LAN to a campus-spanning
technology". I buy that, because treating an entire campus
or medium-sized company as one collision domain wouldn't have
worked out very well.

On the other hand...
Post by Richard Bennett
ARCNet was very big in desktop connections, as far as that
goes, especially in IBM shops because it used the 3270 PHY.
I don't think any of Ethernet's competitors would have scaled
very well either--ARCNet had single-byte node addresses; Token
Ring would have been painful if you had to share a transmit
token with thousands of other machines; etc. So it's unclear
that CSMA/CD was a structural limit of Ethernet--the reality
is probably more like "It doesn't matter much how you contend
among a few hosts, but you can't build large networks unless
you limit contention domains to less than the size of the
large network", which is almost a tautology.

Dave Eckhardt
Lloyd Wood
2009-10-29 19:43:17 UTC
Permalink
Post by Dave Eckhardt
Post by Richard Bennett
Define "the market" as all the places where switched Ethernet
is used today, crank in some realistic shares, and tell me
what you get; by guess is that coax Ethernet was deployed in
around 10-20% of the places where twisted pair and optical
Ethernet LANs, MANs, and WANs are used today
Now I get it: when you wrote "Ethernet only became dominant
when we dumped CSMA/CD for the collision-free, flow controlled,
full duplex switches that we use today" you meant something
like "Switches were a necessary addition to Ethernet before
it could grow from a single-building LAN to a campus-spanning
technology". I buy that, because treating an entire campus
or medium-sized company as one collision domain wouldn't have
worked out very well.
Never mind that.

Two anecdotes from the early days of my comparatively
late PhD studies (1996 or so):

1. The networks lab was next to the artificial intelligence
lab. The AI students were cooler than we were; they dressed
better, had more funding, and had laptop computers. But,
in connecting the laptops to the Ethernet LAN, they didn't
care how a big shared Ethernet coax LAN worked. They'd
just hook up connections any old how, T off more coax,
disconnect when they were done... and often they'd bring
the network down, usually just before they locked up and
took their laptops home for the day.

Never mind collisions. Ethernet switches were necessary
just to protect against and isolate the users.

2. Everyone in networks was working on ATM, and talking
about the Next Big Thing: 25Mbps ATM to the desktop.
I kept looking at the 10Mbps Ethernet we were already
using all the time every day along with the TCP/IP
stack to do daily work on and send emails about ATM,
thinking "Something is wrong with this picture."

L.

DTN work: http://info.ee.surrey.ac.uk/Personal/L.Wood/saratoga/

<http://info.ee.surrey.ac.uk/Personal/L.Wood/><***@surrey.ac.uk>
Richard Bennett
2009-10-31 21:46:55 UTC
Permalink
Post by Dave Eckhardt
So it's unclear
that CSMA/CD was a structural limit of Ethernet--the reality
is probably more like "It doesn't matter much how you contend
among a few hosts, but you can't build large networks unless
you limit contention domains to less than the size of the
large network", which is almost a tautology.
That's part of the story, but the implications of the switched Ethernet
killing off CSMA/CD Ethernet are much larger, and relate the end-to-end
arguments principle. CSMA/CD Ethernet was an end-point managed system
sharing a dump pipe, while switched Ethernet is a system that deploys
intelligence - switching, flow control, buffering, QoS discrimination,
VLANs - inside the network at multiple points. Switched Ethernet is
scalable, manageable, diagnosable, and future-proof, while CSMA/CD
Ethernet is none of these things. So the competition of CSMA/CD and
Active Switching for markets demonstrates something about which approach
to the design of layer 2 networks is superior.

Now the question that this historical fact raises for me is whether we
can draw any implications from the well-settled outcome of the layer 2
tussle for layer 3 and 4 protocols, given the fact that IP is a very
thin abstraction of the Ethernet layer 2 and that TCP is a vehicle for
resolving problems that are typical of the CSMA/CD Ethernet environment;
I offer that as a realistic assessment of the design choices, realizing
that the official story differs from the reality.

In other words: does the success of Switched Ethernet suggest that it's
better to think of network protocols as units of recursion than as
collections of statically-placed functions that operate once and only
once in the lifetime of a packet?

RB
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
David P. Reed
2009-11-01 03:04:22 UTC
Permalink
the fact that IP is a very thin abstraction of the Ethernet layer 2
and that TCP is a vehicle for resolving problems that are typical of
the CSMA/CD Ethernet environment
This statement is nonsense. IP is not a very thin abstraction of
Ethernet layer 2. IP is carried over many protocols other than the
Ethernet. TCP is an end-to-end protocol for in-order virtual circuit
data delivery, designed to work over IP, and to handle problems that
have nothing to do with CSMA/CD.
In other words: does the success of Switched Ethernet suggest that
it's better to think of network protocols as units of recursion than
as collections of statically-placed functions that operate once and
only once in the lifetime of a packet?
No. This is also nonsense, and begs the question. Network protocols
have never been described as not "collections of statically-placed
functions that operate once and only once in the lifetime of a packet".
Nor does the "success" of anything in the marketplace suggest how to think.
Bob Braden
2009-11-01 18:16:20 UTC
Permalink
Post by David P. Reed
the fact that IP is a very thin abstraction of the Ethernet layer 2
and that TCP is a vehicle for resolving problems that are typical of
the CSMA/CD Ethernet environment
This statement is nonsense. IP is not a very thin abstraction of
Ethernet layer 2. IP is carried over many protocols other than the
Ethernet. TCP is an end-to-end protocol for in-order virtual circuit
data delivery, designed to work over IP, and to handle problems that
have nothing to do with CSMA/CD.
Dave Reed is right, of course. Richard Bennett's declarative statement
is so far into an alternate universe from the real world that one has to
suspect he is baiting the list. Dave used the word "nonsense" ... it
seems to me to be difficult to deny the rightness of that word choice.

Bob Braden
Jon Crowcroft
2009-11-01 10:10:52 UTC
Permalink
There definitely are lessons
in the evolution from
end-mediated contention to
switch-mediated access
in ethernet-land.

The oft-perceived analogy
of the whole internet as a big ethernet,
a huge shared resource
with contention mainly mediated
by end systems, is alluring.

So the move to
net/switch-centric resource allocation/control
in the local,
might suggest some similar move
in the wide area...
until you actually think about the
heterogeneity in the
topology, in capacity and in latency,
of the system -

Plenty of enterprise nets and small ISPs
(e.g. UK size) can consider
a carrier-grade switched ether
control philosophy (e.g.
esp. to replace
complicated MPLS setups:)
but it doesn't subsume/replace e2e
resource sharing -

It doesn't address
multihomeing, multipath, mobility or multicast
in any useful way either...it doesn't
speak to swarms and CDNs much either.

There were other lan technologies
which didn't have built in collapse
as part of the media-sharing protocols
so the lesson wasn't as widely
necessary as the e2e monoculture
pretends (people who built
token and slotted rings
had other views of the world
too:)

On the other hand, it would be instructive
to see how many end&edge systems are now on
wireless ethernet and to see if the balance has
swung back once again in "favour" of
shared media/contention.

aloha

jon
Jon Crowcroft
2009-11-01 14:49:38 UTC
Permalink
well to be specific,
TCP retransmission times
and
TCP congestion control
were NOT designed in from day 1

early TCPs had fixed retransmit until the RSRE algorithm
and then it was still some time before the Karn/Partridge
improvements kicked in
plus
early TCPs had no congestion control at all
until '87

however, since then
the adaptation of timers
and the adaption of flow rates
makes the interweb
look very much like a giant contention ethernet -
in fact for exactly the same reason as voice on ethernet
never was a big deal, voice on the interweb
requires you to have a path running at releatively
low utilisation otherwise delays diverge...and loss kicks in

one thing
(van pointed this out in a talk here a couple of days ago)
that saves it from the same fate as pure contention systems
is that there's a packet conservation principle...
again NOT something designed in the original TCP

so that's 3 new principles within the end2end system that
actually weren't in the original design of the protocols
that I count...there's a few other ones lurking inside
IP too, but thats to do with routing, and as Bob Braden
so wisely says, "we don't do routing in e2e"
This is a multi-part message in MIME format.
--------------080003040602040709060908
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
There are probably also lessons in the evolution from networks that are
synchronized with clocks that must have timing with parts-per-billion
accuracy (the "Bell System" architecture - e.g. SONET) to networks that
allow for internal retiming, buffering, etc.
That doesn't mean that it is a fact that IP is a thin layer over such
clock-synchronized networks, which still exist and carry IP traffic.
Nor is TCP designed to be corrective of such networks brittle
unreliability, which leads to rerouting over alternate paths that may
cause transient out-of-order delivery, duplication, and a need to
reallocate resources.
TCP and IP were designed to handle heterogeneity and best efforts, and
the idea that they were either designed to remedy Aloha or evolved so
that they only run on Ethernet - that is nonsense, a just so story.
Post by Jon Crowcroft
There definitely are lessons
in the evolution from
end-mediated contention to
switch-mediated access
in ethernet-land.
The oft-perceived analogy
of the whole internet as a big ethernet,
a huge shared resource
with contention mainly mediated
by end systems, is alluring.
So the move to
net/switch-centric resource allocation/control
in the local,
might suggest some similar move
in the wide area...
until you actually think about the
heterogeneity in the
topology, in capacity and in latency,
of the system -
Plenty of enterprise nets and small ISPs
(e.g. UK size) can consider
a carrier-grade switched ether
control philosophy (e.g.
esp. to replace
complicated MPLS setups:)
but it doesn't subsume/replace e2e
resource sharing -
It doesn't address
multihomeing, multipath, mobility or multicast
in any useful way either...it doesn't
speak to swarms and CDNs much either.
There were other lan technologies
which didn't have built in collapse
as part of the media-sharing protocols
so the lesson wasn't as widely
necessary as the e2e monoculture
pretends (people who built
token and slotted rings
had other views of the world
too:)
On the other hand, it would be instructive
to see how many end&edge systems are now on
wireless ethernet and to see if the balance has
swung back once again in "favour" of
shared media/contention.
aloha
jon
--------------080003040602040709060908
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#ffffff">
<font face="Helvetica, Arial, sans-serif">There are probably also
lessons in the evolution from networks that are synchronized with
clocks that must have timing with parts-per-billion accuracy </font>(the
"Bell System" architecture - e.g. SONET) to networks that allow for
internal retiming, buffering, etc.<br>
<br>
That doesn't mean that it is a fact that IP is a thin layer over such
clock-synchronized networks, which still exist and carry IP traffic.&nbsp;
Nor is TCP designed to be corrective of such networks brittle
unreliability, which leads to rerouting over alternate paths that may
cause transient out-of-order delivery, duplication, and a need to
reallocate resources.<br>
<br>
TCP and IP were designed to handle heterogeneity and best efforts, and
the idea that they were either designed to remedy Aloha or evolved so
that they only run on Ethernet - that is nonsense, a just so story.<br>
<br>
<pre wrap="">There definitely are lessons
in the evolution from
end-mediated contention to
switch-mediated access
in ethernet-land.
The oft-perceived analogy
of the whole internet as a big ethernet,
a huge shared resource
with contention mainly mediated
by end systems, is alluring.
So the move to
net/switch-centric resource allocation/control
in the local,
might suggest some similar move
in the wide area...
until you actually think about the
heterogeneity in the
topology, in capacity and in latency,
of the system -
Plenty of enterprise nets and small ISPs
(e.g. UK size) can consider
a carrier-grade switched ether
control philosophy (e.g.
esp. to replace
complicated MPLS setups:)
but it doesn't subsume/replace e2e
resource sharing -
It doesn't address
multihomeing, multipath, mobility or multicast
in any useful way either...it doesn't
speak to swarms and CDNs much either.
There were other lan technologies
which didn't have built in collapse
as part of the media-sharing protocols
so the lesson wasn't as widely
necessary as the e2e monoculture
pretends (people who built
token and slotted rings
had other views of the world
too:)
On the other hand, it would be instructive
to see how many end&amp;edge systems are now on
wireless ethernet and to see if the balance has
swung back once again in "favour" of
shared media/contention.
aloha
jon
</pre>
</blockquote>
</body>
</html>
--------------080003040602040709060908--
cheers

jon
Bob Braden
2009-11-01 18:01:26 UTC
Permalink
Post by Jon Crowcroft
well to be specific,
TCP retransmission times
and
TCP congestion control
were NOT designed in from day 1
Jon,

It does not really matter, of course, but your quick summary is not
quite accurate.
It depends upon what you consider to be "Day 1". E.g., RFC 793 did NOT
have a
fixed retransmit time. And if you want to give RSRE credit for
exponentially smoothed
RTT measurements (a fact I had forgotten, assuming you are correct), you
ought to
give Van credit for finally figuring out how to do real congestion
control, in 1987.

Bob Braden
Post by Jon Crowcroft
early TCPs had fixed retransmit until the RSRE algorithm
and then it was still some time before the Karn/Partridge
improvements kicked in
plus
early TCPs had no congestion control at all
until '87
Jon Crowcroft
2009-11-02 09:37:22 UTC
Permalink
Post by Bob Braden
It does not really matter, of course, but your quick summary is not
quite accurate.
It depends upon what you consider to be "Day 1". E.g., RFC 793 did NOT
absolutely - sorry - i was talkin about first cut at design - the RSRE
discussion predates the RFC and the result made it in to the spec..
Post by Bob Braden
have a
fixed retransmit time. And if you want to give RSRE credit for
exponentially smoothed
RTT measurements (a fact I had forgotten, assuming you are correct), you
see IEN160 for the discussion and credit - was a couple of years
before i was doin this sort of thing so you prob. know more about this
than me...
http://www.postel.org/ien/pdf/ien160.pdf

but the smarter smoothing happened in 87
(using smoothed mean + mean squre difference
for retransmit rather than just EWMA)
and the late 80s stuff had input from Karn&Partridge
Post by Bob Braden
ought to
give Van credit for finally figuring out how to do real congestion
control, in 1987.
of course!
but that work is so heavily cited I just thought everyone would know
it anyhow :)
one should also cite Raj Jain and KK Ramakrishnan for the DECNET work
from which some of it came, and Frank Kelly for packet conservation
ideas

but the purpose of my comment wasnt to go down memory lane and build a
perfect family tree of ideas (that would be a good thing to do
e.g. based o na code audit:)
but to point out that the original designers
did NOT get everything right in one go...
indeed some of donald davies' (and others)
original ideas about resource pooling and
congestion control which were integral in his vision
of packet switching, were actually lost from between
early 70s and late 80s...

alas...

suppose those who teach history
are doomed to repeat themselves:)
Post by Bob Braden
Post by Jon Crowcroft
early TCPs had fixed retransmit until the RSRE algorithm
and then it was still some time before the Karn/Partridge
improvements kicked in
plus
early TCPs had no congestion control at all
until '87
cheers

jon
David P. Reed
2009-11-02 16:33:06 UTC
Permalink
There are probably also lessons in the evolution from networks that are
synchronized with clocks that must have timing with parts-per-billion
accuracy (the "Bell System" architecture - e.g. SONET) to networks that
allow for internal retiming, buffering, etc.

That doesn't mean that it is a fact that IP is a thin layer over such
clock-synchronized networks, which still exist and carry IP traffic.
Nor is TCP designed to be corrective of such networks brittle
unreliability, which leads to rerouting over alternate paths that may
cause transient out-of-order delivery, duplication, and a need to
reallocate resources.

TCP and IP were designed to handle heterogeneity and best efforts, and
the idea that they were either designed to remedy Aloha or evolved so
that they only run on Ethernet - that is nonsense, a just so story.
Post by Jon Crowcroft
There definitely are lessons
in the evolution from
end-mediated contention to
switch-mediated access
in ethernet-land.
The oft-perceived analogy
of the whole internet as a big ethernet,
a huge shared resource
with contention mainly mediated
by end systems, is alluring.
So the move to
net/switch-centric resource allocation/control
in the local,
might suggest some similar move
in the wide area...
until you actually think about the
heterogeneity in the
topology, in capacity and in latency,
of the system -
Plenty of enterprise nets and small ISPs
(e.g. UK size) can consider
a carrier-grade switched ether
control philosophy (e.g.
esp. to replace
complicated MPLS setups:)
but it doesn't subsume/replace e2e
resource sharing -
It doesn't address
multihomeing, multipath, mobility or multicast
in any useful way either...it doesn't
speak to swarms and CDNs much either.
There were other lan technologies
which didn't have built in collapse
as part of the media-sharing protocols
so the lesson wasn't as widely
necessary as the e2e monoculture
pretends (people who built
token and slotted rings
had other views of the world
too:)
On the other hand, it would be instructive
to see how many end&edge systems are now on
wireless ethernet and to see if the balance has
swung back once again in "favour" of
shared media/contention.
aloha
jon
rick jones
2009-11-01 18:45:28 UTC
Permalink
Post by Richard Bennett
Post by Dave Eckhardt
So it's unclear
that CSMA/CD was a structural limit of Ethernet--the reality
is probably more like "It doesn't matter much how you contend
among a few hosts, but you can't build large networks unless
you limit contention domains to less than the size of the
large network", which is almost a tautology.
That's part of the story, but the implications of the switched
Ethernet killing off CSMA/CD Ethernet are much larger, and relate
the end-to-end arguments principle. CSMA/CD Ethernet was an end-
point managed system sharing a dump pipe, while switched Ethernet is
a system that deploys intelligence - switching, flow control,
buffering, QoS discrimination, VLANs - inside the network at
multiple points. Switched Ethernet is scalable, manageable,
diagnosable, and future-proof, while CSMA/CD Ethernet is none of
these things. So the competition of CSMA/CD and Active Switching for
markets demonstrates something about which approach to the design of
layer 2 networks is superior.
I think you left-out how Power over Ethernet will replace the global
power grid and that it also juliennes fries :)

Color me a cynic, but I rather thought that today's switched
"Ethernet" needed flow control and buffering precisely because CSMA/CD
was removed from Ethernet when it went full-duplex? I seem to recall
that flow-control was not initially present in full-duplex Ethernet.
I'm still not sure how much of the rest of the laundry list above has
been added to Ethernet has been in response to folks going "Routing is
hard, lets go shopping for switches" and the switch vendors being
quite happy to provide a solution to encourage people to buy new
switches.

rick jones
there is no rest for the wicked, yet the virtuous have no pillows
Richard Bennett
2009-11-01 21:26:02 UTC
Permalink
There was a need for flow control as soon as full dupex was included in
the 10BASET spec, but it became even more important with the addition of
100BASETx to switches that were backward compatible with 10BASET. A
collision is better than a silent drop, but neither is necessary. Full
duplex Ethernet switches can transmit multiple frames at the same time,
which is quite convenient in meet-me rooms at IXPs so I don't buy the
routing vs. switching dichotomy; switching helps us do routing.

The point about the thinness of the IP layer doesn't have to do with
routing as much as it has to do with what's in the IP header and what
isn't. I would expect that a network layer protocol would have an
unambiguous address for the host, like CYCLADES, DECNet, XNS, and ISO
CLNP. But all IP has is an address that's a synonym for the LAN
interface address, a point of attachment. So it's not fully separated
from Layer 2. This is especially stark in IPv6 where they just throw in
the whole MAC address into the IP header in order to bypass ARP.

In addition to IP lacking an actual host address, it doesn't do any
protocol either - it's just a packet format and doesn't participate in
any specific sequences of behavior, which is once again just like Blue
Book Ethernet, AKA V2. It's perhaps worth noting that V2 is an odd MAC
protocol since most of its cousins have multiple frame types and state
machines for each, even Switched Ethernet. Granted, there is a network
address in the IP header, for what it's worth, but IP seems to be
missing some function that would make networking a lot easier than it is
in scenarios where a number of diverse applications contend for
resources, and some of the function it's missing was also missing in V2.
This is true in many universes, some of the alternates to this one.

Regarding Jon's comment on the rebirth of CSMA at the edge, there is
some ironic truth to it, but Wi-Fi's not the same style as CSMA/CD
because with 802.11n we have a selectively acknowledged windowing
protocol, much more efficient than TCP where you have to discard
everything after a dropped packet and do it again. I suspect that LTE is
going to be a very large factor one day, and it uses a scheduled system
that doesn't have collisions.

RB
Post by rick jones
Post by Richard Bennett
Post by Dave Eckhardt
So it's unclear
that CSMA/CD was a structural limit of Ethernet--the reality
is probably more like "It doesn't matter much how you contend
among a few hosts, but you can't build large networks unless
you limit contention domains to less than the size of the
large network", which is almost a tautology.
That's part of the story, but the implications of the switched
Ethernet killing off CSMA/CD Ethernet are much larger, and relate the
end-to-end arguments principle. CSMA/CD Ethernet was an end-point
managed system sharing a dump pipe, while switched Ethernet is a
system that deploys intelligence - switching, flow control,
buffering, QoS discrimination, VLANs - inside the network at multiple
points. Switched Ethernet is scalable, manageable, diagnosable, and
future-proof, while CSMA/CD Ethernet is none of these things. So the
competition of CSMA/CD and Active Switching for markets demonstrates
something about which approach to the design of layer 2 networks is
superior.
I think you left-out how Power over Ethernet will replace the global
power grid and that it also juliennes fries :)
Color me a cynic, but I rather thought that today's switched
"Ethernet" needed flow control and buffering precisely because CSMA/CD
was removed from Ethernet when it went full-duplex? I seem to recall
that flow-control was not initially present in full-duplex Ethernet.
I'm still not sure how much of the rest of the laundry list above has
been added to Ethernet has been in response to folks going "Routing is
hard, lets go shopping for switches" and the switch vendors being
quite happy to provide a solution to encourage people to buy new
switches.
rick jones
there is no rest for the wicked, yet the virtuous have no pillows
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
rick jones
2009-11-01 21:57:17 UTC
Permalink
Post by Richard Bennett
Regarding Jon's comment on the rebirth of CSMA at the edge, there is
some ironic truth to it, but Wi-Fi's not the same style as CSMA/CD
because with 802.11n we have a selectively acknowledged windowing
protocol, much more efficient than TCP where you have to discard
everything after a dropped packet and do it again.
My history with TCP stacks does not go back "to the
beginning" (whatever that might actually be), and I did not start in
the "PC" space so perhaps my life was charmed, but going back to 1988
I'd not encountered any TCP where that was the case.

rick jones
there is no rest for the wicked, yet the virtuous have no pillows
Detlef Bosau
2009-11-11 19:11:50 UTC
Permalink
Post by Richard Bennett
In other words: does the success of Switched Ethernet suggest that
it's better to think of network protocols as units of recursion than
as collections of statically-placed functions that operate once and
only once in the lifetime of a packet?
RB
I've just had a very first glance at this discussion. (Thanks god, I
first wrote the post by Joe....)

However, I'm a bit curious what this discussion is all about.

Many of us enjoy Switched Ethernet, me too. However, what is the very
issue with switched Ethernet from the end to end arguments point of view?

And second: Shall switched Ethernet replace TCP/IP?

If not: What is this argument all about?

Just curious.

Detlef
--
Detlef Bosau Galileistraße 30 70565 Stuttgart
phone: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau
ICQ: 566129673 ***@web.de http://www.detlef-bosau.de
Richard Bennett
2009-11-12 00:08:26 UTC
Permalink
Classical Ethernet - the co-ax cable-based, Aloha-derived CSMA/CD system
- is one of the canonical examples of a purely edge-managed network. It
actually hails from the era during which the Internet protocols were
designed, and expresses a similar set of engineering trade-offs.
Thirty-five years after the design of Ethernet, we've dropped the purely
edge-managed approach to building layer 1 and 2 networks in favor of
somewhat more centralized systems: Switched Ethernet, DOCSIS, DSL,
Wi-Max, and Wi-Fi are the leading examples. These systems aren't purely
centralized, of course; they're more like multiply-centralized meshes
than either edge-managed or core-managed systems.

While we now know that edge-managed LANs and MANs are not the way to go,
we still use edge-managed protocols to operate the Internet. The
Jacobson Algorithm is probably the purest example.

The triumph of switched and semi-centralized systems at layer 2 suggests
that it might be beneficial to revisit some of the design tradeoffs at
layer 3 if for no other reason than to bring them up-to-date. In
principle, IP isn't supposed to care what's happening at layer 2, but in
practice it makes a great deal of difference; this is one reason that
people design networks nowadays with the express intention of being good
for IP; e.g., MPLS.

That's the general idea.

RB
Post by Detlef Bosau
Post by Richard Bennett
In other words: does the success of Switched Ethernet suggest that
it's better to think of network protocols as units of recursion than
as collections of statically-placed functions that operate once and
only once in the lifetime of a packet?
RB
I've just had a very first glance at this discussion. (Thanks god, I
first wrote the post by Joe....)
However, I'm a bit curious what this discussion is all about.
Many of us enjoy Switched Ethernet, me too. However, what is the very
issue with switched Ethernet from the end to end arguments point of view?
And second: Shall switched Ethernet replace TCP/IP?
If not: What is this argument all about?
Just curious.
Detlef
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
Detlef Bosau
2009-11-12 16:15:54 UTC
Permalink
O.k., I still don't have an idea, what this argument is all about.
Post by Richard Bennett
Classical Ethernet - the co-ax cable-based, Aloha-derived CSMA/CD
system - is one of the canonical examples of a purely edge-managed
network.
First of all, classical Ethernet is the canonical example of classical
Ethernet.

Period.

No one was forced to use classical Ethernet, and no one was forced to
avoid classical Ethernet.

The discussion initiated by Salzer, Reed and Clark was _not_, whether or
not a certain network technology should have management capabilies,
leaky bucket facilities, SNMP agents or whatever. Instead, the authors
invite us to carefully
consider, where certain functions, duties, responsibilities should be
placed and where not.

IIRC, Dave Reed told us, there were no such think like an "end to end
principle" some weeks ago.

And in fact, there is none. But it is useful to carefully consider the
placement and separation of concerns and responsibilities. No more, no less.
Post by Richard Bennett
It actually hails from the era during which the Internet protocols
were designed, and expresses a similar set of engineering trade-offs.
And scientists and priests still argue, whether we hail from adam and
eve - or from apes and evolution.

Would this make a difference? Despite of the fact, that mankind should
rather behave like apes (and hopefully, apes still do!), because we
wouldn't have seen thermonuclear weapons and many other "human
inventions" then?

When I use a network, my primary interest is not its historical origin
but its use for my problem.
Post by Richard Bennett
Thirty-five years after the design of Ethernet, we've dropped the
purely edge-managed approach to building layer 1 and 2 networks in
favor of somewhat more centralized systems: Switched Ethernet, DOCSIS,
DSL, Wi-Max, and Wi-Fi are the leading examples.
You mentioned some examples where some separations of concerns might
have been done in a different way than 1985.

Wonderful!

When there are compelling reasons for doing so: Go ahead!
Post by Richard Bennett
While we now know that edge-managed LANs and MANs are not the way to
go, we still use edge-managed protocols to
Why not?

Typically, it's a good idea to fit a solution to a problem and not the
other way round.

So, first of all, I will have a look at my problem, e.g. how many
systems are to be connected, are there constraints, e.g. I must not use
a wireline connection in a certain scenario and so on, and then I will
make a choice for a certain networking technology.

This may be switched Ethernet - or it may be something different.
Depending on my actual needs and my actual constraints.
Post by Richard Bennett
operate the Internet. The Jacobson Algorithm is probably the purest
example.
And I don't see, how switched Ethernet provides an alternative to VJCC.
Post by Richard Bennett
The triumph of switched and semi-centralized systems at layer 2
suggests that it might be beneficial to revisit some of
Excuse me, but where is the triumph of switched Ethernet over VJCC?
Post by Richard Bennett
the design tradeoffs at layer 3 if for no other reason than to bring
them up-to-date. In principle, IP isn't supposed to care what's
happening at layer 2,
but it's always a good idea for IP, not to ignore the lower layers.

(I'm actually writing a very small and tiny network simulator, because
NS2 and other ones are still to big for my purposes and it's quite
appealing to have a small simulator in some few hundred lines of Java,
which simply does its job.

However, it would really spare me quite a few night sessions, if IP
could really ignore the lower layers.)
Post by Richard Bennett
but in practice it makes a great deal of difference; this is one
reason that people design networks nowadays with the express intention
of being good for IP; e.g., MPLS.
Excuse me, but I don't see your point.

We can well discuss the advantages of circuit switching over packet
switching or the other way round.

However, both have their justification and both are in use for several
decades now.

And I really don't see the big difference between placing a flow label
into an Ethernet frame or to introduce some
"Flow Switching Over Ethernet Protocol (FSOEP)" to achieve the same goal.

This is a major argument for minor a minor benefit.

Detlef
--
Detlef Bosau Galileistraße 30 70565 Stuttgart
phone: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau
ICQ: 566129673 ***@web.de http://www.detlef-bosau.de
Richard Bennett
2009-11-12 23:10:27 UTC
Permalink
Detlef, I'm asking you to think abstractly about certain problems in the
design of distributed systems. I regard many of the interpretations and
applications of end-to-end notions to network management and regulation
as flawed, regardless of the intent of any of the authors of any
particular paper decades ago, or even to their subsequent refinement by
the original authors or others.

One the problems that arises in distributed systems is access to shared
resources, and the end-to-end solution to this problem is deeply flawed.
In order to share resources according to a policy, there needs to be a
system element enforcing the policy. In network operations, this
function is indispensable and must be part of the network
infrastructure. There are many reasons for this, of course, but the
chief ones are a) the lack of reliability and trust in end systems; and
b) the lack of information about system demand on the part of the end
system, which in principle only knows that it wants and not what anyone
else wants to do on the shared resource.

The idea that access to shared resources is best accomplished by
uncoordinated end systems leads to a lot of grief.

What does this have to do with Jacobson's Algorithm? A lot, it turns
out, as we see in the rise of systems that take the job of congestion
mediation a couple of steps further, such as ECN and Re-ECN. If you
recognize that information about the state of shared resources is
essential to their rational management, no problem.

Of course, all of this is simply to say that optimal solutions to
problems of resource management can only be made close to the resource
in question, but it doesn't address a later version of the end-to-end
arguments principle, that sub-optimal solutions (from an efficiency of
fair-sharing perspective) to such problems may be preferred if they lead
to greater system generality, opportunities for innovation, free speech,
reliability, or some other reason.

I hear this sort of argument being made very frequently these days, and
it does have some resonance. Efficiency is after all a short-term goal,
while generality is a long-term goal and innovation is a positive
externality. But to appreciate that, one needs to think a bit abstractly.

RB
Post by Detlef Bosau
O.k., I still don't have an idea, what this argument is all about.
Post by Richard Bennett
Classical Ethernet - the co-ax cable-based, Aloha-derived CSMA/CD
system - is one of the canonical examples of a purely edge-managed
network.
First of all, classical Ethernet is the canonical example of classical
Ethernet.
Period.
No one was forced to use classical Ethernet, and no one was forced to
avoid classical Ethernet.
The discussion initiated by Salzer, Reed and Clark was _not_, whether
or not a certain network technology should have management capabilies,
leaky bucket facilities, SNMP agents or whatever. Instead, the authors
invite us to carefully
consider, where certain functions, duties, responsibilities should be
placed and where not.
IIRC, Dave Reed told us, there were no such think like an "end to end
principle" some weeks ago.
And in fact, there is none. But it is useful to carefully consider the
placement and separation of concerns and responsibilities. No more, no less.
Post by Richard Bennett
It actually hails from the era during which the Internet protocols
were designed, and expresses a similar set of engineering trade-offs.
And scientists and priests still argue, whether we hail from adam and
eve - or from apes and evolution.
Would this make a difference? Despite of the fact, that mankind should
rather behave like apes (and hopefully, apes still do!), because we
wouldn't have seen thermonuclear weapons and many other "human
inventions" then?
When I use a network, my primary interest is not its historical origin
but its use for my problem.
Post by Richard Bennett
Thirty-five years after the design of Ethernet, we've dropped the
purely edge-managed approach to building layer 1 and 2 networks in
favor of somewhat more centralized systems: Switched Ethernet,
DOCSIS, DSL, Wi-Max, and Wi-Fi are the leading examples.
You mentioned some examples where some separations of concerns might
have been done in a different way than 1985.
Wonderful!
When there are compelling reasons for doing so: Go ahead!
Post by Richard Bennett
While we now know that edge-managed LANs and MANs are not the way to
go, we still use edge-managed protocols to
Why not?
Typically, it's a good idea to fit a solution to a problem and not the
other way round.
So, first of all, I will have a look at my problem, e.g. how many
systems are to be connected, are there constraints, e.g. I must not
use a wireline connection in a certain scenario and so on, and then I
will make a choice for a certain networking technology.
This may be switched Ethernet - or it may be something different.
Depending on my actual needs and my actual constraints.
Post by Richard Bennett
operate the Internet. The Jacobson Algorithm is probably the purest
example.
And I don't see, how switched Ethernet provides an alternative to VJCC.
Post by Richard Bennett
The triumph of switched and semi-centralized systems at layer 2
suggests that it might be beneficial to revisit some of
Excuse me, but where is the triumph of switched Ethernet over VJCC?
Post by Richard Bennett
the design tradeoffs at layer 3 if for no other reason than to bring
them up-to-date. In principle, IP isn't supposed to care what's
happening at layer 2,
but it's always a good idea for IP, not to ignore the lower layers.
(I'm actually writing a very small and tiny network simulator, because
NS2 and other ones are still to big for my purposes and it's quite
appealing to have a small simulator in some few hundred lines of Java,
which simply does its job.
However, it would really spare me quite a few night sessions, if IP
could really ignore the lower layers.)
Post by Richard Bennett
but in practice it makes a great deal of difference; this is one
reason that people design networks nowadays with the express
intention of being good for IP; e.g., MPLS.
Excuse me, but I don't see your point.
We can well discuss the advantages of circuit switching over packet
switching or the other way round.
However, both have their justification and both are in use for several
decades now.
And I really don't see the big difference between placing a flow label
into an Ethernet frame or to introduce some
"Flow Switching Over Ethernet Protocol (FSOEP)" to achieve the same goal.
This is a major argument for minor a minor benefit.
Detlef
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC
Loading...