"The Linux Gazette...making Linux just a little more fun!"

(?) The Answer Guy (!)

By James T. Dennis, tag@lists.linuxgazette.net
LinuxCare, http://www.linuxcare.com/

(?) From the Dim History: EQL Revisited

Bandwidth Load Sharing w/o ISP Support

Issue 13 was the very first issue that contained the Answer Guy's replies!

From Andrew Byrne on Wed, 08 Sep 1999

Hi there,

I came across the information below on the web page http://sunsite.mff.cuni.cz/lg/issue13/answer.html which appears to be written by you.

(!) Well, that would probably be mine. I guess that would be a mirror in Czechoslovakia.

(?) Since it dates back to late 1996, I was wondering if you have any new information on EQL. I have been told by someone who works for an ISP that EQL may still work even if it isn't supported at the ISP's end of the connection. He noted that all incoming connections could only be directed to either dial-up connection's IP address, but all outgoing data could be sent via EQL.

If this is true, then EQL may work for what I need, that being using it with two 33.6kbps PPP connections to provide a web server. All incoming requests would come via one PPP connection, but web traffic sent out would be shared across the two PPP connections.

(!) Actually, if you use DNS round robin then incoming requests will be roughly distributed across each connection. Using "policy-based" routing and the "equal-cost multi-path" options in the Linux kernel can give you the load distribution on the outbound traffic.

(?) If you do know any more about how EQL works, could you please tell me if what I'm saying is true, or correct me if I'm wrong.

(!) I think it will be better to address the objective. You want traffic distribution over multiple ISP links but you're asking about traffic distribution over multiple low-level links to a single ISP (EQL). They aren't quite the same thing.
It is quite common for people to present a diagnosis and perceived solution as though it were their question. One of the things I learned as a tech support guy (and continually strive to remember) is to look past the question that's presented, and guess at the underlying objective.

(?) Thank you!
Andrew Byrne.

(!) Before I answer I'll also quote something I said at the end of my original answer:
(After reading this you'll know about as much on this subject as I do; after using any of this you'll know much more).
This is true of many things that I say in this column. It will be worth remembering as you read on.
As far as I know EQL still has the constraints that I detailed back in 1996. Your ISP must participate with a compatible driver on his end; both of your lines must go to a single host at your provider's end.
It's also important to note that EQL and other "bonding" or line multiplexing schemes will only increase your effective bandwidth; they do nothing to lower your latency. Here's a useful link explaining the importance of this observation:
Bandwidth and Latency: It's the Latency, Stupid
by Stuart Cheshire <cheshire@cs.stanford.edu>
TidBITS#367/24-Feb-97 (Part 1)
TidBITS#368/03-Mar-97 (Part 2)
(Someone kindly pointed me to a copy of this article back when this column was published. Now, at long last, I can pass it along. I don't remember whether I was publishing follow-up comments to TAG columns back then).
In any event EQL is not appropriate for cases where you want to distribute your traffic across connections to different providers. It's not even useful for distributing traffic load to different POPs (points of presence) for one ISP.
However, there are a couple of options that might help.
First, you could use simple DNS round robin. This is the easiest technique, and it is particularly well suited to web servers. Basically you get one IP address from one ISP, and another from a different ISP (or two addresses from different subnets of one ISP). You can bind each of these addresses to a different PPP interface. If you were using ISDN or DSL routers (connecting to your system via ethernet) then you'd use IP aliasing, binding both IP addresses to one ethernet interface in your Linux host. Then you create an A record for each of these in your DNS table. Both A records (or all of them, if you're using more than two) are under the name www.YOURDOMAIN.XXX.
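As a sketch, the zone data for that setup might look something like this (the domain and addresses are placeholders from the documentation ranges, not anything from a real configuration):

```
; Hypothetical round-robin zone fragment: one A record per connection.
; 192.0.2.10 and 198.51.100.10 stand in for the addresses your ISPs assign.
www     IN      A       192.0.2.10      ; reached via the first PPP link
www     IN      A       198.51.100.10   ; reached via the second PPP link
```

Any resolver looking up www then receives both addresses, and which one a client uses determines which link its packets arrive on.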
DNS round robin is quite simple. It's been supported for years. Basically it's a natural consequence of the fact that hosts might have multiple interfaces. So I might have eth0 and eth1 on a system known as foo.starshine.org. There is no law that says that these interfaces have to be in the same machine. I can create two web servers with identical content, and refer to both of them as www.starshine.org.
The only change that was required for "round robin DNS" was to patch BIND (named, the DNS daemon) to "shuffle" the order of the records as it returned them. Clients tend to use the first A record they find. Actually a TCP/IP client should scan the returned addresses for any DNS query to see if any of them are on matching subnets. Thus a client on a 192.168.2.* network should prefer the 192.168.2.* address over a 10.*.*.* address for the same hostname. (For our purposes this will not be a problem, since 99.9999% of your external web requests will not come from networks that share any prefix with yours.)
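The effect of that shuffling can be illustrated with a small shell sketch (the addresses are hypothetical examples, and this only mimics the rotation, it doesn't talk to a real nameserver):

```shell
# Sketch of round-robin record rotation: each "query" sees the
# A records in a rotated order, so naive clients that take the
# first record end up spread across the addresses.
records="192.0.2.10 198.51.100.10"   # example A records only
for i in 1 2 3; do
  echo "query $i -> $records"
  first=${records%% *}               # rotate: move the first record
  rest=${records#* }                 # to the end of the list
  records="$rest $first"
done
```

With N records, each address leads the answer in roughly 1/N of the queries, which is all the load distribution this scheme provides.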
The load distribution mechanics of this technique are completely blind. On average about half of the clients will be accessing you through one of the IP addresses while the other half will use the other address. In fact, for N addresses in a round robin farm you'll get roughly 1/N of the requests routed to each address.
This is the important point. Since you're not "peering" with your ISPs at the routing level (you don't have an AS number, and you aren't running BGP4), the links between you and your ISPs are static. Thus the IP address selected by a client determines which route the packets will take into your domain.
Note, only the last few hops are static. Your ISP might be using some routing protocol such as RIP or OSPF to dynamically select routes through their network, and the backbones and NAPs (network access points) are always using BGP4 to dynamically select the earlier portions of the routes --- the ones that lead to your ISP.
I realize this is confusing without a diagram. Try to understand this: each packet between your server and any client can travel many different routes to get to you. That's true even if you only have a single IP address. However, the first few hops (from the client's system to their ISP) are usually determined by static routes. The last few hops (from your ISP to your server) are also usually along static routes. So, for almost all traffic over the Internet it's only the middle hops that are dynamic.
The key point about DNS round robin load balancing (and fault tolerance) is that the different IP addresses must be on different networks (and therefore along different routes).
So, this handles the incoming packets. They come into different IP addresses on different networks. Therefore they come in through different routes and thus over different connections to your ISP(s).
Now, what about outgoing traffic? When we use round robin to feed traffic to multiple servers (mirrored to one another) there is no problem. Each of the servers can have a different outbound route, so the traffic will return along roughly the same route as it traversed on its way in.
When we use round robin to funnel packets into a single system we have a problem.
Consider this: an HTTP request comes in addressed to one of your two IP addresses; the web server fashions a response (source: that same address, destination: the client's address). What route will this response take?
The default route.
There can normally only be one default route. Normally only the destination address is considered when making routing selections. Thus all packets that aren't destined for one of the local networks (or one of the networks or hosts explicitly defined in one of our routing tables) normally go through our default.
However, this is Linux. Linux is not always constrained by "normalcy."
In the 2.2 and later kernels we have a few options which allow us finer control over our routing. Specifically we enable "policy-based routing" in the kernel, install the "iproute" package, and configure a set of routes based on "source policy." This forces the kernel to consider the source IP address as well as the destination when it makes its route selection.
Actually it allows us to build multiple routing tables, and a set of rules which select which table is traversed based on source IP address, TOS (type of service flags) and other factors.
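A source-policy setup with the iproute tools might look like this sketch (the interface names, table numbers, and documentation-range addresses are all assumptions, and the commands require root on a kernel with policy routing enabled):

```
# Sketch only: two PPP links, one routing table per link.
# 192.0.2.10 and 198.51.100.10 are placeholder local addresses.

# each table holds a default route out of "its" link
ip route add default dev ppp0 table 10
ip route add default dev ppp1 table 20

# rules select a table by source address, so replies leave via
# the link whose address the client originally connected to
ip rule add from 192.0.2.10 table 10
ip rule add from 198.51.100.10 table 20
```

The rules are consulted before the main routing table, so a response sourced from either address is steered back out over the matching link rather than over the single default route.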
I found a short "micro HOWTO" on the topic at:
Linux Policy Routing
... that site was hard enough to reach that I've tossed a copy on my own web site at:
Starshine Technical Services: The Answer Guy
(I should do that more often!).
There are also some notes under /usr/src/linux/Documentation/networking/ in any of the recent (2.2.* or later) kernels.
I guess it's also possible to enable the "equal cost multi-path" option in the kernel. This is a simple (and crude) technique that allows the kernel to use redundant routes. Normally if I were to define two routes to the same destination, only the first one would be used so long as that route was "up." The other (redundant) route would only be used when the kernel received specific ICMP packets alerting it that the first route was "down." With multi-path routing we can define multiple routes to a given destination and the kernel will distribute packets over them in a round-robin fashion.
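With that option enabled, the multipath default route might be set up along these lines (again a sketch; the ppp interface names are assumptions and the command needs root):

```
# Sketch: one default route with two next hops; the kernel
# alternates between them for outbound traffic that matches
# no more specific route or policy.
ip route add default scope global \
        nexthop dev ppp0 weight 1 \
        nexthop dev ppp1 weight 1
```

The weight values let you bias the split if the links have different capacities; equal weights give the plain round-robin distribution described above.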
I think you could enable both of these features. Thus any outbound traffic which matched none of your policies would still be distributed evenly across your available default routes.
I hope you understand that these techniques are ad hoc. They accomplish "blind" distribution of your load across your available routes/connections without any sensitivity to load or any weighting. This is a band-aid approach which gives some relief based on the averages.
Let's contrast this to the ideal networking solution. In an ideal network you'd be able to publish all of the routes to your server(s). Routers would then be able to select the "best" path (based on shortest number of hops across least loaded lines with the lowest latencies).
In the real world this isn't currently feasible for several reasons. First, you'd have to have an AS (autonomous system) number. Your ISPs (all of them) would have to agree to "peer" with you. They'd have to configure their routers to accept routes from you. Naturally they would also have to be "peering" with their interconnects, and so on. Finally these routes would take up valuable memory in the backbone and 2nd-tier routers all over the Internet. This extra entry in all of those routers is an additional bit of overhead for them.
Ultimately a router's performance is limited by the number of routes it can hold and the amount of computation it takes to select an interface for a given address. So it's not feasible to store entries for every little "multi-homed" domain on the Internet. In fact the whole point of the CIDR "supernetting" policies was to reduce the number of routes in the backbone routers (and consequently reduce the latencies of routing traffic through them).
So we use these cruder techniques of "equal-cost multi-path" and "policy-based" routing. They are things that can be done at the grassroots level (which is the Linux forte, of course).

Copyright © 1999, James T. Dennis
Published in The Linux Gazette Issue 46 October 1999
HTML transformation by Heather Stern of Starshine Technical Services, http://www.starshine.org/

