Cluster computing with Linux & Beowulf
dfussell at byu.edu
Fri Feb 28 12:51:31 MST 2014
On 02/24/2014 09:01 AM, Lloyd Brown wrote:
> You'll definitely need to
> distribute the connections among a pool of servers in some fashion. The
> trouble is, especially if you're using a hardware network load balancer
> like LVS, it's going to have a very hard time keeping up with that
> number of connections. Possibly some hardware solutions exist that can
> utilize special-purpose ASICs to balance that many connections, but
> you'll have to dig into that. I honestly don't know. Possibly one of
> Brocade's ServerIron devices or similar?
ServerIrons use FPGAs instead of ASICs, are pretty nice, but very
pricey. You're looking at about $16k for a basic model switch. LVS is
cheap, though the latency it introduces is higher than with an FPGA/ASIC
load balancer. It would probably be better for learning server load
balancing and testing to see if load balancing is appropriate for your
application before shelling out the big bucks. As a general rule,
stateless things (like DNS queries, HTTP requests, etc) load balance
fairly well. Stateful things get a little more interesting; still
possible, just interesting.
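To illustrate the stateless/stateful distinction, here's a rough sketch
of two scheduling policies in the spirit of what LVS offers (not LVS's
actual implementation; backend addresses are made-up examples). Stateless
traffic can be handed to any backend round-robin, while stateful traffic
needs the same client pinned to the same backend, e.g. by hashing the
source IP:

```python
# Sketch of two balancer scheduling policies. Hypothetical backends.
import hashlib
from itertools import cycle

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round-robin: fine for stateless traffic (DNS queries, plain HTTP)
# where any backend can answer any request.
_rr = cycle(backends)

def pick_stateless():
    return next(_rr)

# Source-IP hashing: stateful traffic (sessions) needs the same client
# to land on the same backend on every connection.
def pick_stateful(client_ip):
    h = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]
```

The catch with source hashing is that adding or removing a backend
reshuffles most clients, which is part of why stateful balancing gets
"interesting."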
Other options for load balancers would be Barracuda, Kemp, and Cisco.
Beware any load balancer priced below $1000; it's most likely talking
about a multi-homing broadband router that distributes traffic across
two WAN links. I also saw a research presentation on using OpenFlow
switches to do load balancing, but the approach is still immature, and
the available OpenFlow hardware switches are few and pricey.