November 30, 2016

Haproxy Open Source Loadbalancer

Benchmarked with Httperf

Load Balancing

Load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. It aims to optimize resource use, maximize throughput, minimize response time, and avoid overloading any single resource. Using multiple components with load balancing instead of a single component can increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
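As a toy illustration (not part of the original setup), the round-robin policy used later in this article simply cycles through the backends in order. A shell sketch assigning 8 requests to 4 hypothetical backends named cluster1..cluster4:

```shell
# Toy round-robin: hand each incoming request number to the next backend in turn
for i in 1 2 3 4 5 6 7 8; do
  echo "request $i -> cluster$(( (i - 1) % 4 + 1 ))"
done
```

With 4 backends, request 5 wraps around to cluster1 again, which is why each backend ends up with the same share of traffic.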

Httperf is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload; its support for the HTTP/1.1 and SSL protocols; and its extensibility to new workload generators and performance measurements.

Source: Httperf

Httperf's Default Scope

It's important to note that by default httperf only tests the standard HTTP payload of your application -- e.g. the rendered HTML of the URL you are testing. Much like curl, it does not load assets (images, JavaScript, or CSS) by default. In this document, I will be referring to this as the "base payload". There are ways to configure it to load additional requests as part of the same session, which I will be covering.
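For example, httperf's session workload generator can replay a whole session, fetching a page and then its assets in one burst. The sketch below is hedged: the URIs are placeholders, and --wsesslog takes the number of sessions, a default think time in seconds, and the session file. In the file, a non-indented line starts a new burst and indented lines are fetched as part of that burst:

```shell
# sess.txt: one session that fetches the page, then its assets in the same burst
cat > sess.txt <<'EOF'
/index.html
    /style.css
    /app.js
    /logo.png
EOF

# Replay 100 such sessions against the server under test (placeholder hostname)
httperf --server=www.example.com --port=80 --wsesslog=100,2,sess.txt
```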

Round-robin DNS

An alternate method of load balancing, which does not require a dedicated software or hardware node, is called round robin DNS. In this technique, multiple IP addresses are associated with a single domain name; clients are expected to choose which server to connect to. Unlike the use of a dedicated load balancer, this technique exposes to clients the existence of multiple backend servers. The technique has other advantages and disadvantages, depending on the degree of control over the DNS server and the granularity of load balancing desired.
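In practice this just means publishing several A records for the same name; the DNS server rotates the order in which it returns them. A hypothetical BIND-style zone fragment (example addresses from the documentation range) might look like:

```
; round-robin DNS: one name, multiple A records, served in rotating order
www    IN  A  192.0.2.11
www    IN  A  192.0.2.12
www    IN  A  192.0.2.13
```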

How To Install on Ubuntu Server

apt-get install httperf
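HAProxy itself can be installed the same way. A sketch of the full setup on Ubuntu Server (package names as found in the Ubuntu repositories; run as root or via sudo):

```shell
# Install the benchmark tool and the load balancer from the Ubuntu repositories
apt-get install httperf
apt-get install haproxy

# Confirm the installed versions
httperf --version
haproxy -v
```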

# this config needs haproxy-1.1.28 or haproxy-1.2.1

global
        log     local0
        log     local1 notice
        #log    loghost local0 info
        maxconn 150000
        #chroot /usr/share/haproxy
        user    haproxy
        group   haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option  redispatch
        maxconn 150000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

# listen section (the proxy name and bind address were omitted in the original)
        mode    http
        stats   enable
        stats   uri     /lb?stats
        stats   realm   Haproxy\ Statistics
        stats   auth    haproxy:admin
        balance roundrobin
        # backend server addresses were omitted in the original
        server  cluster1 cookie A check maxconn 150000
        server  cluster2 cookie B check maxconn 150000
        server  cluster3 cookie A check maxconn 150000
        server  cluster4 cookie B check maxconn 150000
        #errorloc 502
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
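Once the file is written (assuming the standard path /etc/haproxy/haproxy.cfg), it can be syntax-checked before restarting the service:

```shell
# Check the configuration for syntax errors; exits 0 on success, non-zero otherwise
haproxy -c -f /etc/haproxy/haproxy.cfg

# Restart HAProxy to pick up the new configuration
service haproxy restart
```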


Network Topology: Load Balancing

We used 2 physical servers and put 2 virtual machines on each, so we have 4 web servers connected to the load balancer.

We will test this load balancer starting from zero data:

After this, we test the load balancer using a script that creates 15,000 connections:

httperf --timeout=5 --client=0/1 --server= --port=80 --uri=/zae --send-buffer=4096 --recv-buffer=16384 --num-conns=15000 --num-calls=1
Maximum connect burst length: 1266

Total: connections 15000 requests 11728 replies 11728 test-duration 45.850 s

Connection rate: 327.2 conn/s (3.1 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 1.3 avg 1.8 max 160.1 median 1.5 stddev 2.2
Connection time [ms]: connect 0.2
Connection length [replies/conn]: 1.000

Request rate: 255.8 req/s (3.9 ms/req)
Request size [B]: 68.0

Reply rate [replies/s]: min 0.0 avg 292.8 max 586.0 stddev 284.7 (6 samples)
Reply time [ms]: response 1.6 transfer 0.0
Reply size [B]: header 204.0 content 232.0 footer 0.0 (total 436.0)
Reply status: 1xx=0 2xx=0 3xx=11728 4xx=0 5xx=0

CPU time [s]: user 6.91 system 32.07 (user 15.1% system 69.9% total 85.0%)
Net I/O: 125.9 KB/s (1.0*10^6 bps)

Errors: total 3272 client-timo 3 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 3269
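The headline rates can be rechecked from the raw counters in the output above (15,000 connections, 45.850 s test duration, 3,272 total errors), for example with awk:

```shell
# Recompute httperf's headline numbers from its raw counters
conns=15000; duration=45.850; errors=3272
awk -v c="$conns" -v d="$duration" 'BEGIN { printf "Connection rate: %.1f conn/s\n", c/d }'
awk -v c="$conns" -v e="$errors"   'BEGIN { printf "Error rate: %.1f%%\n", 100*e/c }'
```

This reproduces the reported 327.2 conn/s and shows that roughly 21.8% of the 15,000 connections failed (3 client timeouts plus 3,269 "other" errors). Note also that every successful reply was a 3xx redirect, so this run measured the redirect response for /zae rather than a full page body.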


Result:
Traffic is distributed evenly between the virtual servers; each receives the same amount of traffic.
