Creating Linux Virtual Servers

Wensong Zhang, Shiyao Jin, Quanyuan Wu
National Laboratory for Parallel & Distributed Processing
Changsha, Hunan 410073, China
Email: wensong@iinchina.net

Joseph Mack
Lockheed Martin
National Environmental Supercomputer Center
Raleigh, NC, USA
Email: mack.joseph@epa.gov

http://proxy.iinchina.net/~wensong/ippfvs

0. Introduction

Linux Virtual Server project.

The virtual server = director + real_servers

multiple servers appear as one single fast server

 Client/Server relationship preserved

Installation/Control

Patch to kernel-2.0.36 (2.2.x in the works)
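
 A minimal sketch of applying the patch and rebuilding (the patch file name below is made up; follow the instructions shipped with the patch):

 cd /usr/src/linux                  # the kernel-2.0.36 source tree
 patch -p1 < /tmp/ippfvs.patch      # hypothetical name for the downloaded patch
 make menuconfig                    # enable the new virtual server option(s)
 make dep
 make zImage
 make modules modules_install       # then install the new zImage (eg with lilo) and reboot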

 ippfvsadm (like ipfwadm) adds/removes servers and services from the virtual server.

ippfvs = "IP Port Forwarding & Virtual Server" (from the name for Steven Clarks' Port Forwarding codes)

code based on

single port services (eg those listed in /etc/services, inetd.conf) have been tested

protocols: if the service listens on a single port, then LVS is ready for it

additional code is required for protocols that send IP:port as data, use two connections, or use callbacks

ftp requires 2 ports (20,21) - code for this is already in the LVS
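
 A quick way to check whether a service is single-port (a sketch using standard tools, not from the original notes):

 netstat -an | grep LISTEN       # a single LISTEN entry (eg 0.0.0.0:80 for httpd) means one port
 grep '^ftp' /etc/services       # ftp shows two entries (ftp-data 20, ftp 21)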

Load Balancing

The load on the servers is balanced by the director. The code is released under the GPL (May 98): http://proxy.iinchina.net/~wensong/ippfvs

1. Credits

LVS

High Availability

2. LVS Server Farm

Figure 1: Architecture of a generic virtual server

The director inspects each incoming packet and looks up (or creates) the connection's entry in a table. The default table size is 2^12 connections (it can be increased).

Gotchas (for setting up, testing)

3. Related Works

Existing request dispatching techniques can be classified into the following categories:

4. Director Code

kernel compile options: the director communicates with the real servers by one of VS-NAT, VS-TUN or VS-DR (see below)


4.1. VS-NAT - Virtual Server via NAT

NAT is a popular technique for giving many machines access to another network using only one IP address on that network, eg multiple computers at home linked to the internet by one ppp connection, or a whole company on private IPs linked to the internet through a single connection.
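
 For comparison, a typical kernel-2.0.x masquerading setup for such a private network looks something like this (a sketch, assuming the inside hosts are on 172.16.0.0/24; this is plain NAT, not part of the LVS itself):

 ipfwadm -F -p deny                              # don't forward anything by default
 ipfwadm -F -a m -S 172.16.0.0/24 -D 0.0.0.0/0   # masquerade packets from the inside hosts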
 
 

VS-NAT: Diagnostic Features

VS-NAT Example

ippfvsadm setup

 ippfvsadm -A -t 202.103.106.5:80 -R 172.16.0.2:80
ippfvsadm -A -t 202.103.106.5:80 -R 172.16.0.3:8000 -w 2
ippfvsadm -A -t 202.103.106.5:21 -R 172.16.0.3:21

Rules written by ippfvsadm

 
Protocol  Virtual IP Address  Port  Real IP Address  Port  Weight
TCP       202.103.106.5       80    172.16.0.2       80    1
                                    172.16.0.3       8000  2
TCP       202.103.106.5       21    172.16.0.3       21    1

Example: request to 202.103.106.5:80

 Request is made to IP:port on outside of Director
 
 

the load balancer chooses a real server (here 172.16.0.3:8000), updates the VS-NAT table, then rewrites the packet:
 
packet                  source                       dest
incoming                202.100.1.2:3456 (client)    202.103.106.5:80 (director)
inbound rewriting       202.100.1.2:3456 (client)    172.16.0.3:8000 (server)
reply to load balancer  172.16.0.3:8000 (server)     202.100.1.2:3456 (client)
outbound rewriting      202.103.106.5:80 (director)  202.100.1.2:3456 (client)
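
 A quick way to exercise this example from the client (a sketch; any web client will do):

 telnet 202.103.106.5 80         # the director hands the connection to a real server
 GET / HTTP/1.0                  # then press return twice; as far as the client can tell,
                                 # the page comes back from 202.103.106.5:80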

VS-NAT Advantages

VS-NAT Disadvantages

4.2. VS-TUN - Virtual Server via IP Tunneling

Normal IP tunneling (IP encapsulation)

Tunnelling used

VS-Tunnelling

For ftp, http, scalability

VS-TUN Diagnostic features

Figure 4: Architecture of a virtual server via IP tunneling

Routing Table

Director

 link to tunnel

 /sbin/ifconfig eth0:0 192.168.1.110 netmask 255.255.255.255 broadcast 192.168.1.255 up
route add -host 192.168.1.110 dev eth0:0
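
 To verify the director's setup (a sketch):

 /sbin/ifconfig eth0:0                   # the alias should show 192.168.1.110
 /sbin/route -n | grep 192.168.1.110     # check the host route for the VIP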

ippfvsadm setup (one line for each server:service)

ippfvsadm -A -t 192.168.1.110:80 -R 192.168.1.2
ippfvsadm -A -t 192.168.1.110:80 -R 192.168.1.3
ippfvsadm -A -t 192.168.1.110:80 -R 192.168.1.4

Server(s)

 ifconfig tunl0 192.168.1.110 netmask 255.255.255.255 broadcast 192.168.1.255
route add -host 192.168.1.110 dev tunl0
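
 If IP tunneling was built as a module, it has to be loaded on the real server before tunl0 can be configured (a sketch; ipip is the usual name of the kernel's IPIP driver):

 modprobe ipip          # provides the tunl0 device
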
packet flow:

1. request from client
   source 192.168.1.5:3456 (client), dest 192.168.1.110:80 (VIP), data: GET /index.html
2. the ippfvsadm table maps 192.168.1.110 to real server 192.168.1.2; the director looks up its routing table, makes 192.168.1.1 the source and encapsulates the packet
   outer header: source 192.168.1.1 (director), dest 192.168.1.2 (server); inner packet: source 192.168.1.5:3456 (client), dest 192.168.1.110:80 (VIP), GET /index.html
3. the packet is of type IPIP; server 192.168.1.2 decapsulates it and forwards it to 192.168.1.110 (its tunl0 device)
   source 192.168.1.5:3456 (client), dest 192.168.1.110:80 (VIP), GET /index.html
4. reply from 192.168.1.110 (sent by 192.168.1.2 directly to the client)
   source 192.168.1.110:80 (VIP), dest 192.168.1.5:3456 (client), data: <html>...</html>

VS-TUN Advantages

VS-TUN Disadvantages

4.3. VS-DR - Virtual Server via Direct Routing

Based on IBM's NetDispatcher

 Setup uses same IPs as VS-TUN example on a local network, with lo:0 device replacing tunl device

 lo:0 doesn't reply to arp (except Linux-2.2.x).

 Director has eth0:x 192.168.1.110, servers lo:0 192.168.1.110
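
 A sketch of the corresponding commands (mirroring the VS-TUN example above, with lo:0 in place of tunl0; the director keeps its eth0:0 alias):

 # on each real server
 ifconfig lo:0 192.168.1.110 netmask 255.255.255.255 broadcast 192.168.1.255
 route add -host 192.168.1.110 dev lo:0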

 When forwarding a packet to a real server, the director just changes the packet's destination MAC address (the IP packet itself is unchanged)

 Differences:

VS-DR Advantages over VS-TUN

VS-DR Disadvantages

5. Comparison: VS-NAT, VS-TUN, VS-DR

property / LVS type             VS-NAT                     VS-TUN                             VS-DR
OS                              any                        must tunnel (Linux)                any
server mods                     none                       tunl, no arp (Linux-2.2.x not OK)  lo, no arp (Linux-2.2.x not OK)
server network                  private (remote or local)  on internet (remote or local)      local
return packet rate/scalability  low (~10)                  high (100's?)                      high (100's?)

6. Local Node

The director can serve requests too. This is useful when you have only a small number of servers.

 On the director, set up httpd to listen on 192.168.1.110 (as on the real servers)

 ippfvsadm -A -t 192.168.1.110:80 -R 127.0.0.1
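
 To check that the director's own httpd is answering as a real server (a sketch):

 netstat -an | grep 192.168.1.110:80      # httpd on the director should be listening on the VIP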

7. High Availability

What if a server fails?
 
 

Server failure is handled by mon; mon scripts to detect server failure are on the LVS website.
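
 A stand-alone sketch of the idea behind such a monitor (this is not one of the mon scripts from the LVS website; addresses are taken from the examples above):

 #!/bin/sh
 # poll each real server's httpd; mail the admin if one stops answering
 for server in 192.168.1.2 192.168.1.3 192.168.1.4; do
     if ! ( echo "GET / HTTP/1.0"; echo ) | nc -w 5 $server 80 >/dev/null 2>&1; then
         echo "httpd on $server not answering" | mail -s "LVS real server down" root
     fi
 done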

8. To Do

9. Conclusion