[nmglug] Production server question

Nick Frost nickf at nickorama.com
Fri Mar 27 12:19:14 PDT 2009


On Mar 27, 2009, at 11:56 AM, Andres Paglayan wrote:

> I have a little question:
> what are your guidelines these days for a new Linux production server?
>
> ...I know, I know all the variables; it depends on the load, users,
> apps, etc. etc.
> Some apps are fine on $50 used Pentium IVs,
> but for a new, commercial kind of install
> (this specific case is a server for 100 users on a rather heavy
> Rails web app)
> I am more concerned about the approach and the value,
> i.e., there are 1U rack-mountable servers from $500 to $10,000.
>
> In several years of IT I have had only a couple of HDs fail,
> one cheap mobo's capacitors blow,
> a couple of memory SIMMs arrive DOA,
> and four or five power supplies go poof.

Well, among the options there's the OEM route and the custom route.
Depending on budget, you could do something like an HP DL380 G5 for
about $5,000 to $6,000 on the high end, though the cost will vary
with disk requirements.  A fully loaded HP MSA70 SAS array of 25
drives costs about $10,000, but it sounds like you wouldn't need
that much disk, just the server.

Dell has some servers in the $3,800+ range that might do the job,
for example:

PowerEdge Energy Smart 2950 III
Dual Core Intel® Xeon® L5240, 6MB Cache, 3.0GHz, 1333MHz FSB, ES
Additional Processor
Dual Core Intel® Xeon® L5240, 6MB Cache, 3.0GHz, 1333MHz FSB
Memory 8GB 667MHz (4x2GB), Dual Ranked DIMMs, Energy Smart

Sun Microsystems has some decent servers at about $4,000 to $5,000
that support 4 drives (for RAID-5 or RAID-10):

http://www.sun.com/servers/entry/x4200/

As for the custom route, in my experience PC Power and Cooling makes
some nice PSUs.  I've had good luck with some lower-end server
motherboards from Newegg, such as the Tyan S2925A2NRF (this uptime
is from a box built on that board):

uptime
  13:03:47 up 18 days, 14:12,  4 users,  load average: 1.21, 1.22, 1.19

but I think they no longer sell that board, and for a system
supporting 100 users, more CPUs/cores, more RAM, and faster disk
(SAS or U320 SCSI) would be nice.  It's hard to say without knowing
the memory utilization of the web application in question under the
kind of load you're expecting (100 users).  There are a number of
well-reviewed server boards to choose from:

SUPERMICRO MBD-X7DWA-N Dual LGA 771 Intel 5400 Extended ATX Server  
Motherboard - Retail
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182130

But building a system from parts and finding a case to fit an EATX
board like that is less appealing to me nowadays.  I'd rather just
purchase a barebones system or a box that I can unpack, rack,
install, and be up and running with.  My patience for fooling with
hardware in the work environment has lessened.

Supermicro makes some nice, affordable barebones server systems
(and so does Tyan):

SUPERMICRO SYS-6015B-TB 1U Barebone Server Intel 5000P Dual LGA 771  
Dual Intel Xeon 1333/1066MHz FSB
http://www.newegg.com/Product/ProductReview.aspx?Item=N82E16816101133

SUPERMICRO SYS-6025W-NTR+B 2U Barebone Server Intel 5400 Dual LGA 771  
Dual Intel Xeon 1600/1333/1066MHz FSB
http://www.newegg.com/Product/ProductReview.aspx?Item=N82E16816101180

I used a Tyan GT20 1U (barebones) server at my last job and it was  
absolutely bulletproof hardware, but the load was nothing like 100  
users on a web application.

I've had good luck with software RAID, both with RAID-1 and RAID-5  
under Linux, but you could use a 3ware or Adaptec controller for  
hardware RAID with a Supermicro server.
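If you go the software RAID route, the mdadm workflow is pretty
simple.  A minimal sketch, assuming /dev/sdb1 and /dev/sdc1 are
partitions set aside for a two-disk RAID-1 (the device names are
just placeholders):

  # Create the mirror
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  # Watch the initial sync and check array health
  cat /proc/mdstat
  mdadm --detail /dev/md0
  # Replacing a failed member later is a fail/remove/add cycle
  mdadm --manage /dev/md0 --fail /dev/sdb1
  mdadm --manage /dev/md0 --remove /dev/sdb1
  mdadm --manage /dev/md0 --add /dev/sdb1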

If the server is mission critical (production) and intended to
support 100 users, I would think the aim is availability and
avoiding downtime, since downtime is a significant problem for a
multi-user system of that size.  So a clear argument could be made
for purchasing an OEM (Dell, HP, IBM, Sun, etc.) server with
same-day on-site hardware support, or for keeping spare parts on
hand (e.g. purchase two less-expensive identical servers and use one
as a development/backup server).  At my day job we have six of the
HP DL380 G5 servers, each with an attached MSA70 array.  Recently
two of the servers blew SAS drives (three SAS drive failures in
total).  Repair was as simple as using the HP Array Configuration
Utility CLI to confirm the drives had failed (as the drive LEDs
indicated); HP shipped replacement drives the next day, I just
popped them in, and the servers were running all the while (RAID-6
configuration).  hpacucli exists for Linux:

http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=1121516&prodNameId=3288134&swEnvOID=4035&swLang=8&mode=2&taskId=135&swItem=MTX-80a59132d68a442288a48e2634

Under that type of hardware RAID (HP Smart Array controller),
dealing with disk failures ought to be as easy as pulling the failed
drives and inserting new ones (hardly any work at all, plus a good
degree of redundancy with RAID-6).  I will confess to liking the
hardware in these HP systems quite a bit (despite the fact that they
run Windows), but they are $5,000 to $6,000 servers, and with the
arrays each one is more like a $15,000 to $16,000 system.  I think
one can do just as well, performance-wise, for a lot less money.
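For reference, confirming a failed drive from hpacucli looks roughly
like this.  The slot number is an assumption (the controller is
usually in slot 0 on these boxes), so check your own "show status"
output first:

  # Show all Smart Array controllers and their overall status
  hpacucli ctrl all show status
  # Show physical drive status on the controller in slot 0
  hpacucli ctrl slot=0 pd all show status
  # Show the full logical/physical drive configuration
  hpacucli ctrl slot=0 show config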

For the server you describe, I would use a minimum of 8 GB of RAM.
Actual consumption will depend on a number of variables, such as
whether you use Apache 1.3.x or 2.x with loadable modules, and I
have no idea what the resource requirements of the web application
you mention would be.  For the system you propose, I would vote for
fast disk I/O: SAS or U320 SCSI with 10K or 15K RPM drives; SAS is
probably your best bet.  Disks will fail no matter what, so I'd
suggest hardware or software RAID, with spares on hand or quickly
available (overnight shipping).
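To get a rough idea of what the Rails stack actually consumes, you
can sum the resident memory of its processes.  A quick sketch; it
assumes the app servers show up in ps as "mongrel_rails", so
substitute whatever process name you actually run:

  # List the app server processes with their resident memory (KB)
  ps -C mongrel_rails -o pid,rss,comm
  # Sum resident memory across them, in MB
  ps -C mongrel_rails -o rss= | awk '{ s += $1 } END { printf "%.0f MB\n", s/1024 }'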

We primarily use Sun servers at $day_job, and with a few exceptions
I like the Sun hardware.  A previous employer has a number of
Supermicro servers, some of which have been running reliably for 4+
years (though having a hot spare server is a good idea for a
production system supporting a large number of users, i.e. 100).

At the moment I'm testing an 8-way Sun Fire X2200 with 8 GB of RAM
(running Rosetta@home); the testbed OS is Fedora 10 (x86_64) and the
machine is doing fine (only two drives in RAID-1, though, so
probably not suitable for your proposed 100-user system).  We've had
some trouble with this particular system, but in general the Sun
servers have been great (some running Solaris, some running Linux)
and perform well.  Having a company with spare parts and field
engineers behind the product is nice for production systems.

top - 06:59:22 up 21:34,  3 users,  load average: 8.18, 8.17, 8.17
Tasks: 250 total,  10 running, 238 sleeping,   0 stopped,   2 zombie
Cpu0  :  0.0%us,  1.3%sy, 98.7%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  6.0%us,  1.0%sy, 93.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,100.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  3.3%sy, 96.7%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  2.3%us,  1.0%sy, 96.7%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  1.0%sy, 99.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  1.7%us,  0.3%sy, 98.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.3%sy, 99.7%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8197460k total,  3878332k used,  4319128k free,   176892k buffers
Swap: 41945592k total,        0k used, 41945592k free,  1049976k cached
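For context, the load average of ~8 just means all eight cores are
saturated by the nice'd Rosetta work units (note the ~100% "ni"
columns).  A quick way to put a load average in perspective is to
compare it against the number of logical CPUs:

  # Count logical CPUs to compare against the load average
  grep -c ^processor /proc/cpuinfo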

-Nick

----------------------------------------
Nicholas S. Frost
7 Avenida Vista Grande #325
Santa Fe, NM  87508
nickf at nickorama.com
----------------------------------------



