48 Comments

  • idlehands - Thursday, November 12, 2009 - link

    Really? Another virtualization discussion without even mentioning z? I'd go with z/Linux on a z10 EC or z10 BC, depending on what I was going to do. Save on some floor space, with an upgrade path to z/OS if need be.
  • ultimatebob - Friday, October 30, 2009 - link

    If I got to rebuild the entire server room from scratch? I'd go with 2-socket blade servers loaded with quad-core processors and tons of memory. Then I'd connect those to a SAN running VMware ESX and vCenter. I could probably get at least 8 VMs running on each blade server that way, and I can squeeze 14 blade servers into 7U of rack space. That's 112 VMs in just 7U of space (not counting the SAN or UPS).
  • williamwbishop - Thursday, January 8, 2015 - link

    That is exactly my design... built it starting back in 2006 with fewer cores, but now I'm at 6 cores per proc.
  • RagingDragon - Wednesday, October 14, 2009 - link

    My apps mostly have low CPU utilization with I/O or memory (quantity, not performance) bottlenecks. Considering that, and your results from "Expensive Quad Sockets vs. Ubiquitous Dual Sockets": a few 4S/24C Opteron servers with 128GB of RAM and lots of Fibre Channel cards would suit my loads well.
  • Quarkhoernchen - Monday, October 12, 2009 - link

    "In the real-world, virtual environments fit a very specific niche, and in no way should dominate the datacenter. "

    That depends on the environment. I'm quite sure that virtual environments are able to dominate a datacenter in small, mid-sized, and even some big companies. In my company, 80 percent of all servers are virtual machines.

    The big, heavily loaded file, database, and mail applications are installed on their own hardware, and nearly all smaller applications, domain controllers, and web servers are completely virtualized in a VMware 3.5 cluster with all the nice features that virtualization technology offers, especially easy image-level backup (snapshots) and recovery. For mass deployment of new servers, just create a template and deploy from that, or clone an existing virtual machine in minutes.

    By virtualizing existing servers I was able to reduce the power consumption of the whole datacenter by 25 percent so far, and by the end of next year it may be only half.

    regards, Simon
  • Robear - Monday, October 12, 2009 - link

    It's like asking, "If you could restart the auto industry from scratch, would you make all vehicles unleaded?"

    I chose multi-node servers only because I like their versatility. Blades are great for scaling out horizontally and have been really great for installing department or appliance servers, but unless you have a SAN infrastructure, they don't leave you many options for storage.

    You also have to consider the politics involved. IT isn't a monarchy. Different departments get different budgets, and most are none too thrilled about getting a virtual environment with their money, especially if the departmental decision-maker is tech-savvy.

    Lastly, there's a difference between requesting a server for a SQL cluster and requesting one for a file share or application server.

    Your data center needs to be versatile. Blades pair well with a SAN if that storage fits the application. Virtual environments are great for software R&D and lightweight apps (apps that will typically take < 50% server resources). When you get above that, you need to pack a lot in for a little. The dense servers we're seeing from HP and SuperMicro are nice because you don't need the blade housing; you can replace normal servers and increase the density AND you have a little more versatility with storage.

    In the real world, virtual environments fit a very specific niche, and in no way should dominate the datacenter.

    Anyway, I think some would agree or disagree with some of my statements, and I think HOW the IT operation runs and how the company runs is a big part of what the datacenter looks like. At the end of the day, I think a 1-node-per-U standard would be a great target.



  • NewBlackDak - Tuesday, October 13, 2009 - link

    This completely depends on your storage. We find that with our NetApp/NFS setup we have a sustained 95 MB/s of disk throughput. We haven't found a single real-world application that is bottlenecked by storage in our virtualized environment. This includes the 5 different DBMSes we use/tested.

    The biggest factor is whether the virtualized datacenter was set up correctly, or whether it was put together with misinformation or slapped together on the cheap (or with existing parts).
  • lynxinator - Sunday, October 11, 2009 - link

    A couple of months ago I built a VMware ESXi 3.5 host/server with the parts listed below. Each of the 4 Windows 2008 virtual machines uses a single NIC and a single core. All of the VMs share an 8-drive RAID 10 array. The 5th NIC is used for management. There are a few other VMs that I start and stop as needed.

    I had to move the case fan that is mounted at the top of the case towards the front of the case because the power supply is longer than normal. Sometimes the time on the virtual machines is incorrect even though the time on the ESXi host is correct. I have not spent a lot of time trying to fix the problem. I recently upgraded to ESXi 4.0, which took at least half an hour.
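
    On the clock-drift issue above: a quick way to see how far a guest's clock has wandered is to compare it against an NTP server from inside the VM. This is only an illustrative Python sketch; it assumes the third-party ntplib package is installed and that pool.ntp.org is reachable from the guest, neither of which is part of the build above:

    # Rough check of guest clock drift against an NTP server.
    # Assumes `pip install ntplib` and outbound NTP (UDP/123) access.
    import ntplib

    def clock_offset(server="pool.ntp.org"):
        client = ntplib.NTPClient()
        response = client.request(server, version=3, timeout=5)
        # response.offset is the estimated difference, in seconds, between
        # this machine's clock and the NTP server's clock.
        return response.offset

    if __name__ == "__main__":
        offset = clock_offset()
        print("Guest clock is off by %.3f seconds" % offset)
        if abs(offset) > 1.0:
            print("Consider VMware Tools time sync or running NTP in the guest.")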

    Newegg.com prices:

    Rosewill 6" Molex 4pin Male to Two 15pin SATA Power Cable Model RC-6"-PW-4P-2SA - Retail
    Item #: N82E16812119238
    Price: $2.29 * 3 = $6.87

    Thermaltake 11.8" Y Cable with Blue LED Light Model A2369 - Retail
    Item #: N82E16812183147
    Price: $3.49 * 3 = $10.47

    Rosewill R901-P BK Triple 120mm Cooling Fan, Mesh Design Front Panel, ATX Mid Tower Computer Case - Retail
    Item #: N82E16811147125
    Price: $49.99

    Western Digital Caviar Blue WD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - OEM
    Item #: N82E16822136075
    $39.49 * 8 = $315.92

    BIOSTAR TFORCE TA790GX 128M AM3/AM2+/AM2 AMD 790GX HDMI ATX AMD Motherboard - Retail
    Item #: N82E16813138130
    Price: $109.99

    Intel EXPI9301CTBLK 10/ 100/ 1000Mbps PCI-Express Network Adapter - Retail
    Item #: N82E16833106033
    Price $29.99 * 3 = $89.97

    hec HP585D 585W ATX12V Power Supply - No Power Cord - OEM
    Item #: N82E16817339009
    Price: $26.99

    AMD Phenom II X4 940 Deneb 3.0GHz Socket AM2+ 125W Quad-Core Black Edition Processor Model HDZ940XCGIBOX - Retail
    Item #: N82E16819103471
    Price: $169.99

    Adaptec 2258100-R PCI-Express x8 SATA / SAS (Serial Attached SCSI) 5405 Kit Controller Card - Retail
    Item #: N82E16816103096

    MASSCOOL FD12025S1L3/4 120mm Case Fan - Retail
    Item #: N82E16835150070
    Price: $4.79 * 3 = $14.37

    WINTEC AMPX 2GB 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Desktop Memory Model 3AXT6400C5-2048 - Retail
    Item #: N82E16820161182
    Price: $30.99 * 4 = $123.96

    Intel PRO/1000 MT Gigabit NIC PWLA8490MT W1392
    Price: $19.00 x 2 = $38
    Ebay URL: http://cgi.ebay.com/Intel-PRO-1000-MT-Gigabit-NIC-...

    Shipping: $53.01

    Total: $1389.52
  • lynxinator - Sunday, October 11, 2009 - link

    Both the Adaptec 5405 and the 5805 work well with ESXi.

    Adaptec 2244100-R PCI Express SATA / SAS (Serial Attached SCSI) 5805 Kit Controller Card - Retail
    Item #: N82E16816103098
    Price: $569.99


    Adaptec 5805 Total: $1,526.51
  • joekraska - Sunday, October 11, 2009 - link

    Dell R710s with 72GB of RAM. Dual 10GbE, aggregating to Force10 10GbE switches at the top of the rack. 10GbE Cisco line cards to the core.

    Dell EqualLogic tiered storage cluster for the VMDK files in Tier 1. In Tier 2, a NetApp NFS volume with dedup turned on.

    Joe.

  • Czar - Saturday, October 10, 2009 - link

    We have a 6-host ESX setup on IBM LS21 blades, the ones with 2x dual-core AMD processors. They are a bit old, but we don't have any problems with regard to CPU performance. Since these are IBM blades without the memory expansion, we only have two 1 Gb NICs per host. It has not been much of a problem, though we have had to link a few VMs together so they are always on the same host.

    But yes, with VMs, CPU is not a limiting factor and memory is not a limiting factor. Network and disk I/O are the limiting factors, but those are both hardware related and have nothing to do with virtualization.

    If I had my way and were to design my work setup again, I would go with HP blades, same size, dual socket, just more cores now :) and more memory, and go with 4 NICs.

    Then expand this setup as needed.

    Oh, and I would seriously think about going with iSCSI after seeing Citrix's test setup (I think it was Citrix; it was on brianmadden.com).
  • Quarkhoernchen - Saturday, October 10, 2009 - link

    If I could start over from scratch:

    Servers:

    Supermicro SuperServers (Twin 2U Series with 1200 Watt redundant PS)
    http://www.supermicro.com/products/system/2U/6026/...

    Each node with the following configuration:
    2x Xeon X5550 / X5560 / X5570
    6x 8 GB Dual-Rank RDIMM DDR3-1066 (48 GB) (upgradable to 96 GB)
    1x Additional Intel PRO 1000 ET quad-port server adapter (a total of 6 NICs per node)

    I would definitely take a look at network adapters with the new Intel 82576 chipset (16 TX/RX queues, VMDq support). Next year Intel is coming out with the new 82599 chipset (128 TX/RX queues) for 10 Gb Ethernet.

    Storage:

    EqualLogic PS6000XV / PS6000E

    Network:

    Cisco Catalyst 3750G for user traffic
    Cisco Catalyst 3750-E for storage traffic
    Cisco Catalyst 3750G for VMotion / FTI traffic

    Virtual Network Configuration:

    2x 1 Gbit for user traffic (per node)
    2x 1 Gbit for storage traffic (per node)
    1x 1 Gbit for VMotion (per node)
    1x 1 Gbit for FTI (per node)

    If more bandwidth per node is required I would consider buying a normal 1/2U server with support for two or more additional network cards.

    regards, Simon
  • LeChuck - Friday, October 9, 2009 - link

    I vote for dual rack servers. They offer great performance and are probably the best bang for the buck. Start with getting at least two to form an ESX/vSphere resource cluster, better yet 3 or more, depending on what your need is. Ideally you have them in separate buildings/rooms to be prepared for an outage of one of them in case there is a major problem in the housing room. You've got to have the licenses for vMotion and Virtual Center Server and all that; that goes without saying. And I'm talking only VMware here... obviously. ;)
  • LoneWolf15 - Friday, October 9, 2009 - link

    Either dual-socket blade servers or dual-socket rack servers connected via fiber to attached storage devices in RAID. Saving rack space and power would be a priority, along with redundancy to prevent storage failure.
    The heaviest stuff (I'd call us small enterprise) we run would scream on a dual quad-core with 32GB of RAM; a dual hex-core would be overkill for some time to come.
  • LoneWolf15 - Friday, October 9, 2009 - link

    Actually, since we're talking virtualization, I guess bumping it to 64GB for expansion headroom and clustering some dual-socket quad-cores or hex-cores would keep us ahead of the curve for a while.

    Of course, that assumes I had the budget.
  • Brovane - Friday, October 9, 2009 - link

    We use 5 Dell R900s for our VMware ESX cluster. We just use 2 CPUs in each box and leave the other 2 sockets empty. We find we run out of memory before we run out of CPU. If we were buying new hardware now, we would probably buy R710 servers.
  • VooDooAddict - Friday, October 9, 2009 - link

    I'd go with "Large" Quad Socket Rack servers for PRODUCTION Virtual hosts due to the following reasons in order of importance.

    1. Memory cost. Quad-socket servers typically have more memory slots, enabling more RAM at lower densities and therefore lower costs.
    2. VMware licensing price per virtual host.
    3. Fewer large virtual hosts will reduce the overhead and quantity of vMotions for load balancing when compared with many blade hosts.

    I'd still like to keep the systems as dense as possible, 1U (rare for 4 socket) or 2U servers would still be ideal. 4U and 6U massive quad socket systems with room for internal disk arrays are unneeded as all the storage (besides the Virtual Host OS mirror) would be on a SAN.


    That said... DEV and TEST virtual hosts are better on dual socket 1U vhosts due to the added flexibility needed and lower costs.
  • Ninevah - Friday, October 16, 2009 - link

    Licensing for VMware is based on the number of processor sockets in the host, so you would pay the same price for 2 servers with 2 sockets as you would for 1 server with 4 sockets.
  • andrewaggb - Friday, October 9, 2009 - link

    The company I work for was just acquired by a larger company, and we just took over a 5400 square foot datacenter. I see an opportunity to recommend getting some real hardware :-).

    I'm curious what people who run larger VM clusters successfully would use for servers, licensing, storage, backups, switching, etc. for, say, 40-60 virtual machines (most lightly used), and what they think it would cost. Most of our VMs are running VoIP applications of some kind or low-traffic web servers.
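
    For the sizing question above, a back-of-the-envelope sketch may help frame the answer. It is only a sketch; every input (RAM per VM, RAM per host, overcommit ratio, HA headroom) is an assumption to be replaced with real numbers:

    # Back-of-the-envelope host sizing for a small VM cluster.
    # All inputs are illustrative assumptions, not quoted figures.
    import math

    vm_count = 60              # upper end of the 40-60 VM range
    avg_vram_gb = 4            # assumed average RAM per (lightly used) VM
    host_ram_gb = 72           # assumed RAM per dual-socket host
    ram_overcommit = 1.25      # assumed modest memory overcommit
    ha_spare_hosts = 1         # keep one host's worth of failover headroom

    usable_ram_per_host = host_ram_gb * ram_overcommit
    hosts_for_load = math.ceil(vm_count * avg_vram_gb / usable_ram_per_host)
    total_hosts = hosts_for_load + ha_spare_hosts

    print("Hosts needed for the load:", hosts_for_load)
    print("Hosts including HA headroom:", total_hosts)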
  • crafty79 - Friday, October 9, 2009 - link

    A few pizza boxes, workgroup-level NFS or iSCSI storage, and probably a 1-gigabit backbone should suffice.
  • KingGheedora - Friday, October 9, 2009 - link

    I'm a developer, so I don't handle VM builds or any of the hardware stuff. Our IT staff has always handled building machines and VMs for us, so I have no way of knowing if they did something wrong. But so far every virtual server I've used has had horrible, horrible disk performance. I would use VMs for things that barely use disk, like web servers or app servers, depending on what the apps do. But definitely not for databases, or for some of our custom apps that write to disk a lot.

    Anyway, I was wondering why the overhead is so high for disk performance on VMs? I suspect the VMs may not have been configured so that each VM has its own spindle(s). What else can be done? Would it be faster if disk space that exists outside the virtual machine (i.e. disk space that is not part of the file allocated for the VM on the host) were mounted from within the VM? Disk space on something fast like a SAN?
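
    One practical way to approach the question above is to measure where the gap actually is: run the same crude sequential-write test inside a VM and on a comparable physical box (or against an in-guest SAN/NFS mount). A minimal Python sketch; the file name and test size are arbitrary choices, and the number it prints says nothing about random I/O:

    # Crude sequential-write throughput test. Run it in both environments
    # and compare the results; it deliberately fsyncs so the page cache
    # doesn't flatter the numbers.
    import os
    import time

    TEST_FILE = "throughput_test.bin"
    BLOCK = b"\0" * (1 << 20)       # 1 MiB per write
    TOTAL_MB = 512                  # small enough not to disturb production

    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())        # force the data out to disk
    elapsed = time.time() - start

    print("Wrote %d MB in %.1f s -> %.1f MB/s" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
    os.remove(TEST_FILE)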
  • caliche - Monday, October 12, 2009 - link

    If you are stacking all the disk access on common storage devices in a basic setup, they all have to line up and share. A good RAID setup that groups systems onto separate storage sets/spindles to get more throughput is a minimum for any x86 virtual system setup with real disk traffic. For a budget setup, spreading VMs across more spindles would help; you may need more disk controllers as well. And of course good backups or central code storage is critical: more spindles and heat means more points of potential failure.

    Bigger setups put the disks on a good SAN or other shared storage. RAID/stripe on the array, let it use the central array cache, and let the storage buffering protocols smooth out the I/O issues. You can create other problems that way and have to manage a whole new layer of hardware, but at least then you can manage things centrally and adjust/expand if needed.

  • crafty79 - Friday, October 9, 2009 - link

    You hit the nail on the head. It was a design decision on their part not to give the VMs a lot of available IOPS.
  • TheCollective - Thursday, October 8, 2009 - link

    6x Dell r710's with Xeon 5570's and 144GB of RAM (max).
  • joekraska - Sunday, October 11, 2009 - link

    The 144GB of RAM would require 8GB DIMMs. The cost-delta calculation, even with the extra VMware licenses, would suggest that 12x Dell R710s with 72GB of RAM each would be better, wouldn't it? I haven't checked 8GB DIMM costs in a month or two...
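
    The cost-delta question above comes down to a few multiplications. A tiny Python sketch of the comparison; every price in it is a placeholder assumption (server, DIMM, and per-socket license costs), so plug in real quotes before drawing any conclusion:

    # Sketch of the cost-delta argument: 6 hosts on 8 GB DIMMs (144 GB each)
    # vs 12 hosts on 4 GB DIMMs (72 GB each). All prices are placeholders.
    def cluster_cost(hosts, dimms_per_host, dimm_price,
                     base_server_price, license_per_socket, sockets_per_host=2):
        per_host = (base_server_price
                    + dimms_per_host * dimm_price
                    + sockets_per_host * license_per_socket)
        return hosts * per_host

    big = cluster_cost(hosts=6, dimms_per_host=18, dimm_price=900,    # 18 x 8 GB = 144 GB
                       base_server_price=5000, license_per_socket=2800)
    small = cluster_cost(hosts=12, dimms_per_host=18, dimm_price=150, # 18 x 4 GB = 72 GB
                         base_server_price=5000, license_per_socket=2800)

    print("6 x 144 GB hosts:  $%d" % big)
    print("12 x 72 GB hosts:  $%d" % small)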
  • NewBlackDak - Tuesday, October 13, 2009 - link

    That totally depends on power requirements. We're in a remote datacenter, and the memory price is made up in 4 months of rent/power/cooling.

    We're doing something similar with Sun 4170s.
    After several configs we landed on a 1U server with as much RAM as you can stuff in it, plus a couple of 10Gb NICs, with USB, flash card, or SSD boot.
    All datastores are NFS-based, on NetApp storage with PAM cards.

  • xeopherith - Thursday, October 8, 2009 - link

    I recently virtualized 7 of our "servers" using ESXi.

    The reason I say it like that is that we have 4 Cisco phone servers on their own VLAN, one database server with very little information, our PDC that does almost everything, and lastly a proxy server with DansGuardian for internet filtering.

    I built this server in a 4U Chenbro rackmount case: a Tyan dual-socket Opteron motherboard with 16GB of RAM and a RAID 10 array housing 2TB of storage, with Opteron 2378s and an Adaptec 5405, if I remember correctly. There is room to upgrade the RAM further, but I don't think I'll be using that anytime soon. Right now I have 12GB committed, but hardly any of it is actually consumed.

    It is running great so far; I just plan to add some more networking via the secondary PCI Express slot.
  • Lord 666 - Tuesday, October 13, 2009 - link

    What Cisco phone apps are you running virtualized? Assuming CallManager and Unity? Currently I'm running MeetingPlace Express 2.1 virtualized without issue, have a second failover node of Unity on the project plan, and am debating CallManager and UCCX. Realistically, I'm going to wait until UCCX goes Linux next year.
  • xeopherith - Wednesday, October 14, 2009 - link

    I'm currently running CallManager and Emergency Responder, but Unity seemed to be slow and eventually stopped working, so I'm using a physical server there. I don't think it would be an issue if I created and installed the machine from scratch, but this is one thing that doesn't work well when converted from physical to virtual.
  • JohanAnandtech - Thursday, October 8, 2009 - link

    So you use ESXi for production? Do you manage your servers via RDP/SSH sessions? I can imagine that is still practical for a limited number of servers. How far would you go? (As in, how many servers before you would need management software?)
  • crafty79 - Friday, October 9, 2009 - link

    We use ESXi in production too. Of course, we have VirtualCenter managing it. Why wouldn't you use ESXi? 10x fewer patches, fully managed remotely through the API, and more resources available for VMs.
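
    On the "fully managed remotely through the API" point: the vSphere API is scriptable from just about anything. A purely illustrative Python sketch using the (later, open-source) pyvmomi bindings; the host name and credentials are placeholders:

    # List every VM and its power state via the vSphere API (pyvmomi).
    # Host, user, and password below are placeholders; the unverified SSL
    # context is for lab use only.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()
    si = SmartConnect(host="esxi01.example.local", user="root",
                      pwd="secret", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(vm.name, vm.runtime.powerState)
    finally:
        Disconnect(si)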
  • Ninevah - Friday, October 16, 2009 - link

    How do you do the licensing for those ESXi servers with VirtualCenter? I thought the centralized licensing system with vCenter required one license file containing all the ESX licenses in order for vCenter to manage all those servers.
  • xeopherith - Monday, October 12, 2009 - link

    I have one Linux server so far, and I have always managed all my machines through some kind of remote access, whether that means SSH, Terminal Services, whatever. I don't see why that would be a disadvantage.

    I'm working through the couple of problems I have run into so far, and I would say it has been pretty successful.

    The only really difficult part was that, except for two, all my machines were moved from physical to virtual. The couple of problems I have run into (network performance and disk performance) seem to be only because of that. However, changing some of the settings seemed to resolve them completely.
  • monkeyshambler - Thursday, October 8, 2009 - link

    Personally I'm not the greatest fan of virtualisation as yet, but going with the theme, it would have to be dual-socket rack servers, alternately kitted out with either SSDs (for database server virtualisation) or large-capacity disks (450-600GB 15k's) for general data serving.
    I think you still get far more value from your servers by separating them out into roles, e.g. database, web server, office server.
    Admittedly, the office servers are excellent candidates for virtualisation due to their often low usage.
    The key to many systems these days is the database, so making a custom server with solid state drives really pays off in transaction throughput.
    I'll buy into virtualisation more thoroughly when we can partition servers according to timeslice usage, so we can guarantee that they will always be able to deliver a certain level of performance, like mainframes used to provide.
  • 9nails - Wednesday, October 7, 2009 - link

    Blade servers can get me more CPUs per rack unit, but really, CPUs aren't the bottleneck. It's still disk and network. With that in mind, I can get more cards into a bunch of 2U rack servers than I can into a blade server chassis. And with more I/O and network to my servers, backups aren't as bottlenecked on the wire as they are with blades. My hero is not the servers but the 10 Gb FCoE cards.
  • Casper42 - Wednesday, October 7, 2009 - link

    Our upcoming design is based on the following from HP:
    c7000 Chassis
    BL490c G6 (Dual E5540 w/ 48GB initially)
    Flex-10 Virtual Connect

    For storage, we are still debating between recycling an existing NetApp vs. saving the money on the FC Virtual Connect & HBAs and spending it on some HP LeftHand iSCSI storage.


    A little birdie told me you will see an iSCSI blade from LeftHand early next year. Imagine an SB40c that doesn't connect to the server next door, but instead has 2 NICs.


    So when it comes to "Virtualization Building Blocks", you can mix and match BL490s and these new iSCSI Blades.
    Need more CPU/RAM, pop in a few more 490s.
    Need more Storage, pop in a few more LeftHand Storage Blades.
    With expansion up to 4 chassis in 1 VC domain, you can build a decent-sized VMware cluster by just mixing and matching these parts.
    Outgrow the iSCSI Blades? You can do an online migration of your iSCSI LUNs from the Blade storage to full blown P4000 Storage nodes and then add more 490s to the chassis in place of the old Storage Blades.

    This allows you to keep your iSCSI and vMotion traffic OFF the normal network (keeping your Network team happy) and still gives you anywhere from 10 to 80 Gbps of uplink connectivity to the rest of the network.

    Now if you really want to get crazy with the design, add in the HP MCS G2 (self-cooling rack) and you can drop a good-sized, very flexible environment into any room with enough power and only need a handful of fibre cables coming out of the cabinets.
  • mlambert - Thursday, October 8, 2009 - link

    Casper has the basic idea (C-class with BL490s), but I'd go with FC VC along with the 10Gbit VC, scratch the LH iSCSI, and go with NTAP NFS for every datastore except your transient-data swap VMFS (FC makes sense for those). Use the FC for SAN boot of all ESX hosts and any possible RDMs you might need. Toss in SMVI for backup/restore + remote DR.

    You could stay cheap and go with normal 16/32-port 4Gb Brocades to offset the Nexus 7000s with 10Gbit blades.

    FAS3170c's with the new 24-disk SAS shelves and all 450GB disks. Maybe a PAM card if you have a bunch of Oracle/SQL to virtualize with heavy read I/O requirements.

    That's about it. Really simple, easy to maintain, basic array-side cloning, 100% thin provisioned + deduplicated (besides the transient-data VMFS), with built-in remote-site DR as soon as you need it.
  • rbbot - Tuesday, October 13, 2009 - link

    I've heard that you have to have local storage on the blade in order to have a multipath connection to the SAN - if you use SAN boot you are stuck with a single path. Does anyone know if this is still true?
  • JohanAnandtech - Friday, October 9, 2009 - link

    "scratch the LH iSCSI and go with NTAP NFS"

    Why? Curious!
  • mlambert - Saturday, October 10, 2009 - link

    There are multiple whitepapers on the benefits of NFS over VMFS-provisioned LUNs, but some key points are:

    Thin VM provisioning by default
    Backup/restore of individual VMs rather than entire datastores

    Plus, when you tie that into NetApp storage, you have flexible volumes for your NFS datastores (meaning on-the-fly shrink and grow of any/all datastores), and WAFL gets to do the data de-duplication (meaning it understands the blocks and can do better single-instance storage). Add in SnapMirror for all your NFS datastores and you have a super easy to maintain, replicated, portable, de-duplicated solution for all your business continuity requirements.

    I kinda sound like an NTAP salesman here, but it really is the best for ESX.

    Also, for any of the FC purists: remember that the Oracle Austin datacenter runs almost all of its databases on NFS. Only a select few remain on FC, and those are all on DMX.
  • tomt4535 - Wednesday, October 7, 2009 - link

    Dual-socket blades for sure. We use HP c7000s here with BL460c servers. With the amount of RAM you can put in them these days and the ease of use of Virtual Connect, you can't go wrong.
  • ltfields - Wednesday, October 7, 2009 - link

    We use c7000s as well, with BL465c G6 boxes, and they scream on performance. The sweet spot for us right now is still 32GB of RAM, but I wouldn't be surprised if it's 64GB next year, or possibly 128GB. We don't use Virtual Connect yet, but we're considering it, probably when we make the jump to 10GbE switching...
  • HappyCracker - Wednesday, October 7, 2009 - link

    Our setup is quite similar. We use BL465c blades for most hosts, and have peppered in a few BL685c servers for applications with more CPU or memory requirements. Overall, management of blades is just better integrated than that of traditional racked servers. I think the modularity of the smaller blades allows a bit more flexibility across a virtualized server environment; a host outage doesn't affect as many guests.

    Instead of Virtual Connect, we jumped to Cisco 10GbE for the network stuff and the integrated 9124 switches for the FC traffic. It's knocked the FC port count way down, gotten rid of the rat's nest of cables, and made management easier overall.
  • Casper42 - Wednesday, October 7, 2009 - link

    @HappyCracker: How many Cisco 3120Xs do you have in each chassis? Right now Cisco only offers 1Gb to each blade and then a 10Gb uplink.

    So I would imagine you have at least 4 (2 layers x left + right modules) so you can separate your VM traffic from your console/vMotion/HA/etc. traffic. Or are you stuffing all that into a single pair of 1Gb NICs?

    What's your average VMware density on the 465? 10:1, 15:1?
  • HappyCracker - Thursday, October 8, 2009 - link

    It ranges from about 9:1 to 15:1; the 685c is run much tighter with 6:1 at its most loaded. DRS is used and guests are shuffled around by the hosts as necessary.

    As for the network aggregators, there are six in each chassis, then the two FC modules rounding out the config.
  • rcr - Wednesday, October 7, 2009 - link

    Is there a way to show the results of the polls without voting? I'm not an IT expert or anything like that, but it would be pretty interesting to see what those who are would prefer.
  • Voo - Thursday, October 8, 2009 - link

    I second that, and just clicked "Something Else," which shouldn't distort the outcome too much.
