• Intro
  • Case
  • Cooling
  • CPU, mobo
  • Disks, PSU
  • Final Results
  • Final Thoughts
  • The price tag
The First Ideas for a new system

By Harm Millaard, March 2012

This is the first part of a series of articles about a new system that needs to be built. It starts with the theory, and the subsequent articles (the panels across the top of the page) will show the actual progress of the build. This series may go on for months, depending on market developments, technological developments and budget restraints.

A sincere thank you to Bill Gehrke, Eric Bowen, Gary Bettan, Mitch Wood, Randall Leong and Todd Kopriva for helping me with this article with your very constructive thoughts and remarks. You have all been a big help.

Tackling our doubts

From time to time we all face the question: should we build a new system now or in the next couple of months? What should we aim at in terms of improvements? Do we need to save some more $$ and postpone our new system for a couple of months, to profit from that decision for years to come? Where can we find info about option A or B to help us decide what is best for us? Is it really worth it in terms of 'Bang-for-the-buck' (BFTB)? Have we taken all aspects of a new system into consideration and not forgotten anything? Should we wait for those newly announced products?

The ultimate purpose is to tell you how I went about building a new system, what considerations I had for certain choices, where my doubts were and how I tackled them. The limitation of the series is that it only applies to my situation and not by definition to yours, so you have to distill whatever applies to you and tweak it as necessary for your needs.
Background

I mainly use my system for editing video with the Adobe Master Collection CS5.5, as well as for website development with Dreamweaver for the PPBM5 Benchmark, its successor and some other sites. The main codecs I use vary from simple ones like DV and HDV up to RED and EPIC 4K and 5K material, but the majority is HDV, AVCHD, XDCAM-EX 4:2:2 and Canon MXF 4:2:2.

My current system is getting rather dated, using only an i7-920 at 3.7 GHz with 24 GB RAM, a GTX 480 video card and a rather extensive disk I/O system. It still runs without problems and is sufficiently fast for my current needs, so there is no urgent need for a new system, but it is better to plan ahead than to get caught in a bind when a major component fails and you really need a new system the next day.

External influences
  • CPU: the advent of new CPU's like Sandy Bridge, Sandy Bridge-E (i7-3xxx), Ivy Bridge, E5 Xeons and ???
  • GPU: the advent of Kepler.
  • Platform: 1155 and 2011 platforms.
  • HDD prices: The ongoing high prices of conventional disks.
  • PCIe-3.0: The first indications of cards to support PCIe-3.0.
  • Thunderbolt & PCIe-3.0: Announced by Intel, but without a timetable.
  • The advent of Windows 8.

Interesting developments, to say the least, but often without clear delivery dates to accompany the announcements; and even when a specific date is announced, the products are often in very short supply and at premium prices. So, while interesting, keep this in mind when planning a new system.
The first ideas

Having read a bit on new developments and seeing that in practice my ageing system, while still a decent performer, was being overtaken by more and more modern systems, I started to look at possible components for a new system. But, I'm lucky. I can take my time to figure out exactly what I want to have, can check my choice of components, learn from the experiences of others, avoid common mistakes, and I can sleep on a difficult decision and postpone a bit if necessary. Whatever I will ultimately end up with, it has to be a big step forward from where I am now, otherwise it is a very unwise investment.

While the primary emphasis is on Premiere Pro, there are a lot of similarities for After Effects users. After Effects has very similar (but slightly divergent) hardware requirements. Considerations that are specific to After Effects will be noted in the topics below, such as memory and video card.

About the CPU
When Sandy Bridge was first released, it was not a feasible upgrade from an i7-920 (OC) @ 3.7 GHz. Sure, you can overclock a Sandy Bridge further, but the 1155 platform with its limited PCIe lanes was a downgrade for me, forcing the video card to run in PCIe x8 mode and causing a 10-15% performance penalty. At best it could match my system, but not surpass it. What about Ivy Bridge then? No, still the same PCIe lane limitations. So, either an i7-39xx or even a dual Xeon SB-EP. Hang on, that Xeon E5-2690 goes for over $2K apiece and the rest of the components are equally expensive, so let's forget about that.

Initial choice of CPU: i7-39xx with the intention to overclock to 4.6 - 4.8 GHz

However, the current 3930K and 3960X have two cores disabled, as well as part of the L3 cache. I really hope that Intel will announce a 3980X with all 8 cores enabled as well as the full 20 MB of L3 cache. The Xeon E5-2690 has all cores enabled and the full complement of L3, so why not for the i7-39xx range? Answering my own hopes: this is not something to expect in the short term, because Intel is very strict about the 130W TDP limit, so do not expect such an 8-core part until the Ivy Bridge version of these chips is available. Still, time to wait and see.
About the motherboard
I am considering either an Asus or Gigabyte X79 motherboard with 8 DIMM sockets, a FireWire port for older HDV/DV capture and otherwise up-to-date connectivity. I'll figure that one out later, but it looks like the choice is limited to the Asus Sabertooth X79, the Asus P9X79 WS or the Gigabyte GA-X79-UD5. I like that the Asus P9X79 WS has two 1 Gbps NIC's and more SATA-600 and USB3 connections. Two NIC's are advantageous when using a network and/or a NAS.

Initial preference of mobo: Asus P9X79 WS

Special consideration must be given to the layout and positioning of the PCIe-3.0 slots in light of the video card chosen (see later under video card), because of the three-slot width required by high-end video cards.

About memory

At least 8 x 4 GB sticks, but possibly 8 x 8 GB sticks, depending on price. The speed is secondary for the moment, as long as it is 1600+ and the RAM is low voltage (1.35V). Much will depend on price and availability. Maybe I can use the six 1600 4 GB sticks I have and only buy two more, although less than optimal. If anything, my current 1.5V sticks may cause problems when overclocking. I'll have to figure that one out.

Why consider even 64 GB when 32 seems more than enough? The thought is that the extra price of the additional RAM is small, but the potential benefits are great. One can use somewhere between 24 and 32 GB as RAM cache to improve performance. Is it worth it? As so often, it depends on the codecs you use. If you use heavily compressed codecs like DSLR footage, AVCHD or RED, then yes, it is worthwhile; if you use simple codecs like HDV or DV, no, it is not.

There is another reason to go for 64 GB of memory: if you use After Effects quite regularly. After Effects has very similar hardware requirements, but it will happily gobble up the additional RAM when going from 32 GB to 64 GB, especially with the new Global Performance Cache.

About the video card

We all know that hardware MPE makes all the difference and that it can only be used with certain nVidia cards. Rendering, scaling on export, blending and blurring can lead to impressive performance gains over software MPE, and the number of CUDA cores is decisive in that respect. I will wait for further news about the Kepler range, but from the leaked specs the 680 sure looks nice and the 690 even better, albeit at a price. Is it worth it? I don't know yet; I will decide that later. Anyway, it makes no sense to get a 5xx card; in that case it is much cheaper to port the 480 I have to the new system. There is no need for a second Kepler card to drive a third monitor, because that capability is one of the strong points of the new Kepler range. Note that official support of Kepler video cards may be quite some time in the future, but I base my choice on the use of the 'hack', as I've done with my current 480.

Furthermore, the fact that After Effects can use multiple GPUs for CUDA computation (for the ray-traced 3D renderer) makes using some GPU setups sensible that may have been a waste for Premiere Pro.

Initial idea for the video card: Gainward GTX 680 Phantom 4 GB or a GTX 685/690 with 4 GB VRAM.

About disk I/O & Raid

I want to go for a better disk I/O setup than I currently have, but just look at current prices of HDD's. It will cost a fortune if I want to extend my current 16 x 1 TB disks or even replace them with faster SATA-600 disks with larger caches. One thing to note is that I suspect one of the disks in my raid30 is starting to give trouble, causing time-out errors. It has not died yet, but it may in the near future. One thing is for sure: while I can port my Areca ARC-1680iX-12 controller to a new system for the time being, I ultimately want to get the new PCIe-3.0 controller, possibly called the Areca ARC-2082iX-24, with its 24 + 4 ports. Of course with at least 4 GB of cache memory and a battery backup module (BBM).

Why a 24 port raid controller, you may ask. First of all, the price difference between a 12 port and a 24 port model is very small and you do not want to be in a situation where the number of ports is a limitation. Second, if HDD prices were not so high, and they may come down in the next couple of months, then the following setup may be worth considering, provided raid support for the TRIM function of SSD's materializes:
  • C: OS & programs: 4 x SSD in Raid10. One could always start with a single SSD.
  • D: Pagefile, media cache, renders: 6 x HDD in Raid0 using 2 internal ports plus the 4 external ports.
  • E: Media, exports and all other stuff: 22 x HDD in Raid30, including 2 hot-spares. 2 x (10 R3 + 1 HS).

Why would anybody in his right mind want so many disks? That makes no sense at all. Actually it does. Why? Because disks are still the main bottleneck in any system; the CPU, GPU and memory are way faster. If you only edit easy codecs like DV or HDV, the need for speed is of course less than when editing RED 4K or EPIC 5K material. If you edit multiple tracks, your disk speed requirements go up; if you use multicam, they go up; if you use 4:2:2 or even 4:4:4 material, they go up. If the nature of your video is fast moving, with lots of short clips, your speed requirements go up. If you use AE compositions exported as uncompressed, your requirements go up because of the sheer size. Redundancy costs extra disks, but buys safety.
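To make that 'need for speed' a bit more concrete, here is a minimal sketch of how one could estimate the sustained bandwidth a timeline demands from the disk setup. The bitrates and the safety factor below are assumed ballpark figures for illustration only; substitute the real numbers for your own codecs.

    # Rough estimate of the sustained disk bandwidth needed for playback.
    # All bitrates are assumed ballpark figures in MB/s per stream.
    ASSUMED_MB_S_PER_STREAM = {
        "DV": 3.6,
        "HDV": 3.2,
        "AVCHD": 3.0,
        "XDCAM 4:2:2 (50 Mb/s)": 6.3,
        "RED 4K": 38.0,
    }

    def required_bandwidth(codec, tracks, multicam_angles=1, headroom=2.0):
        """Estimated sustained read requirement in MB/s.

        tracks          -- simultaneous video tracks on the timeline
        multicam_angles -- angles that must stream at once in a multicam edit
        headroom        -- safety factor for scrubbing, short clips and exports
        """
        return ASSUMED_MB_S_PER_STREAM[codec] * tracks * multicam_angles * headroom

    # A 4-angle 50 Mb/s multicam edit vs. a single DV track:
    print(round(required_bandwidth("XDCAM 4:2:2 (50 Mb/s)", tracks=1, multicam_angles=4)), "MB/s")
    print(round(required_bandwidth("DV", tracks=1)), "MB/s")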

Initial preference of raid controller: Areca ARC-2082iX-24 (PCIe-3.0), expected end of Q2/2012?

Given the high prices of HDD's I have to postpone the choice and number of disks and the definitive raid configurations to a later date. The controller is not out yet, so patience is the word, but realistically it may boil down to 2 x (7 R3 + 1 HS) plus 4 x R0, for a total of 20 HDD's. Well, 16 new 2 TB HDD's are still less costly than a single Xeon E5-2690 CPU, and the other disks can be ported from my existing system. That leaves me room to grow when the need arises.

About the case

Given the ambitions with the disk setup, my current Lian-Li PC-A77 with 17 disks and 2 BR burners internally will be too small. I need something that will house around 28 HDD's, 2 BR burners, 4 SSD's and a multicard reader for ingest from CF and SD cards.

With the huge number of disks in such a system, I want to have hot-swappable bays, so it is easy to exchange failed disks. Normal big towers will not do because they are simply not big enough, so this means further investigation. But it also raises another question: cooling. Air, water or even more extreme, nitrogen? I'll come back to that later.

I found a case that easily meets my requirements, albeit at a price, and it can be tailored to my specific needs. Most importantly, it will fit under my desk, even on casters, and casters are a necessity for such a large case; it beats working in the cramped space under the desk for maintenance, upgrades or disk replacement. Enlarging the case with a pedestal or going for a larger model would not fit, so those options were out.

When I tell you the details, your initial reaction will be: 'He's crazy to even consider such a case.' Well, that may be the case in this case, but OTOH a case like this is like a tripod system: it can easily outlive several generations of systems, giving an expected life span of more than 10 years. You never need to get another case. The same goes for a good tripod: it can easily run into the $3K to $5K+ range, but it is a 'life-time' investment.

Almost certain case choice: CaseLabs MAGNUM TH10

I want to have the capability to use up to 4 SSD's, so this Addonics SSD bay seems to fit the bill, taking only a single 5.25" bay. For the hot-swappable HDD cages I will opt for 3 Chenbro hot-swappable bay units, since the TH10 does not have room for more.

Why the Chenbro drive cage and not the Supermicro CSE-M35TQ? They are similar, 5 hot-swappable disks in a 3-bay housing, and have the same functionality. The things that steered me towards the Chenbro are that it includes two USB connections on the front of the cage and that the drive cages on the PSU side are compatible with Chenbro backplanes, so I have only one kind of backplane in this system. Last, there are rumors that the fan included with the Supermicro is pretty loud.

That leaves me with a potential configuration like this:

Magnum TH10

About cooling

With the case and drive cages almost a certainty, the remaining question is cooling. The case does not include any cooling, but it has huge amounts of space for mounting a number of radiators, although the drive cages will limit my options.

My initial thought is that a system with 28 HDD's will make a lot of noise anyway: each internal drive cage has a 120 mm fan and the hot-swappable Chenbro's have an 80 mm fan at the back, so that is 7 fans for the drives alone. Water cooling options for Kepler are currently non-existent, so the benefit of water cooling must, at least for the time being, come from the CPU cooling. It may result in a very slight advantage over air cooling, almost negligible, but it requires a tedious installation, is much more expensive and needs more maintenance, so it does not look appealing to me. My initial feeling is KISS: just a Noctua NH-D14 in push-pull configuration. But that also means that for all those possible case fans I have to budget a large number of Scythe Slip Stream SY or Noctua NF fans. A separate article is planned on thermal design, where we will examine the pro's and con's of positive and negative pressure and their consequences.

Initial preference CPU cooler: Noctua NH-D14 CPU cooler

Initial preference case fans: Scythe Slip Stream SY1225SL 12L (very quiet) or Noctua NF-P12 (because of a high CFM/dB ratio)

About the PSU

I have checked this possible system, making several assumptions, and ended up with a required PSU of around 1500W for all those disks, overclocking, video card, fans etc. This is based on the advertised 195W consumption of the 680 Kepler card, which is the best information on power requirements I can find at the moment.
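As a rough sanity check on that 1500W figure, here is a minimal sketch of the kind of power budget involved. Only the 195W for the GTX 680 comes from the advertised figure; every other wattage below is an assumption for illustration, and real planning should use manufacturer figures plus margin for overclocking.

    # Very rough power budget sketch. Only the 195W for the GTX 680 is an
    # advertised figure; all other wattages are assumptions.
    assumed_load_watts = {
        "CPU (i7-39xx, overclocked)": 250,
        "GTX 680 video card": 195,
        "Motherboard, RAM, raid controller": 150,
        "28 x 3.5in HDD (~10W each at load)": 280,
        "4 x SSD": 10,
        "2 x BD-R burner": 50,
        "Fans and USB devices": 75,
    }

    total = sum(assumed_load_watts.values())   # ~1000W at load
    margin = 1.5                               # headroom so the PSU loafs along
    print(f"Estimated load: {total} W, suggested PSU: ~{total * margin:.0f} W")
    # Note: spin-up draws far more per disk than steady running, which is
    # why staggered spin-up (see below) matters so much with this many drives.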

Keep in mind that 1500W may seem huge and look like a serious burden for your electricity bill, but a PSU only draws what the system needs. A 1500W PSU will not use more energy than a 500W PSU in a small system if the power is not needed; it will, however, run cooler and more stable. It is like a Volkswagen Beetle (500W) trying to keep up behind the safety car in a Formula 1 race: it has to run at full power just to keep up, while the Formula 1 car (1500W) only has to worry about its tires getting too cold.
My initial thoughts are to install two PSU's in this case: a Corsair or Seasonic 80 Plus Gold PSU of around 500W to power the video card, the BR burners and some of the case fans, and another PSU of around 1000W to power the system and hard disks.

Whichever way you turn it, staggered spin-up of the disks is a necessity.

Initial thoughts: Corsair or Seasonic 500W plus 1000W PSU's

On second thought, serious doubts are coming up about the two PSU's. Some PSU's refuse to start if there is no motherboard attached. This may be a serious issue and limit my choices. In the past I have noticed this with a CoolerMaster PSU that refused to start when it was not attached to a mobo. It may mean I have to short two connections to circumvent this. I will investigate further and keep you apprised.
Conclusion of Part 1
The major components have been identified, some preferences indicated, but it is still very preliminary.

Next step: Order the case with all required case components and start working on part 2.

A relative novice at video editing may think, after reading all this: 'It is way over my head and the writer is only focused on disks and the case. I have been editing with a two disk configuration for years without major problems, so this is utterly exorbitant and from another planet.'

That is (partly) correct, but you have to consider where I am coming from and where I want to go. I have a decent system with a PPBM5 score of 157 seconds. The best i7-39xx system currently holds a score of 133 seconds. That difference does not justify the extra cost of a new CPU, motherboard and extra RAM for such a relatively small performance increase, especially when you realize that the 133-second score was achieved at an overclock of 4.8 GHz versus my 157-second score at an overclock of only 3.7 GHz.
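To put those two PPBM5 scores in perspective (lower is better), a quick calculation of the relative gain; the two times come from the paragraph above, nothing else is assumed.

    # PPBM5 total times in seconds; lower is better.
    current_system = 157.0   # i7-920 @ 3.7 GHz
    best_i7_39xx = 133.0     # overclocked to 4.8 GHz

    speedup = current_system / best_i7_39xx                        # how many times faster
    time_saved = (current_system - best_i7_39xx) / current_system  # fraction of waiting removed

    print(f"Speedup: {speedup:.2f}x, time saved: {time_saved:.0%}")
    # Roughly 1.18x, or about 15% less waiting, for a whole new platform.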

In my case, the upgrade to a 2011 platform and an i7-3930K will make a difference, but not enough to justify the cost, unless other components are upgraded at the same time.

The Case and Drive Cages

This is the second part of a series of articles about a new system that needs to be built. The Intro started with the theory, and the subsequent articles (panels) show the actual progress of the build. This series may go on for months, depending on market developments, technological developments and budget restraints.

Doubts, doubts and more doubts

CS6 is out. Kepler is out. Great news so far, but...

Kepler video cards are not yet supported for the AE ray-traced 3D rendering, because the required functionality is not included in the current driver library. Adobe is working on that, but it is unclear how long it will take. Another issue is that the much touted Maximus solution, where one uses both a video card and a Tesla C2075 card to improve video performance, is limited to Quadro cards only. It simply will not work with a GTX card like the 570/580/680.

Simply put, a Tesla C2075 card with only 448 CUDA cores to perform the hardware acceleration, at an extravagant price and on top of a Quadro card, does not make much sense to me yet. I'm waiting for Adobe to prove me wrong here, but with the current price tag for both the Tesla and the Quadro, I think I will stay with a GTX card. It just does not seem worth almost 7 times the price of a top-of-the-line Kepler 680, even if I have to wait for Adobe to get the ray-traced 3D rendering issue solved. So far all Quadro cards have only excelled in their price, not in their performance.

Memory issues

From many stories, the memory controller on the i7-3930K, my initial choice of CPU, seems rather finicky, especially when overclocking. It seems to be very important to only use 1.35V sticks, preferably without Micron chips on them, but with the current state of affairs the choice seems to be limited to Samsung 4 GB sticks, and that sucks...

With 8 sticks that would limit me from the outset to only 32 GB of memory, and I intend to use 64 GB, because AE can very nicely use all that extra memory. Even though PR does not need as much memory, or use it as effectively as AE, it can still be put to good use in the form of RAM cache.

At this moment the only DDR3-1600 64 GB kit I can get here is the G.Skill Ripjaws Z Series, which is qualified for the Asus P9X79 WS mobo, even though it is 1.5V.

Start with the MAGNUM TH10

I have finally received the case I was waiting for. Let it be explicitly stated that the wait was my own fault: I kept doubting what to order, but once I had ordered, Jim was extremely fast in delivering. Any delay was my fault, not CaseLabs'. It is huge, and I mean really huge. It can easily house a couple of mid towers and then some. For instance, this case is almost three times bigger than a CoolerMaster HAF 912 Plus and more than twice as big as my Lian-Li PC-A77 case.

The initial doubt with this case was what exactly to order. The transport costs are sizable, there is no European dealer and you have to order from California, so you must make sure you include all the components you will need in one order.

The case comes with four MAC-125 mounts to accommodate the installation of two BR burners, a multi-card reader and a Chieftec 4 x 2.5" drive bay. I also wanted a caster kit, so moving the heavy case around is a bit easier. Even though it increases the height of the case somewhat (67 mm), it still fits under my desk, but I have to check the airflow from the top fans to the outside because of the intended cooling.

In comparison to my Lian-Li PC-A77 case, which is considered a big tower by everybody, this case is huge. Keep in mind that the Lian-Li case has 14 3.5" HDD's at the front plus two 5.25" BD-R burners, so it is not very small. The Magnum is way bigger, but that explains the name; it hardly fits under my desk. Notice the four 120mm fan openings behind the mesh on the PSU side.

Next I needed three MAC-127 5.25" Device Mounts for the Chenbro drive cages and at least two Quad 120mm Fan Mounts, to install four 120mm fans on the mobo side for air intake and four on the PSU side. Here you see the drive cages installed. Also take note of the many options for proper cable management, with the PSU('s) on the other side of the case.

But that is not all. On the other side of the case, the PSU side, there are cage assy's for 16 more 3.5" disks. I needed three standard HDD Cage Assy's (MAC-101) for the installation of 12 HDD's on the PSU side of the case. Initially I considered four of these, but I figured that with 12 HDD's on the PSU side and 15 HDD's on the mobo side, making for 27 3.5" HDD's in total, plus 4 x 2.5" disks, that will be enough to start with. Luckily Jim shipped my case with four of these MAC-101's. With some mods, I can easily add a couple of cage assy's to increase the number of 3.5" disks to around 40 or more if I feel so inclined.

All these cage assy's have space for a 120mm intake fan. In addition I needed a MAC-123 PSU Support Bracket. Apart from exhaust fans all that needs to be installed on this side is one (or two) PSU's, but I think that one will suffice in my case. The support bracket is already installed.

The hinged doors make for very easy access to the case.

Another very nice feature of this case is the so-called Tech Station: you loosen four thumb screws and slide the complete motherboard tray out of the case for easier installation of the motherboard, CPU, cooler, RAM and cards; when finished, you slide it back in and fasten the thumb screws again.

Whether I will really need 4 fans on each side of the case or whether a smaller number will suffice must be determined at a later stage, when I discuss cooling. Here you can see the quad fan mount installed on the mobo side of the case. Not shown here, but there are also four 120mm fan holes in the bottom of the case on the mobo side, as well as four on the PSU side.

The top fans will be used for air exhaust. With room for 8 120mm fans at the top and 8 at the bottom, a real tornado may be the result. It may be deafening, but that remains to be seen when we get into cooling this monster.

I have seen many cases over the years, but this is absolutely the best case I have ever seen, by far. The build quality, the eye for detail, the ease of use, the huge amount of space, all put this case at the absolute top. NASA would endorse this quality in the blink of an eye. But there is more to say about CaseLabs. Not only is the case superb, but both in pre-sales and after-sales Jim Keating does a fabulous job helping you get the exact case you need: prompt, to-the-point replies to questions, fast delivery, and very good packaging and protection of the case during shipment.

The rest follows in the next weeks. This is work in progress.

The drive cages

The Chenbro cages have arrived. They give me room for 15 hot-swappable HDD's on the mobo side of the case and, in combination with the HDD Cage Assy's mentioned above, that should do for the time being. But I have to figure out whether the included fan on the back of each cage is any good in terms of noise, or whether it is better to replace them. In the next article I will consider cooling and tell you whether the fans on these Chenbro drive cages are any good and whether they need to be reversed for proper airflow. The standard fans on these cages are Y.S. Tech FD128032HB 8cm fans, which are set up to expel air from the case to the outside, with a CFM rating of 46.9 but a huge 45 dBA sound level.

As you can see from the picture above, the three Chenbro drive cages have been installed, but I'm still waiting for the rest of the components. Meanwhile the Icy Dock hot swap 4-in-1 drive cage has also arrived and has been installed.

The major disappointment at this moment is that drive prices have not declined yet to pre-flooding levels, so with at least 15 hot-swappable new drives at one side of the case, and some of my existing drives on the PSU side, it will seriously add up.

The Thermal Design and Fan Choices
General remarks on cooling

Before going into the fan choices, there are a number of design issues to be considered, and these depend on the components to be used. Several things are certain, even at this moment in the build: a CPU with a third-party CPU cooler, whether that is a Noctua or another brand, an nVidia GTX 68x video card, which is a 2.5 or even 3 slot card, and an Areca raid controller card. For audio the on-board capabilities will be used, so no extra sound card. In my workflow and with my material there is no need for an AJA, Black Magic or Matrox card, especially with the four-monitor support on the Kepler cards. So this quite simply means only a video card and a raid controller in the system.

This is very important to know, because with the video card blowing its hot air out the back and a number of open PCIe slots at the back as well, you need at least balanced air pressure or positive air pressure, but never negative air pressure. You may wonder what I mean by that. Balanced air pressure is an airflow where the amount of air coming into the case is about equal to the amount of air blown out of the case. Negative air pressure means more air is blown out of the case than the intake fans deliver, and conversely, positive air pressure is where the intake fans push more air into the case than the exhaust fans blow out.

With only two PCIe cards installed in the system it is necessary to use at least balanced air pressure, or better, positive air pressure, because then the cooling of the video card is not hampered by air coming in through the open slots, as happens with negative air pressure.

Why not water cooling

Water cooling is quiet, and the lower the noise from the PC the better. Well, I'm not going to dispute that, it is a sound(less) idea, but there are some drawbacks to water cooling. The major drawback, often overlooked, is that the radiators need fans to dissipate the heat, and fans are the noise creators in the first place. Let's try to put the pro's and con's of water cooling versus air in perspective:

Pro's of water cooling:

  • It saves the sound of at least one fan on the CPU cooler, possibly two in a push-pull configuration.
  • It saves the sound of the video card fan, if there is a cooling block available for the video card in use.

Con's of water cooling:

  • It is far more expensive than air cooling and not much better.
  • It still requires fans for the radiator(s), negating the advantage of no CPU or video fan.
  • It adds the sound of the pump.
  • It is a difficult and time consuming job to install and requires more maintenance than air cooling.
  • It requires a large chassis, or external housing and the clutter that brings.

The major noisemakers in a system are the mechanical parts: fans and conventional disks. With increasing numbers of disks the benefits of water cooling diminish, because the disks all make noise and there is no such thing as a cooling block for disks, so fans are still required there.

Balanced or positive air pressure

With a closed video card that exhausts hot air at the back and a number of empty PCIe slots, I want at least balanced air pressure, but prefer positive air pressure. That means I have to calculate the amount of cool air going into the case (measured in CFM, cubic feet per minute) and the amount the exhaust fans can displace, and take into consideration the intake filters that will prevent dust and debris from accumulating on the fans and thus reduce the intake airflow.

The exhaust airflow on the mobo side is 3 x 46.9 = 140.7 CFM out the front from the Chenbro disk cages, plus the exhaust airflow from the video card (I don't know yet how much that is), estimated at another 40 to 45 CFM, so in total around 180 to 185 CFM is being pushed out of the case. To that we need to add the airflow from the CPU cooler, the Noctua NH-D14 Special Edition in push-pull configuration, and the exhaust fan on the back of the case, which adds about 75 CFM, bringing the total up to around 260 CFM.

Looking at these requirements and the fans available, I ended up with 5 fans on the motherboard side of the case: 4 for intake at 75 CFM each, giving a total of 300 CFM pushed into the case, and one exhaust fan on the back of the case in addition to the ones mentioned already. That gives a nice positive air pressure on this side of the case. I again had to change my initial selection and have opted for the Zalman ZM-F3BL fans, because of their high CFM rating and low noise level. I will not use the LEDs, but that is what they come with; I hate a badly designed Christmas tree under my desk.

Things are of course a lot easier with the Magnum TH10 than with a standard case, because of the divided layout: a mobo side that contains only the mobo, CPU, cards and disks, and a PSU side that contains only the PSU and disks, so the heat sources are nicely distributed.

For the PSU side I opted for 4 Noctua NF-P12 intake fans and 2 exhaust fans, giving me 4 x 54 = 216 CFM intake and 108 CFM exhaust airflow, plus the airflow from the PSU, so again a positive air pressure. The reason is simply to keep the noise level down, and the Noctua felt like the best option for that.
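Summarizing the two sides in a minimal sketch: the fan and cage CFM figures come from the paragraphs above, the video card exhaust is the rough 45 CFM estimate, and the PSU's own fan and filter losses are left out for simplicity.

    # Intake/exhaust balance per side of the case, based on the CFM figures above.
    def pressure(intake_cfm, exhaust_cfm):
        net = sum(intake_cfm) - sum(exhaust_cfm)
        kind = "positive" if net > 0 else "negative" if net < 0 else "balanced"
        return f"{kind} ({net:+.1f} CFM)"

    # Motherboard side
    mobo_intake = [75, 75, 75, 75]             # 4 x Zalman ZM-F3BL
    mobo_exhaust = [46.9, 46.9, 46.9,          # 3 x Chenbro cage fans
                    45,                        # video card (rough estimate)
                    75]                        # CPU cooler + rear exhaust (estimate)
    print("Mobo side:", pressure(mobo_intake, mobo_exhaust))

    # PSU side
    psu_intake = [54, 54, 54, 54]              # 4 x Noctua NF-P12
    psu_exhaust = [54, 54]                     # 2 x Noctua NF-P12
    print("PSU side:", pressure(psu_intake, psu_exhaust))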

Filters on the intake fans

Keep in mind that you will need to filter all intake fans and clean those filters regularly. If filters get dirty, the fans have to work harder, the bearings suffer, they create more noise and the cooling performance goes down. That is the reason I'm looking into DemciFlex filters for all the intake fans. However, the Magnum case is not magnetic, so I have to request the custom models with the Xtra Magnet.

Here the Zalman fans are installed. From an aesthetic point of view, black silicone pins would have been nicer than these white ones, but a black marker can work miracles, and at least I now know there is no noise from vibrations. Keep in mind that the fan frame, the four supports that hold the fan together, is always on the exhaust side of the fan.

Another way to find the correct intake or exhaust direction of a fan is to look for the air-flow direction arrows on the fan housing.

A nice thing to notice about this case is that the position of the fan mount can be adjusted very easily. You can place it higher or lower in increments of 12mm, to cool exactly where you need it most. There is a margin of around 84mm available to position it higher or lower; beyond that, the mesh opening in the hinged door would obstruct the airflow.

DemciFlex has been asked to make custom intake filters for these side mounted fans. I just sent them the exact dimensions and the same for the front intake filter, which is a bit more complicated because of the panel in the middle. Meanwhile the Noctua fans have arrived and have been installed.

Here are the drawings DemciFlex made for me for the custom filters, both the side-intake filters and the front filters. When I receive them I will add pictures.

The DemciFlex custom made filters have arrived. This is what it looks like on the front side, from the inside of the front cover. It is very fine mesh, easy to remove from the magnetic fastener for cleaning.

For the front fans I put the dust filter on the inside of the front cover, because the fans are in fixed positions, but for the side fans I put the dust filter right over the fans, to allow for placing the fans in a different position and still keeping the air filter in front of the intake fans.

CPU cooling and cooling paste

The choice of the CPU cooler is most likely the Noctua NH-D14 SE2011. To mount this cooler on the CPU, I have opted for cooling paste from Coollaboratory, Liquid Ultra. There is one caveat with this cooling paste: it corrodes aluminium, so you had better make sure that the flat surface of the cooler is completely copper. The cooling capabilities of this specific paste are extraordinary and way better than, for instance, OCZ Freeze Extreme.

Warning: if you are considering a more affordable system based on the Ivy Bridge processor, be warned that the cooling paste Intel uses on Ivy Bridge is no good, especially when overclocking. At stock speed the i7-3770K runs 11 degrees centigrade hotter than with Liquid Ultra cooling paste, and at 4.6 GHz even 20 degrees hotter. However, if you change the cooling paste, you also void the warranty.

The CPU, Mobo, Memory, Video and Raid Controller

It may be in vain, but while the build progresses and I decide on more and more components, I am hoping that Intel will come out with an Ivy Bridge-E CPU, based on the 22nm production process, that will have all 8 cores and the full complement of 20 MB L3 cache enabled and still remain within the Intel TDP limits. In the meantime I am just biting my nails. Well, Rome wasn't built in one day either.

The second reason I have for postponing this section is that I hope Areca will shortly announce a new range of raid cards that support PCIe-3.0 instead of the current 2.0 versions. The latest news from Areca mentions that the new SAS 6Gb RAID card with PCIe-3.0 interface should be available around Q3 this year. Whether that means July 1 or September 30, I do not know.

Motherboard & CPU Cooler

Luckily motherboards do not change often, so it is a safe bet to get the Asus P9X79 WS, even if Ivy Bridge-E CPU's are announced at a later moment. But that also means that if I want to install that motherboard, I will have to decide on the CPU cooler now, because I need to install the backplate together with the mobo. The CPU and the cooler itself can be installed at a later date. So I will order the Noctua NH-D14 SE2011 one of these days, together with the mobo.

The main reason for ordering these now is that prices of motherboards and CPU coolers are pretty stable, but not so for the CPU, memory and video card. Installing the mobo now also gives me more information to decide on how cable management should be handled.

You will see in the Price tag panel when I have placed the order.

Memory

I have just read this statement from Samsung:

“Samsung will also aggressively move to establish the premium memory market for advanced applications including enterprise server systems and maintain the competitive edge for Samsung Green Memory products, while working on providing 20 nanometer (nm) class* based DDR4 DRAM in the future.” Also see Samsung DDR4 Memory Technology.

I don't know how far in the future, but with registered modules in 8 GB and 16 GB sizes being delivered to major CPU and controller makers, it makes sense to wait a bit and hope for the future not being too far off.

The Disks, PSU and Cable Management
The PSU, Power Supply Unit

The PSU choice was relatively simple. Requirements were Gold+ rated, fully modular and top quality. I used the eXtreme Power Supply Calculator Pro to figure out what kind of wattage I would need, and specifically looked at the 12V rail to give me the amperage I need. Well, it came up with this overview:

So it was clear I needed a 1200W PSU; there is simply nothing bigger, even though this calculation is based on only 20 HDD's and not the up to 31 I have room for. It takes into consideration that the two BD-R burners would not be used at the same time as full CPU/GPU power, and from what I found available the choice came down to the Corsair Professional Gold AX1200 model.
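Since the 12V rail is what feeds the CPU, GPU and disk motors, the conversion from watts to amps is what really matters. A minimal sketch, where the 1000W 12V load is an assumed example rather than the calculator's actual output, and the ~100A rating is roughly what Corsair specifies for the AX1200's single 12V rail:

    # Convert an estimated 12V-rail load to the amperage the PSU must supply.
    assumed_12v_load_watts = 1000          # assumed example of a mostly-12V load
    amps_needed = assumed_12v_load_watts / 12
    print(f"{amps_needed:.0f} A needed on the 12V rail")   # ~83 A, within the AX1200's ~100 A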

The PSU has arrived and has been installed.

That gives me the first practical problem in this build. I need to place the quad fan mount in such a position that the fan of the Corsair PSU, which is larger than the fan openings in the mount (13.5 cm versus 12 cm), can exhaust the maximum amount of air. But that may make it difficult to place the two planned exhaust fans in the upper two spots. The top left one is no problem, but the right one may be a bit tight. Well, I'm still waiting for the Noctua fans, so maybe it will just fit; otherwise I have to move the fan mount up by 12mm. We'll see.

Of course it is great to have the PSU and the heat it causes far removed from the CPU and motherboard in the other side of the case.

The disks (Ouch! So expensive)

Since prices have not yet dropped to pre-flooding levels, this will be the most expensive component in the system.

With current price levels and assuming I will leave my current system mainly intact, this means I will need quite a lot of disks. At this moment I consider getting (at least in the first stage) the following setup:

Disk  Type                             #   Configuration                    Capacity  Purpose
C:    Corsair Performance Pro 256GB    1   Single disk*                     256 GB    OS & programs
D:    Seagate Barracuda ST2000D        4   Raid0                            8 TB      Pagefile, media cache, previews
E:    Seagate Barracuda ST2000D        16  Raid30, 2 x (7xR3 + hot-spare)   24 TB     Media & projects
F:    Seagate Barracuda ST2000D        4   Raid3*                           6 TB      Exports

* The intention is to make this a 2 SSD Raid1 on the motherboard (there are simply not enough SATA3 ports available for a Raid10 with 4 SSD's), but cost is currently the limiting factor. Further, if the need arises, I intend to add a hot-spare to this F: drive array or convert this array to a Raid6 with some extra disks.
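A quick check of the capacity column with standard raid arithmetic (raid0 keeps every disk, raid3 loses one dedicated parity disk per set, hot-spares sit idle, and raid30 stripes two raid3 sets), assuming 2 TB per drive:

    # Verify the capacity column above, assuming 2 TB per drive.
    DRIVE_TB = 2

    def raid0(disks):
        return disks * DRIVE_TB                      # all disks store data

    def raid3(disks, hot_spares=0):
        return (disks - hot_spares - 1) * DRIVE_TB   # one dedicated parity disk

    def raid30(sets, disks_per_set, hot_spares_per_set=0):
        return sets * raid3(disks_per_set, hot_spares_per_set)

    print("D:", raid0(4), "TB")                                # 8 TB
    print("E:", raid30(2, 8, hot_spares_per_set=1), "TB")      # 2 x (7xR3 + HS) = 24 TB
    print("F:", raid3(4), "TB")                                # 6 TB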

The Corsair Performance Pro SSD came out as best in a recent test of 37 SSD's and has a reasonable price of around € 235. The only drawback of this SSD is its high power consumption during writes. The main advantage of this SSD is that it is based on the Marvell controller, in contrast to for instance the Intel 520 and Crucial M4, which are based on the SandForce controller.

Why is that important? Well, SandForce measures its rated speeds on compressible data: a 50 KB Word document can be written in X time, but the controller compresses it to, say, 10 KB before actually writing it, and then claims the 50 KB write speed instead of the 10 KB effectively written. Video material is not compressible, it has already been heavily compressed, so the claimed write speeds of SandForce controllers are extremely inflated when working with video material. In practice, the write speeds of SandForce based SSD's with video material are at best half the claimed speed. Marvell OTOH measures real write speeds, using incompressible material.
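To illustrate what that means in practice, here is a small sketch comparing the time to write an export at an advertised speed versus the 'half the claimed speed at best' figure above. The 500 MB/s rating and the 50 GB export size are assumed round numbers for illustration only.

    # Writing an export: advertised speed vs. realistic speed on incompressible video.
    export_gb = 50                      # assumed size of an export
    rated_mb_s = 500                    # assumed advertised (compressible-data) speed
    effective_mb_s = rated_mb_s * 0.5   # 'half the claimed speed at best' for video

    print(f"Advertised: {export_gb * 1024 / rated_mb_s:.0f} s, "
          f"realistic: {export_gb * 1024 / effective_mb_s:.0f} s")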

A second issue in choosing an SSD is 'steady state' performance degradation. A new SSD performs quite well, but after it has been written to for a while, performance degrades until it reaches its 'steady state', where it no longer degrades. The Corsair Performance Pro shows much less performance degradation over time than the Intel 520 or Crucial M4.

The Seagate Barracuda ST2000D is probably the fastest SATA disk available at this moment, has the best price per GB and has a low sound level, which is of course important with 24 of those disks in the system. All the files that can easily be recreated are on the raid0, so the risk there is negligible in case of disk failure. The important raid, drive E: with the media and projects, has a hot-spare for each raid3, in addition to the parity disk. Of course I lose the effective storage space of 4 disks with such a setup, but it buys me safety in case of disk failure.

However, there is one complicating factor specific to Europe: the legally required warranty is two years, no matter what the manufacturer says. In this case Seagate gives only one year of warranty, so every shop that sells these disks has to pay for the second, legally required year of warranty out of its own pocket. For many shops this simply means they no longer sell these Seagate disks. The risk is too big, especially since these disks - again - have a high failure rate; many are DOA or develop screeching noises. Period.

Luckily I'm not in a hurry, so I can look around for alternatives, but the choices are very limited. WD RE4 are still SATA-300 disks, WD Caviar Black are notoriously bad for parity raids, Hitachi 7K3000 Deskstars may be an option, and of course the Hitachi Ultrastars are much better but carry a corresponding price tag.

I still have to test it, but I think there is no sense in increasing the number of disks in the array for drive E:, because the PCIe-2.0 bus will be the bottleneck for further performance improvements, and I have only just heard from Areca about a new PCIe-3.0 card with better bandwidth, expected sometime in Q3. Based on the speed of these disks and the bandwidth of the raid controller, I expect this raid30 volume to max out at around a 1250 MB/s transfer rate or even less. The F: volume will be the slowest of the three conventional volumes, but since it is only for exports, that will not be a handicap, and I can expand it at a later date to 5 or more disks.
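A back-of-the-envelope way to see why adding spindles will not help much on the current controller generation: the per-disk streaming speed below is an assumed ballpark figure, and the ceiling is the roughly 1250 MB/s I expect from a PCIe-2.0 card in practice.

    # Estimated raid30 throughput: limited by the spindles or by the controller/bus.
    per_disk_mb_s = 140          # assumed sustained speed of one 2 TB Barracuda
    data_disks = 12              # 2 x (7xR3 + HS): 6 data disks per raid3 set
    controller_ceiling = 1250    # rough practical limit of the PCIe-2.0 controller

    spindle_limit = per_disk_mb_s * data_disks
    print("Expected array throughput:", min(spindle_limit, controller_ceiling), "MB/s")
    # Adding more disks raises spindle_limit, but min() stays pinned at the ceiling.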

I will carry over at least two disks from my old system as drive A: and B: which contain stock footage and Sonicfire audio clips and miscellaneous stuff, but leave the rest of the disks in the old system intact. If necessary in the future, I can add a couple more SSD's in the Icy Dock cage or add some more 3.5" disks on the PSU side. Even with 24 new disks and 2 old ones, I still have 5 drive bays free for expansion.

The ugly part of this setup is that it will set me back around € 2,500. The setup is cool, but the price is not cool!

Cable Management

Once the motherboard is installed, I will show you with pictures the huge benefit of this case in getting a clean airflow through the system. Remember that lower temperatures mean less noise and better longevity of the components, especially in an overclocked system.

Tuning and The Final Results

This will be fully documented when we arrive at this stage, but we are not there yet. Still a long way to go.

For those who can't wait to see what the system looks like with all the front bays occupied, see:

The top bay is the 4-in-1 2.5" hot swap bay, next the multi-card reader with USB2, USB3 and eSATA, then two BD-R burners and then three 5-in-3 3.5" hot swap cages. A great way to fill those 13 5.25" bays for my purposes. The PSU side is still empty at this moment.
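For completeness, the bay arithmetic behind that front panel, using only the components listed above:

    # How the 13 front 5.25" bays of the TH10 are filled.
    bays_used = {
        "Icy Dock 4-in-1 2.5in hot swap bay": 1,
        "Multi-card reader": 1,
        "BD-R burners": 2,
        "Chenbro 5-in-3 hot swap cages (3 x 3 bays)": 9,
    }
    print(sum(bays_used.values()), "of 13 bays used")   # 13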

Final Thoughts

This has been the last part of a series of articles about a new system that needed to be built. In the previous parts I have tried to show you the steps and considerations I went through. This series has gone on for months, sometimes too slowly for my taste, but I knew I was in no rush and there were so many things to consider: waiting for the single GPU Kepler with a 384-bit memory bus, the Areca PCIe-3.0 successor of the 1882 card, the Ivy Bridge-E with 8 cores and 20 MB L3 cache, Windows 8 and all the new tuning required.

Preliminary thoughts: the new build I realized is far from easily affordable and does not really meet my expectations yet. It was an ambitious project, and in terms of 'Bang-for-the-buck' (BFTB) it has failed. Am I disappointed? The honest answer is no. I'm a freak when it comes to performance, and from the start I knew it would be extremely difficult to achieve the same BFTB score as with my last system, but I have made progress, albeit at a price.

So, in summary, I do hope you enjoyed this series and have picked up some pieces here and there to profit from with your next build.

The Price tag for 'Harm's Monster'

Prices are in euros, excluding VAT at the moment of purchase and including transport.

Component                                                                    Quantity  Price  Total
Asus P9X79 WS, motherboard                                                       1      € 277  € 277
CaseLabs Magnum TH10 chassis, complete, incl. $ 300 transport cost *             1      € 757  € 757
Chenbro AESK33502, 5-in-3 3.5" hot swap drive cage                               3      € 117  € 351
Coollaboratory Liquid Ultra, extremely efficient cooling paste for the CPU       1      € 8    € 8
Corsair Professional Gold AX1200, Gold+ certified 1200W PSU                      1      € 196  € 196
DemciFlex custom filters, made to order to fit the 'Monster'                     1      € 42   € 42
Icy Dock MB994SP-4S, 4-in-1 2.5" hot swap drive cage                             1      € 50   € 50
LG BH10LS38, BD-R burner with LightScribe                                        2      € 68   € 136
Noctua NF-P12 fans, low sound 1300 RPM fans with high CFM                        6      € 15   € 90
Noctua NH-D14 SE2011, massive CPU cooler                                         1      € 60   € 60
Raidsonic ICY BOX IB-863-B, multicard reader with USB2, USB3 and eSATA           1      € 24   € 24
Zalman ZM-F3BL, quiet fans with high CFM                                         5      € 9    € 45
Total investment up to now                                                                     € 2,036

* I warned you that the case would be considered exorbitant by many. It is way more expensive than a regular mid-tower or even a big tower, up to 10 times the price (including the shipping costs). But keep in mind that a case has an almost everlasting life. It is similar to a tripod that outlives a camera many times over. I'm pretty confident that I can still use this case when third generation i9 CPU's with 32 cores plus Hyper-Threading and support for 256+ GB memory are available, or even dual 40-core Xeons with up to 1 TB of memory. I consider this a long-term investment.

You will notice that I changed some components from what I planned in the Intro. This was due to availability and price. The chassis is unchanged, as are the Chenbro drive cages, but instead of the Addonics drive cage I chose the Icy Dock, because it supports SATA III and was less costly. The LG BR burners were readily available, slightly faster than the Optiarc's and somewhat cheaper, and the LG's in my current system perform quite well. This list will get longer as we get further into the build. There is a long way to go.