
Board index » Swiftech Communications » Theory & Practice

Post subject: Actual Influence of Flow Rate on System Temperature
Posted: June 30th, 2010, 2:17 pm

It’s been a while since anyone published real-life data measuring the actual impact of adding graphics waterblock(s) to a loop, and we were interested in observing how the latest EVGA FTW blocks would integrate with the Apogee XT. Taken individually, these blocks are designed to dissipate extreme amounts of heat very efficiently, but this also results in substantially higher pressure drop values than in earlier designs. So the question was, how do they work together?

More specifically, the questions we wanted to answer were:

  1. What is the impact of adding a high-end graphics card on the CPU temperature, and does it affect the CPU's maximum stable overclock?
  2. What is the impact of adding two high-end graphics cards in SLI on the CPU temperature; does it affect the CPU's maximum stable overclock, and what GPU block configuration works best, serial or parallel?
  3. What is the thermal performance of the GPU cooling solutions under heavy stress in various configurations?


  • For the purpose of these tests, we wanted to cover the largest possible audience in the enthusiast community, so we set up a new bench that we believe represents a typical middle- to upper-range system. It is composed of an MCR320 Drive radiator with built-in MCP355 pump and an Apogee XT waterblock; the loop uses ½" lines. The fans are Gentle Typhoons (D1225C12B5AP-15) running at 1850 rpm and rated at 28 dB. We chose them because they are popular, and we found that they represent a good compromise between cooling performance and operating noise.
  • Components are connected to the loop with CPC quick-disconnect fittings; they are fairly restrictive, but the time they save when changing setups outweighs any other consideration.
  • The CPU is an early Ci7 920, Revision C0/C1 stepping 4.
  • For the graphics cards, we wanted (2) EVGA GTX480 FTW, but they were unavailable at the time of testing, so we settled for (2) EVGA GTX470 FTW instead. Given the increasing popularity of the 470 for its overclockability and bang-for-the-buck factor, it’s not such a bad thing anyway.
  • The Motherboard is a Gigabyte EX58-UD3R, and the OS is Windows 7 Ultimate 64 Bit.


  • The CPU's maximum stable overclock was well established, since we have been using this same 920 ever since its introduction. It is 4095 MHz (Intel Turbo mode on and HT enabled) at 1.424 V (after droop).
  • The GPUs’ maximum stable overclock was established in the graphics tests using Furmark in extreme burn mode at 1920x1050 for a minimum of two hours, and further validated by running 3DMark (Vantage results are posted in the report).
  • Max stable overclock for one card was 898 MHz core and 1050 MHz memory at 1.087 V.
  • Max stable overclock for two cards in SLI was 825 MHz core and 1000 MHz memory at 1.087 V. Note: to verify that our maximum stable overclock in SLI was not temperature related, we also tested the cards at 850 MHz using our extreme bench, composed of (2) MCR320 radiators with (6) 82 CFM fans and (2) MCP655s in series, and the test failed.

    In order to answer our initial questions, we conducted two sets of tests:

  • CPU load tests: In order to maintain consistency with previous test data, we ran our usual 8 instances of BurnK6. We logged the temperature results at 2-second intervals using Core Temp. The average temperature of the 4 cores is reported.
  • GPU load tests: We used Furmark in extreme burn mode, windowed at 1920x1050 with post-processing off to enable 100% load on both GPUs in SLI configuration, and logged the temperature results at 2-second intervals with GPU-Z.

Environmental Temperature recording:

  • Air temperature: each fan was equipped with a Type T thermocouple (accurate to ±0.1°C) at the inlet, and the average of the 3 values is reported.
  • Coolant temperature: measured at the radiator inlet with a Type T thermocouple (accurate to ±0.1°C).
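The reported air temperature is simply the mean of the three fan-inlet readings; as a minimal sketch (the thermocouple values below are invented for the example, not measured data):

```python
# Average the three fan-inlet thermocouple readings to get the reported
# ambient air temperature. Readings here are hypothetical placeholders.
def average_air_temp(readings):
    """Return the mean of the fan-inlet readings, in °C."""
    return sum(readings) / len(readings)

inlet_c = [24.1, 24.3, 24.2]  # hypothetical Type T readings, one per fan
print(f"Reported air temperature: {average_air_temp(inlet_c):.2f} °C")
```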

We hope that the following data will help readers make informed decisions about their own setups; without further ado, here are the test results.

First set of CPU tests featuring a single card in the loop:


To our first question, “Impact on CPU temperature of adding one restrictive GPU waterblock in the same loop”, we see that between CPU tests #1 and #2 the increase in CPU temperature when adding a GTX470FTW is equal to 0.68°C (Note 1 above). We can also report that the CPU remained entirely stable under these test conditions.

With further analysis, we can also determine the actual temperature increase solely due to the added heat generated by the GPU; it is calculated by (*) below and it is equal to: 0.34°C. This allows us to conclude that the added pressure drop in the loop actually contributed to the rise in CPU temperature by 0.34°C as calculated in (**) below. This is a very marginal increase considering the relatively high pressure drop of both blocks, and also considering that the reduced flow rate decreases the waterblock AND the radiator efficiency.

(*) : (ΔT Water to Air test 2) – (ΔT Water to Air test 1): 4.63 – 4.29 = 0.34°C
(**): (Note 1) – (*): 0.68 - 0.34 = 0.34°C

Second set of CPU tests, featuring the SLI setup :


To the first part of our second question, “What is the impact of adding two high-end cards in SLI on the CPU temperature?”, CPU tests #3 and #4 show that the temperature rise in the CPU is 1.26°C (Note 2) when the cards are set up in parallel, and 1.84°C (Note 3) when they are set up in series. In terms of stability, the CPU remained fully stable in both cases.

So, while the overall increase in CPU temperature remained nominal (about 2%), it is also interesting to note that the parallel setup shows a measurable advantage of 0.58°C over serial, as calculated in (*) below, which can be attributed entirely to a substantially lower pressure drop at the system level. This clearly answers the second part of question #2, “What GPU block configuration works best, serial or parallel?”: the higher flow rate through the CPU waterblock and the radiator yields a lower thermal resistance, resulting in lower temperatures in a parallel setup than in a serial one.

(*): Note 3 – Note 2: 1.84 – 1.26 = 0.58°C

Finally, for those users who already have a GPU in their CPU loop, want to add a second one and need to know what to expect, the data presented in note 4 shows that a second VGA card installed in parallel will result in another 0.58°C rise in CPU temperature, whereas note 5 shows that installing the second card in series will add 1.16°C.
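The per-card increments reported above can be folded into a rough estimator for planning purposes (a sketch using only the figures in this post; the loop components are those of our test bench, and extrapolating beyond two cards is our assumption, not something the tests measured):

```python
# Rough estimate of CPU temperature rise when adding GPU blocks to this
# loop, based on the increments measured in this report: the first
# GTX470 FTW adds 0.68°C; each additional card adds 0.58°C in parallel
# or 1.16°C in series.
def cpu_temp_rise(num_cards, topology="parallel"):
    """Estimated CPU temperature rise in °C for num_cards GPU blocks."""
    if num_cards == 0:
        return 0.0
    per_extra = {"parallel": 0.58, "series": 1.16}[topology]
    return 0.68 + (num_cards - 1) * per_extra

print(f"{cpu_temp_rise(2, 'parallel'):.2f} °C")  # 1.26, matches Note 2
print(f"{cpu_temp_rise(2, 'series'):.2f} °C")    # 1.84, matches Note 3
```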

Graphics Tests:

The graphics stress tests are obviously also influenced by flow rate, and we will see how below. CPU temperature is reported for reference only, since there is very little load on the CPU during intensive graphics (50% on one core, under Furmark).


We see a substantial increase in average GPU temperature from one card to two, ranging from 7.39°C for a parallel setup (calculated in Note 1) to 7.99°C for a serial setup (calculated in Note 2). But while 7 to 8°C may seem like a lot, it is important to remember that the overclock limitation in SLI mode was demonstrated during our initial setup NOT to be temperature related (see the note in the Methodology section above).

Finally, note 3 is of particular interest within the framework of this study, because it shows that even at the GPU level, a parallel setup with modern blocks such as those presented here remains a superior solution to serial, as evidenced by a 0.6°C advantage of parallel over serial.


While the importance of flow rate is certainly not to be dismissed when planning a system setup, as evidenced in particular by the differences found between parallel and serial VGA configurations, we see with the tested Swiftech components that the overall impact of this parameter remains nominal in terms of total system performance. This is because these components are designed to be highly efficient at low flow.

Results summary for reference:

