Four Multi-GPU Z77 Boards from $280-$350 - PLX PEX 8747 featuring Gigabyte, ASRock, ECS and EVGA
by Ian Cutress on August 22, 2012 9:15 AM EST

EVGA Z77 FTW Overclocking
Note: Ivy Bridge does not overclock like Sandy Bridge. For a detailed report on the effect of voltage on Ivy Bridge (and thus temperatures and power draw), please read Undervolting and Overclocking on Ivy Bridge.
Experience with EVGA Z77 FTW
With no automatic overclock options, users must adjust the CPU frequency either through the BIOS or through an updated version of the ELEET software from the EVGA website. As an overclocker, I jumped straight into the BIOS.
The EVGA overclocking methodology is a little strange, albeit a bit simpler than some other implementations. The important options, such as the CPU multiplier and voltages, are all contained within the main overclocking menu. There is even an ‘OC Mode’ option which disables all non-essential onboard controllers and ports to promote stability for competitive overclocking. Memory has its own menu, which allows full adjustment of memory parameters, but for BCLK adjustment users have to navigate to a completely separate menu just for that single value.
With the pair of ClearCMOS buttons on board, recovering from a failed overclock that did not correct itself automatically was easy enough, and the settings could then be adjusted again in the BIOS. If the system is not happy with the memory settings at boot, the motherboard produces the error code ‘55’ and beeps several times to indicate this.
However, memory can be a bit of an issue – I found that my kit would not run at its XMP settings. EVGA have told me that memory compatibility is a priority right now and that the board will work with the majority of standard mainstream kits.
When the more severe overclocks were being performed (4.6 GHz and above), the system would sometimes reduce the CPU multiplier automatically. I found that this is due to the power limit settings in the BIOS, which need to be raised. I personally set these to 250W (a value of 2000, as the field takes values in units of 1/8 W) and never had an issue, though having to do this without any direct instructions seems a little odd.
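For clarity, here is a minimal sketch (in Python, with an illustrative function name rather than any real EVGA tool) of the watts-to-field-value arithmetic, assuming the 1/8 W units described above:

```python
# Minimal sketch: convert a desired CPU power limit in watts to the raw BIOS
# field value, assuming the field is specified in units of 1/8 W as described
# in the text. The function name is illustrative only.
def power_limit_field(watts: float) -> int:
    return int(watts * 8)  # 1 unit = 1/8 W

print(power_limit_field(250))  # -> 2000, the value used in this review
```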
Methodology:
Our standard overclocking methodology is as follows. We select the automatic overclock options and test for stability with PovRay and OCCT to simulate high-end workloads. These stability tests aim to catch any immediate causes for memory or CPU errors.
For manual overclocks, based on the information gathered from previous testing, we start at a nominal voltage and CPU multiplier, and the multiplier is increased until the stability tests fail. The CPU voltage is then increased gradually until the stability tests pass, and the process is repeated until the motherboard reduces the multiplier automatically (due to safety protocols) or the CPU temperature reaches a stupidly high level (100°C+).
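For readers who prefer the loop spelled out, the sketch below expresses that procedure as Python-flavoured pseudocode; is_stable(), board_reduced_multiplier() and peak_temp() are hypothetical stand-ins for running the PovRay/OCCT tests and watching the board's behaviour, not real APIs, and the voltage step is an assumed increment:

```python
# Sketch of the manual overclocking loop described above. The helper functions
# passed in are hypothetical stand-ins for running the stability tests and
# reading back temperatures and board behaviour.
NOMINAL_VOLTAGE = 1.100   # starting CPU voltage (V)
NOMINAL_MULTIPLIER = 44   # starting CPU multiplier
VOLTAGE_STEP = 0.025      # V per step (assumed increment)
TEMP_LIMIT = 100          # degrees C

def find_max_overclock(is_stable, board_reduced_multiplier, peak_temp):
    """Return the (multiplier, voltage) pairs that passed the stability tests."""
    voltage, multiplier = NOMINAL_VOLTAGE, NOMINAL_MULTIPLIER
    stable_settings = []
    while not board_reduced_multiplier() and peak_temp() < TEMP_LIMIT:
        if is_stable(multiplier, voltage):
            stable_settings.append((multiplier, voltage))
            multiplier += 1          # push the multiplier until the tests fail...
        else:
            voltage += VOLTAGE_STEP  # ...then raise the voltage until they pass again
    return stable_settings
```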
Our test bed is not in a case, which should push overclocks higher with fresher (cooler) air. We are also using Intel's All-in-One Liquid Cooler with its stock fan. This is a 120mm radiator liquid cooler, designed to mimic a medium-to-high-end air cooler.
Manual Overclock:
Our manual overclock testing was simple, starting at our standard 1.100 volts and 44x multiplier for Ivy Bridge. For stability we set VDroop to Disabled in the BIOS fairly quickly, and at higher multipliers the CPU power limits were raised to stop the system from reducing the CPU multiplier. Here are our results:
At 44x, the BIOS was set to 1.100 volts on the CPU and VDroop left at Intel SPEC. In the OS this produced a memory error during PovRay. Thus VDroop was adjusted to ‘disabled’, which gave stability. The system showed 1.138 volts at load, with peak temperatures of 71°C during PovRay and 74°C during OCCT.
At 45x, the system was stable at a minimum BIOS voltage setting of 1.100 volts, which showed 1.126 volts in the OS at load. Peak temperatures observed were 72°C during PovRay and 73°C during OCCT.
At 46x, the system was stable at a minimum BIOS voltage setting of 1.125 volts, which showed 1.150 volts in the OS at load. Peak temperatures observed were 77°C during PovRay and 80°C during OCCT.
At 47x, the TDP limits for the CPU were raised to 250W. With this, the system was stable at a minimum BIOS voltage setting of 1.175 volts, which showed 1.197 volts in the OS at load. Peak temperatures observed were 82°C during PovRay and 84°C during OCCT.
When attempting to reach 48x, the system was still not stable at 1.275 volts set in the BIOS, causing PovRay to hang the system after a couple of minutes and temperatures to rise above 100°C.
24 Comments
ultimatex - Wednesday, August 22, 2012 - link
I got this MOBO from Newegg the first day they had it available, I couldn't believe the price since it offered 8x/8x/8x/8x. Picked it up the first day and haven't looked back. Doesn't look as cool as the ASRock Extreme9 but it still looks good. Awesome job Gigabyte. Anandtech should have given them a Gold not Bronze though, since the fan issue is a minor issue.

Arbie - Wednesday, August 22, 2012 - link
For gaming, at least, how many people are really going to build a 2xGPU system? Let alone 3x or 4x. There are so few PC games that can use anything more than one strong card AND are worth playing for more than 10 minutes. I actually don't know of any such games, but tastes differ. And some folks will have multi-monitor setups, and possibly need two cards. But overall I'd think the target audience for these mobos is extremely small. Maybe for scientific computing?
Belard - Wednesday, August 22, 2012 - link
Yep.... considering that most AAA PC games are just ports from consoles... having 3-4 GPUs is pointless. The returns get worse after the first 2 cards. Only those with 2~6 monitors can benefit with 2-3 cards.
Also, even $80 Gigabyte boards will do 8x x 8x SLI/CF just fine.
But hey, someone wants to spend $300 on a board... more power to them.
cmdrdredd - Wednesday, August 22, 2012 - link
"Only those with 2~6 monitors can benefit with 2-3 cards."Oh really? 2560x1440 on a single card is garbage in my view. I am not happy with 50fps average.
rarson - Wednesday, August 22, 2012 - link
If you're going multi-GPU on a single monitor, you're wasting money.

Sabresiberian - Wednesday, August 22, 2012 - link
Because everyone should build to your standards, O god of all things computer. Do some reading; get a clue.
Steveymoo - Thursday, August 23, 2012 - link
Incorrect. If you have a 120Hz monitor, 2 GPUs make a tonne of difference. Before you come back with a "no one can see 120Hz" jibe: that is also incorrect.... My eyes have orgasms every once in a while when you get those ultra-detail 100+ fps moments in Battlefield that look great!
von Krupp - Friday, August 24, 2012 - link
No. Metro 2033 is not happy at 2560x1440 with just a single HD 7970, and neither are Battlefield 3 or Crysis. The Total War series also crawls at maximum settings. I bought the U2711 specifically to take advantage of two cards (and for accurate colours, mind you). I have a distaste for multi-monitor gaming and will continue to have such as long as they keep making bezels on monitors.
So please, don't go claiming that multi-card is useless on a single monitor because that just isn't true.
swing848 - Monday, December 8, 2014 - link
At this date, December 2014, with maximum eye candy turned on, there are games that drop a reference AMD R9 290 below 60 fps on a single monitor at 1920x1080 [using an Intel i5-3570K at 4GHz to 4.2GHz].

Sabresiberian - Wednesday, August 22, 2012 - link
This is not 1998, there are many games built for the PC only, and even previously console-oriented publishers aren't just making ports for the PC, they are developing their games to take advantage of the goodness only PCs can bring to the table. Despite what console fanboys continue to spew, PC gaming is on the rise, and console gaming is on the relative decline.