Overclocking Fury X - Part 2
We purchased a Sapphire Fury X earlier this month from an etailer and reviewed it against the reference GTX 980 Ti, as well as against the EVGA GTX 980 Ti SC at stock and overclocked settings, using 32 games.  Besides being generally slower than the GTX 980 Ti, our Fury X sample had a very irritating pump whine and was a very poor overclocker.

Because of its annoying pump whine, we secured an RMA and received a second Sapphire Fury X late last week.  We would like to share our observations about this second Fury X, including its pump noise and overclocking.

We probably received our second Fury X from the same initial batch as our original unit.  Its pump is not the version that reviewers mentioned with a Cooler Master sticker affixed.  Rather, ours has a plain embossed Cooler Master logo, just like the first unit we received.  It is also not the version with the shiny embossed Cooler Master logo, which is the one most touted on social media as the "fixed" unit.

This second unit isn't quite as noisy as the first one, and its ultrasonic frequencies are less irritating.  However, we are audio-sensitive, and our PC sits about two feet away from us while gaming.  We are going to return this unit as well, in hopes of getting a Sapphire Fury X without any pump noise issues at all.

We ran the same overclocking tests that we ran on our first unit, and more.  Our first unit managed an overclock of only +25MHz on the core and +24MHz on the High Bandwidth Memory (HBM), which is quite poor - 1075MHz/524MHz overclocked, compared with 1050MHz/500MHz stock.  How did we do with our second Sapphire Fury X unit?

This chart compares the original Sapphire Fury X (top) with our second retail sample (bottom).  Further testing confirmed our original findings: comparing a zero offset with a +50% offset, the power limit makes no difference to overclocking headroom and perhaps only a marginal difference to stability.  We used Heaven 4.0 in all tests to load our GPUs to 100%.


We managed a better stable overclock on the second Fury X sample - +50MHz on the core compared with +25MHz for our original Fury X - but the HBM did not overclock quite as well.  Although our second sample did not immediately crash with a +30MHz offset added to the HBM clocks, it gained no performance, and we finally settled on +23MHz for complete stability when overclocking the core and the memory together.  Our final stable overclock reached 1100MHz core/523MHz memory, compared with the stock 1050MHz core/500MHz memory.
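To put these offsets in perspective, here is a minimal sketch (using the clock figures from this article) that expresses each stable overclock as a percentage of the stock clock; the helper name `oc_percent` is our own invention for illustration:

```python
# Stock Fury X clocks from the article, in MHz.
STOCK_CORE, STOCK_HBM = 1050, 500

def oc_percent(stock_mhz, offset_mhz):
    """Return the overclock offset as a percentage of the stock clock."""
    return (offset_mhz / stock_mhz) * 100

# Sample 1: +25 MHz core / +24 MHz HBM; Sample 2: +50 MHz core / +23 MHz HBM.
print(f"Sample 1 core: +{oc_percent(STOCK_CORE, 25):.1f}%")  # ~2.4%
print(f"Sample 2 core: +{oc_percent(STOCK_CORE, 50):.1f}%")  # ~4.8%
print(f"Sample 2 HBM:  +{oc_percent(STOCK_HBM, 23):.1f}%")   # ~4.6%
```

Even the better of the two samples gains less than 5% on the core, which underlines how little headroom these retail cards have without voltage control.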

Overall, overclocking Fury X has been disappointing for us, and it is not quite the "overclocker's dream" that AMD's Richard Huddy suggested during his E3 preview.  We can only hope that voltage adjustments will become available for us to test soon.

It has been a month since the Fury X was released into retail, and it is clear that some units are still being sold with a pump whine that "sensitive users" may find intolerable.  We are hoping that our third sample will have no abnormal pump noise, and of course we will test how its overclock compares with the two retail Fury X samples we have tested so far.  We are also planning to buy a high-performance microphone for audio analysis of our video cards in future testing.

In the meantime, follow us on the BabelTechReviews Community to keep up with the very latest tech news.  We are currently working on an SLI-versus-CrossFire scaling-with-CPU-speed evaluation that should prove quite interesting.

Happy Gaming!


Really, all these defective cards should have been recalled. You should not be in this situation as a consumer, let alone as a hardware reviewer. This whole thing is really embarrassing for AMD.
Look at it this way.

I am an audio-sensitive user.  My PC sits 2 feet away while I am gaming (which is damn convenient for a HW reviewer, but noise issues are magnified).  And my PC has a large opening for its side case fan, which means the noise is broadcast right at me.

Many users wear headsets while gaming, their PCs sit further away, and perhaps they are not sensitive to ultrasonic frequencies.  So even if all of the original batch of Fury Xes have the whine, perhaps only ten percent of buyers suffer from it the way I do.

That's why AMD leaves it to their partners - a recall would be too expensive for them.  I would say look for refurbished "deals" on the partner websites in a couple of months, once Fury X stock is good.
I would never buy Fury in its current state. My next GPU will have HDMI 2.0 in preparation for a 4k big screen TV. I won't be upgrading for a while, next summer at the earliest.

More power to the rest of you though if the deals are good.

I just don't think that the Fury X is a useful card for 1080p (and it's not really useful for 4k either without HDMI 2.0).
It looks like there are voltage unlock tools for Fury X


Mine is still boxed up, and UPS picks it up tomorrow. Hmmmm.  

Yep, it's an overclocker's dream - a nightmare!
They used BF3 for the testing, and the charts show the sad results. +144 mV is the sweet spot for performance, but power still goes way up.  
Quote:As you can see, Fiji scales nearly linearly with voltage, and performance follows, at roughly half the clock increase rate.

Near +96 mV, the power limiter will start to kick in from time to time during games, when set to default, which is why we set it to +50% for all this testing.

Once we reach +144 mV, which results in a scorching 1.35 V on the GPU, the maximum stable frequency reaches its peak. At this point the VRMs are running temperatures above 95°C, even though they are cooled by the watercooling loop via a nearby copper pipe. That much heat on the VRMs is definitely not good for long-term use. I would say a safe permanent voltage increase on an unmodded card is around 40 mV or so.
[Image: scaling.gif]
[Image: memory.gif]
[Image: power.gif]
Quote:In this graph I'm showing full system power draw during testing. This test has clock speeds fixed at 1100 MHz for better comparison. As you can see, power ramps up very quickly, much faster than maximum clock or performance. From stock to +144 mV, the power draw increases by 27%, while overclocking potential goes up only by 5%, and real-life performance increases by only 3%.

In all these tests, GPU temperature barely moves thanks to the watercooling block. Going from 67°C at stock voltage to 71°C at +144 mV isn't worth mentioning. The heat output is definitely increased though, it's just the watercooler that soaks up all the heat, but the heat will be dumped into your room ultimately.

Looking at the numbers, I'm not sure if a 150W power draw increase, just to get an extra 3 FPS, is worth it for most gamers. Smaller voltage bumps to get a specific clock frequency 100% stable are alright though.
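The efficiency hit in those quoted numbers is easy to quantify. This is a rough sketch, assuming the figures quoted above (+27% power and +3% real-life performance at +144 mV) and treating stock as the 1.0 baseline; the function name is our own:

```python
def perf_per_watt(perf_gain, power_gain):
    """Relative performance-per-watt vs. stock (1.0 means no change)."""
    return (1 + perf_gain) / (1 + power_gain)

# +144 mV: +3% performance for +27% power, per the quoted testing.
ratio = perf_per_watt(0.03, 0.27)
print(f"Perf/W at +144 mV: {ratio:.2f}x stock")  # ~0.81x
```

In other words, the maximum voltage bump costs roughly a fifth of the card's efficiency for a 3 FPS gain, which matches the quote's conclusion that only small bumps for stability make sense.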

I guess I can leave it in the box.  W1zzard has developed his own voltage-regulating tools, which are not yet publicly available.  And this is really interesting from page one of the Fury X unlock article, regarding the poor support AMD gives devs:

Quote:While the voltage controller on the cards is well-known and has support for I2C (a method to talk to the voltage chip from the host PC, through software), getting I2C to work on Fiji in the first place posed another set of challenges. Unlike NVIDIA, AMD does not provide good API support to developers; their ADL library is outdated and buggy, with updates spaced years apart. So most software utility developers implement hardware access themselves, writing directly to the GPU registers, which AMD changes around with every new GPU. AMD's developer support is pretty much non-existent these days. All my contact has been worried about for four weeks now is that I make sure I use AMD's "new" GPU codenames in GPU-Z (for the R9 300 Series re-brands).

With recent GPU generations, AMD has transitioned GPU management tasks away from the driver, onto a little micro-controller inside the GPU called SMC, which is handling jobs like clock control, power control and voltage control. On Fiji, this controller adjusts and monitors voltage dynamically, which helps with overall power consumption. However, it makes voltage control more difficult than before. When overriding voltage externally, the controller will sense a discrepancy between its target voltage and real voltage, and assume a fault has occurred, so it sends the GPU into its lowest clock state: 300 MHz. The voltage monitoring process also keeps the I2C bus very busy, which causes interference with other transactions, such as those sent by GPU-Z, to do its own monitoring. If two of these transactions overlap, the result data will be intermixed or faulty, which will cause the SMC to sense another possible fault, this time turning off the screen and setting fan speed to 100%, to avoid damage to the card.

Working around this was no easy task, but it looks like I've finally managed to crack it, which means voltage monitoring and software voltage control will soon be available in the software I make; other tool developers will soon follow as well.
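The bus-contention problem the quote describes (two clients' I2C transactions overlapping and producing intermixed data) is a classic shared-resource race. This is a hypothetical illustration only - the class and register address here are invented for the sketch, not AMD's or GPU-Z's actual interfaces - showing the standard fix of serializing whole transactions behind a lock:

```python
import threading

class FakeI2CBus:
    """Toy stand-in for a shared I2C bus; all names here are invented."""

    def __init__(self):
        self._lock = threading.Lock()
        # Pretend voltage-controller register: 0x8B -> 1200 (mV).
        self._regs = {0x8B: 1200}

    def read(self, reg):
        # Holding the lock for the entire transaction prevents the
        # intermixed/faulty replies that the SMC would flag as a fault.
        with self._lock:
            return self._regs[reg]

bus = FakeI2CBus()
print(f"VDDC: {bus.read(0x8B)} mV")  # VDDC: 1200 mV
```

On the real card the monitoring traffic comes from the SMC itself, so a host-side lock alone can't fully solve it - which is presumably why working around it was "no easy task."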
