The GeForce 358.50 Performance Analysis featuring Ashes of the Singularity
#11
Well, you certainly have the cards you would need to test it if it ever does become available. Does DX12 make the memory pool shared as well? By that I mean, if you have two 4GB cards in SLI, does it effectively become 8GB of VRAM under DX12?
#12
I'm not sure. I don't believe so. I'll try to find out.
#13
OK thanks. I'm curious about this. There were rumors that it's true.
#14
(11-21-2015, 01:22 AM)SickBeast Wrote: OK thanks.  I'm curious about this.  There were rumors that it's true.

Well, I was really curious, so I did a search. Evidently it can.

http://www.guru3d.com/news-story/both-ma...emory.html

Quote:Robert Hallock (Head of Global Technical Marketing at AMD) shared something interesting on Twitter earlier on. You guys know that when you have a graphics card combo with dual GPUs, the memory is split up per GPU, right?

Thus an 8GB graphics card is really 2x4GB. As it stands right now, this will be different when DirectX 12 comes into play and apparently already is with Mantle. Basically he states that two GPUs finally will be acting as ‘one big’ GPU. Here's his complete quote:

Mantle is the first graphics API to transcend this behavior and allow that much-needed explicit control. For example, you could do split-frame rendering with each GPU and its respective framebuffer handling 1/2 of the screen. In this way, the GPUs share extremely minimal information, allowing both GPUs to effectively behave as a single large/faster GPU with a correspondingly large pool of memory.

Ultimately the point is that gamers believe that two 4GB cards can't possibly give you 8GB of useful memory. That may have been true for the last 25 years of PC gaming, but that's not true with Mantle and it's not true [with DX12].
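
For anyone wondering what that "explicit control" looks like from the programmer's side, here is a minimal sketch of the D3D12 linked-node path, where the application places each render target in one specific GPU's local memory via node masks, so nothing has to be mirrored. This is my own illustrative code, not anything from the article: it assumes the driver exposes the SLI/CrossFire pair as a single linked adapter, and it skips error handling and all the swap-chain/copy plumbing.

Code:
// Sketch only: one render target per GPU node under D3D12 linked-node
// multi-adapter, so each GPU keeps just its half of the frame in local VRAM
// (the split-frame rendering idea from the quote above).
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

void CreatePerNodeTargets(ID3D12Device* device, UINT width, UINT height,
                          std::vector<ComPtr<ID3D12Resource>>& targets)
{
    const UINT nodeCount = device->GetNodeCount();   // one node per physical GPU in the link

    for (UINT node = 0; node < nodeCount; ++node)
    {
        // Place the heap in THIS GPU's local VRAM only; the other GPU never
        // holds a copy, which is why two 4GB cards can act like ~8GB of usable memory.
        D3D12_HEAP_PROPERTIES heapProps = {};
        heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;
        heapProps.CreationNodeMask = 1u << node;      // physical placement
        heapProps.VisibleNodeMask  = 1u << node;      // only this node touches it

        // Each node renders half of the screen height (split-frame rendering).
        D3D12_RESOURCE_DESC desc = {};
        desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        desc.Width = width;
        desc.Height = height / nodeCount;
        desc.DepthOrArraySize = 1;
        desc.MipLevels = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

        ComPtr<ID3D12Resource> target;
        device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                        D3D12_RESOURCE_STATE_RENDER_TARGET,
                                        nullptr, IID_PPV_ARGS(&target));
        targets.push_back(target);
    }
}

The key point is that memory placement is the application's decision in DX12/Mantle, whereas in DX11 SLI/Crossfire the driver mirrored everything across both cards.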
#15
Awesome! Thanks! It's a shame it took them so long to come up with this. It's really useful though. It's going to breathe new life into cards like the 290X with 4GB if you run them in Crossfire. Same with the GTX 970.
#16
Again, not really.  290X will be ancient news by the time (real) DX12 games are out.  And they will be MORE demanding, not less.  What programmer will care to program for a (then) 5 year old card to maximize its memory for mGPU when it will be too slow (anyway) then?

Look at what Pascal will be like (and you can be sure that AMD will aim to be competitive). 16GB of memory!! Now programmers *may* care to program for 16GB plus 16GB in an mGPU configuration for the new crop of games, but not for old cards.

http://babeltechreviews.com/community/sh...hp?tid=461

Quote:. . . the Pascal GPU will be manufactured at Taiwan Semiconductor Manufacturing Company (TSMC), using the brand new 16nm FinFET process. This process is much more than a simple number, since it marks the shift from planar, 2D transistors to FinFET (i.e. 3D) transistors. This shift required that the engineers make a lot of changes in the thought process, and should result in significant power savings.

... Pascal will bring support for up to 32GB of HBM2 memory. However, the actual products based on Pascal will launch with 16GB of HBM2 memory, and more memory will depend solely on memory vendors such as SK Hynix and Samsung. What is changing the most is bandwidth. Both the Kepler-based Tesla (K40) and the Maxwell-based M4/M40 featured 12GB of GDDR5 and achieved up to 288GB/s of memory bandwidth. Those 16GB of HBM2 SDRAM (packed in four 4GB HBM2 chips) will bring 1TB/s in bandwidth, while internally the GPU surpasses the 2TB/s barrier. ...

NVIDIA’s Marc Hamilton said: “Using 3D memory, not only the memory capacity will go up, the memory bandwidth will go up significantly. With a much faster GPU, and higher memory bandwidth, the existing interconnects in the server are just plain outdated. So, we had to develop our own interconnect called NVLink, five times faster than existing technology.”
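
As a side note, that 1TB/s figure is easy to sanity-check if you assume the commonly quoted HBM2 per-stack numbers (a 1024-bit interface at 2 Gbit/s per pin - my assumption, not something stated in the article): each stack delivers 256GB/s, and four stacks land at roughly 1TB/s.

Code:
// Back-of-envelope check of the 1TB/s figure. The per-pin rate and bus width
// are my assumed HBM2 numbers, not specs quoted in the article.
#include <cstdio>

int main()
{
    const double busWidthBits = 1024.0;  // pins per HBM2 stack
    const double pinRateGbps  = 2.0;     // Gbit/s per pin
    const double stacks       = 4.0;     // four 4GB stacks = 16GB total

    const double perStackGBs = busWidthBits * pinRateGbps / 8.0;  // 256 GB/s
    const double totalGBs    = perStackGBs * stacks;              // 1024 GB/s ~ 1 TB/s

    std::printf("per stack: %.0f GB/s, total: %.0f GB/s\n", perStackGBs, totalGBs);
    return 0;
}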
#17
I think 290X Crossfire will still be quite potent even compared to Pascal. That setup will definitely run newer games well for the next 2 years at the very least. I definitely see your point; however, I think that doubling the VRAM for SLI/Crossfire is a really big deal for current cards even if DX12 games are still a ways off. It's going to help with those games for sure.

It's actually a very good thing for the Fury cards as well.

The thing is, though, Pascal looks like it's going to be a massive leap over current cards. I'm not sure that two GTX 980Ti cards in SLI will even match it.
#18
You are missing my point.  Pascal is next year with 16GB of vRAM.  However, *real* DX12 games will be running best on Volta - two years from now!

By then, 290X will be low midrange.

Each DX iteration - DX7>DX8>DX9>DX10>DX11 - brought better visuals and far more demanding games, which required a new generation of GPUs. Do you think DX12 is going to be any different?

Huh
#19
(11-21-2015, 01:51 AM)apoppin Wrote: You are missing my point.  Pascal is next year with 16GB of vRAM.  However, *real* DX12 games will be running best on Volta - two years from now!

By then, 290X will be low midrange.

Yes, 290X will be low midrange, but a pair of them in Crossfire will be pretty good: definitely midrange and definitely capable of running new games at 1080p, at least IMO.
#20
If AMD still supports Hawaii Crossfire by then, 290X CF would probably be as good as you say for 1080p.