Ampere Taped Out? (Ampere thread)
(09-24-2020, 01:21 PM)apoppin Wrote: https://babeltechreviews.com/the-rtx-309...nchmarked/
Lovely card if you are a whale looking for a shiny new harpoon.

As for nvidia's 8K gaming pivot, to quote GamersNexus's gloriously sassy video review - "it's a console experience", to which I'd add it's an XBONE S console experience...

I will be goddamned if I watch any YT video.

I think my own review explained it pretty well without the dramatics that the video reviewers use. And my conclusion matches the rest of the reviewers. Mine is the first review linked.

The 3090 is a halo card for those with a lot of disposable cash who want the bragging rights for the "fastest" - or for the person who needs a ton of memory for creative apps and who can't afford a real Titan or a Quadro (or who doesn't need the optimizations that the GeForce will never get).

It will probably sell like hotcakes and be in very short supply ... you may call it a PL if you like, and you will get no argument from me. But the card is impressive in its own right - just expensive.
I'll be the first to post this here:

Evidently there have been complaints about the stability of the 3080/3090. Shades of the RTX 2080 Ti launch ... I have had zero issues with either card - stock or overclocked - and they have been stable in every game and app I tossed at them, close to 200 hours combined. I know a couple of other reviewers with no issues either. So if the reports are true, it is possibly an AIB issue (most likely, as Zotac is most affected) or a driver issue. The FE is very solid, as Igor admits.

Quote:NVIDIA, by the way, cannot be blamed directly, because the fact that MLCCs work better than POSCAPs is something that any board designer who hasn’t taken the wrong profession knows. Such a thing can even be simulated if necessary. I will of course stay on it, because my interest is naturally aroused.

It's a good site. Very detailed and a million times better than what THG has deteriorated into ... actually it reminds me of Tom's original site.
I was watching Bitwit's latest video where he visited the closest Microcenter to him (Tustin - he is from LA) and talked to the people waiting in line for 3080s and 3090s. Some have been in line for 3.5 days.

A few of them actually got cards toward the end of the video. They are all 3rd-party cards, and I'm thinking most of them will likely be going back to Microcenter in the next few days because of this manufacturing stuff-up. What a colossal waste of time and effort that will turn out to be for those in line: drive hundreds of kms to Microcenter, wait several days to get a card, drive back home, experience a faulty card, drive back to Microcenter, lodge an RMA, and wait up to 8 weeks for a replacement card.

What a launch! on par with the Soviet N1 moon rocket so far....
I saw that posted on Reddit r/nvidia earlier today. Tustin is my closest Microcenter also - about a 90 minute drive when there is no traffic.

It reminds me of the people who stand in line for Apple flagship phones, and the many enthusiasts who camped out when the i9-10900K launched.

It sucks. It's a real shame. It's awful, and I hate to see it. No it doesn't look good for Nvidia one bit. But then I told you that I expected the 3090 to be in extremely limited supply after the 3080 launch (and before I received any notice from Nvidia about receiving one; so I had no privileged information).

I told you that I would agree with you that the 3090 launch is a PL - in advance.
Some issues with the new AIBs - all collected in one place:

Reply from EVGA
[Image: 1l3PTBN.jpg]

Also some issues with the awful shipper that Nvidia uses - Digital River (AMD also uses this garbage company, for god knows what reason).
I don't understand Microcenter's store location strategy, frankly. Why on earth do they have one stuck in Tustin and not in other places? Are the populations of Los Angeles, San Diego, and San Francisco somehow too small for them?! You'd have to go to New York, Boston, Washington, Dallas, or Houston to find bigger cities full of consumers wanting to shop with you.

But from what I understand, most Microcenters are stashed away in smaller, more remote places. Weird. Seriously weird.
Yes. Agreed. I have no clue - although Tustin is in Orange County, a major population center just outside of LA. Looking at the Wikipedia article on their history gives me no explanation for why there is only one store to serve 30 million people:


At any rate, my traveling days out of my home are over until a new President is elected and this pandemic is under control. I feel silly (but safe) wearing a respirator while shopping at stores - I now have enough supplies to last well into next year and the rest I can get online even cheaper - including fresh groceries.

The ONLY reason I rented a car and went shopping was because I had to renew my driver's license before my B-day next week, and the stupids in the government decided I needed an eye exam - that could only be done in person - although I could have completed my application and got my DL online otherwise. That's also why I got the "RealId" since that one requires an in-person visit. So I killed many birds with one stone .. and I don't appear to have got covid after a full week home in isolation :)

And of course, if T-rump is reelected, then herd immunity will come in a couple of years. I guess I can wait until then. I am pretty patient and BTR is beginning to grow and take all of my attention.
I know you dislike videos but Jayztwocents has produced a very good video that explains the capacitor issues in a way that even noobs can understand (without being condescending to those of us with knowledge)

(09-26-2020, 05:17 AM)gstanford Wrote: I know you dislike videos but Jayztwocents has produced a very good video that explains the capacitor issues in a way that even noobs can understand (without being condescending to those of us with knowledge)


Didn't Igor's Lab do OK with the explanation? I do understand what he is saying:
[Image: Bottom-POSCAP-vs-MLCC.jpg]
Quote: Below the BGA we see the six NECESSARY capacitors for filtering high frequencies on the voltage rails, i.e. NVVDD and MSVDD. Apart from the fact that there is still enough high-frequency “garbage” from the voltage converters, it is mainly the so-called GPU load including all jumps caused by boost, which leads to very broadband frequency mixtures, which become more extreme the higher the boost clock goes. The BoM and the drawing from June leave it open whether large-area POSCAPs (Conductive Polymer Tantalum Solid Capacitors) are used (marked in red), or rather the somewhat more expensive MLCCs (Multilayer Ceramic Chip Capacitor). The latter are smaller and have to be grouped for a higher capacity.

According to the list and specifications of Nvidia, both are possible. In terms of quality, however, good MLCCs are better able to filter the very high frequency components in particular. In the end, this is simple practical knowledge, which only often enough collides with the world view of a financial controller. If one searches the forums, it seems that the Zotac Trinity is particularly affected when it comes to instabilities starting at certain boost clock rates from around 2010 MHz. A feat, because Zotac is relying on a total of six cheaper POSCAPs.
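Igor's point about instabilities appearing "from around 2010 MHz" comes down to transient response: when the boost clock jumps, the load current steps sharply, and the rail droops by roughly V = L·di/dt across the loop inductance the capacitors must absorb. The numbers below are invented round figures, not measurements from any card; they just show how quickly an aggressive boost transient eats into a ~1 V GPU rail's tolerance:

```python
# Hypothetical figures to illustrate the di/dt problem - not measured values.
di = 50.0        # assumed load current step in amps when boost kicks in
dt = 100e-9      # assumed transient duration: 100 ns
v_rail = 1.0     # GPU core rails run around 1 V
tolerance = 0.05 # assume +/-5% rail tolerance before instability

for esl_nh in (1.0, 0.3, 0.05):  # effective loop inductance in nanohenries
    droop = esl_nh * 1e-9 * di / dt  # V = L * di/dt
    verdict = "OK" if droop < v_rail * tolerance else "out of spec"
    print(f"{esl_nh:5.2f} nH -> droop {droop*1000:6.1f} mV  ({verdict})")
```

The trend, not the exact numbers, is the message: every fraction of a nanohenry shaved off the filtering network matters at these clock rates, which is why the capacitor choice directly under the BGA gets scrutinized.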

And here is an engineer who details Ampere's POSCAP/MLCC counts:
Quote:So, as an Electronics Engineer and PCB Designer I feel I have to react here.

The point that Igor makes about improper power design causing instability is a very plausible one, especially with first production runs, where it indeed could be the case that they did not have the time/equipment/drivers etc. to do proper design verification.

However, concluding from this that POSCAP = bad and MLCC = good is way too harsh, and a conclusion you cannot make.

Both POSCAPs (or any other solid polymer caps) and MLCCs have their own characteristics and use cases.

Some (not all) are ('+' = pos, '-' = neg):

MLCC:

+ cheap

+ small

+ high voltage rating in small package

+ high current rating

+ high temperature rating

+ high capacitance in small package

+ good at high frequencies

- prone to cracking

- prone to piezo effect

- bad temperature characteristics

- DC bias (capacitance changes a lot under different voltages)

POSCAP:

- more expensive

- bigger

- lower voltage rating

+ high current rating

+ high temperature rating

- less good at high frequencies

+ mechanically very strong (no MLCC cracking)

+ not prone to piezo effect

+ very stable over temperature

+ no DC bias (capacitance very stable at different voltages)

As you can see, both have their strengths and weaknesses, and one is not particularly better or worse than the other. It all depends.

In this case, most of these 3080 and 3090 boards may use the same GPU (with its requirements) but they also have very different power circuits driving the chips on the cards.

Each power solution has its own characteristics and behavior and thus its own requirements in terms of capacitors used.

Thus, you cannot simply say: "I want the card with only MLCCs, because that is a good design."

It is far more likely they just could not or would not spend enough time and/or resources to properly verify their designs and thus were not able to make proper adjustments to their initial component choices.

This will very likely work itself out in time. For now, just buy the card that you like and, if it fails, simply claim warranty. Let them fix the problem, and don't draw too many conclusions based on incomplete information and (educated) guesswork.
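The engineer's point about grouping MLCCs for capacity, and Igor's point about filtering "very high frequency components," can both be sketched numerically with the standard series-RLC model of a real capacitor: |Z| = sqrt(ESR² + (2πf·ESL − 1/(2πf·C))²). The part values below are purely hypothetical (not from any datasheet or from these boards); they only illustrate why a bank of paralleled MLCCs, with its much lower combined ESL, presents lower impedance at high frequencies than one large polymer cap:

```python
import math

def cap_impedance(f_hz, c_farad, esr_ohm, esl_henry):
    """|Z| of a capacitor's series RLC model at frequency f."""
    w = 2 * math.pi * f_hz
    reactance = w * esl_henry - 1.0 / (w * c_farad)
    return math.sqrt(esr_ohm ** 2 + reactance ** 2)

def parallel(n, c, esr, esl):
    """n identical caps in parallel: capacitance adds, ESR and ESL divide by n."""
    return (n * c, esr / n, esl / n)

# Hypothetical part values for illustration only.
poscap = (470e-6, 0.005, 1e-9)                   # one 470 uF polymer cap
mlcc_bank = parallel(10, 47e-6, 0.002, 0.3e-9)   # ten 47 uF MLCCs, same total C

for f in (1e5, 1e6, 1e7, 1e8):                   # 100 kHz .. 100 MHz
    zp = cap_impedance(f, *poscap)
    zm = cap_impedance(f, *mlcc_bank)
    print(f"{f:>9.0e} Hz  POSCAP {zp*1e3:8.3f} mOhm   MLCC bank {zm*1e3:8.3f} mOhm")
```

With these made-up values the two options hold the same total capacitance, yet above the self-resonant frequency the impedance is dominated by the ωL term, where the MLCC bank wins - which is consistent with the hypothesis that the high boost clocks (fast load transients) are what expose the difference.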

No one really knows outside of Nvidia. But the hypothesis presented seems quite valid. The FE is mostly immune from these issues - but the AIBs appear to have been rushed along with everyone else including us reviewers.

And I think her reasoning is OK. Buy it if you want it and RMA it if it fails - insist that they pay shipping both ways. It will cost the AIBs money, but it is only fair to the consumers.

And EVGA - Nvidia's closest partner and the best company to buy a GeForce from - has addressed the issue:
Quote:Recently there has been some discussion about the EVGA GeForce RTX 3080 series.

During our mass production QC testing we discovered a full 6 POSCAPs solution cannot pass the real world applications testing. It took almost a week of R&D effort to find the cause and reduce the POSCAPs to 4 and add 20 MLCC caps prior to shipping production boards, this is why the EVGA GeForce RTX 3080 FTW3 series was delayed at launch. There were no 6 POSCAP production EVGA GeForce RTX 3080 FTW3 boards shipped.

But, due to the time crunch, some of the reviewers were sent a pre-production version with 6 POSCAPs; we are working with those reviewers directly to replace their boards with production versions.
The EVGA GeForce RTX 3080 XC3 series with its 5 POSCAP + 10 MLCC solution matches the XC3 spec without issues.

Also note that we have updated the product pictures at EVGA.com to reflect the production components that shipped to gamers and enthusiasts since day 1 of product launch.
Once you receive the card you can compare for yourself, EVGA stands behind its products!
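For a rough sense of scale of the three layouts EVGA describes: parallel capacitances simply add, though the real point of swapping POSCAPs for MLCC groups is high-frequency filtering rather than bulk capacitance. The per-part values below are hypothetical (EVGA's statement gives only the counts), so treat the totals as illustrative arithmetic, not board specs:

```python
# Hypothetical per-part values in microfarads - EVGA's quote gives counts, not values.
POSCAP_UF = 330.0
MLCC_UF = 47.0

def bulk_capacitance_uf(n_poscap, n_mlcc):
    """Total capacitance of a parallel mix: parallel caps simply sum."""
    return n_poscap * POSCAP_UF + n_mlcc * MLCC_UF

configs = {
    "pre-production FTW3 (6 POSCAP)":     (6, 0),
    "shipping FTW3 (4 POSCAP + 20 MLCC)": (4, 20),
    "XC3 (5 POSCAP + 10 MLCC)":           (5, 10),
}
for name, (p, m) in configs.items():
    print(f"{name}: {bulk_capacitance_uf(p, m):.0f} uF total (illustrative)")
```

Even with invented values, the takeaway matches EVGA's fix: trading two POSCAPs for twenty MLCCs keeps the bulk capacitance in the same ballpark while improving the high-frequency behavior under the BGA.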

Seriously, at this point, I'd buy EVGA if I was looking for a 3080. They will do an advanced RMA ... and perhaps ASUS also, because they also appear to have taken their time with their board production, unlike some of the lesser companies known to cut corners (cough, Zotac .. sneeze, Gigabyte).

I'm sorry but Jay's video isn't worth 2 cents of my time. I read faster, and I comprehend text way better .. and I don't want drama. He has access to the same sources that I do, and I understand it just as well as he probably does. I just don't have the time to make a post on BTR about it - that's what this forum is for.

I am working on my follow-up to the 3080/3090 launch - Turing vs. Ampere vs. Pascal in VR - and I also need to measure vRAM usage with SuperSampling. I think it will be a far better use of my time than beating this mismatched-capacitor horse as all the other tech sites are doing. I don't follow. I want BTR to lead.
