Nvidia's Dumbest Decision
Samsung 8nm is a bad deal for everyone.
♥ Check out https://adoredtv.com for more tech!
♥ Subscribe To AdoredTV – http://bit.ly/1J7020P
► Support AdoredTV through Patreon https://www.patreon.com/adoredtv ◄
Buy Games on the Humble Store! –
►https://www.humblebundle.com/store?partner=adoredtv ◄
Bitcoin Address – 1HuL9vN6Sgk4LqAS1AS6GexJoKNgoXFLEX
Ethereum Address – 0xB3535135b69EeE166fEc5021De725502911D9fd2
♥ Buy PC Parts from Amazon below.
♥ NEW USA Store! – https://www.amazon.com/shop/adoredtv
♥ Canada – http://amzn.to/2ppgYsX
♥ UK – http://amzn.to/2fUdvU7
♥ Germany – http://amzn.to/2p1lX6r
♥ France – http://amzn.to/2oUAK2Z
♥ Italy – http://amzn.to/2p37Uui
♥ Spain – http://amzn.to/2p3oIBm
♥ Australia – https://amzn.to/2uRTYb7
♥ India – https://amzn.to/2RgoWmj
♥ Want to help with Video Titles and Subtitles?
http://www.youtube.com/timedtext_cs_panel?tab=2&c=UCHXbDmbswY3xNOmzr5O3zgA
1. Lower IPC per CUDA core (compared to Pascal & Turing).
2. Lower frequency tolerance (very sensitive to power filtering).
3. The 3080 has higher performance with higher power consumption (vs. the 2080 Ti).
The 3080 is not that impressive.
The 3070 has 2080 Ti performance, albeit at lower power consumption. Not impressive, but it has higher efficiency than the RTX 3080 (higher performance per watt); see the rough perf-per-watt arithmetic sketched below.
The 3070 is the one I would choose, if I were buying a new Nvidia card (I think not).
Feel like sticking to my 1070 for another generation.
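To put rough numbers on that perf-per-watt point, here is a minimal sketch in Python. The board-power figures are Nvidia's official ones (250 W, 220 W, 320 W); the relative 4K performance values (2080 Ti = 1.0, 3070 ≈ 1.0, 3080 ≈ 1.3) are only ballpark launch-review assumptions, not measurements.

```python
# Rough perf-per-watt comparison for the cards discussed above.
# Board powers are Nvidia's official figures; the relative 4K performance
# numbers are approximate launch-review ballparks, not measured data.
cards = {
    # name: (relative 4K performance vs 2080 Ti, board power in watts)
    "RTX 2080 Ti": (1.00, 250),
    "RTX 3070":    (1.00, 220),  # ~2080 Ti performance at lower power
    "RTX 3080":    (1.30, 320),  # faster, but also much hungrier
}

baseline = cards["RTX 2080 Ti"][0] / cards["RTX 2080 Ti"][1]
for name, (perf, watts) in cards.items():
    gain = (perf / watts) / baseline - 1
    print(f"{name}: {gain:+.0%} perf/W vs 2080 Ti")

# Under these assumptions the 3070 gains roughly 14% perf/W over the
# 2080 Ti, while the 3080 gains only a couple of percent.
```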
What language does this guy speak? …….. Anyone?
Ok I had not thought about all the negatives of the 30 series.
So Nvidia are going back to TSMC 7nm for the 30 series in 2021, bahahahah!
It's always been 8nm. They can't switch processes 15 months before launch. TSMC 7nm was a marketing ploy.
Thank you for the insightful information. I wish I had found your channel sooner. Nvidia knowingly chose the cheaper node, knowing full well it would negatively impact performance, giving us consumers a suboptimal product just to SPITE TSMC. Nvidia force fed this shit to everyone and still spun it into a win. Yes, Ampere is faster overall than last gen, but it could have been so much better and much more efficient with TSMC. At the very least we would have a product with much better performance per dollar and per watt than we have now.
Could it be that Apple simply did not leave enough capacity at TSMC to supply Nvidia? Could it be that it wasn't the 26 dollars of savings that Nvidia was after, but simply to ensure that its demand would be met?
6 months later and it seems it worked out better than 7nm because TSMC can’t make enough wafers
This definitely is an example of penny-wise-pound-foolish!
AMD deserves this so far, if the launches go well. It looks like Nvidia and Intel got too greedy and a little sleepy after gorging themselves on everyone's wallets for so long.
Really hope team red don’t fuck up the drivers, I’ve heard they’ve made it a focus but we’ll have to see
With hindsight it looks like it was a great decision. If NVIDIA hadn't been on Samsung, the TSMC wafers would have needed to be split further between AMD and NVIDIA, making this GPU shortage even worse. Availability, more than power usage or performance, has been the overriding factor for this generation so far.
Yes, a lot of people fell for that CRAP! LOL
My dude the full chip is the A6000
Welcome back..
This decision is starting to look better and better for each day. The 3080 performs better than the 6800 XT even with this worse manufacturing process, and Nvidia saves a pretty penny while still being able to keep margins up.
At this point in time, I don't understand why Nvidia and AMD don't have their own chip fabs.
Samsung is cheaper. TSMC, even with their capacity, cannot make chips for every single design house.
Looking at this decision in retrospect, it was a good idea (from a business standpoint). I'm sorry Jim, but you got this prediction wrong, mostly because you could not foresee the semiconductor market situation. Going for the 8nm node allowed Nvidia to get decent supply from a node they do not compete for with other manufacturers (like AMD, for example). The benefits of the TSMC 7nm node are apparent, but Nvidia in the current market can and is competing very well, and thanks to using Samsung 8nm I bet they got many more wafers than they could have with TSMC 7nm, allowing them to sell more GPUs than they could have by going solely with TSMC. For AMD, the 7nm node became a trap, as they have so many products on the same node (CPUs, GPUs, APUs for consoles), so something has to give, and GPUs take the hit, as they are less lucrative than CPUs (especially the server SKUs) and are not bound under supply agreements with Microsoft/Sony. For consumers, yes, we are stuck with less efficient, power-hungry GPUs. Desktop users can take the hit, as we can use bigger coolers (3-slot) and water cooling to dissipate even 300+ watts of heat, but laptop users are the ones that took the greatest hit, as cooling is limited even in the most impressive and bulky designs.
Was it really so awful? With hindsight it looks like Nvidia got to market before RDNA2 came out, and not competing for 7nm wafers at TSMC has actually helped them maintain supplies to miners and lucrative B2B. They've managed to inflate the market yet again after release while claiming to want to lower costs to customers. Many buyers on tech channels and content producers are simply ignoring alternative AMD cards despite them being in stock and performing well, having been conditioned to think they need RTX despite few games supporting it 30 months after RTX 20 launched.
Finally the 3090 has sold, hardly anyone has seen a 3080, and the Ti is being phased in to maximise their take before miners flood the used market.
how do you rebrand 16nm to 12nm?
I love how people talk about the 3090 like it's a good card… realistically it has the worst CUDA core efficiency I've ever seen. Not that it's a bad card; it's just never been a card I consider impressive in terms of architecture.
I have 2x Sapphire R9 Furys in CrossFire. I've had them pulling 400W each… in winter I have no heating on, windows wide open, and I'm still sat in my underwear sweating…
Why is this only now being shown under my recommended videos? This should have been watched by everyone within the first week of launch. It really explains why the stock was practically non-existent.
intel and nvidia in a laptop…yikes
In the end, it was a great decision.
Strange, I progressed from the initial hate to absolute, utter hilarity. Ampere cards are about as efficient as a chocolate fireguard. Nvidia basically polished a turd, knowing it'd sell out.
All those CUDA core figures for the Ampere 8nm graphics cards are kinda 'fake'; they all have some sort of 'hyper-threading'. It's impossible to jump from 4.6K to 10.5K from one generation to the next. If you don't believe it, look at the 826 mm² TSMC 7nm chip: a smaller manufacturing process, a larger die and… fewer CUDA cores, with 5.8K? Nope, nope, nope, there's something fishy going on here.
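For what it's worth, most of that headline jump comes from how Nvidia counts cores: a GA10x SM exposes 128 FP32 lanes (one of its two 64-wide datapaths can run either FP32 or INT32), versus 64 FP32 lanes per Turing SM. A minimal sketch of the arithmetic, using commonly cited SM counts as assumptions:

```python
# Where the headline "CUDA core" counts come from: SMs x FP32 lanes per SM.
# Turing SMs expose 64 FP32 lanes; GA10x SMs expose 128 (one of the two
# 64-wide datapaths can run either FP32 or INT32). SM counts below are the
# commonly cited figures and should be treated as assumptions.

def cuda_cores(sm_count: int, fp32_lanes_per_sm: int) -> int:
    return sm_count * fp32_lanes_per_sm

print(cuda_cores(72, 64))    # full TU102 (Turing):  4608  -> the "4.6K"
print(cuda_cores(82, 128))   # RTX 3090 (GA102):    10496  -> the "10.5K"
```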
Welp, it looks like AMD's no more efficient. And judging by AMD's power-hungry and shitty laptop GPUs, they ain't pulling much here.
No idea why no one saw this already??? AMD is handling both console releases with their GPU/CPU hardware; was that not insight enough for Nvidia?
Thanks for explaining the manufacturing processes. It looks like Ampere is just a rebranded Turing. We’ll be waiting until 2021 or 2022 for true next-gen cards.
And it seems the Samsung plant can only produce 5 units a year
Don’t have spanish subtitles:(
No joke, I have an MSI blower 1080 Ti and it's OCed, and it heats my room up to the point I sweat; in the winter I don't need to turn the heat on.
How can you have TSMC produce your chips when their capacity is booked and tapped out for the next 3 years by many companies? How is Nvidia dumb? They just missed the train, that's all.
Wasn't it announced not too long ago that in 2021 they will be using TSMC's 7nm process for their Ampere cards? I wonder how much of a difference that will make.
Have you seen linus’ video? He says that the benchmarks check out.
I commented "this reads like Nvidia asking us to not give them a hard time for doing another Fermi" on their official YouTube upload about engineering the Ampere cooling. That was my snap first impression.
A Nigerian accent speaking English is better than this…
Ah man I just watched this again.
Jim, you are the very best, everyone thinks so. I love your voice and the way you use it. And your brain, of course. I find myself drawn into what seems on the surface rather dry material, only to find myself a half hour later engrossed.
What a shame you are not around to comment in this most interesting of times.
Your deep dives are sorely missed.
I hope you make an appearance occasionally, I really do.
Kind regards
I have a question, if you're still reading comments here. I'm wondering if chopping up the memory bus on the 3080 is mostly due to yields/redundancy, or if Nvidia could have technically just left it at 384-bit with 12 GB of GDDR6 for a net bandwidth gain, or GDDR6X for even more, had they been willing to eat the cost. My question boils down to how often the memory bus requires redundancy/fusing off compared to the cores, where we most often see segmentation on the die. There is also the matter of power, but I believe GDDR6X is responsible for a good chunk of it.
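For context on the bandwidth side of that question, peak bandwidth is just bus width times per-pin data rate. A minimal sketch, assuming the 3080's launch spec (320-bit, 19 Gbps GDDR6X) against a hypothetical 384-bit variant at the same speed:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8.
# 320-bit / 19 Gbps is the RTX 3080's launch spec; the 384-bit case is the
# hypothetical wider-bus variant discussed in the comment above.

def mem_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(mem_bandwidth_gb_s(320, 19.0))  # 760.0 GB/s (RTX 3080 as shipped)
print(mem_bandwidth_gb_s(384, 19.0))  # 912.0 GB/s (hypothetical 384-bit / 12 GB)
```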
Just think: if ncrapia had kept up with driver tweaks for the 780 Ti in new games, the $650 piece of shit might still be a decent 1080p card today, like the much cheaper R9 290/390/X.
So with rumors now being that Nvidia is leaning more into TSMC again for 2021 – is there actually any chance that we might see a TSMC 7nm version of the 3080 before the next full generation? A 3080 Super/Ti? Or is it simply not feasible to make that switch in a reasonable amount of time for that node transition?
Maybe I am being naive, but I would think that since they already fab GA100 on TSMC 7nm, it shouldn't require a full redesign cycle. I mean, the GA100 and the GA102 are both Ampere, even if they are optimized pretty differently, so I imagine there would still be a lot of overlap in the design of the internal modules.
I hope that could happen, because a 3080 "Ti" on TSMC 7nm with 20GB could be very impressive and strongly mitigate the two main weak points (VRAM amount + power draw).
Nvidia is known to go to great lengths to not lose the performance "crown", so I wonder if they may have planned ahead for this for a while already in case it was needed to stay ahead of AMD. 8nm Samsung might have made sense at the time, but Nvidia surely must have been aware that the same architecture on TSMC 7nm would be a good deal better, so it doesn't seem entirely unreasonable that they may have had some design work for TSMC 7nm ready as an option, or a half-gen refresh (or simply the next gen of Ampere…).
In many comparisons Ampere is less than 30% more efficient than Pascal. In the absolute best-case scenario, Ampere is 50-60% more efficient. At this efficiency level, buying a 3080 is almost like buying a 1090 Ti in 2016. I am not interested in using my computer to heat my room in a place that regularly reaches over 100°F during the summer.
Why did you calculate the die count based on 300mm wafers rather than 400mm?
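For anyone wondering how such die counts are usually estimated: 300 mm is the largest wafer size in volume production (450 mm was proposed but never adopted), and a common first-order formula subtracts an edge-loss term from the plain area ratio. A rough sketch, assuming GA102's commonly cited ~628 mm² die area:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus an edge-loss term.
    Ignores scribe lines, edge exclusion and defect-driven yield."""
    radius = wafer_diameter_mm / 2
    return round(math.pi * radius**2 / die_area_mm2
                 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Assumed ~628 mm^2 for GA102 (the 3080/3090 die) on a standard 300 mm wafer.
print(gross_dies_per_wafer(300, 628))  # 86 gross die candidates, before yield
```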
Better not be an amdelusional in sheep's clothing.
I look back 6 years at the GTX 970, the lowest GPU of its class, and it still holds up to the 1050, 1550, 1650. They seem to have gotten stuck on 1080p; no advancements came for the next 3 years unless you wanted to spend $2000 on a GPU, just backwards steps to make the new cards cheaper and seem more valuable. I mean, 1650 vs GTX 970, a 6-year-old card vs a 1-year-old card, which are you gonna pick, you have 3 seconds to deci.. yeah, you probably went with the 1650, didn't ya, silly… you lose 20% performance. Ti and Supers everywhere and they are all worse than a 6-year-old bottom-of-the-line GPU. What they did was crush the market for your old cards, so if you were selling your old 970 to get a 1080 Ti, an actual upgrade, you were gonna pay what, $1300, and your old card is now worthless.. Yayyy
Ti used to mean top of the line, a fucking Titan to be feared; now the first 3060 is a Ti, so the bottom-of-the-line first-gen release of the 3000 series cards is the Ti. You don't see yourselves doing better any time soon in that price/performance bracket. Is that all it refers to nowadays, a marketing gimmick trying to sell you something worth less for more?
It’s nice of them to give AMD the opportunity to make a little progress on their lead.
Look at the size of that Nvidia card in that PC case at 21:08… it's massive… lol.
PDK is the Porsche dual-clutch transmission; they own the term PDK.