NVIDIA GeForce RTX 3060 Ti Founders Edition Graphics Card Review

Keeping alive its tradition of launching a new graphics architecture every two years, NVIDIA this year introduced its Ampere GPU, built on the foundation set by Turing. Billed as the company's biggest generational leap, the Ampere GPUs outpace their predecessors across the board, this time in the form of the GeForce RTX 3060 Ti Founders Edition.

There is a lot to cover in this review, both about the Ampere architecture itself and about the new RTX lineup it powers. The Ampere lineup offers faster shader performance, faster ray tracing performance, and faster AI performance. Built on a brand new process node and featuring an architecture designed from the ground up, Ampere is a killer product with plenty of numbers to dig into.

 

The fundamental goal of Ampere was to take everything NVIDIA learned with its Turing architecture and not only refine it but use its DNA to form a product in a completely new performance category. NVIDIA made some tall claims when it introduced the Ampere lineup, and in this review we will find out whether it ticked all of those boxes, looking at what makes Ampere what it is and how it performs against its predecessors.

Today, we are taking a look at the NVIDIA GeForce RTX 3060 Ti Founders Edition graphics card. The card was provided by NVIDIA for the sole purpose of this review, and we will be examining its technology, design, and performance metrics in detail.

NVIDIA GeForce RTX 30 Series Gaming Graphics Cards - The Biggest GPU Performance Leap in Recent History

Turing wasn't just any graphics core; it was the graphics core that became the foundation of future GPUs. That future is being realized now, with next-generation consoles leaning heavily on ray tracing and AI-assisted super-sampling techniques. NVIDIA had a head start with Turing, and its Ampere generation only builds on that lead.

The Ampere GPU does the traditional things we would expect from a GPU, but at the same time it pushes into less traditional territory. To sum up some of its key features:

  • New Streaming Multiprocessor (SM)
  • New 3rd Generation Tensor Cores
  • New Real-Time Ray Tracing Acceleration
  • New Shading Enhancements
  • New Deep Learning Features For Graphics & Inference
  • New GDDR6X High-Performance Memory Subsystem
  • New 3rd Generation NVLINK Interconnect
  • New HDMI 2.1 Display Engine & Next-Gen NVENC/NVDEC

The technologies mentioned above are some of the main building blocks of the Ampere GPU, but there's more within the graphics core itself, which we will cover in detail, so let's get started.

Let's take a trip down the road to Ampere. In 2016, NVIDIA announced its Pascal GPUs, which would soon feature in a top-to-bottom GeForce 10 series lineup. Coming off Maxwell, NVIDIA had gained a lot of experience in the efficiency department, a focus it had maintained since its Kepler GPUs. Two years ago, rather than offering another standard leap in rasterization performance, NVIDIA took a different approach and introduced two key technologies in its Turing line of consumer GPUs: AI-assisted acceleration with Tensor Cores, and hardware-level acceleration for ray tracing with brand new RT cores.

With Ampere and its brand new Samsung 8nm fabrication process, NVIDIA is adding even more to its gaming graphics lineup. Starting with the most significant part of the Ampere GPU architecture, the Ampere SM, we are looking at an entirely new graphics core. The Ampere SM features next-gen FP32, INT32, Tensor, and RT cores.

Coming to the new execution units, or cores, Ampere has both INT32 and FP32 units that can execute concurrently. This architectural design allows Ampere to execute floating-point and non-floating-point operations in parallel, which allows for higher throughput in standard floating-point operations. According to NVIDIA, the updated Ampere graphics core delivers up to 1.7x faster traditional rasterization performance and up to 2x faster ray-tracing performance compared to the Turing GPUs.

The Ampere SM is partitioned into four processing blocks, each with 32 FP32 cores, 16 INT32 cores, one Tensor core, one warp scheduler, and one dispatch unit. This is made possible by an updated datapath design: one datapath offers 16 FP32 execution units while the other offers either 16 FP32 or 16 INT32 execution units. That adds up to 128 FP32 cores, 64 INT32 cores, 4 Tensor cores, 4 warp schedulers, and 4 dispatch units in a single Ampere SM. Each block also includes a new L0 instruction cache and a 64 KB register file, for a total of 256 KB of register file per SM.

 One of the key design goals for the Ampere 30-series SM was to achieve twice the throughput for FP32 operations compared to the Turing SM. To accomplish this goal, the Ampere SM includes new datapath designs for FP32 and INT32 operations. One datapath in each partition consists of 16 FP32 CUDA Cores capable of executing 16 FP32 operations per clock. Another datapath consists of both 16 FP32 CUDA Cores and 16 INT32 Cores. As a result of this new design, each Ampere SM partition is capable of executing either 32 FP32 operations per clock, or 16 FP32 and 16 INT32 operations per clock. All four SM partitions combined can execute 128 FP32 operations per clock, which is double the FP32 rate of the Turing SM, or 64 FP32 and 64 INT32 operations per clock.
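To make the per-partition math concrete, here is a small Python sketch of our own (not NVIDIA code) that models the issue rates just described:

```python
# Toy model of the GA10x SM datapaths described above (illustrative only).
# Each of the four partitions can issue either 32 FP32 ops per clock
# (both datapaths running FP32) or 16 FP32 + 16 INT32 ops per clock.

PARTITIONS_PER_SM = 4

def sm_ops_per_clock(int32_active: bool) -> dict:
    """Return per-SM FP32/INT32 operations issued per clock."""
    if int32_active:
        fp32, int32 = 16, 16   # mixed mode: one datapath FP32, one INT32
    else:
        fp32, int32 = 32, 0    # pure FP32 mode: both datapaths run FP32
    return {"FP32": fp32 * PARTITIONS_PER_SM, "INT32": int32 * PARTITIONS_PER_SM}

print(sm_ops_per_clock(int32_active=False))  # {'FP32': 128, 'INT32': 0}
print(sm_ops_per_clock(int32_active=True))   # {'FP32': 64, 'INT32': 64}
```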

Doubling the processing speed for FP32 improves performance for a number of common graphics and compute operations and algorithms. Modern shader workloads typically have a mixture of FP32 arithmetic instructions such as FFMA, floating point additions (FADD), or floating point multiplications (FMUL), combined with simpler instructions such as integer adds for addressing and fetching data, floating point compare, or min/max for processing results, etc. Performance gains will vary at the shader and application level depending on the mix of instructions. Ray tracing denoising shaders are good examples that might benefit greatly from doubling FP32 throughput.

Doubling math throughput required doubling the data paths supporting it, which is why the Ampere SM also doubled the shared memory and L1 cache performance for the SM. (128 bytes/clock per Ampere SM versus 64 bytes/clock in Turing). Total L1 bandwidth for GeForce RTX 3080 is 219 GB/sec versus 116 GB/sec for GeForce RTX 2080 Super.

Like prior NVIDIA GPUs, Ampere is composed of Graphics Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), Raster Operators (ROPS), and memory controllers.

The GPC is the dominant high-level hardware block with all of the key graphics processing units residing inside the GPC. Each GPC includes a dedicated Raster Engine, and now also includes two ROP partitions (each partition containing eight ROP units), which is a new feature for NVIDIA Ampere Architecture GA10x GPUs. More details on the NVIDIA Ampere architecture can be found in NVIDIA’s Ampere Architecture White Paper, which will be published in the coming days.

The four processing blocks share a combined 128 KB L1 data cache/shared memory. Traditional graphics workloads partition the 128 KB L1/shared memory as 64 KB of dedicated graphics shader RAM and 64 KB for texture cache and register file spill area. In compute mode, the GA10x SM will support the following configurations:

  • 128 KB L1 + 0 KB Shared Memory
  • 120 KB L1 + 8 KB Shared Memory
  • 112 KB L1 + 16 KB Shared Memory
  • 96 KB L1 + 32 KB Shared Memory
  • 64 KB L1 + 64 KB Shared Memory
  • 28 KB L1 + 100 KB Shared Memory

Ampere also ties its ROPs to the GPC, housing a total of 16 ROP units per GPC. The full GA102 GPU features 112 ROPs, while the GeForce RTX 3080 comes with a total of 96 ROPs.

The block diagram of the NVIDIA Ampere SM Gaming GPUs.

The entire SM works in harmony by using different blocks to deliver high performance and better texture caching, enabling up to twice as much CUDA core performance when compared to the previous generation.

A block diagram of the GA102 GPU featured on the NVIDIA GeForce RTX 3080 graphics card.

Many of these Ampere SMs combine to form the Ampere GPU. Each TPC inside the Ampere GPU houses 2 Ampere SMs, which are linked to the raster engine. A total of 6 TPCs, or 12 Ampere SMs, are arranged inside each GPC, or Graphics Processing Cluster. The top GA102 configuration comes with 7 GPCs for a total of 42 TPCs and 84 SMs, connected to 10.5 MB of L1 and 6 MB of L2 cache, the ROPs, TMUs, memory controllers, and the NVLINK high-speed I/O hub. All of this combines to form the massive Ampere GA102 GPU. The following are some peak throughput figures for the top Ampere graphics cards; a quick arithmetic check of these numbers follows the lists.
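As a sanity check of the hierarchy above, a short sketch of our own (assuming the full GA102 configuration of 7 GPCs, 6 TPCs per GPC, and 2 SMs per TPC) reproduces the unit totals quoted later in this article:

```python
# Rough unit-count model for a full GA102 (illustrative; assumes 7 GPCs,
# 6 TPCs per GPC, 2 SMs per TPC, 128 FP32 cores per SM, 16 ROPs per GPC).
GPCS, TPCS_PER_GPC, SMS_PER_TPC = 7, 6, 2
CORES_PER_SM, ROPS_PER_GPC = 128, 16

sms = GPCS * TPCS_PER_GPC * SMS_PER_TPC     # 84 SMs
cuda_cores = sms * CORES_PER_SM             # 10752 FP32 cores
rops = GPCS * ROPS_PER_GPC                  # 112 ROPs

print(sms, cuda_cores, rops)                # 84 10752 112
```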

NVIDIA GeForce RTX 3090

  • 35.58 TFLOPS of peak single-precision (FP32) performance
  • 71.16 TFLOPS of peak half-precision (FP16) performance
  • 17.79 TIPS concurrent with FP, through independent integer execution units
  • 285 Tensor TFLOPS
  • 69 RT-TFLOPs

NVIDIA GeForce RTX 3080

  • 30 TFLOPS of peak single-precision (FP32) performance
  • 60 TFLOPS of peak half-precision (FP16) performance
  • 15 TIPS concurrent with FP, through independent integer execution units
  • 238 Tensor TFLOPS
  • 58 RT-TFLOPs

NVIDIA GeForce RTX 3070

  • 20.3 TFLOPS of peak single-precision (FP32) performance
  • 40.6 TFLOPS of peak half-precision (FP16) performance
  • 10.1 TIPS concurrent with FP, through independent integer execution units
  • 162.6 Tensor TFLOPS
  • 39.7 RT-TFLOPs
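The FP32 figures above follow directly from CUDA core count and boost clock, with each FMA counted as two floating-point operations. A quick back-of-the-envelope check in Python, using the boost clocks from the specification table below:

```python
# Peak FP32 TFLOPS = CUDA cores x 2 FLOPs per FMA x boost clock (GHz) / 1000.
# Core counts and boost clocks are the ones quoted elsewhere in this article.
def peak_fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000

print(round(peak_fp32_tflops(10496, 1.695), 2))  # ~35.58 (RTX 3090)
print(round(peak_fp32_tflops(8704, 1.710), 2))   # ~29.77 (RTX 3080)
print(round(peak_fp32_tflops(5888, 1.725), 2))   # ~20.31 (RTX 3070)
```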

In terms of shading performance, which is the direct result of the enhanced core design and GPU architecture revamp, the Ampere GPU offers an uplift of up to 70% in per-core performance compared to Turing GPUs.

It should be pointed out that these are per-core gains at the same clock speeds, without the benefits of the other technologies Ampere brings, which further increase performance in a wide variety of games.

NVIDIA Ampere "GeForce RTX 30" GPUs Full Breakdown:

| Graphics Card | GeForce RTX 2070 SUPER | GeForce RTX 3070 | GeForce RTX 2080 | GeForce RTX 3080 | Titan RTX | GeForce RTX 3090 |
|---|---|---|---|---|---|---|
| GPU Codename | TU106 | GA104 | TU104 | GA102 | TU102 | GA102 |
| GPU Architecture | NVIDIA Turing | NVIDIA Ampere | NVIDIA Turing | NVIDIA Ampere | NVIDIA Turing | NVIDIA Ampere |
| GPCs | 5 or 6 | 6 | 6 | 6 | 6 | 7 |
| TPCs | 20 | 23 | 23 | 34 | 36 | 41 |
| SMs | 40 | 46 | 46 | 68 | 72 | 82 |
| CUDA Cores / SM | 64 | 128 | 64 | 128 | 64 | 128 |
| CUDA Cores / GPU | 2560 | 5888 | 2944 | 8704 | 4608 | 10496 |
| Tensor Cores / SM | 8 (2nd Gen) | 4 (3rd Gen) | 8 (2nd Gen) | 4 (3rd Gen) | 8 (2nd Gen) | 4 (3rd Gen) |
| Tensor Cores / GPU | 320 (2nd Gen) | 184 (3rd Gen) | 368 (2nd Gen) | 272 (3rd Gen) | 576 (2nd Gen) | 328 (3rd Gen) |
| RT Cores | 40 (1st Gen) | 46 (2nd Gen) | 46 (1st Gen) | 68 (2nd Gen) | 72 (1st Gen) | 82 (2nd Gen) |
| GPU Boost Clock (MHz) | 1770 | 1725 | 1800 | 1710 | 1770 | 1695 |
| Peak FP32 TFLOPS (non-Tensor) | 9.1 | 20.3 | 10.6 | 29.8 | 16.3 | 35.6 |
| Peak FP16 TFLOPS (non-Tensor) | 18.1 | 20.3 | 21.2 | 29.8 | 32.6 | 35.6 |
| Peak BF16 TFLOPS (non-Tensor) | NA | 20.3 | NA | 29.8 | NA | 35.6 |
| Peak INT32 TOPS (non-Tensor) | 9.1 | 10.2 | 10.6 | 14.9 | 16.3 | 17.8 |
| Peak FP16 Tensor TFLOPS (FP16 Accumulate) | 72.5 | 81.3/162.6 | 84.8 | 119/238 | 130.5 | 142/284 |
| Peak FP16 Tensor TFLOPS (FP32 Accumulate) | 36.3 | 40.6/81.3 | 42.4 | 59.5/119 | 65.2 | 71/142 |
| Peak BF16 Tensor TFLOPS (FP32 Accumulate) | NA | 40.6/81.3 | NA | 59.5/119 | NA | 71/142 |
| Peak TF32 Tensor TFLOPS | NA | 20.3/40.6 | NA | 29.8/59.5 | NA | 35.6/71 |
| Peak INT8 Tensor TOPS | 145 | 162.6/325.2 | 169.6 | 238/476 | 261 | 284/568 |
| Peak INT4 Tensor TOPS | 290 | 325.2/650.4 | 339.1 | 476/952 | 522 | 568/1136 |
| Frame Buffer Memory Size & Type | 8 GB GDDR6 | 8 GB GDDR6 | 8 GB GDDR6 | 10 GB GDDR6X | 24 GB GDDR6 | 24 GB GDDR6X |
| Memory Interface | 256-bit | 256-bit | 256-bit | 320-bit | 384-bit | 384-bit |
| Memory Clock (Data Rate) | 14 Gbps | 14 Gbps | 14 Gbps | 19 Gbps | 14 Gbps | 19.5 Gbps |
| Memory Bandwidth | 448 GB/s | 448 GB/s | 448 GB/s | 760 GB/s | 672 GB/s | 936 GB/s |
| ROPs | 64 | 96 | 64 | 96 | 96 | 112 |
| Pixel Fill-rate (Gigapixels/sec) | 113.3 | 165.6 | 115.2 | 164.2 | 169.9 | 193 |
| Texture Units | 160 | 184 | 184 | 272 | 288 | 328 |
| Texel Fill-rate (Gigatexels/sec) | 283.2 | 317.4 | 331.2 | 465 | 509.8 | 566 |
| L1 Data Cache / Shared Memory | 3840 KB | 5888 KB | 4416 KB | 8704 KB | 6912 KB | 10496 KB |
| L2 Cache Size | 4096 KB | 4096 KB | 4096 KB | 5120 KB | 6144 KB | 6144 KB |
| Register File Size | 10240 KB | 11776 KB | 11776 KB | 17408 KB | 18432 KB | 20992 KB |
| TGP (Total Graphics Power) | 215W | 220W | 225W | 320W | 280W | 350W |
| Transistor Count | 13.6 Billion | 17.4 Billion | 13.6 Billion | 28.3 Billion | 18.6 Billion | 28.3 Billion |
| Die Size | 545 mm2 | 392.5 mm2 | 545 mm2 | 628.4 mm2 | 754 mm2 | 628.4 mm2 |
| Manufacturing Process | TSMC 12 nm FFN (FinFET NVIDIA) | Samsung 8 nm 8N NVIDIA Custom Process | TSMC 12 nm FFN (FinFET NVIDIA) | Samsung 8 nm 8N NVIDIA Custom Process | TSMC 12 nm FFN (FinFET NVIDIA) | Samsung 8 nm 8N NVIDIA Custom Process |
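The pixel and texel fill-rate rows in the table are likewise just unit count multiplied by boost clock. A quick check against the RTX 3080 column:

```python
# Fill rate = unit count x boost clock (GHz), giving Gigapixels/Gigatexels per second.
def fill_rate(units: int, boost_ghz: float) -> float:
    return units * boost_ghz

print(round(fill_rate(96, 1.710), 1))   # ~164.2 Gpixels/s (RTX 3080, 96 ROPs)
print(round(fill_rate(272, 1.710), 1))  # ~465.1 Gtexels/s (RTX 3080, 272 texture units)
```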

NVIDIA Ampere GPUs - GA102 & GA104 For The First Wave of Gaming Cards

NVIDIA is first introducing two brand new Ampere GPUs, the GA102 and the GA104. The GA102 GPU features on the GeForce RTX 3090 and GeForce RTX 3080 graphics cards, while the GA104 GPU features on the GeForce RTX 3070 and now the GeForce RTX 3060 Ti. The Ampere GPUs are based on a Samsung 8nm process node customized for NVIDIA, and as such the resulting dies are slightly smaller than their Turing-based predecessors while packing a denser transistor layout. There will be several variations of each GPU across the RTX 30 series lineup. The following is what the complete GA102 and GA104 GPUs have to offer.

NVIDIA Ampere GA102 GPU

The full GA102 GPU is made up of 7 graphics processing clusters with 12 SM units on each cluster. That makes up 84 SM units for a total of 10752 cores in a 28.3 billion transistor package measuring 628.4mm2.

NVIDIA Ampere GA104 GPU

The full GA104 GPU is made up of 6 graphics processing clusters with 8 SM units on each cluster. That makes up 48 SM units for a total of 6144 cores in a 17.4 billion transistor package measuring 392.5mm2.

NVIDIA has also introduced its 3rd Generation Tensor core architecture and 2nd Generation RT cores on Ampere GPUs. Tensor cores have been available since Volta, and consumers got a taste of them with the Turing GPUs. One of the key areas where Tensor cores are put to use in AAA games is DLSS. There's a whole software stack that leverages the Tensor cores, known as NVIDIA NGX. These software-based technologies help enhance graphics fidelity with features such as Deep Learning Super Sampling (DLSS), AI InPainting, AI Super Rez, RTX Voice, and AI Slow-Mo.

While its initial debut was a bit flawed, DLSS in its 2nd iteration (DLSS 2.0) has done wonders to not only improve gaming performance but also image quality. In titles such as Death Stranding and Control, games are shown to offer higher visual fidelity than at native resolution while running at much higher framerates. With Ampere, we can expect an even higher boost in terms of DLSS 2.0 (and DLSS Next-Gen) performance as the deep-learning model continues working its magic in DLSS supported titles. NVIDIA will also be adding 8K DLSS support to its Ampere GPU lineup which would be great to test out with the 24 GB RTX 3090 graphics card.

With Ampere, Tensor cores add INT8 and INT4 precision in addition to FP16 which is still fully supported. NVIDIA has been at the helm of the deep learning revolution by supporting it since its Kepler generation of graphics cards. Today, NVIDIA has some of the most powerful AI graphics accelerators and a software stack that is widely adopted by this fast-growing industry.

For its 3rd Gen Tensor cores, NVIDIA is using the same sparsity architecture it deployed on its Ampere HPC line of GPUs. While Ampere features 4 Tensor cores per SM compared to Turing's 8, they are not only based on the new 3rd Generation design but also come in a higher total count thanks to the larger SM array. Each Ampere Tensor core can execute 128 FP16 FMA operations per clock, or up to 256 with sparsity, bringing the total per SM to 512 FP16 FMA operations, or 1024 with sparsity. That's a 2x increase over the Turing GPU in inference throughput with the updated Tensor design.
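The sparsity feature relies on a 2:4 structured pattern: in every group of four weights, the two smallest are pruned to zero, which is what lets the Tensor cores skip half the math. Here is a minimal NumPy illustration of that pruning pattern (our own sketch, not NVIDIA's tooling):

```python
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Zero out the two smallest-magnitude values in every group of four
    (the 2:4 structured-sparsity pattern 3rd-gen Tensor cores accelerate)."""
    w = weights.reshape(-1, 4).copy()
    # indices of the two smallest-magnitude entries in each group of four
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.01, 0.6])
print(prune_2_of_4(w))  # half the entries become zero: [0.9, 0, 0.4, 0, -0.7, 0, 0, 0.6]
```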

2nd Gen RT Cores, RTX, and Real-Time Ray Tracing Dissected

Next up, we have the RT cores, which power real-time ray tracing. NVIDIA isn't distancing itself from traditional rasterization-based rendering; instead it is following a hybrid rendering model. The new 2nd Generation RT cores offer increased performance, with double the ray/triangle intersection testing rate of Turing's RT cores.

There's one RT core per SM, and combined they accelerate Bounding Volume Hierarchy (BVH) traversal and ray/triangle intersection testing (ray casting). The RT cores work together with advanced denoising filtering, a highly efficient BVH acceleration structure developed by NVIDIA Research, and RTX-compatible APIs to achieve real-time ray tracing on a single Ampere GPU.

RT cores traverse the BVH autonomously, and by accelerating traversal and ray/triangle intersection tests they offload the SM, allowing it to handle other vertex, pixel, and compute shading work. Functions such as BVH building and refitting are handled by the driver, while ray generation and shading are managed by the application through new types of shaders.

To better understand the function of RT cores, and what exactly they accelerate, we should first explain how ray tracing is performed on GPUs or CPUs without a dedicated hardware ray tracing engine. Essentially, BVH traversal would have to be performed by shader operations, taking thousands of instruction slots per ray cast to test against bounding box intersections in the BVH until finally hitting a triangle; the color at the point of intersection then contributes to the final pixel color (or, if no triangle is hit, the background color may be used to shade the pixel).

Ray tracing without hardware acceleration requires thousands of software instruction slots per ray to test successively smaller bounding boxes in the BVH structure until possibly hitting a triangle. It’s a computationally-intensive process making it impossible to do on GPUs in real-time without hardware-based ray tracing acceleration.

The RT Cores in Ampere can process all the BVH traversal and ray-triangle intersection testing, saving the SM from spending the thousands of instruction slots per ray, which could be an enormous amount of instructions for an entire scene. The RT Core includes two specialized units. The first unit does bounding box tests, and the second unit does ray-triangle intersection tests.
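To picture the kind of work the first RT core unit replaces, here is a deliberately naive, software-only version of the ray/bounding-box "slab" test that BVH traversal performs at every node. This is purely illustrative; the real logic is fixed-function hardware inside the RT core, and the leaf-level ray/triangle test follows a similar pattern.

```python
# Naive software version of one of the tests an RT core accelerates:
# ray vs. axis-aligned bounding box (used at every BVH node).
# Doing this in shader code costs thousands of instruction slots per ray.

def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray intersect the axis-aligned bounding box?"""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far

ray_origin = (0.0, 0.0, 0.0)
inv_dir = (1.0, 1.0, 1.0)            # reciprocal of a (1, 1, 1) ray direction
print(ray_hits_box(ray_origin, inv_dir, (1, 1, 1), (2, 2, 2)))    # True
print(ray_hits_box(ray_origin, inv_dir, (-3, 2, 2), (-2, 3, 3)))  # False
```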

The SM only has to launch a ray probe; the RT core handles the BVH traversal and ray-triangle tests and returns a hit or no hit to the SM. Also, unlike the last generation, the Ampere SM can process ray-tracing and graphics/compute workloads concurrently.

In a visual demonstration, NVIDIA showed how the RT and Tensor cores speed up ray tracing and shader workloads significantly, using a fully ray-traced frame from Wolfenstein: Youngblood as the example. The last-gen RTX 2080 SUPER takes 51ms to render the frame if it does everything on its shaders (CUDA cores). With RT cores and shaders working in tandem, the processing time drops to just 20ms, less than half. Adding the Tensor cores reduces the rendering time further to just 12ms (~83 FPS).

However, with Ampere, each processing block receives a huge performance uplift. On an RTX 3080, the same frame renders in 37ms on the shader cores alone, 11ms with the RT plus shader cores, and 6.7ms (~150 FPS) with all three core technologies working together. That's roughly half the time Turing took to render the same scene.
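Converting those quoted frame times into frame rates and speedups makes the comparison easier to read:

```python
# Frame time (ms) -> frames per second, using the figures NVIDIA quoted above.
def fps(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

turing = {"shaders only": 51, "RT cores": 20, "RT + Tensor (DLSS)": 12}
ampere = {"shaders only": 37, "RT cores": 11, "RT + Tensor (DLSS)": 6.7}

for label, ms in ampere.items():
    print(f"RTX 3080 {label}: {ms} ms = {fps(ms):.0f} FPS "
          f"({turing[label] / ms:.1f}x faster than the RTX 2080 SUPER)")
```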

With each new generation of graphics cards, NVIDIA delivers a new range of display technologies, and this generation is no different: we see significant updates to both the display engine and the graphics interconnect. With the adoption of faster GDDR6X memory, which provides higher bandwidth, faster compression, and more cache, gaming applications can now run at higher resolutions with more detail on screen.

The Ampere display engine supports two new display technologies, HDMI 2.1 and DisplayPort 1.4a with DSC 1.2a. HDMI 2.1 provides up to 48 Gbps of total bandwidth, enabling up to 4K 240Hz HDR and 8K 60Hz HDR.

DisplayPort 1.4a allows for up to 8K resolutions with 60Hz refresh rates and includes VESA's display stream compression 1.2 technology with visually lossless compression. You can run up to two 8K displays at 60 Hz using two cables, one for each display. In addition to that, Ampere also supports HDR processing natively with tone mapping added to the HDR pipeline.

Ampere GPUs also ship with the fifth-generation NVDEC decoder, which adds AV1 hardware decode support. The updated NVDEC block handles decoding of MPEG-2, VC-1, H.264 (AVCHD), H.265 (HEVC), VP8, VP9, and AV1.

Ampere also carries the 7th Generation NVENC encoder, offering seamless hardware-accelerated encoding of up to 4K on H.264 and 8K on HEVC.

NVIDIA RTX IO - Blazing Fast Read Speeds With GPU Utilization

As storage sizes have grown, so has storage performance. Gamers are increasingly turning to SSDs to reduce game load times: while hard drives are limited to 50-100 MB/s of throughput, the latest M.2 PCIe Gen4 SSDs deliver up to 7 GB/s. With the traditional storage model, game data is read from the disk, then passed through system memory and the CPU before being handed to the GPU.

Historically games have read files from the hard disk, using the CPU to decompress the game image. Developers have used lossless compression to reduce install sizes and to improve I/O performance. However, as storage performance has increased, traditional file systems and storage APIs have become a bottleneck. For example, decompressing game data from a 100 MB/sec hard drive takes only a few CPU cores, but decompressing data from a 7 GB/sec PCIe Gen4 SSD can consume more than twenty AMD Ryzen Threadripper 3960X CPU cores!

Using the traditional storage model, game decompression can consume all 24 cores of a Threadripper CPU. Modern game engines have exceeded the capability of traditional storage APIs; a new generation of I/O architecture is needed. (In NVIDIA's chart, data transfer rates are the gray bars and the CPU cores required are the black/blue blocks.)
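The core-count claim is essentially a throughput ratio. A rough, back-of-the-envelope estimate follows; the per-core decompression rate used here is our own assumption for illustration, not an NVIDIA figure:

```python
import math

# Rough estimate of CPU cores needed to keep up with a given SSD read rate,
# assuming each core decompresses at `per_core_gbps` (assumed value, illustrative only).
def cores_needed(ssd_gbps: float, per_core_gbps: float) -> int:
    return math.ceil(ssd_gbps / per_core_gbps)

print(cores_needed(0.1, 0.3))  # ~1 core keeps up with a 100 MB/s hard drive
print(cores_needed(7.0, 0.3))  # ~24 cores for a 7 GB/s PCIe Gen4 SSD
```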

NVIDIA RTX IO is a suite of technologies that enable rapid GPU-based loading and decompression of game assets, accelerating I/O performance by up to 100x compared to hard drives and traditional storage APIs. When used with Microsoft’s new DirectStorage for Windows API, RTX IO offloads dozens of CPU cores’ worth of work to your RTX GPU, improving frame rates, enabling near-instantaneous game loading, and opening the door to a new era of large, incredibly detailed open-world games.

Object pop-in and stutter can be reduced, and high-quality textures can be streamed at incredible rates, so even if you’re speeding through a world, everything runs and looks great. In addition, with lossless compression, game download and install sizes can be reduced, allowing gamers to store more games on their SSD while also improving their performance.

NVIDIA RTX IO plugs into Microsoft’s upcoming DirectStorage API which is a next-generation storage architecture designed specifically for state-of-the-art NVMe SSD-equipped gaming PCs and the complex workloads that modern games require. Together, streamlined and parallelized APIs specifically tailored for games allow dramatically reduced IO overhead and maximize performance/bandwidth from NVMe SSDs to your RTX IO-enabled GPU.

Specifically, NVIDIA RTX IO brings GPU-based lossless decompression, allowing reads through DirectStorage to remain compressed and delivered to the GPU for decompression. This removes the load from the CPU, moving the data from storage to the GPU in a more efficient, compressed form, and improving I/O performance by a factor of two.

GeForce RTX GPUs will deliver decompression performance beyond the limits of even Gen4 SSDs, offloading potentially dozens of CPU cores' worth of work to ensure maximum overall system performance for next-generation games. Lossless decompression is implemented with high-performance compute kernels, asynchronously scheduled. This functionality leverages the DMA and copy engines of Turing and Ampere, as well as the advanced instruction set and architecture of these GPUs' SMs.

The advantage is that the enormous compute power of the GPU can be leveraged for burst or bulk loading (at level load, for example), when GPU resources can act as a high-performance I/O processor, delivering decompression performance well beyond the limits of Gen4 NVMe. During streaming scenarios, bandwidth demands are a tiny fraction of the GPU's capability, further leveraging the advanced asynchronous compute capabilities of Turing and Ampere. Microsoft is targeting a developer preview of DirectStorage for Windows for game developers next year, and NVIDIA Turing and Ampere gamers will be able to take advantage of RTX IO enhanced games as soon as they become available.

NVLINK For GeForce RTX 3090 And Titan Class Products Only!

NVIDIA has said farewell to its SLI (Scalable Link Interface) interconnect for consumer graphics cards, moving instead to the NVLINK interconnect that already featured on its Turing GPUs. The reason is simple: SLI could not provide enough bandwidth for Ampere GPUs.

A single x8 NVLINK channel provides 25 GB/s of peak bandwidth. The GA102 GPU features four x4 links, giving it 50 GB/s of bandwidth in each direction and 100 GB/s bi-directionally. Using NVLINK on high-end cards would be beneficial for high-resolution gaming, but there's a reason NVIDIA still restricts users from 3-way and 4-way SLI.

Multi-GPU still isn't well optimized, so you won't see many benefits unless you are running the highest-end graphics cards; that's another reason the RTX 3080 and RTX 3070 are deprived of NVLINK connectors. The NVLINK bridges cost $79 US each and are sold separately.


The NVIDIA GeForce RTX 3060 Ti is a force to be reckoned with in the $400 category, taking the throne with nothing else coming close. It's a touch faster than the RTX 2080 SUPER, which is a bit more than expected; a 60-class card generally only catches the previous base 80-class card without making it irrelevant.

NVIDIA designed the GeForce RTX 3060 Ti to be the mainstream RTX champ, powering the next generation of AAA gaming titles with superb visuals and insane fluidity. It's not just the FPS that matters these days; it's the visuals and a smoother frame rate too, and that's exactly what the GeForce RTX 30 series is built to excel at. There's a lot to talk about regarding NVIDIA's Ampere gaming graphics cards, so let's start off with the specifications.

Marvels of NVIDIA Ampere Architecture - 2nd Generation RTX
Enabling the blistering performance of the new RTX 30 Series GPUs and the NVIDIA Ampere architecture are cutting-edge technologies and over two decades of graphics R&D, including:

  • New streaming multiprocessors: The building block for the world’s fastest, most efficient GPU, delivering 2x the FP32 throughput of the previous generation, and 30 Shader-TFLOPS of processing power.
  • Second-gen RT Cores: New dedicated RT Cores deliver 2x the throughput of the previous generation, plus concurrent ray tracing, shading, and compute, with 58 RT-TFLOPS of processing power.
  • Third-gen Tensor Cores: New dedicated Tensor Cores, with up to 2x the throughput of the previous generation, making it faster and more efficient to run AI-powered technologies, like NVIDIA DLSS, and 238 Tensor-TFLOPS of processing power.
  • NVIDIA RTX IO: Enables rapid GPU-based loading and game asset decompression, accelerating input/output performance by up to 100x compared with hard drives and traditional storage APIs. In conjunction with Microsoft’s new DirectStorage for Windows API, RTX IO offloads dozens of CPU cores’ worth of work to the RTX GPU, improving frame rates and enabling near-instantaneous game loading.
  • World’s fastest graphics memory: NVIDIA has worked with Micron to create the world’s fastest discrete graphics memory for the RTX 30 Series, GDDR6X. It provides close to 1 TB/s of memory bandwidth, maximizing game and app performance.
  • Next-gen process technology: New 8N NVIDIA custom process from Samsung, which allows for higher transistor density and more efficiency.

NVIDIA GeForce RTX 3060 Ti Graphics Card Specifications - GA104 GPU & 8 GB GDDR6 Memory

At the heart of the NVIDIA GeForce RTX 3060 Ti graphics card lies the GA104 GPU, one of the several Ampere GPUs we will be getting in the gaming segment and the second-fastest Ampere chip in the stack. The GPU is based on Samsung's 8nm (8N) process node, measures 392.5mm2, and packs 17.4 billion transistors, almost 93% of the transistor count of the TU102 GPU in roughly half the die area, which is an insane jump in density.

For the GeForce RTX 3060 Ti, NVIDIA has enabled a total of 38 SM units on the GA104, which results in a total of 4864 CUDA cores. In addition to the CUDA cores, the GeForce RTX 3060 Ti also comes packed with next-generation RT (ray-tracing) cores, Tensor cores, and the brand new streaming multiprocessor (SM) design.

In terms of memory, the GeForce RTX 3060 Ti features 8 GB of GDDR6 memory running at 14 Gbps. That, along with the full, uncut 256-bit bus interface, delivers a cumulative bandwidth of 448 GB/s. The NVIDIA GeForce RTX 3060 Ti has a TGP of 200W.
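That 448 GB/s figure falls straight out of the bus width and data rate, as does the bandwidth of the other RTX 30 series cards:

```python
# Memory bandwidth (GB/s) = data rate (Gbps per pin) x bus width (bits) / 8 bits per byte.
def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(mem_bandwidth_gb_s(14, 256))    # 448.0  (RTX 3060 Ti / RTX 3070, GDDR6)
print(mem_bandwidth_gb_s(19, 320))    # 760.0  (RTX 3080, GDDR6X)
print(mem_bandwidth_gb_s(19.5, 384))  # 936.0  (RTX 3090, GDDR6X)
```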

NVIDIA GeForce RTX 30 Series 'Ampere' Graphics Card Specifications:

| Graphics Card Name | NVIDIA GeForce RTX 3060 Ti | NVIDIA GeForce RTX 3070 | NVIDIA GeForce RTX 3080 | NVIDIA GeForce RTX 3090 |
|---|---|---|---|---|
| GPU Name | Ampere GA104-200 | Ampere GA104-300 | Ampere GA102-200 | Ampere GA102-300 |
| Process Node | Samsung 8nm | Samsung 8nm | Samsung 8nm | Samsung 8nm |
| Die Size | 392.5mm2 | 392.5mm2 | 628.4mm2 | 628.4mm2 |
| Transistors | 17.4 Billion | 17.4 Billion | 28 Billion | 28 Billion |
| CUDA Cores | 4864 | 5888 | 8704 | 10496 |
| TMUs / ROPs | 152 / 80 | 184 / 96 | 272 / 96 | 328 / 112 |
| Tensor / RT Cores | 152 / 38 | 184 / 46 | 272 / 68 | 328 / 82 |
| Base Clock | 1410 MHz | 1500 MHz | 1440 MHz | 1400 MHz |
| Boost Clock | 1665 MHz | 1730 MHz | 1710 MHz | 1700 MHz |
| FP32 Compute | 16 TFLOPs | 20 TFLOPs | 30 TFLOPs | 36 TFLOPs |
| RT TFLOPs | 32 TFLOPs | 40 TFLOPs | 58 TFLOPs | 69 TFLOPs |
| Tensor-TOPs | 192 TOPs | 163 TOPs | 238 TOPs | 285 TOPs |
| Memory Capacity | 8 GB GDDR6 | 8 GB GDDR6 | 10 GB GDDR6X | 24 GB GDDR6X |
| Memory Bus | 256-bit | 256-bit | 320-bit | 384-bit |
| Memory Speed | 14 Gbps | 14 Gbps | 19 Gbps | 19.5 Gbps |
| Bandwidth | 448 GB/s | 448 GB/s | 760 GB/s | 936 GB/s |
| TGP | 200W | 220W | 320W | 350W |
| Price (MSRP / FE) | $399 US | $499 US | $699 US | $1499 US |
| Launch (Availability) | 2nd December 2020 | 29th October 2020 | 17th September 2020 | 24th September 2020 |

NVIDIA GeForce RTX 3060 Ti Graphics Card Cooling & Design - Next-Gen NVTTM Founders Edition Design

Unlike the front-and-back cooling system of the GeForce RTX 3090 and GeForce RTX 3080, the NVIDIA GeForce RTX 3060 Ti uses a dual-fan cooler that pushes air through a central heatsink, just like the RTX 3070.

The Founders Edition cooler uses a full aluminum alloy heatsink with a nano-carbon coating and should do a really good job of keeping temperatures under control, going all out with a dense fin and heat pipe design.

 


NVIDIA GeForce RTX 3060 Ti Graphics Card PCB & Power - Designed To Be Overclocked!

The GeForce RTX 30 series Founders Edition cards, including the GeForce RTX 3060 Ti, feature the 12-pin Micro-Fit 3.0 power connector. It doesn't require a power supply upgrade, as the cards ship with bundled 8-pin to 12-pin adapters (a single 8-pin in the RTX 3060 Ti's case), so you can run the latest graphics cards without any compatibility issues.

The placement of the 12-pin connector on the PCB is also noteworthy. It sits in a standard horizontal position but right in the middle of the shroud, which does help with power delivery to the GPU, and judging by the PCB design we can see why NVIDIA moved to a single 12-pin plug instead of the standard dual 8-pin layout: there is limited room on the PCB, so a smaller, more compact power input was necessary.

NVIDIA GeForce RTX 3060 Ti Graphics Card Price & Availability - Both Custom & Reference Designs at Launch

The NVIDIA GeForce RTX 3060 Ti is being announced today and will launch to consumers on December 2nd. The first wave of cards to hit the market will be the reference Founders Edition variant at $399 US (MSRP); custom models will vary in price depending on their design and the extra horsepower they offer.

 

The performance numbers that NVIDIA is sharing show the GeForce RTX 3060 Ti is faster than an RTX 2080 SUPER, but we'll see about that later.

 

Presentation has been key for NVIDIA since the 700 series when it comes to its reference cards, and the GeForce RTX 3060 Ti is no different. Carrying over the packaging design cues of the previous 30 Series launches, we are greeted by the GeForce RTX 3060 Ti Founders Edition lying down, ready to be moved into your system.

 

 

The RTX 3060 Ti card itself is very similar in size and design to the last-generation RTX 2060 SUPER, but with this generation's flow-through cooler heatsink and fans. The card doesn't light up, and I'm not happy about that; it really should. I said the same thing about the RTX 3070, and this time I really thought it was going to be backlit just because of the way it looked when powered off.

The outward-facing side of the RTX 3060 Ti carries the design signatures of the bigger GA102-based Ampere cards, even holding on to the 12-pin connector despite needing only a single 8-pin on the adapter. Again, the GeForce RTX logo does not light up on this one, just like on the GeForce RTX 3070.

It's easy to see that the open section of the heatsink is reinforced with small pipes throughout, in addition to the heatpipes coming from the main GPU section of the card. The heatsink is dense, yet open enough to keep noise as low as possible. Standing next to the RTX 3070, the stark color contrast NVIDIA gave the RTX 3060 Ti with its bright silver accent ring is obvious.

 

We tested the GeForce RTX 3060 Ti using the following test system for comparisons between the different graphics cards. The latest AMD and NVIDIA drivers available at the time of testing were used on an updated version of Windows 10, and all games were patched to their latest versions for the best optimization on both vendors' GPUs.

Test System

| Component | X570 Test System |
|---|---|
| CPU | Ryzen 9 3900X, 4.3GHz all-core lock (one CCD disabled for 3600X results) |
| Memory | 32GB HyperX Predator DDR4-3600 |
| Motherboard | ASUS TUF Gaming X570-Plus (Wi-Fi) |
| Storage | TeamGroup Cardea 1TB NVMe PCIe 4.0 |
| PSU | Cooler Master V1200 Platinum |
| Windows Version | Latest version of Windows 10 at the time of testing |
| Hardware-Accelerated GPU Scheduling | On, if supported by GPU and driver |

Graphics Cards Tested

| GPU | Architecture | Core Count | Clock Speed (MHz) | Memory Capacity | Memory Speed |
|---|---|---|---|---|---|
| NVIDIA RTX 3060 Ti | Ampere | 4864 | 1410/1665 | 8GB GDDR6 | 14Gbps |
| NVIDIA RTX 3070 FE | Ampere | 5888 | 1500/1730 | 8GB GDDR6 | 14Gbps |
| NVIDIA RTX 2080 SUPER FE | Turing | 3072 | 1650/1815 | 8GB GDDR6 | 15.5Gbps |
| AMD Radeon RX 5700 XT | Navi 10 | 2560 | 1605/1755/1905 | 8GB GDDR6 | 14Gbps |
| NVIDIA RTX 2060 SUPER | Turing | 2176 | 1470/1650 | 8GB GDDR6 | 14Gbps |

Drivers Used

| Drivers | Version |
|---|---|
| Radeon Settings | 20.11.3 |
| GeForce | Press driver |

  • All games were tested on 1920x1080, 2560x1080, and 2560x1440 resolutions.
  • Image Quality and graphics configurations are provided with each game description.
  • The "reference" cards are the stock configs.

Firestrike

Firestrike runs on the DX11 API and is still a good measure of GPU scaling performance. For this test we ran the Extreme and Ultra versions of Firestrike, which render at 1440p and 4K respectively, and we recorded the Graphics Score only, since the Physics and Combined scores are not pertinent to this review.

Time Spy

Time Spy runs on the DX12 API, and we used it in the same manner as Firestrike: we only recorded the Graphics Score, as the Physics score measures CPU performance and isn't important to the testing we are doing here.

Port Royal

Port Royal is another great tool in the 3DMark suite, but this one is 100% targeting Ray Tracing performance. It loads up ray traced shadows, reflections, and global illumination to really tax the performance of the graphics cards that either have hardware-based or software-based ray tracing support.

Thermals

Thermals were measured on our open test bench after running the Time Spy Graphics Test 2 on a loop for 30 minutes, recording the highest temperature reported. The room was climate controlled and kept at a constant 22°C throughout testing.

Forza Horizon 4

Forza Horizon 4 carries on the open-world racing tradition of the Horizon series.  The latest DX12 powered entry is beautifully crafted and amazingly well executed and is a great showcase of DX12 games.  We use the benchmark run while having all of the settings set to non-dynamic with an uncapped framerate to gather these results.

Shadow of the Tomb Raider

Shadow of the Tomb Raider, unlike its predecessor, does a good job putting DX12 to use and results in higher performance than the DX11 counterpart in this title and because of that, we test this title in DX12.  I do use the second segment of the benchmark run to gather these numbers as it is more indicative of in-game scenarios where the foliage is heavy.

Rainbow 6 Siege

Rainbow Six Siege has maintained a massive following since its launch and is consistently among Steam's top ten games by player count. In a title where higher framerates are essential to its tactical yet fast-paced competitive landscape, we include it despite its ludicrously high framerates. We use the Vulkan Ultra preset with the High Definition texture pack and gather our results from the built-in benchmarking tool.

DOOM Eternal

DOOM Eternal brings hell to Earth with the Vulkan-powered idTech 7. We test this game using the Ultra Nightmare preset and follow our in-game benchmark run to stay as consistent as possible.

Watch Dogs: Legion

Watch Dogs: Legion sees the return of the Disrupt Engine Ubisoft has used since the original Watch Dogs, now updated with next-generation feature support. Dropping DX11 for DX12, we see much better utilization than in the past. Being one of the recent top sellers, it earned a place in our test suite.

Call of Duty: Black Ops Cold War

Call of Duty: Black Ops Cold War is the latest installment in the Call of Duty series. Returning with DX12 support just like the Modern Warfare reboot, we tested this game during the opening of the Fractured Jaw level with the highest settings selected.

Horizon Zero Dawn

Horizon Zero Dawn is one of the two major PS4 exclusives that rocked their way onto the PC scene with massive acceptance and sales. Horizon Zero Dawn is powered by the Decima Engine and has been ported to DX12. We used the in-game benchmark to account for performance.

Borderlands 3

Borderlands 3 has made its way into the test lineup thanks to strong demand from gamers and simply delivering MORE Borderlands. This game is rather intensive beyond the Medium preset, but since we're testing the 'Ultimate 1440p' card, High it is. We tested using the built-in benchmark utility.

Total War Saga: Troy

Total War Saga: Troy is powered by their TW Engine 3 (Total War Engine 3) and in this iteration, they have stuck to a strictly DX11 release. We tested the game using the built-in benchmark using the Dynasty model that represents a battle with many soldiers interacting at once and is more representative of normal gameplay.



Shadow of the Tomb Raider

Shadow of the Tomb Raider, unlike its predecessor, does a good job putting DX12 to use, resulting in higher performance than its DX11 counterpart, so we test it in DX12. I use the second segment of the benchmark run to gather these numbers, as it is more indicative of in-game scenarios where the foliage is heavy. SotTR features ray-traced shadows, which we enabled in these benchmarks with the game set to the 'Highest' preset and RT Shadows at High. DLSS was used only when labeled.

Call of Duty: Black Ops Cold War

Call of Duty: Black Ops Cold War is the latest installment in the Call of Duty series. Returning with DX12 support just like the Modern Warfare reboot, we tested this game during the opening of the Fractured Jaw level with the highest settings selected and RT set to Ultra for Sun Shadows, Spot Shadows, and Ambient Occlusion.

Control

Control is powered by Remedy's Northlight engine, heavily extended to support multiple ray-traced effects. We ran our test pass in the cafeteria with all ray tracing functions on High and the game set to High. DLSS was enabled for this title at the Quality setting when it was available.

Battlefield V

Battlefield V was one of the earlier games in the RTX 20 Series lifecycle to receive a DXR update. It was tested on the opening sequence of the Tirailleur war story, as it has consistently been one of the more demanding scenes for the ray-traced reflections featured in this game. DLSS was enabled for this game when available.

Metro Exodus

Metro Exodus is the third entry in the Metro series, and as Artyom ventures away from the Metro, he, and you, are able to explore the world with impressive ray-traced global illumination. RTGI has proven to be quite an intense feature to run. Metro Exodus also supports DLSS, so it was used in our testing. Advanced PhysX was left disabled, but Hairworks was left on.

Watch Dogs: Legion

Watch Dogs: Legion sees the return of the Disrupt Engine Ubisoft has used since the original Watch Dogs, now updated with next-generation feature support. Dropping DX11 for DX12, we see much better utilization than in the past. Being one of the recent top sellers, it earned a place in our test suite. DXR is enabled at the High setting.

Boundary

Boundary is a multiplayer tactical shooter... in space. It's not out yet, so treat this one as more of a synthetic benchmark; there are likely to be quite a few improvements before release, but for now we had access to the benchmark, and it's a doozy to run. It features full ray-tracing effects as well as DLSS, which we ran in Quality mode when available.

Amid Evil

Amid Evil is a high-energy old-school shooter that seems like an unlikely recipient of RT features, but here we are with insane DXR support in a modern retro shooter. Featuring RT reflections, RT shadows, and NVIDIA DLSS support, we had to put this one through its paces and see how things went. The RTX version of this game is still in beta but publicly available for those who want to try it. We tested with all RT features on and DLSS enabled.


Graphics card performance and power draw have always gone hand in hand: how much performance does a card put out for the power it takes in? Measuring this accurately has not always been straightforward for reviewers and end-users. NVIDIA has developed its PCAT, or Power Capture Analysis Tool, to capture direct power consumption from any graphics card that plugs into the PCIe slot, giving a very clear barometer of actual power usage without relying on hacked-together methods.

The Old Way

The old method, for most anyway, was simply to use something along the lines of a Kill-A-Watt wall meter. This isn't the worst way, but as stated in our reviews, it doesn't isolate the power that the graphics card alone is using. It results in some mental gymnastics: figuring the system idle, the CPU load, and the GPU load, and estimating roughly where the graphics card lands. Not very accurate, to say the least.

Another way is to use GPU-Z. This is the least reliable method, as you have to rely entirely on the software readings from the graphics card. Cards vary in how they report power usage to software; some only report what the GPU core itself is using and don't account for the memory or any other component.

The last method I'll mention is using a multi-meter amperage clamp across the PCIe slot by way of a riser cable with separate power leads, plus more clamps on all the PCIe power cables going into the graphics card. This method is very accurate for graphics card power, but it is also very cumbersome and typically means watching the numbers and documenting them as you see them rather than plotting them in a spreadsheet.

The PCAT Way

This is where PCAT (Power Capture Analysis Tool) comes into play. NVIDIA has developed quite a robust tool for measuring graphics card power at the hardware level, taking the guesswork out of the equation. The tool is quite simple to set up and get going. As far as components go, there are: a riser board for the GPU with a 4-pin Dupont cable, the PCAT module itself that everything plugs into with an OLED screen attached, three PCIe power pass-through cables for when a card calls for more than two 8-pin connectors, and a Micro-USB cable that lets you capture the data on the system you're testing or on a secondary monitoring system.

Well, that's what it looks like when it's all hooked up on a test bench; you're not going to want to run this in a case, for sure. Before anyone gets worried: performance is not affected at all by this, and the riser board is fully compliant with PCIe Gen 4.0. I'm not so certain about those exposed power points, however; I'll be getting the hot glue gun out soon for that. Now, what do you do with it at this point? There are two options: plug it into the computer it's all running on and let FrameView include the metrics, but that's for NVIDIA cards only, so pass; or (what we do) plug it into a separate monitoring computer and observe and capture during testing scenarios.

The PCAT Power Profile Analyzer is the software tool provided to capture and monitor power readings across the PCI Express power profile. The breadth of this tool is exceptionally useful for us here on the site. The most useful metric to me is the ability to monitor power across all sources: the PCIe power cables (individually) and the PCIe slot itself.

Those who would rather pull long-form spreadsheets to make their own charts are fully able to do so, and can even quickly build performance-per-watt metrics. A very fun metric we've found to monitor is watts per frame: how many watts does it take for the graphics card to produce one frame at a locked 60 FPS in various games? We'll get into that next.
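Turning averaged readings from a PCAT log into either metric is a one-liner; here is a minimal sketch (the wattage and frame-rate values are placeholders of our own, not PCAT output):

```python
# Performance-per-watt and watts-per-frame from averaged power and FPS readings.
# The numbers passed in below are illustrative placeholders only.
def perf_per_watt(avg_fps: float, avg_watts: float) -> float:
    return avg_fps / avg_watts

def watts_per_frame(avg_watts: float, avg_fps: float) -> float:
    return avg_watts / avg_fps

avg_watts, avg_fps = 195.0, 60.0  # e.g. a scene locked to 60 FPS
print(f"{perf_per_watt(avg_fps, avg_watts):.2f} FPS per watt")
print(f"{watts_per_frame(avg_watts, avg_fps):.2f} W per frame")
```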

Control Power

Control was the first game we wanted to look at, running at 1440p with RT and DLSS on, and then again with DLSS off. This is the game NVIDIA used when showcasing the performance-per-watt improvements of Ampere, and, well, they were right in that claim.

 

These Control results show that NVIDIA's measurements and claims of improvement were accurate, but that's not always the case. We tested Forza Horizon 4 the same way, but this time at 4K, targeting a locked 4K60 scene in the game.

 

Overclocking the GeForce RTX 3060 Ti went much as it did for the RTX 3070, but this time we managed to squeeze +100MHz out of the core, resulting in a steady clock just over 2000MHz where the stock clock stayed in the 1920MHz range. Memory fared much better: we could push upwards of +1250MHz, but we settled on a much more comfortable +1000MHz, making the effective speed 16Gbps with 512GB/s of memory bandwidth versus the stock 14Gbps and 448GB/s.
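For anyone sanity-checking those bandwidth figures, they fall straight out of the effective data rate and the card's 256-bit memory bus:

```python
# GDDR6 bandwidth check for the overclock described above:
# bandwidth (GB/s) = effective data rate (Gbps per pin) * bus width (bits) / 8

bus_width_bits = 256

for label, data_rate_gbps in [("stock", 14), ("overclocked", 16)]:
    bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
    print(f"{label}: {data_rate_gbps} Gbps x {bus_width_bits}-bit = {bandwidth_gbs:.0f} GB/s")
```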

The biggest question for this overclock was how much it could close the gap between the $399 RTX 3060 Ti and the $499 RTX 3070.

Firestrike

Firestrike runs the DX11 API and is still a good measure of GPU scaling performance. In this test we ran the Ultra version of Firestrike, which renders at 4K, and we recorded the Graphics Score only, since the Physics and Combined scores are not pertinent to this review.

Time Spy

Time Spy runs the DX12 API, and we used it in the same manner as Firestrike: we recorded only the Graphics Score, as the CPU score reflects processor performance and isn't important to the testing we are doing here.

Forza Horizon 4

Forza Horizon 4 carries on the open-world racing tradition of the Horizon series. The latest DX12-powered entry is beautifully crafted, amazingly well executed, and a great showcase of DX12 games. We use the built-in benchmark run with all settings set to non-dynamic and an uncapped framerate to gather these results.

Rainbow 6 Siege

Rainbow 6 Siege has maintained a massive following since its launch and is consistently among Steam's top ten games by player count. In a tactical yet fast-paced competitive title where the higher the framerate the better, we include it despite its ludicrously high framerates. We use the Vulkan Ultra preset with the High Definition Texture Pack and gather our results from the built-in benchmarking tool.

Horizon Zero Dawn

Horizon Zero Dawn is one of the two major PS4 exclusives that rocked their way onto the PC scene to massive acceptance and sales. It is powered by the Decima Engine and has been ported to DX12. We used the in-game benchmark to gather performance numbers.


The RTX 3070 hit the market not long ago and handily challenged the previous RTX 2080 Ti; this time the RTX 3060 Ti is doing the same to the RTX 2080 SUPER, and quite handily. This has been the expected behavior for quite some time, with the incoming 60-class card tackling the outgoing 80-class card. But, if I'm honest, I expected this round to target the vanilla RTX 2080 rather than the RTX 2080 SUPER, so that is a welcome move.

The card itself comes in the exact same design and cooler as its bigger brother, the RTX 3070, but this time in a much brighter and more cheerful silver finish. I am not going to let the RTX 3060 Ti FE off the hook for the lack of an illuminated GeForce RTX logo, especially since I thought this one was going to have it based on the way it looked when pulled from the box.

The new cooler design allows the card to run very cool, much cooler than expected for the unusually high 200W TDP of a 60-class card. The RTX 3060 Ti does retain the now-expected 12-pin connector but only needs a single solid 8-pin plugged in. I did ask NVIDIA whether there was any harm in using a dual 8-pin to 12-pin adapter, since most modular PSU cables plug into dual 8-pin connections on the PSU itself, and they assured me there is no issue.

 

The performance was absolutely solid. Want to pick the GeForce RTX 3060 Ti for a high refresh rate 1080p gaming machine? It'll kill it. 1440p? It'll kill that too, though I would recommend stepping up a class to the RTX 3070; still, the 3060 Ti will get you in the game if you're okay with dialing back some settings. You're going to find RTX-enabled games running very well at 1440p when paired with DLSS, especially in the ever-growing list of titles that support DLSS 2.0, which is quite killer, and upcoming games like Cyberpunk 2077 and Watch Dogs: Legion are titles where you'll want to take advantage of the RTX suite.

The real kicker for this card: if you're a high-fidelity 1080p gamer, you're going to get the absolute best experience with the latest games here. The RTX 2060 SUPER got kicked around a lot for still requiring DLSS in order to deliver playable framerates with RT effects enabled, but that is not the case with the RTX 3060 Ti. It is able to deliver very fluid and smooth performance at 1080p with everything cranked wide open, something that would have required either a heavily overclocked RTX 2080 SUPER or an RTX 2080 Ti last generation.

Features like AV1 decode might not be fully mainstream yet, but the RTX 3060 Ti carries support for it. Other features like the NVIDIA Broadcast suite are absolutely undeniable in the value department; that one is available across a large swath of GeForce cards, but it is still worth mentioning.

As far as overclocking goes, the RTX 3060 Ti fared much better than previously released RTX 30 Series cards and comes so close to stock RTX 3070 performance that it really is an easy choice for cost-conscious gamers who aren't trying to push crazy-high resolutions.

 

The NVIDIA GeForce RTX 3060 Ti has raised the bar on what to expect in terms of performance at $399, for now. It delivers high frame rate gaming at 1080p and ultrawide 1080p while still being able to punch up into the 1440p class alongside the RTX 3070. Its solid RTX feature support and all-around 1440p performance keep it firmly in the running for the highly coveted high refresh rate 1440p market, while its spectacular RT performance at 1080p gives those holding onto quality 1080p panels the ability to enjoy those effects without relying on DLSS to deliver excellent performance. Of course, as with every card at launch, we can only hope availability is good and pricing holds close to MSRP; otherwise, you might be better off stepping up to the RTX 3070 if pricing creeps too close to that card.

 
