Dual GPU: Two GPUs vs. a Single High-End GPU

Does the age-old saying, “Two heads are better than one,” ring true when it comes to gaming GPUs? If you’re an avid gamer, you might have asked yourself, “Are two GPUs better than a single high-end GPU?”

While dual graphics cards do have significant benefits, opting for two cards also comes with a few drawbacks. So, if you are wondering whether two GPUs are better than a single high-end GPU, we’re here to help. Below, we take a closer look and explore the pros and cons.

What is a GPU?

Before we dive in, let’s do a little refresher. Also known as a graphics or video card, a graphics processing unit (GPU) is a single-chip processor that’s used to manage and enhance videos and graphics. As you can imagine, a GPU is a critical component of your gaming system.

There are two types of GPUs: 

  1. Integrated: an embedded GPU that lives directly on your machine’s processor.
  2. Discrete: a separate GPU on its own card, with its own dedicated memory.

It’s important to note that not all gaming PCs can actually run two cards in a multi-GPU setup. For your PC to run multiple cards, you need support from either AMD or NVIDIA: AMD’s multi-GPU solution is CrossFire, and NVIDIA’s is SLI. Until you have set up one of these technologies, your PC will not be able to run multiple cards together.

To run SLI or CrossFire on your PC, you need: 

  1. An SLI- or CrossFire-compatible motherboard
  2. Two compatible video cards with the same GPU
  3. A bridge that connects the two cards together (these usually come with your motherboard or video cards)

When it comes to SLI, you need two cards with the same GPU, but they don’t have to be from the same brand (for example, two GTX 780 Tis from different manufacturers). Meanwhile, with CrossFire you can pair some GPUs with other similar cards (for example, a Radeon HD 7950 with another card from the HD 7900 series). If you need further help determining which cards are compatible, both AMD’s and NVIDIA’s websites have detailed information.

Once you have installed both cards and the bridge, you can open your driver’s control panel and enable SLI or CrossFire. Make sure your drivers are up to date (you can do this by downloading Driver Support) and test the setup by playing a favorite game you know well. You should notice a significant performance boost.
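Before enabling SLI or CrossFire, it’s worth confirming the operating system actually sees both cards. The sketch below is a hypothetical helper, not part of any vendor tool: it parses the kind of listing that NVIDIA’s real `nvidia-smi -L` command prints, using a hard-coded sample string so it runs anywhere.

```python
import re

def count_gpus(listing: str) -> int:
    """Count GPUs in `nvidia-smi -L` style output (one 'GPU N: ...' line per card)."""
    return len(re.findall(r"^GPU \d+:", listing, flags=re.MULTILINE))

# Illustrative output; on a real system, capture it with:
#   subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout
sample = (
    "GPU 0: NVIDIA GeForce GTX 780 Ti (UUID: GPU-1111)\n"
    "GPU 1: NVIDIA GeForce GTX 780 Ti (UUID: GPU-2222)\n"
)

if count_gpus(sample) >= 2:
    print("Both cards detected; SLI can be enabled in the driver control panel.")
```

If only one card shows up here, reseat the second card and check the bridge before digging into driver settings.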

The Pros and Cons of Two GPU SLI Setups

The Pros 

There are a few main benefits of running multiple video cards, which include:

  • Multiple graphics cards can offer an enhanced 3D gaming experience.
  • Two GPUs are ideal for multi-monitor gaming.
  • Dual cards can share the workload to deliver better frame rates, higher resolutions, and extra filters.
  • Additional cards make it possible to take advantage of newer technologies such as 4K displays.
  • Depending on the make, running two mid-range cards can be slightly cheaper than running one comparable high-end card.
  • It can be cheaper to buy a second unit of your current card than to upgrade to a newer model.

The Cons 

As with anything, there are a few disadvantages that come with running multiple GPU cards over one. These include:

  • Running two cards demands significant power and physical space from your PC. So, be sure your power supply has enough wattage before purchasing multiple cards.
  • Not all games perform well with multiple cards, and some games may even run slower.
  • Two video cards running in close proximity will produce more heat and additional noise.
  • SLI and CrossFire can sometimes cause a glitch called micro stuttering that makes the video look choppy.
  • Not every game supports SLI and CrossFire. This often depends on your video driver, not the game itself, so you might have to tweak your driver settings yourself to get a game working.
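The wattage warning above can be made concrete with a quick estimate: add up the rated draw of your components and leave a safety margin. This is a minimal sketch with purely illustrative numbers; check the actual TDP of your own CPU and cards, and the margin you prefer.

```python
def recommended_psu_watts(cpu_w: int, gpu_w: int, n_gpus: int,
                          other_w: int = 100, headroom: float = 1.25) -> int:
    """Sum the rated draw of the components, then add a safety margin."""
    return round((cpu_w + n_gpus * gpu_w + other_w) * headroom)

# Illustrative figures only: 250 W per card, 125 W CPU, 100 W for everything else.
single = recommended_psu_watts(cpu_w=125, gpu_w=250, n_gpus=1)
dual = recommended_psu_watts(cpu_w=125, gpu_w=250, n_gpus=2)
print(f"one card: ~{single} W PSU, two cards: ~{dual} W PSU")
```

With these example numbers the second card pushes the recommendation up by roughly 300 W, which is why a PSU that was fine for one card often isn’t for two.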

So, What’s the Best Option?

As you can see, investing in two GPUs has both advantages and disadvantages. And, of course, it all depends on your budget, available space and power, and personal requirements.

For the Average Gamer

For the average gamer, we think two graphics cards are unnecessary. If you’re not chasing extreme graphics performance or running a multi-monitor 4K setup, a single high-end graphics card will be more than enough. Some new graphics cards don’t even have SLI support.

For Hardcore Frame Chasers

However, if you know that you’ll benefit hugely from the enhanced 3D performance that multiple cards offer, go for it. Just remember: you’ll need a compatible motherboard, extra space and power, and possibly some driver tweaking, and only certain games even support or benefit from it.

For Streamers

The biggest advantage gamers can look forward to with a dual-GPU setup is streaming. Having one card dedicated to running the game and another dedicated to rendering the stream for Twitch, YouTube, or Facebook can keep your game from dropping frames when you hit the “go live” button.

For Multimedia Experts

The Adobe suite has select multi-GPU support, which can help in software like Adobe Premiere or Adobe Lightroom. Make sure you’ve enabled GPU acceleration, and your exports may take up less of your computer’s resources.

For Everyone Else

There’s a good chance you don’t need to buy a second card to upgrade your computer’s performance. Consider a faster GPU, or checking to see if other components in your machine are holding you back first. Use a tool like UserBenchmark to see how your components compare to other builds.


Are dual graphics cards worth it in 2022?

Purchases through our links may earn LEVVVEL a commission.

Having an SLI setup back in the day could give you enough of a performance boost to justify the high price. SLI and CrossFire were all the rage because many games supported dual graphics card setups. With a dual graphics card system you could, in many titles, get noticeably higher performance than with a single flagship GPU. Further, in some cases, running two mid-range cards in SLI or CrossFire would match or even exceed the performance of top-tier GPUs. But what about today? What are your dual graphics card options right now, and are they worth it?

After NVIDIA introduced SLI in 2004 (a revival of the Scan-Line Interleave technology developed by 3dfx in 1998), users got a way to play games at framerates that were impossible to achieve even with flagship graphics cards of the era. ATI brought out CrossFire in 2005, and from then on the two mGPU technologies have had their ups and downs.

During the first couple of years, these systems were the only way to play Crysis at max settings and playable framerates, but even in their golden days both SLI and CrossFire had a multitude of caveats. First of all, not all games supported dual-GPU setups. Next, even fewer games were properly optimized for SLI and CrossFire, in the sense that you could get enough performance to justify the price. Also, a dual-GPU setup would give you 50 percent more performance at best, with most titles seeing much smaller gains, if any.

Then there were driver issues, high prices (unless you opted for a mid-range setup, like dual 8800 GT cards) and high power requirements. There were also dual-GPU cards, like the GTX 690, which were usually a better choice for users who wanted to go over the top. Over the years, SLI and CrossFire slowly lost their popularity.

Due to the increased cost and complexity of developing new games, most developers didn’t bother to optimize their games for SLI and CrossFire. DirectX also imposed some limitations, and with DirectX 12 most of the burden of enabling multi-GPU support shifted to the developer side. Since there’s no incentive to optimize for mGPU setups (because almost no one uses them anymore), most modern games don’t even support SLI.

Next, each subsequent GPU generation brought such large performance improvements that, at some point, flagship GPUs offered more than enough performance for 99 percent of users. These days a single GPU, such as the RTX 3080 or RTX 3090, can run almost any game at 4K with high framerates, let alone at lower resolutions.

Finally, seeing how developers slowly abandoned dual-GPU support and how fewer and fewer users were interested in running dual graphics cards, NVIDIA and AMD abandoned the tech themselves. AMD shut down CrossFire in 2017, and since then running a dual AMD GPU setup has been possible but mostly pointless. NVIDIA first limited SLI to two cards in 2016, then limited it to the RTX 3090 alone when Ampere came out. The company also announced it won’t develop new SLI profiles in the future, practically killing the tech, at least when it comes to gaming.

So, in 2021, you have a couple of options when it comes to dual-GPU setups: either get two RTX 3090s or try running two AMD RX 6800 XTs in mGPU mode. Is it worth it? Well, first of all, the number of games natively supporting SLI is thirteen. Yes, out of all the games you could play at the moment, only thirteen work with SLI. On the AMD side, the story’s a bit different. There’s no official support, but Radeon software does allow running mGPU setups. There’s no physical connection like NVIDIA’s NVLink bridge; the two cards work together via software, as when you use dual NVIDIA cards for rendering purposes.

Next, the experience of running dual RTX 3090s is less than ideal. Some games show solid gains, others won’t even start, so the overall value is quite low. The same can be said (though with less stuttering) of running two RX 6800 XTs. Finally, you need a beastly PSU for this kind of adventure, along with a top-notch system around it.

So, for gaming, dual graphics card setups are definitely not worth the money. They’re extremely expensive, especially now that GPU prices are skyrocketing. And for the price, you get support in a dozen games and extremely slim chances that any future titles will ship with SLI support.

The story takes a turn for the better if you need dual graphics cards for rendering, at least with NVIDIA cards. Depending on your workflow, you can see massive gains by combining two graphics cards. Here, pairing your old card with an RTX 3080 or 3090, or getting two Ampere cards from the start, does make sense, provided the apps you use can take advantage of the second GPU.

So, if you’re gaming, an SLI setup doesn’t make any sense: it’s extremely expensive, lacks support from both developers and NVIDIA/AMD, and most games won’t even work with it. On the flip side, for rendering, mGPU setups can be worth the price, but only in selected applications.

Goran is a PC hardware expert whose years in the field have given him knowledge of everything gaming-tech related.

Radeon VII doesn’t handle double precision well

01/15/2019 [08:03],

Andrey Sozinov

Last week, AMD quite unexpectedly introduced its new flagship Radeon VII graphics card, based on the 7nm second-generation Vega GPU. And almost immediately after the announcement, some interested users wondered how good the newcomer is at double-precision floating-point (FP64) computing.

This question did not arise by chance. The Radeon VII graphics card is very similar to the Radeon Instinct MI50 compute accelerator introduced last year. Both use the Vega 20 GPU in a 3840-stream-processor configuration, complemented by 16 GB of HBM2. Only the GPU clock speed differs slightly: 1800 MHz for the gaming Radeon VII versus 1746 MHz for the professional Radeon Instinct MI50.

Techgage reached out to AMD CMO Saša Marinkovič to find out how the Radeon VII handles double-precision calculations, and he stated directly: “Double precision is not enabled in Radeon VII.” This means that instead of the 6.7 TFLOPS of FP64 that the Instinct MI50 offers, AMD’s gaming newcomer will only manage about 862 GFLOPS, that is, 1/32 of its half-precision (FP16) performance, by analogy with the Radeon RX Vega.
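The 1/32 figure can be sanity-checked with back-of-envelope arithmetic from the specs quoted above (3840 stream processors at 1800 MHz, two FMA operations per clock). The small gap between the ~864 GFLOPS computed here and the reported ~862 GFLOPS comes down to the exact boost clock assumed.

```python
def peak_tflops(stream_processors: int, clock_mhz: float) -> float:
    """Theoretical peak: shaders x clock x 2 ops (fused multiply-add), in TFLOPS."""
    return stream_processors * clock_mhz * 1e6 * 2 / 1e12

fp32 = peak_tflops(3840, 1800)   # ~13.8 TFLOPS single precision
fp16 = 2 * fp32                  # Vega runs half precision at double rate
fp64_gflops = fp16 / 32 * 1000   # 1/32 of the FP16 rate -> ~864 GFLOPS
print(f"FP32 {fp32:.1f} TFLOPS, FP16 {fp16:.1f} TFLOPS, FP64 {fp64_gflops:.0f} GFLOPS")
```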

In general, this is quite logical, since gaming video cards do not need high double-precision performance. Such capabilities are needed in accelerators used in other areas, for example in various kinds of process modeling and in financial analysis. And had AMD decided to give the Radeon VII full double-precision support, these video cards might never have reached gamers at all.

It is also worth noting that despite the “cut-down” FP64 performance, the Radeon VII still leads its consumer competitors in this area. For example, the maximum FP64 performance of the Titan RTX is 509 GFLOPS, and of the GeForce RTX 2080 Ti, 420 GFLOPS. The half-precision performance of the GeForce RTX 2080 Ti and Radeon VII, meanwhile, does not differ much: about 27 and 28 TFLOPS, respectively.

Source: https://3dnews.ru/981030

GPU Servers Inside

05/05/2021 | Servers | ASUS | AMD EPYC | NVIDIA | GPU

Today’s graphics processing units (GPUs) are replacing conventional central processing units (CPUs) in parallel computing. Machine learning, neural networks, voice and image recognition, mathematical modeling, and visualization in games and design are all tasks for GPU servers. The range of options is huge: while the “premier league” runs the top-end NVIDIA DGX A100, with 5 petaflops of performance and a $200K price tag, ordinary users make do with more affordable platforms based on NVIDIA A10/A30/A40 and A4000/A5000/A6000 accelerators.

AMD EPYC processors are ideal for GPU servers: they have up to 64 cores and 128 PCIe Gen4 lanes. It is no coincidence that NVIDIA is switching its own servers to AMD EPYC. The current second-generation Intel Xeon SPs offer up to 28 cores and 48 PCIe Gen3 lanes; the third generation, expected by fall, will have 64 PCIe Gen4 lanes. Both technologically and economically, the advantage lies with AMD.

How graphics platforms work

The market offers many GPU platform variants based on one or two AMD EPYCs (or two Intel Xeon SPs) with differing numbers of GPU accelerators. A ratio of four GPUs per CPU can be considered the most productive option.

One such system is the ASUS ESC4000A-E10, a single-socket server based on AMD EPYC. It accommodates four double-width or eight single-width GPUs.

The 2U platform has a “compartment” layout.

Server GPU accelerators differ from consumer ones: they are double-width rather than nearly triple-width, and use longitudinal blower (turbine) cooling instead of axial fans with side heat exhaust.

The GPUs are mounted in pairs in cassettes before being installed in the server.

2U platform height is enough to accommodate four GPUs horizontally.

The block diagram shows a surplus of PCIe lanes for connecting not only GPUs but also peripheral controllers and NVMe SSDs.

Our test interest

Let’s focus on visualization tasks in design, video production, games, and augmented-reality applications. We want to check how GPU scaling (adding GPUs to the system) affects the rendering of complex scenes, using standard rendering programs.

V-Ray

V-Ray works as a rendering plugin for Autodesk 3ds Max, Cinema 4D, SketchUp, Rhino, Revit, ArchiCAD, Maya, Blender, and many more. Its creators designed and optimized it to take full advantage of all hardware components: CPU, GPU, RAM, storage, networking, and motherboard. The CPU and GPU can be used simultaneously: for example, the CPU cores for the actual rendering and the GPU for noise reduction and optical effects, or the other way around, with V-Ray running on the graphics cards but using the CPU to calculate the light-cache GI. Hybrid rendering is also possible in V-Ray GPU, where the GPU and CPU components render at the same time.

The developer offers a set of tests V-Ray Benchmark to evaluate the capabilities of a workstation running V-Ray.

Octane Render

Octane Render is a CUDA-based real-time renderer running on NVIDIA GPUs. Built on ray tracing, it supports and scales performance in multi-GPU configurations, with the acceleration most noticeable in complex scenes. For testing, we use the OctaneBench utility.

Redshift

A powerful GPU-accelerated renderer, Redshift offers a variety of features and integrates with standard computer-graphics applications. The demo version is functionally identical to the commercial one, is free, and includes plugins for Maya, 3ds Max, Softimage, C4D, Houdini, and Katana.

Test configuration

  • CPU: AMD EPYC 7302P, 16 cores
  • Platform: ASUS ESC4000A-E10
  • RAM: 8 × 16 GB DDR4-3200 Reg ECC
  • Video cards: 4 × ASUS GeForce RTX 3090 TURBO (TURBO-RTX3090-24G)
  • Storage: 2 × 960 GB Western Digital Ultrastar SN640 U.2 NVMe SSD
  • Software: Windows 10 Pro 64-bit; V-Ray 5.0.20; OctaneBench 2020.2.3; Redshift 3.0.36

Generally speaking, professional applications should use professional NVIDIA graphics accelerators, for example an A6000 instead of an RTX 3090.

Users have nonetheless fallen in love with the RTX 3090 Turbo cards: double-width and blower-cooled, several of them fit into a GPU server. That would not work with consumer (axial-fan) versions of the RTX 3090.

NVIDIA did not put up with the cannibalization of its own “true server” Axxx GPU sales for long. Six months passed, and manufacturers were “strongly advised” to stop producing turbo versions of the RTX 3090. In effect, our tests are the swan song of the RTX 3090 Turbo.

Benchmarks

In a server with four GPUs, we disabled the accelerators one by one to evaluate the performance drop.

V-Ray and OctaneBench show almost linear performance scaling with the number of active GPUs.

In Redshift, adding GPUs does not reduce rendering time proportionally, but the time savings are still substantial. When visualizing complex scenes that require many hours of computation, every hour counts.
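Scaling results like these are often summarized as speedup and parallel efficiency relative to a single GPU. The sketch below shows the calculation; the render times in it are illustrative placeholders, not the article’s actual measurements.

```python
def scaling_report(times_s: dict[int, float]) -> dict[int, tuple[float, float]]:
    """Map GPU count -> (speedup vs one GPU, parallel efficiency)."""
    t1 = times_s[1]
    return {n: (t1 / t, t1 / t / n) for n, t in sorted(times_s.items())}

# Illustrative render times in seconds, NOT measured values from this test.
times = {1: 400.0, 2: 210.0, 3: 145.0, 4: 115.0}
for n, (speedup, eff) in scaling_report(times).items():
    print(f"{n} GPU(s): {speedup:.2f}x speedup, {eff:.0%} efficiency")
```

Efficiency below 100% at four GPUs is exactly the Redshift pattern described above: each extra card still saves real time, just less than a proportional share.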

Aftertaste

If we are talking about one GPU per server (or workstation), choosing a chassis is not difficult. But for two, and even more so four GPUs, you need a specialized platform. When all four GPUs run at full power, their temperature exceeds 80 °C, so not only the GPUs’ own turbines but also the server’s cooling system must have fan headroom, accounting for high ambient temperatures and the heat generated by other components. Such servers are noisy and power-hungry; in our case, a 1600 W power supply was just barely enough.