Dynamic upscaling is a major feature of modern games and the latest graphics cards, but there are several competing technologies to choose from. Intel’s Xe Super Sampling (XeSS), Nvidia’s Deep Learning Super Sampling (DLSS), and AMD’s FidelityFX Super Resolution (FSR) each work in their own way, and they aren’t equal in performance, visual quality, game support, or hardware support.
While there’s an argument for simply enabling whatever your hardware and games support, if you have a choice between them, or are weighing different graphics cards based on their XeSS, DLSS, and FSR support, it’s important to know how they differ. Here’s a breakdown of these upscaling technologies and which one might work best for you.
In general, DLSS leads the pack in image quality thanks to its AI approach, but it’s no longer the clear leader because of FSR 2.0. The original FSR implementation was pretty poor, but the 2.0 update puts it almost on par with DLSS. We really like FSR 2.0 for its hardware support, as it works on almost every GPU made in the last five years.
XeSS is a bit different. Unlike DLSS and FSR, there is no definitive version. Instead, there is an Intel Arc-exclusive version of XeSS that takes advantage of XMX cores on Arc GPUs, as well as a vendor-neutral version of XeSS that resembles FSR in that it requires no AI hardware.
So where does XeSS fit in? Well, the AI-powered version is significantly behind DLSS and FSR 2.0 in terms of image quality, which is basically the same position occupied by FSR 1.0: not terrible, but not amazing either. When we compared the performance modes of XeSS and DLSS in Shadow of the Tomb Raider, we found DLSS to have higher quality in general. We haven’t tested XeSS against FSR 2.0 yet, but the conclusion would probably be similar, as FSR 2.0 is generally as good as DLSS.
For detailed images, see our Shadow of the Tomb Raider XeSS performance comparison.
The higher quality modes of XeSS aren’t much better, either. In this Hitman 3 snapshot, you can see that in every mode the foliage is blurrier than in the scene at native resolution. For the performance gains on offer (which we’ll discuss in a moment), the trade-off in image quality isn’t impressive.
For detailed images, see our Hitman 3 XeSS comparison.
We expected XeSS to be more comparable to DLSS through its use of hardware AI, but that’s clearly not the case. At the same time, FSR 2.0 proves that AI hardware isn’t even necessary to make a good upscaler (although FSR 2.0 isn’t without its issues). Intel’s approach to scaling seems to be in a very delicate middle ground between AMD and Nvidia, as there is a version that uses hardware AI like DLSS and a version aimed at GPUs without hardware AI that works like FSR.
However, DLSS and FSR both had rough starts, so there’s no reason to believe XeSS can’t improve. Hopefully XeSS 2.0 will put Intel on a level playing field once people start buying the new Arc GPUs.
Performance is the other side of the upscaling coin. Upscaling isn’t worth it if the result looks terrible, but it also has to meaningfully improve your frame rate; otherwise, you might as well render at native resolution. It ultimately comes down to how much image quality you’re willing to sacrifice for a higher frame rate, which is why all of these upscalers offer different modes that let you adjust the balance of quality and performance to your liking.
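To make the quality/performance trade-off concrete: each mode renders the game internally at a fraction of the output resolution, then reconstructs the final image. The scale factors below follow DLSS’s published per-axis ratios (FSR 2.0 uses very similar values); treat the exact numbers as approximate illustration rather than a spec for any one upscaler.

```python
# Illustrative internal render resolutions for a 4K (3840x2160) output.
# Scale factors roughly match DLSS's documented per-axis mode ratios;
# FSR 2.0's are very similar. Approximate, for illustration only.
MODES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the GPU actually renders at before upscaling."""
    scale = MODES[mode]
    return round(out_w * scale), round(out_h * scale)

for mode in MODES:
    w, h = render_resolution(3840, 2160, mode)
    print(f"{mode}: renders at {w}x{h}")  # Performance mode: 1920x1080
```

The fewer pixels a mode renders, the bigger the frame-rate gain and the harder the upscaler has to work to reconstruct detail, which is exactly where the quality differences between XeSS, DLSS, and FSR show up.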
Since Nvidia GPUs support XeSS, FSR, and DLSS, these are the ideal cards to compare performance between each upscaler (minus the AI-powered version of XeSS for Intel Arc). In our Intel Arc A770 and A750 review, we tested the RTX 3060 in Shadow of the Tomb Raider and Hitman 3 using all available quality modes for XeSS and DLSS, and the results are quite conclusive.
In Shadow of the Tomb Raider, XeSS improved performance by up to 43% in its Performance mode, but DLSS achieved a 67% frame-rate increase with its own Performance mode. In Ultra Performance mode, DLSS doubled the frame rate, a far greater improvement than anything XeSS could offer. When you consider the difference in image quality between each upscaler’s Performance mode (shown in the previous section), DLSS is the clear winner.
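The uplift figures above are straightforward percentage changes over the native-resolution frame rate. A quick sketch of the math, using hypothetical FPS numbers for illustration (not our measured results):

```python
def uplift_pct(native_fps: float, upscaled_fps: float) -> float:
    """Percentage frame-rate gain over the native-resolution baseline."""
    return (upscaled_fps - native_fps) / native_fps * 100

# Hypothetical 60 FPS native baseline, purely to show the arithmetic:
print(uplift_pct(60, 86))   # roughly a 43% gain, like XeSS Performance
print(uplift_pct(60, 100))  # roughly a 67% gain, like DLSS Performance
print(uplift_pct(60, 120))  # doubling the frame rate is a 100% gain
```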
It’s a similar story in Hitman 3. The margins here are basically the same as in Shadow of the Tomb Raider, except for DLSS’s Ultra Performance mode, which couldn’t double the frame rate. Even without Ultra Performance, however, DLSS is still the clear winner when it comes to performance.
We should also note that DLSS is expected to get even faster with the upcoming 3.0 version, which brings AI-generated frames into play. Nvidia promises big performance gains with DLSS 3, but game support will be limited for a while, and in our testing with the RTX 4090, image quality takes a noticeable hit. DLSS 3 isn’t an existential threat to XeSS at the moment, but it’s not great for Intel to miss out on a feature that might become more useful in the future.
As for FSR 2.0, it’s generally on par with DLSS in performance, so although we haven’t tested it directly against XeSS, we’d likely see FSR in the lead and XeSS significantly behind, just as with XeSS versus DLSS. FSR doesn’t have AI-generated frames like DLSS 3, however, and it’s unclear how AMD will bridge that gap in the future since its GPUs lack AI hardware, at least for now.
Still, FSR 2.0 was good enough at launch that we started to wonder if DLSS was still needed. DLSS 3 might change that if you can afford an RTX 4000-series graphics card, but given that most can’t, it may leave FSR as the upscaling king in the long run.
DLSS is the oldest of the three upscaling technologies and, unsurprisingly, it supports the most games. It’s available in dozens of titles, including Cyberpunk 2077, Marvel’s Avengers, and Riders Republic, and Nvidia is constantly adding support for new games. Support is tiered, though: most games support DLSS 1 and 2, while DLSS 3 support is still limited at this time.
FSR is much newer, but that hasn’t stopped it from building an impressive list of supported titles. At the time of publication, the heavyweights are God of War, Deathloop, and Red Dead Redemption 2. FSR 2.0 support is also planned for Hitman 3, Microsoft Flight Simulator, and upcoming games like Forspoken and the Uncharted PC port.
Generally speaking, if a game has FSR, it will have DLSS, and vice versa, although older titles that shipped before FSR debuted often only have DLSS. It looks like we’ll see a similar trend with XeSS, as several games that already support XeSS, or will soon, also have DLSS and FSR 2.0. For example, the two games we tested for image quality (Shadow of the Tomb Raider and Hitman 3) support both DLSS and XeSS.
The biggest difference between DLSS, FSR, and XeSS is hardware support, and it’s perhaps the difference that defines which upscaler is best. DLSS requires an Nvidia RTX graphics card. Not only is the feature limited to Nvidia hardware, it’s also limited to the latest generations: you need at least an RTX 2000-series card to use DLSS and an RTX 4000-series card to use DLSS 3.
This is because DLSS relies on the Tensor cores in recent Nvidia graphics cards to handle the AI calculations. FSR doesn’t use AI, so it doesn’t require any special hardware. FSR’s strength isn’t that more games support it or that it has better image quality than DLSS, because it has neither; it’s that anyone can use it.
Beyond current graphics cards from AMD and Nvidia, FSR also works on integrated graphics, APUs, and graphics cards that are more than two generations old. There is a compromise in quality, but most players don’t have a recent Nvidia graphics card; the majority are still using older GPUs, an AMD card, or integrated graphics.
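The hardware rules above boil down to a small compatibility check. The vendor and series encoding below is our own illustrative shorthand, not any official API, and it assumes "intel" means an Arc card with XMX cores:

```python
def supported_upscalers(vendor: str, rtx_series: int = 0) -> list[str]:
    """List the upscalers a GPU can run, per the hardware rules above.

    vendor: "nvidia", "amd", or "intel" (assumed to be an Arc card).
    rtx_series: RTX generation (2000, 3000, 4000) for Nvidia, else 0.
    Illustrative shorthand only, not an official API.
    """
    options = ["FSR"]                 # FSR runs on nearly anything
    options.append("XeSS (general)")  # vendor-neutral XeSS path
    if vendor == "intel":
        options.append("XeSS (XMX)")  # full AI path, Arc GPUs only
    if vendor == "nvidia" and rtx_series >= 2000:
        options.append("DLSS 2")      # needs Tensor cores (RTX 2000+)
        if rtx_series >= 4000:
            options.append("DLSS 3")  # frame generation, RTX 4000 only
    return options

print(supported_upscalers("nvidia", 3000))  # FSR, XeSS (general), DLSS 2
```

Notice how lopsided the table is: every GPU gets FSR and the general XeSS path, while the premium options gate on specific silicon.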
XeSS strikes a middle ground between the two. Like DLSS, XeSS uses dedicated cores (called XMX cores on Intel graphics cards) to handle AI calculations. The full version of XeSS requires these cores, so it only works on Intel graphics cards. But Intel makes two versions.
This is something we wanted to see beyond DLSS. Essentially, Intel offers developers two different versions of XeSS: one that requires the dedicated XMX cores, and another that is a general-purpose solution for a “wide range of hardware.” In theory, it’s the best of DLSS and FSR rolled into one, but there’s a clear problem: two versions means twice the work if XeSS is difficult to implement, so developers may not adopt it widely in their games.
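Conceptually, the two-version design is runtime dispatch: probe for XMX support and fall back to the general-purpose path otherwise. The XeSS runtime makes this choice internally; the names and types in this sketch are purely illustrative, not Intel’s actual SDK API.

```python
from dataclasses import dataclass

@dataclass
class GpuInfo:
    """Hypothetical GPU descriptor for this illustration."""
    vendor: str          # "intel", "nvidia", "amd", ...
    has_xmx_cores: bool  # True only on Intel Arc GPUs

def pick_xess_path(gpu: GpuInfo) -> str:
    """Pick which XeSS code path to initialize (illustrative names)."""
    if gpu.vendor == "intel" and gpu.has_xmx_cores:
        return "xmx"      # full AI path on dedicated matrix cores
    return "general"      # vendor-neutral fallback path

print(pick_xess_path(GpuInfo("intel", True)))    # xmx
print(pick_xess_path(GpuInfo("nvidia", False)))  # general
```

The catch the article describes is that this split isn’t free for game developers, who have to validate quality and performance on both paths.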
DLSS leads the pack when it comes to image quality and game support. If it worked on multiple generations of graphics cards from different brands, it would make FSR obsolete. Instead, FSR fills the gap that DLSS can’t by delivering similar image quality and performance to a wider variety of cards. It’s a classic battle between closed source and open source, and it’s hard for either solution to render the other obsolete when each has distinct advantages and disadvantages.
XeSS tries to outdo both DLSS and FSR by combining the best of each: the AI-powered image quality of DLSS and the broad compatibility of FSR. Unfortunately, the visual fidelity and performance gains aren’t in XeSS’s favor. At the moment, XeSS is a non-starter for Nvidia users because DLSS is simply better; as for AMD and Intel users, XeSS is only attractive if FSR 2.0 isn’t an option.
A future version of XeSS could change things; or rather, a future version of XeSS needs to change things. We saw it with DLSS and FSR: the second version of each made the feature worth using. XeSS is in exactly the same position, and hopefully Intel can replicate Nvidia’s and AMD’s progress.