Wondering what in the world “triple buffering” actually means and how it’s affecting your PC? This guide has got you covered!
The short answer is that triple buffering is a technology that can smooth out video game framerate and reduce screen tearing, but it can also cause notable input lag in some situations. Triple buffering also often requires a fairly powerful PC to run properly.
That’s the short-and-sweet version. Realistically, there’s a lot more to cover to fully understand triple buffering and determine whether or not you should be using it.
This quick guide will walk you through all of the important stuff and help you make the right call.
Introduction
You’ve probably heard the term buffering before, especially if you’re old enough to remember what watching videos on your phone was like back when 3G coverage was spotty and 4G wasn’t even in the planning phases.
If that’s the case, you may be under the impression that buffering is what you call the irritating pauses that happen when your internet connection isn’t fast enough to keep up with your viewing habits. Like a lot of other computing terms, however, buffering means different things in different contexts.
Take “triple buffering,” for example. It can’t mean multiplying those irritating pauses by three or anything like that, so it stands to reason that the “buffering” in “triple buffering” means something else entirely.
Triple buffering isn’t a term you’ll run into often, and it’s only tangentially related to buffering’s colloquial meaning. It isn’t a difficult concept to understand, but you need a little background and context before it makes sense.
It won’t help much yet in our explanation, but here’s a quick video of a YouTuber trying to demonstrate some visual differences with triple buffering on and off:
What Actually Is a Buffer?
You need to learn to walk before you can run, and you need to learn what one buffer is before you know why you’d want to triple buffer.
In computing terms, a buffer is a piece of your computer’s memory that’s used to shuttle data from one place to another. Buffers can be virtual or have a dedicated physical location in your computer’s RAM, and they can be used for everything from moving data between input and output devices to passing data between processes. They’re frequently used to smooth out dataflows when there’s a mismatch between the rate data is received and the rate it’s processed, or when those rates vary from moment to moment.
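To make that concrete, here’s a minimal sketch (in Python, purely for illustration) of a buffer absorbing the mismatch between a bursty producer and a steady consumer; the function names and data are made up for the example:

```python
# A buffer smoothing out mismatched rates: a producer writes data in
# bursts, a consumer reads at its own steady pace, and a queue in
# between absorbs the difference.
from collections import deque

buffer = deque()  # the buffer: memory sitting between producer and consumer

def produce(items):
    """Producer side: data arrives in a burst and is parked in the buffer."""
    for item in items:
        buffer.append(item)

def consume():
    """Consumer side: drain one item at a time, at the consumer's pace."""
    return buffer.popleft() if buffer else None

produce(["frame 1", "frame 2", "frame 3"])  # bursty arrival
print(consume())  # -> frame 1, read out one at a time
print(consume())  # -> frame 2
```

The producer never has to wait for the consumer, and the consumer never has to care how irregularly the data arrived, which is exactly the decoupling the watering-can analogy below describes.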
Think of it like this: Say you have a big garden that needs watering, so you grab your watering can and walk to the nearest spigot. You put it under the tap, fill it up, turn off the tap, and take the watering can to give your thirsty plants a drink. In this case you can think of the spigot as the process generating the data, the watering can as the buffer, and the plants as the process receiving the data.
That garden-watering method gets the job done, but you can’t help getting a little impatient while you wait for the watering can to fill up, and you’re sick of turning the spigot on and off every couple minutes. And sure, it’s only a few seconds at a time, and yeah, you could probably let it go, but the only thing you love more than gardening is efficiency. So you think about it. Irrigation’s too expensive, sprinklers aren’t precise enough, and throwing water balloons at your garden seems like it’d cause a whole bunch of other problems…so what do you do?
Then, suddenly, you’ve got it.
Double Buffering
What’s better than one watering can? Two watering cans.
Suddenly your whole system is a lot more efficient. Now you can fill up one watering can, set the second one under the spigot, and go water your plants while the second can fills up. If you time it right you’ll get back to the spigot right as the second can is full, swap it out with the first one, and go back to watering the plants. No more waiting for the watering can to fill up, no more turning the spigot on and off, and no more wasted time.
That method—trading off between two watering cans—is basically the principle behind double buffering. Double buffering is used in a lot of different areas, but it’s especially useful when it comes to rendering graphics and maintaining a steady framerate.
Computers typically redraw the images on the screen every time they render and display a new frame, and it takes them a split second to replace the old frame with the new one. That split-second delay and constant refreshing can create distracting flickering effects and annoying screen tearing, so programmers have implemented various double buffering techniques to reduce or prevent such visual artifacts entirely.
There are a number of double buffering methods, but let’s stick to one for simplicity’s sake: the page-flip method. Much like the gardening scenario above, this method keeps one buffer in the background while the other is being displayed on the monitor. As soon as the back buffer finishes drawing, it switches places with the front buffer, and its image goes up on the monitor. Doing this reduces the time between new frames, simultaneously improving efficiency and mostly eliminating flickering and screen tearing. Mostly.
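The page-flip method can be sketched in a few lines of toy Python (this is illustrative only, not a real graphics API; `draw` and `flip` are made-up stand-ins):

```python
# Toy page-flip double buffering: render into the back buffer, then swap
# it with the front buffer so the screen only ever shows complete frames.

front = []  # buffer currently shown on screen
back = []   # buffer being drawn in the background

def draw(buffer, frame_number):
    """Stand-in for rendering: fill a buffer with one frame's contents."""
    buffer.clear()
    buffer.append(f"pixels for frame {frame_number}")

def flip():
    """Page flip: swap which buffer is front and which is back."""
    global front, back
    front, back = back, front

for frame in range(3):
    draw(back, frame)  # render the next frame off-screen
    flip()             # swap: the finished frame goes on screen
    # 'front' now holds a complete frame; the monitor never sees a
    # half-drawn image
```

The key point is that the monitor only ever reads from `front`, so it never displays a partially drawn frame, which is what eliminates most of the flicker and tearing.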
Now you’ve got the gist of buffering and double buffering, so let’s move on to the matter at hand.
Triple Buffering
As you can probably guess by now, triple buffering is basically what you get when you add another buffer to double buffering. Double buffering is a huge improvement over single-buffered or non-buffered computer graphics, but it isn’t perfect. Programs using double buffering have to wait for the front and back buffer to switch before they can start drawing the new image, forcing the computer to wait for several milliseconds between frames. Triple buffering was created to reduce or eliminate that downtime.
Triple buffering theoretically eliminates those milliseconds-long delays by adding another back buffer to the mix. That means there’s always one front buffer being displayed on the monitor, a back buffer ready to swap with or copy to the front buffer, and a second back buffer ready to jump in and start getting drawn onto while the other is swapping or copying. The result is (usually) higher, smoother framerates, but it also comes at the cost of added input latency.
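Extending the earlier toy sketch, here’s how the three buffers interact in the variant that always keeps the newest completed frame (the approach Fast Sync popularized). Again, this is an illustrative Python sketch with made-up names, not real driver code:

```python
# Toy triple buffering: one buffer is on screen, one holds the newest
# completed frame, and one is always free for the renderer to draw into,
# so rendering never has to stall waiting for a swap.

front = "frame 0"  # displayed on the monitor
ready = None       # most recent completed frame, waiting to be shown
spare = None       # free buffer the renderer can draw into right away
rendered = 0       # how many frames the renderer has produced

def render_frame():
    """Renderer: draw into the spare buffer without waiting for a swap."""
    global ready, spare, rendered
    rendered += 1
    spare = f"frame {rendered}"
    # The freshly drawn frame becomes the 'ready' frame; if an older
    # ready frame was never displayed, it's simply dropped.
    ready, spare = spare, ready

def display_refresh():
    """Monitor refresh: show the most recent completed frame, if any."""
    global front, ready
    if ready is not None:
        front, ready = ready, None

# The GPU renders faster than the monitor refreshes:
render_frame()     # frame 1 is ready
render_frame()     # frame 2 replaces frame 1 (frame 1 is dropped)
display_refresh()  # the monitor shows the newest completed frame
print(front)       # -> frame 2
```

Because the renderer always has a free buffer, it never idles, and because the monitor always gets the newest completed frame, a fast GPU keeps the displayed image close to the current game state. The flip side is that frames shown to you were still rendered slightly in the past, which is where the added input latency comes from.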
You won’t find triple buffering as a toggleable setting or built-in feature in your operating system or in most software. It doesn’t have many uses outside of gaming, and even then you’ll typically only find it tucked away in settings menus for games or graphics cards. It also isn’t recommended for many games; triple buffering can deliver higher, more stable framerates, but the added input latency makes it a bad choice for fast-twitch games like CS:GO, Battlefield 2042, or any games that rely on fast reflexes and quick decision making.
NVIDIA’s Fast Sync feature is one of the most prominent implementations of triple buffering. Fast Sync, intended as an improvement on or alternative to V-Sync (we have a whole guide to Fast Sync vs V-Sync), uses triple buffering to smooth out the framerate and all but eliminate tearing without adding substantial input lag. It does this by taking advantage of the difference between your monitor’s refresh rate and the maximum number of frames your graphics card can render.
Where V-Sync works to prevent tearing by waiting to flip the front and back buffers until your monitor is ready for the next frame (a process that can introduce a lot of input lag), Fast Sync works by constantly rendering frames and displaying the most recent frame in time with the monitor’s refresh rate. In other words: V-Sync slows down your computer’s internal framerate to match your monitor’s refresh rate, while Fast Sync accelerates the internal framerate so there’s always a frame available when the monitor’s ready for the next one.
AMD’s Enhanced Sync technology works very similarly to NVIDIA’s Fast Sync. Both are built to function like better versions of V-Sync, both use three frame buffers, and both work by leveraging the difference between your computer’s maximum framerate and your monitor’s refresh rate.
Like Fast Sync, Enhanced Sync works best when your computer’s maximum internal framerate is significantly higher than your monitor’s refresh rate (NVIDIA recommends a framerate as much as three times the refresh rate), and both prevent screen tearing without adding much input lag. They aren’t exactly the same technologies, of course, but they’re similar enough to be more or less interchangeable.
The Effects of Triple Buffering
Regular triple buffering, the kind you’ll find in some game options menus, tends to add about as much lag as V-Sync, if not more, and even Enhanced Sync and Fast Sync add some lag despite their more advanced designs. This means that, like V-Sync, standard triple buffering isn’t recommended for fast-paced games, and some users hesitate to use Enhanced Sync or Fast Sync while they play.
Enhanced Sync and Fast Sync work by making your graphics card render frames as quickly as possible, which means they demand more electricity and put more wear and tear on your hardware than if they were disabled. More electricity and more activity also means more heat generation, so you’ll need to keep an eye on your computer’s internal temperatures if you want to play with either technology enabled.
Final Thoughts & Recommendations
Triple buffering has its pluses and minuses. It can smooth out your framerate and eliminate screen tearing, but it requires fairly powerful hardware to work properly, and that hardware will be under quite a lot of stress during your game sessions.
Triple buffering will also add some input lag to your game. Standard implementations can add as much lag as V-Sync, and while Fast Sync and Enhanced Sync keep that latency to a minimum, even a little extra delay can be a deal-breaker for serious gamers. If you’re more interested in smooth, tear-free gameplay than lightning-fast reactions, however, you may find that triple buffering is exactly the solution you’re looking for.
This kind of setting is similar to what we covered with ray tracing and with SSAA: it can drastically improve graphics, but just as easily come at a steep performance cost.