This is really unnecessary and more placebo than helpful, not to mention old:
> 2 years ago
> It [sic] tested it on Windows 10 LTSB (1607)
If you want to reduce input lag for exclusive fullscreen, turn off “fullscreen optimizations”. But that specific bug, introduced in build 1903, was fixed a few years ago. (But if you still somehow get it — you can just turn this off per-game instead of this wide hack!)
Modern DXGI flip presentation model doesn’t add input lag anymore and properly detects when the exclusive fullscreen app is the only one showing, and gives it direct render control. (E.g. you can frame tear again!)
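For what it's worth, a rough sketch of how a flip-model swapchain with tearing support is typically set up in D3D11 (assuming an existing device and hwnd and a DXGI 1.5+ runtime; variable names are mine, not anything from the article):

    /* Sketch: flip-model swapchain that is allowed to tear in a window.
     * Assumes an existing ID3D11Device *device and HWND hwnd, DXGI 1.5+. */
    #define COBJMACROS
    #include <d3d11.h>
    #include <dxgi1_5.h>

    IDXGISwapChain1 *create_flip_swapchain(ID3D11Device *device, HWND hwnd)
    {
        IDXGIFactory5 *factory = NULL;
        IDXGISwapChain1 *swapchain = NULL;
        BOOL allow_tearing = FALSE;

        if (FAILED(CreateDXGIFactory1(&IID_IDXGIFactory5, (void **)&factory)))
            return NULL;

        /* Tearing needs driver + OS support, so query it instead of assuming. */
        IDXGIFactory5_CheckFeatureSupport(factory, DXGI_FEATURE_PRESENT_ALLOW_TEARING,
                                          &allow_tearing, sizeof(allow_tearing));

        DXGI_SWAP_CHAIN_DESC1 desc = {0};        /* width/height 0 = use window size */
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount = 2;
        desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;       /* flip model */
        desc.Flags = allow_tearing ? DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING : 0;

        IDXGIFactory5_CreateSwapChainForHwnd(factory, (IUnknown *)device, hwnd,
                                             &desc, NULL, NULL, &swapchain);
        IDXGIFactory5_Release(factory);
        return swapchain;
    }

    /* Per frame, vsync off, tearing allowed when supported:
     *   IDXGISwapChain1_Present(swapchain, 0, DXGI_PRESENT_ALLOW_TEARING); */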
I think some UIs will also just not work nowadays if you kill DWM, even more likely in Windows 11 since more UI got moved to WinUI.
> DXGI flip presentation model doesn’t add input lag anymore
This is only true in certain situations. For non-fullscreen windows it requires that your GPU driver support a feature called "Multiplane Overlay" which is not supported on Nvidia cards before RTX 20 series (probably not older AMD cards either though I'm not sure). A lot of people have disabled it because they've experienced bugs like flickering or stability issues. And a lot of random features will also disable it without notice, like monitor rotation or 10-bit color.
I've also seen Intel GPUs that will only enable this with a single display. Turn on two displays and no windows will ever enter Independent Flip mode according to PresentMon.
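If you want to check what your own setup actually reports (rather than inferring it from PresentMon), DXGI has a per-output overlay query; a rough sketch, assuming you already have an IDXGIAdapter and a DXGI 1.3+ runtime. Note this only tells you whether the driver exposes overlay planes at all, not whether DWM will actually use them for a given window:

    /* Sketch: ask each output on an adapter whether the driver reports
     * hardware overlay support. Assumes an existing IDXGIAdapter *adapter. */
    #define COBJMACROS
    #include <dxgi1_3.h>
    #include <stdio.h>

    void report_overlay_support(IDXGIAdapter *adapter)
    {
        IDXGIOutput *output = NULL;
        for (UINT i = 0;
             IDXGIAdapter_EnumOutputs(adapter, i, &output) != DXGI_ERROR_NOT_FOUND;
             ++i) {
            IDXGIOutput2 *output2 = NULL;
            if (SUCCEEDED(IDXGIOutput_QueryInterface(output, &IID_IDXGIOutput2,
                                                     (void **)&output2))) {
                printf("output %u: overlays %s\n", i,
                       IDXGIOutput2_SupportsOverlays(output2) ? "yes" : "no");
                IDXGIOutput2_Release(output2);
            }
            IDXGIOutput_Release(output);
        }
    }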
I even have weird, most likely overlay-related issues on my RTX2070: scrolling in Chrome while a YouTube video is playing stutters like hell, the video even occasionally blacks out, and when moving/resizing the browser window it takes the video a noticeable amount of time to adjust. It's extremely annoying.
That’s an NVIDIA driver bug recently introduced, it’s super annoying. Videos increase the chance of that flicker appearing if you’re on YouTube in their “ambient light” mode thing, since it just blacks out the whole back canvas when it happens.
Right, but DXGI flip in windowed mode is a relatively recent addition and one that’s a net improvement. (Windowed always had an extra delay added due to composition)
> Modern DXGI flip presentation model doesn’t add input lag anymore
There's also this weirdness that DXGI always adds 3 frames of latency, because that's the default value unless explicitly changed by the application (https://learn.microsoft.com/en-us/windows/win32/api/dxgi/nf-...) - don't be confused by the "maximum" in the name, it's just consistently adding 3 frames of latency (at least on my Win10 machine).
Don't know if this has changed in more recent DXGI versions, but if you want to be backward compatible as much as possible, those are not an option.
The DXGI APIs are such a completely random collection of functionality that I really wonder if there's any forward thinking at all. If anything, at least the defaults are all wrong.
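For reference, when you do control the code, lowering that default is only a couple of lines; a minimal sketch assuming a D3D11 device (variable names are mine):

    /* Sketch: lower DXGI's default maximum frame latency of 3.
     * Assumes an existing ID3D11Device *device. */
    #define COBJMACROS
    #include <d3d11.h>
    #include <dxgi.h>

    void reduce_frame_latency(ID3D11Device *device)
    {
        IDXGIDevice1 *dxgi_device = NULL;
        if (SUCCEEDED(ID3D11Device_QueryInterface(device, &IID_IDXGIDevice1,
                                                  (void **)&dxgi_device))) {
            /* Allow at most 1 frame to be queued ahead of the GPU. */
            IDXGIDevice1_SetMaximumFrameLatency(dxgi_device, 1);
            IDXGIDevice1_Release(dxgi_device);
        }
    }

The waitable-swapchain route (DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT plus IDXGISwapChain2::SetMaximumFrameLatency and a wait on GetFrameLatencyWaitableObject) goes further, but that's DXGI 1.3+ only, which is exactly the backward-compatibility problem mentioned above.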
On the Windows machine I spent the past year using for work, I had absolutely massive input lag when moving windows around (many seconds, and I could induce it for as many seconds as desired, even minutes), which required me to kill DWM.exe every single day, sometimes multiple times a day.
I didn't permanently kill it like in the article, though. Just killed the process and let it restart.
I don't know for sure that dwm.exe itself was the cause. It could have been an issue with some of the endpoint management crap my employer forces on users, somehow, or maybe MS PowerToys (though I think I eventually ruled that out at some point). But killing dwm.exe fixes it.
Office 2019 did this for me on my work laptop. Just grab any MS Office window with the mouse, describe a full circle or two, then sit back and watch as the laptop's CPU melts through the floor while the Office window slowly crawls over the screen for the next minute or two.
IIRC that’s not composition’s fault but some application window delaying message posting with a poorly-done global hook. That’s also the cause for your mouse freezing for half a second then appearing where it should’ve been.
Maybe DWM restarting also kills and reinitializes the window messages mechanism?
The program probably works exactly as it did back 2 years ago, that's not the point. It's the advice of killing dwm.exe to reduce input lag that might be outdated.
No, I still encounter massive lag spikes with dwm.exe after a few weeks of uptime, and restarting the process fixes it. However, I get lag in the Windows UI itself, not games. I think it has something to do with one of my monitors always disconnecting from the computer when it goes to sleep, so Windows acts like I'm constantly plugging and unplugging the monitor multiple times a day, for weeks. Leaks like 200-500MB+ each time, and dwm.exe is often using close to 3-4GB when I kill it.
That’s extremely odd, and you’re not on some older Windows 10 build or LTSC? I definitely have long uptime bugs on my work computer but they’re more to do with Explorer than DWM/UI.
Never had any issue like this until I had to replace a monitor, and the newer one doesn’t keep HDMI/DP signals alive at all when the display sleeps, causing Windows to think it disconnects.
It’s such a stupid and common thing that the “PersistentWindows” tool exists to put the windows back in position after W10 fucks up everything from the monitor disconnect. So it’s not a stretch to think dwm.exe has issues with it as well.
Not to mention most of these now fixed problems likely weren't even problems if you had a G-Sync/Freesync display and a compatible GPU.
As an extra note, Windows 11 ships with a feature to force the DXGI flip presentation model in older games.
"Gamers" can be like audiophiles in the way the spread half-truths and even outright misinformation. The number of people I still see thinking you should turn G-Sync off is downright silly.
> Not to mention most of these now fixed problems likely weren't even problems if you had a G-Sync/Freesync display and a compatible GPU.
I have a Freesync display and a compatible GPU and I can feel the difference between no vsync and Freesync. There is a slight lag there; if you cannot feel it, do not assume that others cannot either (some people even seem unable to feel a difference between having vsync on or off, or between having a compositor enabled or not, which is mind-boggling to me; the latter is nowadays really only possible on Linux, but in the past you could also do it with WinVista/Win7, and in Win8 with hacks like the one linked).
Of course the input lag depends on the game and how you play it. First person games with raw input, 1:1 mapping of the input (i.e. no filtering whatsoever) and little to no frame queuing (some engines run 1-2 frames "behind" the game state, and if you add things like triple buffering you end up with extra lag) are the best for noticing this. Third person games played with a gamepad (where you only have indirect control of the camera: you control the camera's rotation speed, not the camera directly) make it way harder to notice, as do games where you mainly use the mouse to click where to go (especially if the game uses a hardware cursor instead of rendering the cursor itself as part of the game UI).
Also, high refresh rate monitors make it slightly easier to overlook the lag (though it is still possible to tell; mine is a 165Hz monitor after all and I can still notice it, it just doesn't annoy me as much as on a 60Hz monitor).
One of the big things to remember with VRR is that you will incur a latency penalty of a couple of frames if your game is running right at the maximum refresh rate of your display. This is why it is usually recommended to set a system wide framerate cap a few frames below the maximum supported value.
For most modern gaming monitors, this isn't a huge deal. Cap your game at 140 FPS on a 144hz display and it will feel as snappy as it probably would have uncapped. Though if you have a lower end (or ultra high resolution) display that caps out at something much lower, like 60, then I can definitely see how leaving it uncapped with VRR disabled would feel a lot snappier.
Early Freesync-branded displays could also be a bit of a crapshoot for a variety of reasons, but this has gotten much better in the last 5-6 years.
> One of the big things to remember with VRR is that you will incur a latency penalty of a couple of frames if your game is running right at the maximum refresh rate of your display. [..] For most modern gaming monitors, this isn't a huge deal. Cap your game at 140 FPS on a 144hz display and it will feel as snappy as it probably would have uncapped.
That sounds very weird to me, why would that be the case?
(also FWIW I'm using Linux so a system-wide cap isn't exactly possible, though at least with games I can simply run them via mangohud)
On Linux I use Mangohud or libstrangle to cap framerates. For me it is not a major annoyance though, as I only play a few games regularly on Linux. I still boot into Windows after I'm done with work and play most games there.
My understanding of why this causes a problem is that the CPU starts running ahead of the GPU, and queues up frames to be rendered. So it shouldn't be a problem with games that support Nvidia Reflex or some other feature to limit pre-rendered frames.
Yeah, Freesync just causes uneven frame times whenever the game isn't running higher than the monitor refresh rate, and with monitors pushing above 144Hz that is harder and harder to achieve. At some point the frame time will be so small that this unevenness is unnoticeable, but in the most common price range, adaptive sync at half the monitor rate is a better experience.
Playing rhythm games, I can easily feel ~3ms of input lag. However it's pretty easy to adapt even to 100-200ms of lag, so I'd say most people are in the audiophile placebo territory. I can hit a 5-8ms window reliably even with a ton of lag if I spend 20 mins getting used to how it feels (running a game at 10fps under wine on a 10 year old chromebook with bad wireless gear). Network lag is far more significant than any input lag due to input processing or display pipelines.
Are you mixing up jitter and lag? Even if you hit perfect 120fps synced to refresh, your input to display lag goes up to 8ms. And that's ignoring the input device itself, processing time, etc. which also stack up. 3ms is not really doable with typical modern hardware.
> Playing rhythm games, I can easily feel ~3ms of input lag.
Hrmmm. I’m not sure this sentence even makes sense. I’m not sure what the definition of 0ms input lag even is. And I say this as someone who has deeply profiled true end-to-end input/output systems. 0ms relative to what exactly?
If you spend 20 minutes getting used to something you could maybe tell the difference if 3ms of input lag was artificially injected. Maybe. But that’s an exceedingly small value and if I had a toggle switch that turned it on/off I’d bet against you being able to tell when it’s on.
It does depend on the context. In a game like Guitar Hero 3ms is going to be exceedingly difficult to distinguish imho.
Humans are incredibly sensitive to audio rhythms/lag, and we’re talking about a rhythm game. Visual processing isn’t likely to notice, but this is probably between hearing/feeling the keystroke and hearing the reaction. 3ms is entirely believable; detection is based on a frequency domain transformation of the overlaid waveforms, not tuning as such.
Yeah I don’t buy that. It’s a long pipeline and there’s almost definitely more than 3 milliseconds of jitter. I’d wager that success/failure sounds are quantized to audio buffers that are a good bit longer than 3ms.
This is a deep and interesting enough topic that you could publish a pretty good and valuable paper on what the JND is for poor, average, elite, and "professional" players.
If the line is 3ms for pro I will gladly bet the over!
Have you ever had the experience of having a malfunctioning phone system play back your own voice to you on a slight delay? I have. It's maddening to the point where I find it difficult to continue the phone conversation, and I'd bet that's on the order of 3-5 ms audio delay.
You can set up a double blind really easily for this. Play through a speaker, and have someone move the speaker a couple of metres. See if you really can tell whether it has been moved.
After reading some of the science on auditory perception it seems you are wrong. If you perceive a signal in the left ear and then get a louder signal on the right ear a few milliseconds later you will perceive this as one sound coming from the right. The precedence effect breaks down.
Edit: I can also tune a guitar by plucking two strings at once, but I can't by plucking them one at a time.
This really has nothing to do with noticing the absence or presence of a 3ms latency difference in a situation where your baseline is already dozens of ms of latency in the best case.
Adapting isn't the issue though? It just feels much nicer to play on my 144Hz monitor than it does on my 60Hz monitor. This is first-world problems territory at most. It's not just gaming either. Doing anything on my 144Hz feels so much nicer.
>Do you notice the difference in audio/video sync between sitting in the front or the back of the cinema?
Rhythm games are literally scoring your timings; a cinema offers no equivalent of a scored feedback loop.
No, I wouldn't notice 3ms on the lips of an actor. Drummers are known to be able to perceive 1-3ms differences in timing; again, a tactile action.
If a human is involved in the action their sense of rhythm and prediction takes over to allow for manipulation within those timescales. It's not uncommon, either.
I absolutely do not doubt that a 3ms difference in timing is perceptible between two rhythmic signals. I absolutely do doubt a 3ms latency though.
You are claiming that someone can tell the difference between playing the game one metre or two metres from the speakers.
Put another way, on a large stage the sound from the band's monitors takes ~10ms to reach the drummer.
I don't have data, and neither does anyone else here apparently, but in the absence of evidence it seems exceedingly likely that any perceived differences due to 3ms latency (difference?) are placebo. We know that placebo is real and large, so unless someone has an experiment that tries to account for this (e.g. properly blinded), my prior is that you can't tell when playing a rhythm game whether the speaker is 1m or 2m from you.
There's also a difference in volume and freq response due to the head being in the way and also the shape of the ear that helps you determine the direction.
I don't think we could tell direction based on the distance and less than 1ms difference alone.
Going on sort of a tangent, but since it was mentioned: I hate variable refresh rate.
I suppose my eyes are just sensitive to changes in rendering speed, because I tried it and I couldn't stand the variable ghosting. For me it's worse than screen tearing, which for me is perfectly resolved with vertical syncing or just capping the frame rate to something less than my monitor's refresh rate.
That sounds like more of a problem with your display than with VRR itself.
You could probably fix it by capping your game's framerate to whatever it can maintain consistently within your monitor's VRR window. This would give you a smooth consistent experience, without screen tearing or vsync-induced lag. Since it's not working in a fixed refresh rate container, you can do more unconventional numbers too. I like to run a lot of games at 90 FPS.
First off, I almost never get screen tearing. The only time I get screen tearing is when a program renders faster than my monitor can refresh, and since my monitor (a Dell S2522HG) refreshes at 240Hz, screen tearing simply isn't happening. As for input lag, if there is any it's insignificant enough that I don't care.
Second of all, my problem is with the way variable refresh rate works and not with anything on the software side or the monitor itself. Variable refresh rate is exactly what it says on the tin: the refresh rate varies.
My problem is that because the refresh rate varies, the difference in refresh rate becomes very distracting. One moment my monitor is at 240Hz, the next at 135Hz, then 170Hz, then 70Hz, then 200Hz, and so on, for an exaggerated example. Anything that's moving (eg: my mouse cursor) leaves more ghosts the lower the refresh rate becomes, and vice versa. This is very distracting because it's not consistent.
Imagine you're listening to music, and the pitch goes up and down arbitrarily. You will probably agree that it's distracting, and that's basically what I find variable refresh rate to be. It is distracting to a fault, and I would rather deal with my monitor going at full speed and limiting my software as appropriate to prevent tearing for a consistent experience that isn't distracting.
You mentioned the variable ghosting, which absolutely is a monitor problem. Either the algorithm determining how much to overdrive the pixels from frame to frame is bad, or the panel itself is too slow for overdrive to work effectively without being noticeable. There may be a setting you can turn down in the OSD to improve performance, as many gaming monitors come with this set too high out of the box.
My suggested fix should still solve your other problem, which is the wild swings in framerate. Bouncing between 200 and 100 certainly is jarring, and capping your game's framerate to make those swings smaller or nonexistent will fix it.
I have a VRR monitor, but I limit it to 120Hz specifically for that reason. Most of the time it just sits there. The exception is while gaming, where sticking to 120 was never an option; I find the variation less of a bother than the more dramatic shifts down to 60/30 would be without VRR.
You aren't alone. I despise variable refresh. I can't even stand VFR videos on top of VRR displays. But my eyes are extremely sensitive to various things that almost everyone I know either can't perceive or has to focus really hard to notice.
It can be a bit of a curse. LED headlights and taillights give me headaches. A lot of LED street lighting and business lighting hurts me. Etc.
I used to be basically unable to stand a CRT under ~85Hz. :(
Somehow managing the page file and turning other services off in windows has turned into the Autoexec.bat and Config.sys of the modern age. Although, tweaking both autoexec.bat and config.sys was a worthwhile thing to do back in the day.
There is, or at least was, tangible benefit if you understand how it works.
A page file is, as it says, a file that's located in the root of whichever partitions it is configured on. By default it's only configured on the C: partition and the size of the page file is variable, managed by Windows according to system demands. The page file is equivalent to a swap partition, for the Linux and BSD users out there.
Manually configuring the page file has a number of benefits, if you know what you're after (and you're using HDDs):
* You can specify a size, fixed or variable, for your page files. By default Windows allocates a fairly small page file, and if your page file use exceeds that allocation Windows has to allocate more space on the fly. This takes time, especially if the page file resides on a HDD. If you configure a fixed size, or a variable size with a large minimum size, you don't need to worry about that dynamic allocation which translates to more performance. Another benefit to a fixed page file is that you can have the page file occupy one continuous area on a HDD platter, for better access times and thus better performance; a dynamic allocation more often than not leads to a fragmented page file and thus worse performance.
* You can specify which partitions you want to use for your swap space. It wasn't unusual to just dedicate a partition to a page file, effectively the same thing as a swap partition, especially if your C: partition wasn't that big. This is important for the next bullet point.
* Back when we still ran our programs off of HDDs, making a partition earlier in the drive meant locating that partition on the outer areas of the platter. The outer areas of the platter travel faster than the inner areas, because physics, and faster platter speed means faster access times. By placing the page file on such a partition, the page file has better access times and thus you get better performance.
A lot of these benefits have gone away thanks to RAM becoming both cheaper and more plentiful, and the storage medium where the page file resides changing from HDDs to SSDs in general. So most people usually don't need to mess with their page files anymore.
But for people with specific needs or just a desire to really fine tune their systems, tweaking the page file by hand is still a worthwhile endeavour if you understand what you can get out of it.
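For the curious, the fixed-size configuration from the first bullet boils down to a single registry value; a rough sketch (I believe this is the same PagingFiles value the System Properties dialog writes, but treat it as an illustration; it needs admin rights and a reboot to take effect):

    /* Sketch: pin C:'s page file to a fixed 16 GB by writing the PagingFiles
     * value (REG_MULTI_SZ, one "path initial_MB maximum_MB" entry per line).
     * Needs administrator rights and a reboot to take effect. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* One entry plus the extra NUL required by REG_MULTI_SZ. */
        static const wchar_t value[] = L"c:\\pagefile.sys 16384 16384\0";
        HKEY key;

        LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegOpenKeyExW failed: %ld\n", rc);
            return 1;
        }

        rc = RegSetValueExW(key, L"PagingFiles", 0, REG_MULTI_SZ,
                            (const BYTE *)value, sizeof(value));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }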
The problem with this IMHO is that most users who are trying to do this do not understand a couple of things:
* The NT kernel does not overcommit; and
* As a closed-source OS, client and server editions of Windows do not use build configurations targeting embedded systems; they essentially require ample swap space. They won't run well if users attempt to forcibly minimize or disable their page file.
Every time a game comes out with performance issues caused by shader compilation or poorly optimized loading of assets, reddit threads and game forums are flooded with people claiming these nonsense fixes magically made their games perfectly smooth. It's asinine.
The thing that gets me is, for all the attention gamers pay to benchmarks when a new game or bit of hardware comes out, you never see benchmarks for this class of tweaks. If the benefit was so obvious, it should be easy and clear to demonstrate/repeat. Over the years a huge variety of these geek equivalents of old wives' tales have accumulated, and a lot of them persist.
Turning off "fullscreen optimizations" doesn't always work. It may not even be possible for DirectX 12.
If you press Start and the taskbar appears, it's not fullscreen.
I wrote a simple Direct3D test program, and entering fullscreen using alt-enter prevents Start from appearing, so DirectX 11.0 at least still supports exclusive fullscreen, for now.
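The relevant bit is tiny. Roughly (assuming an existing swapchain; ResizeBuffers handling and error checks omitted):

    /* Sketch of the fullscreen part of such a test program.
     * Assumes an existing IDXGISwapChain *swapchain. */
    #define COBJMACROS
    #include <dxgi.h>

    /* DXGI handles Alt-Enter itself by default; a program can opt out with
     * IDXGIFactory_MakeWindowAssociation(factory, hwnd, DXGI_MWA_NO_ALT_ENTER). */

    void toggle_fullscreen(IDXGISwapChain *swapchain)
    {
        BOOL fullscreen = FALSE;
        IDXGISwapChain_GetFullscreenState(swapchain, &fullscreen, NULL);
        IDXGISwapChain_SetFullscreenState(swapchain, !fullscreen, NULL);
    }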
It’s lovely to see a short, simple WinAPI program written in C — often these days they use other frameworks. It’s old-school and bare metal.
The source was simple enough: get debug privileges, kill and suspend some processes, resume and if still missing restart, but I was puzzled at this function:
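(Reconstructed roughly from memory, so possibly not verbatim, but it's essentially a djb2-style string hash:)

    /* Roughly what the function in question looks like (h = h*33 + c). */
    #include <stdint.h>

    static uint64_t hash_name(const char *s)
    {
        uint64_t h = 5381;                                     /* traditional djb2 seed */
        while (*s)
            h = (h << 5) + h + (uint64_t)(unsigned char)*s++;  /* h*33 + c */
        return h;
    }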
This is used while iterating the list of running processes for string comparison. I wonder if others on HN can shed any light, please?
a) Why is an optimized string compare needed here? Even iterating a few hundred processes I would not have thought a simple (and likely already optimized) strcmp would be a hotspot
b) Why does this work as a good hashing function? It seems very simple: to the existing value, add a bitshift of the existing value plus the character. Googling shows some people referring to it as the Wang hash, in the context of pseudorandom numbers on GPUs. The actual content I can find on Wang’s hashing shows much larger (and to me more expected) algorithms, eg http://burtleburtle.net/bob/hash/integer.html
Its redeeming quality is that the code size is absolutely tiny, and that 33*h mod 2^64 is a reversible map, so if your input data is randomly distributed at least your output will be as well.
Thanks! Yeah, not being good was my expectation but I thought, if it was there, it had to have a good reason. I appreciate the input and confirmation, thank you :)
Random distribution of outputs does seem to minimise collisions on as small an input set as a list of processes, so I suppose the risk of killing the wrong process is minimal. I do think if I used the app I'd replace it though.
Also (a little tongue in cheek): uninstall MS Teams. It doesn't work well even when there's enough free memory, and it brings the system to a crawl if some other applications are using more RAM. Once you do that, get rid of Outlook.
One colleague uses Teams in the browser. We can only chat when he calls me. When I call him, it simply doesn't work. Is it a missing feature? Bug? Who can tell. I suppose those things, and the absence of features like avatars make the browser version leaner.
We got in a Teams meeting with the school recently. Everyone had the same sound and could see the same slides, but the video went insane: Windows Teams users could see each other, Linux Teams users could see each other, and the Chrome users could see each other (don't know if Edge was mixed into this group). Each group could not receive video from the other two groups. I keep wondering what the hell goes on in Teams that it has these kinds of bugs.
In Windows 7, you could switch to classic mode, which kills dwm.exe, and in my experience seemed to mean a snappier, more responsive desktop. I thought they removed this ability in Windows 8.
It's actually the opposite: modern mode uses the GPU for compositing, classic mode uses the CPU, and any GPU would beat the CPU handily at that task, including integrated ones.
Unless you didn't install the correct driver and used the standard VGA driver, or you're in a VM which doesn't provide GPU acceleration.
The parent post refers to responsiveness, not throughput. Even if GDI were fully accelerated, the compositor adds additional input lag because it has to sync with the monitor's refresh rate, whereas without DWM the monitor displays whatever is on the framebuffer. The drawback is that you get tearing and damage artifacts (like the classic crash window[0]).
Not sure if this is exactly related, since it's beyond my understanding, but it seems game devs have been struggling to get things drawn on the screen at the expected time. (The speaker says things were much more straightforward on the Amiga. Progress works in funny ways!)
That is a different thing (I've seen the issue in my own games): even if the game updates at 60Hz, produces a stable 60fps and is shown on a 60Hz monitor with vertical synchronization (vsync) enabled, you can occasionally get some "stuttering" (IIRC he calls them "heart beats" in the article the video is about).
The main reason is that your game time and the monitor's "time" are not really progressing in sync, and even with vsync you are actually reacting to what the monitor did in the "past". This is why it isn't a problem on fixed platforms like the Amiga, since you more or less know how fast the system is and can simply use vsync for game updates too (in other words, make the game dependent on the framerate and just ensure the framerate is more or less constant, which is easy on fixed hardware but much harder on something like the PC).
FWIW this is still a problem, but at least a "hack" that has become a bit common since the video and article were published (and which Croteam also did) is to "smooth out" the time progression by averaging the time deltas of the last few cycles. This doesn't fix the core problem, but it makes the "heartbeat" less likely to happen and be noticeable, at the cost of (mostly imperceptible) drift between game time and real time (which you can always reset if it becomes too large anyway).
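Roughly, the smoothing amounts to something like this (a sketch, not any particular engine's actual code):

    /* Sketch of the delta-time smoothing described above: keep the last few
     * raw frame deltas and step the game with their average, so one late or
     * early vsync doesn't show up as a "heartbeat".
     * Zero-initialize a delta_smoother before first use. */
    #include <stddef.h>

    #define SMOOTH_FRAMES 8

    typedef struct {
        double deltas[SMOOTH_FRAMES];
        size_t index;
        size_t count;
    } delta_smoother;

    double smooth_delta(delta_smoother *s, double raw_delta)
    {
        s->deltas[s->index] = raw_delta;
        s->index = (s->index + 1) % SMOOTH_FRAMES;
        if (s->count < SMOOTH_FRAMES)
            s->count++;

        double sum = 0.0;
        for (size_t i = 0; i < s->count; i++)
            sum += s->deltas[i];
        return sum / (double)s->count;   /* use this as the game's dt */
    }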
Classic mode does not use compositing at all, but the drawing routines of GDI etc were hardware accelerated already. Keeping a buffer around for every window for compositing could become a big performance problem and was part of the reason why Vista was considered slow. Mac OS X had this problem too.
"GDI is hardware accelerated on Windows XP, and accelerated on Windows 7 when the Desktop Window Manager is running and a WDDM 1.1 driver is in use. Direct2D is hardware accelerated on almost any WDDM driver and whether or not DWM is in use. On Vista, GDI will always render on the CPU."
I've spent the last few days messing with Linux in both VirtualBox and VMWare and I found that disabling 3D acceleration leads to ~2x faster boot, and disabling desktop compositing (in the Linux guest) leads to 5x less lag when moving things around on screen.
I was surprised, because I thought using 3D acceleration and compositing would be faster.
Also, I run the VM at half res and used a DPI workaround (.manifest file) to let Windows scale the VM instead of VirtualBox (VirtualBox scaling is very slow for some reason).
Linux in a VM doesn’t behave like Linux on bare hardware, and graphics acceleration in particular is a house of cards. You can’t draw any real conclusions from this.
The whole "in my experience" arguments do contain some truthful anecdotes, but "from my experience" they're difficult to filleter from all the FUD, placebos and snake-oil.
Would really appreciate an explanation in the README about how/why this works.
WinAeroTweaker is a good example of a program that does things to your OS the OS probably doesn't want you to do, but it has an explanation of how/why it works for every different option, so you can feel slightly more comfortable.
8GB of DDR3 RAM, a 1.1GHz 4c/4t CPU from 2016. Many people who buy this will have it for at least five years. This is what the average computer experience looks like; just imagine how Windows runs on this.
The article is about a 7 year old build of Windows 10. Windows 11 has made some improvements since then so using such apps could actually make it worse for you.