USB DMX output timing odd behaviour on Windows #1656
I recently performed a study so that I could instruct users of a uDMX interface on how to adopt and tweak their timing settings, and bought a uDMX interface for that purpose. It led to this forum thread: https://www.qlcplus.org/forum/viewtopic.php?t=17954&hilit=uDMX

What I did see was, indeed, the `usleep` timers ... I'm not acquainted enough with the code to make that judgement - I only studied the outcome, and the observations above reflect almost exactly what I have measured. I commented to @mcallegari asking about the apparently non-existent influence of setting values in the registry. The observations above might be the spark that explains it; I never considered the Qt vs. API vs. sleep deviations, or why data output was sluggish.

Link to the DMX timing chart I have been keeping (measured since I had my measuring device at hand): https://docs.google.com/spreadsheets/d/1mah0i1ffiByNordS4c7oKktONagDmoJiu1nrE-5VC6Y/edit?usp=sharing

Sorry, I cannot contribute to the code - but I'm more than willing to test the outcome.
Yes, @GGGss, your observations seem to match mine. But after a little confusion and investigation, I think you have a nomenclature mix-up with the name "uDMX". I am not too familiar with these things, but as far as I can discern, the "uDMX" that QLC+ supports is the Anyma uDMX, which is an open-source DMX interface based on an Atmel AVR microcontroller, whereas cheap interfaces labelled "uDMX" available for purchase online (e.g. AliExpress, eBay) - which is what you have - are typically based on FTDI chips. Be aware they work in very different ways! The 'original' uDMX cannot suffer from frame timing issues like these, as the embedded microcontroller generates the DMX break, MAB, etc. by itself. That is, the timing is not decided by the host computer, but independently by the interface. With the FTDI-based interfaces, on the other hand, the host computer (i.e. QLC+) is "in the driving seat", so to speak.

So those registry values you refer to don't apply here - they are for the "uDMX" interface plug-in, whereas with FTDI interfaces we are dealing with the "USB DMX" plug-in. The frame frequency is settable within the plug-in config window. By the way, that was one thing I forgot to mention before: I played with adjusting the frame frequency down to a lower rate, and found that it too has poor timing accuracy. For example, I set it to 20 Hz, but what I actually got was around 16 Hz.

With regard to the timings allowed by the standard, I did wonder whether any of this was verging on failing to meet specifications, but after checking the ANSI E1.11 specification, it's technically okay. The standard specifies no maximum break length (only a minimum), and the MAB has a very high maximum length of 1 second. Zero MBB time is also allowed. No minimum number of slots is specified either, so outputting fewer than all 512 is technically at the controller's discretion. But, as we all know, something that is technically within spec may equally be of no reliable use in the real world. 😄
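To make "in the driving seat" concrete, below is a rough sketch of how a host-driven FTDI interface has to produce each DMX frame itself. This is my own illustration using FTDI's D2XX API, not the actual QLC+ USB DMX plug-in code, and the sleep lengths are only what one would like to get:

```cpp
// Hypothetical host-driven DMX frame for an FTDI-based interface (D2XX API).
// The break and MAB are nothing more than host-side sleeps, so their real
// length is entirely at the mercy of the operating system's sleep granularity.
#include <windows.h>
#include <ftd2xx.h>
#include <cstring>

void sendDmxFrame(FT_HANDLE handle, const unsigned char* universe /* 512 slots */)
{
    unsigned char frame[513];
    frame[0] = 0x00;                       // NULL start code
    std::memcpy(frame + 1, universe, 512);

    FT_SetBreakOn(handle);                 // pull the line low -> DMX break
    Sleep(1);                              // want ~110 us, get >= 1 ms on Windows
    FT_SetBreakOff(handle);                // release the line -> mark after break (MAB)
    Sleep(1);                              // want ~16 us, same problem

    DWORD written = 0;
    FT_Write(handle, frame, sizeof(frame), &written);  // slots clock out at 250 kbaud
}
```

The interface's UART shifts the 513 bytes out on its own at 250 kbaud, but everything between frames - break, MAB and the overall frame period - comes from those host-side delays.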
I have been doing some more investigation. It is my understanding that in Windows editions before Windows 10 version 2004, the system timer resolution is global: it becomes whatever the smallest value requested by any application or process is - e.g. if Process A requests 8 ms and Process B requests 1 ms, everyone gets 1 ms timer resolution. If no-one requests anything, the default is 16 ms.

However, in Windows 10 version 2004 (released May 2020) and onwards (presumably including Windows 11) the behaviour was changed. Timer resolution is now essentially per-process. If a process requests a certain resolution, it may still end up with a lower value that some other process has already requested. But if a process makes no request at all (i.e. never calls `timeBeginPeriod`), it is no longer affected by what other processes request and simply stays at the default resolution.

There's also one other wrinkle: on Windows 11, whenever a process is minimised (or otherwise not visible to the user), Windows will temporarily downgrade the system timer resolution for that process, even if it has requested a lower resolution, as part of the default power-saving scheme. If applications want to opt out of this, they need to call an additional API to disable that power-saving behaviour for their process.

The change in behaviour described above explains why I (on Win 10 22H2) wasn't having any luck forcing a lower system timer resolution. I happened to discover that running a game in the background lowered it to 1 ms, and running the Microsoft SysInternals ClockRes utility confirmed as much. But QLC+ DMX timing was still unaffected and still bad.

So, I really wanted to know how the QLC+ code that calls the Windows API functions `timeGetDevCaps` and `timeBeginPeriod`/`timeEndPeriod` ends up being invoked. After reading the code more thoroughly, I note that whether it runs appears to depend on a settings value that QLC+ reads from the registry.
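As a side note on the ClockRes measurements mentioned above: the current system timer resolution can also be queried programmatically. The sketch below is my own (it is not QLC+ code) and uses the undocumented but long-stable `NtQueryTimerResolution` export from ntdll; all values are in 100 ns units:

```cpp
// Query the system timer resolution, roughly what the ClockRes utility reports.
#include <windows.h>
#include <cstdio>

typedef LONG (NTAPI* NtQueryTimerResolutionFn)(PULONG min, PULONG max, PULONG current);

int main()
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    auto query = reinterpret_cast<NtQueryTimerResolutionFn>(
        GetProcAddress(ntdll, "NtQueryTimerResolution"));
    if (!query)
        return 1;

    ULONG minRes = 0, maxRes = 0, curRes = 0;   // 100 ns units
    query(&minRes, &maxRes, &curRes);
    std::printf("timer resolution: min %.2f ms, max %.2f ms, current %.2f ms\n",
                minRes / 10000.0, maxRes / 10000.0, curRes / 10000.0);
    return 0;
}
```

Running this while QLC+ is outputting DMX would show directly whether anything has pushed the current resolution below the ~15.6 ms default.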
Where does that registry value come from? It did not exist on my system, so I created the registry key and value myself. However, after some experimentation, I am now more confused than ever... 😕 Multiple different values of it did not behave the way I expected.

I also discovered that Windows has a built-in report which lists the processes that currently have an outstanding system timer resolution request. This is an example of a report where an app (a game) does request a different system timer resolution:

So, given that QLC+ never triggers inclusion in that report, it appears never to actually be calling `timeBeginPeriod`.
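For completeness: if QLC+ ever were changed to call `timeBeginPeriod`, note that on Windows 11 it would, to the best of my knowledge, also need to opt out of the power-saving behaviour described above so that the requested resolution survives the main window being minimised. Below is a minimal sketch of that opt-out as I understand the `SetProcessInformation` / `ProcessPowerThrottling` API; this is my assumption, not anything present in QLC+, and it needs a Windows 11 SDK to compile:

```cpp
// Hypothetical opt-out: ask Windows 11 not to downgrade this process's timer
// resolution while its windows are minimised or occluded. My reading of the
// documentation - verify before relying on it.
#include <windows.h>

bool keepTimerResolutionWhenMinimised()
{
    PROCESS_POWER_THROTTLING_STATE state = {};
    state.Version     = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = PROCESS_POWER_THROTTLING_IGNORE_TIMER_RESOLUTION;
    state.StateMask   = 0;   // bit controlled but cleared = always honour our timer requests

    return SetProcessInformation(GetCurrentProcess(), ProcessPowerThrottling,
                                 &state, sizeof(state)) != FALSE;
}
```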
I have discovered some odd behaviour with regard to timing of DMX output frames when using a USB DMX interface on Windows.
Basically, it seems the simple act of opening and closing the output plug-in configuration window - without adjusting any of the settings - ruins the timing of DMX frames being output using that interface. The break and MAB periods become excessively long (15+ ms each) and QLC+ is no longer able to output full frames of all 512 slots at the required frame frequency because there isn't enough time - the slots get cut off (i.e. <512) by the break of the subsequent frame.
Bizarrely, when this happens, system timer accuracy is reported by the plug-in to be "Good", despite that not actually being the case. In fact, the only time I have managed to get sensible frame timing is when accuracy is reported as "Bad". (I believe this is because, in that situation, QLC+ doesn't attempt to time the break or MAB at all, and just lets them occur as quickly as possible.)
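To put some rough numbers on why the slots get cut off, here is a back-of-the-envelope calculation. The figures are my own assumptions for illustration (a ~30 Hz frame rate plus the observed ~15 ms break and MAB); they are not taken from the QLC+ source:

```cpp
// How many DMX slots fit into one frame period once an oversized break and MAB
// have eaten into it? Purely illustrative arithmetic with assumed numbers.
#include <cstdio>

int main()
{
    const double slotUs        = 44.0;            // 11 bits per slot at 250 kbit/s
    const double framePeriodMs = 1000.0 / 30.0;   // assumed ~30 Hz output rate
    const double breakMs       = 15.0;            // observed (should be ~0.11 ms)
    const double mabMs         = 15.0;            // observed (should be ~0.016 ms)

    const double budgetMs  = framePeriodMs - breakMs - mabMs;
    const int slotsThatFit = static_cast<int>(budgetMs * 1000.0 / slotUs) - 1; // minus start code

    std::printf("time left for slots: %.1f ms -> roughly %d of 512 slots fit\n",
                budgetMs, slotsThatFit);
    return 0;
}
```

With those assumed numbers, only a small fraction of the 512 slots fits before the next frame's break is due, which matches the truncated frames seen in the captures.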
From digging through the QLC+ source code, I can't figure out exactly the root of the problem, but I think it is probably something to do with the following:

- The `usleep` implementation on Windows rounds up the given `usecs` argument to the nearest millisecond. This renders QLC+'s attempt to do microsecond-resolution sleeps (using `DMX_MAB` of 16 and `DMX_BREAK` of 110 µs) pointless.
- `usleep` functionality is delegated to Windows' `Sleep` API function, which also only deals in milliseconds. And, importantly, the minimum resolution of `Sleep` is reliant on the current resolution of the system timer (a small standalone test of this is sketched after this list).
- The plug-in's accuracy check seems to test whether `usleep` actually takes more than 3 ms, and reports "Bad" if so. But if it is taking 15 ms, how can it be less than 3 ms? Perhaps the accuracy of `QElapsedTimer` is also affected by system timer resolution; if so, it would not be surprising if such a test gives flawed results.
- There is code (in `MasterTimerPrivate::start`) that calls `timeGetDevCaps` and `timeBeginPeriod`/`timeEndPeriod` to request a certain timer resolution for the life of the application, but I couldn't figure out under what circumstances it is called. Whenever or wherever it is used, it doesn't seem to have any effect on DMX USB timing.
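As referenced in the list above, here is a small standalone test (my own sketch, not QLC+ code) showing how the real duration of a 1 ms `Sleep` - which is where QLC+'s `usleep` ends up on Windows - depends on whether the process has requested a finer system timer via `timeBeginPeriod`:

```cpp
// Measure the actual duration of Sleep(1) before and after requesting a 1 ms
// system timer resolution. Build with e.g.: g++ sleeptest.cpp -lwinmm
#include <windows.h>
#include <mmsystem.h>
#include <cstdio>

static double averageSleep1Ms(int iterations)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < iterations; ++i)
        Sleep(1);
    QueryPerformanceCounter(&t1);
    return (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart / iterations;
}

int main()
{
    std::printf("Sleep(1) average, default timer:            %.2f ms\n", averageSleep1Ms(100));

    timeBeginPeriod(1);   // request 1 ms resolution for this process
    std::printf("Sleep(1) average, after timeBeginPeriod(1): %.2f ms\n", averageSleep1Ms(100));
    timeEndPeriod(1);     // always pair the request with timeEndPeriod

    return 0;
}
```

On a default timer this typically reports something in the region of 15 ms per call, and close to 1-2 ms once `timeBeginPeriod(1)` is in effect, which lines up with the break/MAB lengths seen on the analyser.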
To Reproduce
Steps to reproduce the behaviour:
Expected Behaviour
DMX frame timing of break and MAB should be minimal (i.e. approx. 1 ms for each, within the limitations of single-millisecond-resolution timing available on Windows), and frame slots should not be prematurely cut off by the transmission of a subsequent frame.
Screenshots
Logic analyser DMX traffic capture showing bad timing:
Detail of above capture showing cut off frame (only 339 slots transmitted):
Desktop