A discussion topic regarding the way input is managed in the example application and how it can be improved.

## Goal
Translate input from any device - e.g. mouse, Wacom, Wii or Xbox controller, eye tracker - into generic input, independent of device origin, with support for two or more devices operating in parallel, such as two mice.
## Motivation
The coloured squares in the example application are currently controlled by click + dragging with your mouse. It should also work with a touch screen, courtesy of GLFW translating those hardware events into mouse events for us.
Next I'd like to "map" the position of one square to an input - such as the mouse position - the rotation of another to a different input - such as the angle of my Wacom pen - and the colour of a third to whether or not the H key on my keyboard is pressed: red if it is, blue otherwise.
I figure there are a total of 4 different kinds of input that we as humans are able to provide to the computer, irrespective of hardware, in either relative or absolute form, at various resolutions.
## Types

- On/Off, for any number of keys
- 1D Range
- 2D Range
- 3D Range
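These four types could be sketched as a small tagged union. The names (`Toggle`, `Range1D`, and so on) and fields below are my own assumptions for illustration, not an existing API:

```cpp
#include <cassert>
#include <cstdint>
#include <variant>

enum class Mode { Relative, Absolute };

// On/Off for any number of keys (mouse buttons, keyboard, MIDI notes, ...).
struct Toggle  { uint32_t key; bool pressed; };

// 1D Range, e.g. Wacom pressure or a MIDI slider.
struct Range1D { Mode mode; double value; };

// 2D Range, e.g. mouse position, touch, or a D-pad.
struct Range2D { Mode mode; double x, y; };

// 3D Range, e.g. a Valve Index controller's position.
struct Range3D { Mode mode; double x, y, z; };

// Every device event, once translated, becomes exactly one of these four.
using InputEvent = std::variant<Toggle, Range1D, Range2D, Range3D>;
```

Resolution is deliberately left out here: values could be normalised to doubles at translation time, so consumers never need to know whether the source was 7-bit MIDI or a 16-bit tablet.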
## Examples
| Event | Type | Mode | Resolution |
|---|---|---|---|
| Mouse 2D | 2D Range | Rel | 16-bit |
| Mouse 2D | 2D Range | Abs | 0-screen |
| Mouse Key | On/Off | | 3-10 |
| Keyboard Key | On/Off | | 20-50 |
| Keyboard WASD | 2D Range | Rel | 20-50 |
| Wacom Position | 2D Range | Abs | 16-bit |
| Wacom Pressure | 1D Range | Abs | 0-4096 |
| Wacom Angle | 2D Range | Abs | 0-4096 |
| Playstation Key | On/Off | | 4-12 |
| Playstation D-Pad | 2D Range | Abs | 4-12 |
| Playstation Range | 1D Range | Abs | 4-12 |
| Playstation Touch | 2D Range | Rel | 256 |
| iPad Touch | 2D Range | Abs | 256 |
| iPad Gyro | 2D Range | Abs | 256 |
| Index 3D | 3D Range | Abs | 16-bit |
| Index Key | On/Off | | 5-10 |
| Index Finger | 1D Range | Abs | 256 |
| GPS | 2D Range | Abs | 32-bit |
| Midi Key | On/Off | | 0-127 |
| Midi Knob | 1D Range | Rel | 127 |
| Midi Slider | 1D Range | Abs | 127 |
| Midi 2D | 2D Range | Abs | 127 |
| Midi 2D | 2D Range | Rel | 127 |
| Midi Velocity | 1D Range | Abs | 0-127 |
| Midi Aftertouch | 1D Range | | 0-127 |
| Motion Capture | 3D Range | | 16-bit |
| Gesture | On/Off | | 1-n |
I'd like to build my application around these 4 fundamental input types, and let the user pick any available device event as the source for each of them.
## Implementation
I'm not sure.
I figure there must at least be a translation layer: something dedicated to interpreting the data coming from a device - the mouse, an Xbox or Valve Index controller, the keyboard, and so forth.
Your average application already provides two of these translation layers: one for the mouse and one for the keyboard. That's great - we can translate their output into our 4 general-purpose input handlers.
And now we can respond to these throughout our application, instead of to the mouse or keyboard directly. Then, when support for a new device is added, we simply translate it to one or more of these 4 general-purpose events.
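A minimal sketch of one such translation layer, assuming a per-device translator that buffers generic events until the application asks for them. The class and method names are hypothetical; `onCursorPos`/`onButton` stand in for whatever GLFW's cursor-position and mouse-button callbacks would forward:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <variant>
#include <vector>

struct Toggle  { uint32_t key; bool pressed; };
struct Range2D { double x, y; };
using InputEvent = std::variant<Toggle, Range2D>;

// One translator per device. The mouse version turns raw window
// coordinates and button states into generic InputEvents.
class MouseTranslator {
public:
    // Would be wired up to glfwSetCursorPosCallback.
    void onCursorPos(double x, double y) {
        events_.push_back(Range2D{x, y});
    }
    // Would be wired up to glfwSetMouseButtonCallback.
    void onButton(uint32_t button, bool pressed) {
        events_.push_back(Toggle{button, pressed});
    }
    // Drained by the application; the buffer is emptied each time.
    std::vector<InputEvent> drain() {
        std::vector<InputEvent> out = std::move(events_);
        events_.clear();
        return out;
    }
private:
    std::vector<InputEvent> events_;
};
```

A Wacom or gamepad translator would look the same from the outside, which is what makes two devices in parallel unremarkable: each gets its own translator, and the application drains them all.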
## To poll or not to poll
This one is always tricky. We don't care about events that happen more often than once per frame, and when we do care, we want them applied either at the beginning or at the end of each iteration.
For example, if an event comes in before the scene is rendered, then we can take it into account. If it comes in during a render, it's somewhat pointless. But that's exactly what could happen in the current example application, as drawing and receiving events are entirely separate and happen independently. (As far as I can tell?)
So polling seems the better option, at least in terms of predictability, which I would gladly trade performance for - if that is actually a trade-off.
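One way to get that predictability, sketched under the assumption that relative deltas can be summed: callbacks only accumulate, and the frame loop polls exactly once, at the top of the frame, so no frame ever acts on input that arrived mid-render. The `FrameInput` name and shape are illustrative, not the example app's actual API:

```cpp
#include <cassert>
#include <utility>

class FrameInput {
public:
    // Called from a device callback, whenever the OS delivers input.
    void buffer(double dx, double dy) { dx_ += dx; dy_ += dy; }

    // Called once at the top of each frame: returns the combined delta
    // since the previous poll, since we never need sub-frame resolution.
    std::pair<double, double> poll() {
        std::pair<double, double> out{dx_, dy_};
        dx_ = dy_ = 0.0;
        return out;
    }
private:
    double dx_ = 0.0, dy_ = 0.0;
};
```

Absolute inputs would instead keep only the latest value; the principle - buffer in the callback, apply at a fixed point in the loop - stays the same.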
## Unknowns
So input can come in at any time - great. But some inputs have a distinct beginning and end, like dragging: a drag is a combination of a button being pressed, a series of range2d's (in the case of a mouse), followed by the button being released.
Other input is a fire-and-forget type deal, like keyboard presses. Those are easier to conceptualise.
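The drag case above could be handled by a small recogniser sitting on top of the generic events: it opens on button-down, collects 2D deltas while the button is held, and closes on button-up. A hypothetical sketch (names are mine):

```cpp
#include <cassert>
#include <utility>
#include <vector>

class DragTracker {
public:
    // Fed from generic On/Off events for the relevant button.
    void onButton(bool pressed) {
        if (pressed) { dragging_ = true; path_.clear(); }
        else         { dragging_ = false; }
    }
    // Fed from generic 2D Range events; ignored outside a drag.
    void onMove(double dx, double dy) {
        if (dragging_) path_.push_back({dx, dy});
    }
    bool dragging() const { return dragging_; }
    const std::vector<std::pair<double, double>>& path() const { return path_; }
private:
    bool dragging_ = false;
    std::vector<std::pair<double, double>> path_;
};
```

Fire-and-forget input like a key press then needs no tracker at all, which matches the intuition that it is the easier case.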