Input Composer: The Roadmap 🚀 #251
I don't think the `update()` at the top level is avoidable. In many of my use cases, especially when dealing with WebXR stuff, I like being able to place the input state updating at a specific point in the render loop. It's also unavoidable since all the data from the Gamepad API and WebXR has to be polled; it's not pushed out with events.

WebXR comes with lots of extra states, like touched, so you can test if the user has their finger on the trigger while it's not currently being pressed. Don't forget hand input, that's just another level of crap to deal with. You may need to put together a controller profile collection that can translate the Gamepad API.

I would recommend getting some sort of HOTAS controller for prototyping, since it'll have some "differences" in its input that you'd normally not encounter in regular gamepads. For example, the Thrustmaster flight stick has a button / joystick-ish thing called the hat. It's treated as a single input axis, but the data returned is very specific because it treats it like an 8-direction DPad. You'll need to remap the values to something like N, NE, E, etc. for devs to use it easily, instead of -0.7142857313156128 == NW (see the sketch below).

Here's the JSON mapping I've made for the Thrustmaster HOTAS and Xbox 360 controller. And the Quest 2 controllers JSON, plus code of how I handle hand tracking input from the Quest 2.

I haven't tried a racing wheel controller yet on the web, but I'm curious what the input would look like for that. Maybe if there is a popular and affordable PC one, I'll be inclined to buy one for testing.
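For illustration, remapping that hat axis to named directions could look something like this (just a sketch; the exact value-to-direction order is device specific and belongs in a per-device profile):

```ts
/* Sketch: map the hat's single-axis value to one of 8 named
   directions. The hat reports 8 evenly spaced values in [-1, 1], and
   an out-of-range value when at rest. The direction order below is
   illustrative only. */
const HAT_DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"] as const

const hatDirection = (value: number) => {
  if (value < -1 || value > 1) return null /* hat is at rest */
  return HAT_DIRECTIONS[Math.round(((value + 1) / 2) * 7)]
}
```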
Thanks for the input, @sketchpunk! I have to admit I have so far avoided diving into WebXR for this -- in part because I've been avoiding WebXR in general (it's not been the focus of much of my work) -- but I should absolutely do it for this. I have a Quest 2 here, and even a HOTAS (a Thrustmaster T.16000M), and can borrow a racing wheel from a friend. Having said that, instead of getting support for a huge number of devices right out of the box, I am generally striving to make it easy for the community to contribute mappings and similar, either as PRs to this package or as separate packages. I probably won't get the APIs right for that in the first couple of iterations, but hopefully I'll steadily inch closer to something that works. Also good point about needing to schedule the `update()` call at a specific point in the render loop.
Sounds good. I wouldn't expect huge device support out of the box. My thinking is to map out the various possible input groups: Keyboard, Mouse, Gamepad, HOTAS, Wheel, VR Headset, VR Controller, VR Hand. What are the various inputs available in each group? That will help you find common denominators that can be shared across groups, and which ones are very specific to one group. Like, do the HOTAS hat button and a gamepad DPad both use axes input? Are the values the same or controller specific?

Having all this information can help you define your API and data structures. It allows you to ask yourself whether what you're designing is flexible enough to handle all the groups in the long term. A button is a cross-group entity; you can say it just has 3 states -- isInitialPress, isHolding, isReleased -- but you may be handling that in a rigid way. You can tell yourself: well, buttons on VR controllers also have a Touched state, so the button might need to handle these states differently, or maybe you feel it's too complicated and break the button into CommonButton and VRButton. But wait, buttons on a controller can have a gradient value, so how do I treat the states in relation to the gradient value? Bla bla, etc. etc. Long story short: know your playing field to save yourself time by making well-informed decisions, better educated guesses :)

Plus, make a nice simple 3D example that can use as many inputs as possible. In my baller XR prototype I can control the vehicle with a keyboard, gamepad, and HOTAS; later on I'll add VR controllers, maybe hand gestures if I'm seriously over-caffeinated. The end user wouldn't have to do anything other than plug in the device & drive :)
Before I move on to the other goals, I want to work out the immediate Goal 1. I follow the Input Controls description and endorse the FP strategy of having processing mixed in, returning the final state. I think it's a nice dev experience with a simple mental model.

I do have concerns about the Input Actions and how, in your example, game state is mixed in with the input state. Why would the Input Action hold an additional state of "charge power"? Wouldn't that be for the user to track in their game logic? I can see maybe having some convenience states.

I think the main goal of actions is to provide semantic abstractions, such as described in Unity's docs (I know I saw this a lot): https://docs.unity3d.com/Packages/[email protected]/manual/Actions.html

And then I think we should define how events fit into Goal 1. You brought up the FP example before, but I think it would be worth fleshing that out.
The overall approach I'm taking is that everything the game is interested in -- state of virtual Input Controls, event-like Input Actions, complete controllers encapsulating multiple of these -- is essentially derived state (originating at actual physical device input data.) You can imagine this as a series of layers, starting with the actual physical inputs:
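Something like this, as a rough sketch (assuming just a keyboard and the first connected gamepad):

```ts
/* Layer 0: raw physical device state. Keyboard state accumulates from
   events; gamepad state must be polled every frame, since the Gamepad
   API doesn't push events for button or axis changes. */
const keysDown = new Set<string>()
window.addEventListener("keydown", (e) => keysDown.add(e.code))
window.addEventListener("keyup", (e) => keysDown.delete(e.code))

const getGamepadButton = (index: number) => {
  const gamepad = navigator.getGamepads()[0]
  return gamepad ? gamepad.buttons[index]?.pressed ?? false : false
}
```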
Now you can virtualize these into a single piece of state:
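For example, a single virtual fire button that doesn't care which physical device the press came from (building on the sketch above):

```ts
/* Layer 1: a virtualized fire button -- pressed if either the Space
   key or the gamepad's first button is currently down. */
const getFireButton = () => keysDown.has("Space") || getGamepadButton(0)
```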
Let's assume a function `getFireButton` that returns the current state of this virtual fire button. Games aren't always just interested in the "is it pressed" or "is it not pressed" state of buttons, but rather changes in this state. This is also state derived from the one above:

- the button was just pushed down
- the button was just released
- the button has been held down for a minimum amount of time
Let's look at what an implementation of the first one might look like (pseudo-ish code, but based on my current prototyping code):

```ts
const buttonJustPushed = () => {
  /* Closure state: was the button already pushed the last time we checked? */
  let isPushed = false

  return (button: boolean) => {
    if (button && !isPushed) {
      isPushed = true
      return true
    } else if (!button) {
      isPushed = false
    }

    return false
  }
}
```

This function will now return `true` exactly once, on the frame where the button is first pushed down, and `false` otherwise.

As an API, it might be more convenient to let the user specify a callback function that gets invoked when the button was just pushed down. An implementation could look like this:

```ts
const onButtonPushed = (callback: () => void) => {
  let isPushed = false

  return (button: boolean) => {
    if (button && !isPushed) {
      isPushed = true
      callback()
    } else if (!button) {
      isPushed = false
    }

    return button
  }
}
```

Note that this implementation doesn't actually change the state (it will always return the button state passed into it, unchanged).

Similar functions could now be implemented for the other two events listed above (triggering events when the button is released, and when the button is held for a minimum amount of time.) Once you have functions like these, you can compose them into complete interactions:

```ts
let charge = 0

const weaponFiring = flow(
  onButtonPushed(() => { charge = 0 }),
  onButtonHeld(0.2, () => { charge += deltaTime }),
  onButtonReleased(() => { fireWeapon(charge) })
)
```

This flow can then be applied to the fire button:

```ts
pipe(getFireButton(), weaponFiring)
```

And possibly abstracted into a higher-order function for easy reuse/redistribution:

```ts
const ChargeAndFire = (onFire: (charge: number) => void) => {
  let charge = 0

  return flow(
    onButtonPushed(() => { charge = 0 }),
    onButtonHeld(0.2, () => { charge += deltaTime }),
    onButtonReleased(() => { onFire(charge) })
  )
}

const weaponFiring = ChargeAndFire((charge) => fireWeapon(charge))

/* In game loop: */
pipe(getFireButton(), weaponFiring)
```

I hope this clears things up a little?
I'm curious how the `flow` and `pipe` thing is going to work in general. `flow` looks to be just taking in an array of button handlers; I don't get how it works behind the scenes.
Both `pipe` and `flow` are classic functional programming helpers. `pipe` takes an initial value and runs it through a series of functions, each one receiving the previous one's return value; `flow` composes a series of functions into a single new function without invoking it immediately. So `flow(a, b, c)` gives you a function equivalent to `(v) => c(b(a(v)))`, while `pipe(v, a, b, c)` evaluates to `c(b(a(v)))`.
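Behind the scenes, both can be implemented in a few lines. A simplified sketch (not necessarily what the library will ship; real implementations like the ones in fp-ts or lodash allow the type to change between steps):

```ts
/* Both helpers just reduce over a list of functions. The type stays
   fixed here for brevity. */
const pipe = <T>(value: T, ...fns: Array<(v: T) => T>): T =>
  fns.reduce((acc, fn) => fn(acc), value)

const flow = <T>(...fns: Array<(v: T) => T>) =>
  (value: T): T => fns.reduce((acc, fn) => fn(acc), value)
```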
I should add an important bit of extra context: I've been talking about and using functional programming primitives and techniques in the (to some extent pseudo/prototypey) examples here, but these don't need to be the centerpiece of the user-facing API. For example, Material Composer -- which has a very similar design to this -- asks the user for a list of modules:

```ts
/* Create a shader graph from a list of modules */
const graph = compileModules([
  PlasmaModule({ offset: Mul(time, -0.2) }),
  SurfaceWobble({ offset: Mul(time, 0.4), amplitude: 0.3 }),
])
```

This translates extremely well into JSX:

```jsx
<composable.meshStandardMaterial>
  <PlasmaModule offset={Mul(time, -0.2)} />
  <SurfaceWobble offset={Mul(time, 0.4)} amplitude={0.3} />
</composable.meshStandardMaterial>
```

What the user doesn't see, though, is that each of these modules is just a function that transforms one piece of state to another. Once the user wants to implement their own modules, it's just a matter of writing a function.

I would probably want to go for a similar API for this library, applying the same techniques. With the library exporting a collection of primitives to compose player input, most users would just use those -- and not necessarily be exposed to FP concepts like pipe and flow -- while keeping things straightforward to extend for advanced users, and keeping our own API surface minimal.
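Applied to input, a user-written module could be as small as this (hypothetical example):

```ts
/* A user-defined processing step is just a function from state to
   state -- here, inverting an axis (e.g. for inverted camera Y). */
const invertAxis = (value: number) => -value
```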
Yes, that makes sense, and I see where you can separate the game logic into the callbacks. I usually have event callbacks run in a specific, documented order, so I find it a little odd that you can mix them around as you see fit, but it does give a lot of flexibility, and I can see how you can abstract all sorts of interfaces from this base. Looking forward to it.
Summary
Input Composer aims to be a powerful but easy-to-use library for handling player input in games, with support for a wide range of different devices and device types.
Game Input is a complex topic and we won't be able to do everything we want to do in the library's first version, so this issue is an attempt at tracking its long-term goals.
Terminology note: "user" here refers to the user of the library; "player" to the player of the game.

Goal 1: Composable Input Controls and Input Actions
Input Controls are user-composable functions returning the state of a virtual control (like a button, an axis, or a vector.) They are typically composed from smaller functions using typical FP approaches. An input control that represents a movement vector might look like this:
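For example (a sketch with hypothetical helper names, assuming a `pipe` helper like the one discussed above):

```ts
type Vector = { x: number; y: number }

/* Hypothetical helpers -- not a committed API. */
const normalize = (v: Vector): Vector => {
  const length = Math.hypot(v.x, v.y)
  return length > 1 ? { x: v.x / length, y: v.y / length } : v
}

const deadzone = (threshold: number) => (v: Vector): Vector =>
  Math.hypot(v.x, v.y) < threshold ? { x: 0, y: 0 } : v

/* An Input Control: a function composed from smaller functions,
   returning the current movement vector from the WASD keys. */
const getMoveVector = (keysDown: Set<string>): Vector =>
  pipe(
    {
      x: (keysDown.has("KeyD") ? 1 : 0) - (keysDown.has("KeyA") ? 1 : 0),
      y: (keysDown.has("KeyW") ? 1 : 0) - (keysDown.has("KeyS") ? 1 : 0)
    },
    deadzone(0.1),
    normalize
  )
```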
Input Actions are events that are triggered by input (like button presses, tapping, buttons pressed while another button is being held, and so on.) They are implemented as functions within the same function pipes used for controls. For example, here's a pseudo implementation of a "Fire" button that can be held to charge the weapon:
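Along the lines of the `ChargeAndFire` example from the discussion above:

```ts
let charge = 0

/* Hold the fire button to charge; releasing it fires the weapon with
   the accumulated charge. */
const weaponFiring = flow(
  onButtonPushed(() => { charge = 0 }),
  onButtonHeld(0.2, () => { charge += deltaTime }),
  onButtonReleased(() => { fireWeapon(charge) })
)
```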
Goal 2: Composable Input Controllers
Input Controllers are implemented as closures that provide a number of user-provided Input Controls and Input Actions (see above) and also keep some state, like tracking which control scheme is currently active.
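A rough sketch of that shape (hypothetical, not a committed API):

```ts
type Scheme = "keyboard" | "gamepad"

/* An Input Controller: a closure bundling the user's controls and
   actions together with controller-level state, like the currently
   active control scheme. */
const createController = <T>(controls: (scheme: Scheme) => T) => {
  let scheme: Scheme = "keyboard"

  return {
    setScheme: (next: Scheme) => { scheme = next },
    /* Resolve the user's controls against the active scheme. */
    get: () => controls(scheme)
  }
}
```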
Realistically speaking, in the first few iterations of the library, the creation of controllers will happen in userland, with the library providing some example implementations, until we figure out what patterns work best. In the long term, the library should provide either some good "ready to use" controller implementations that work for most games, or provide one that is configurable enough to cover all of them (but there's going to be a thin line between doing that, and just letting users build their own implementations using our primitives. We'll find out.)
Goal 3: Normalized Input State
With the first two goals reached, the user will be able to build Controllers for their game that provide all the Controls and Actions it needs. But they'll still be working relatively close to the metal, as they need to query different devices depending on the currently active control scheme. This can probably be abstracted away to a certain point, but what if the user wants to allow their players to rebind controls using a mixture of different devices?
For this, it would be nice to have a store of normalized input data that Control implementations can source their data from instead of querying devices directly:
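Something along these lines (the shape is illustrative only):

```ts
/* A normalized store of input state, written once per frame from all
   connected devices; Controls read from here instead of polling
   devices directly. */
interface NormalizedInput {
  buttons: Record<string, boolean> /* e.g. "fire", "jump" */
  axes: Record<string, number> /* e.g. "horizontal", values in [-1, 1] */
}

const input: NormalizedInput = { buttons: {}, axes: {} }
```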
This would open the door to some higher-order normalization. For example, we could normalize the notion of a horizontal main axis like so, taking input from all connected devices into account:
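For example (a sketch building on the store above and the keyboard state from earlier; the device with the strongest deflection wins):

```ts
/* Merge the horizontal axis from all connected devices into a single
   normalized value. */
const updateHorizontalAxis = () => {
  const keyboard =
    (keysDown.has("KeyD") ? 1 : 0) - (keysDown.has("KeyA") ? 1 : 0)

  const stick = navigator.getGamepads()[0]?.axes[0] ?? 0

  input.axes.horizontal =
    Math.abs(stick) > Math.abs(keyboard) ? stick : keyboard
}
```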
Device Support
Input Composer intends to support a wide variety of devices and device APIs.
Checklist
General Design Goals
- No `update()` functions beyond a single top-level one (but try to avoid even that.)