Making a full playable published videogame is a huge undertaking, and as an art medium it demands total commitment. On the other hand, small interactive sketches that show off style/narrative ideas or test out game mechanics are a perfectly viable form of output and are much more accessible: think of Flash games from the 00s, or content on indie platforms like Itch.io.
I wanted to build up a practiced toolset+skillset for making that kind of sketch – simple homage snippets (or even “videogame shitposts”?). Instead of the current default choice of Unity or UE4, I ended up basing it on browser-based React, with ThreeJS integrated via @react-three/fiber.
The “engine features” that I wanted to support in the initial version:
- 3D rendering pipeline
- lightmapping and UV2 auto-mapping
- WASD+mouse input
- top-down physics with wall collisions, movable obstacles, sensors, pleasant “weighty” feel
- CSG-based level geometry
- basic animations for level props like doors
Other features were left until further iterations.
Needless to say, ThreeJS is the de facto default for displaying 3D content in the browser. React offers a very effective orchestration layer on top of the ThreeJS scene: it manages loading/adding/removing props and lighting, and controls the frame loop and post-processing effects. Logic can be organized into components that bundle multiple pieces of functionality, not unlike Unity entities and their behaviours.
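To make the orchestration idea concrete, here is a toy sketch in plain TypeScript (no actual React or r3f code – `diffScene` and the prop IDs are purely illustrative): the game state declares which props should exist, and a diff between consecutive declarations yields what must be added to or removed from the ThreeJS scene. This is essentially the bookkeeping that the @react-three/fiber reconciler automates via component mount/unmount.

```typescript
// Toy model of declarative scene management: compare the currently
// mounted prop IDs against the next declared set, and report what
// needs mounting and unmounting.
function diffScene(
  current: string[],
  next: string[]
): { mount: string[]; unmount: string[] } {
  const curSet = new Set(current);
  const nextSet = new Set(next);
  return {
    mount: next.filter((id) => !curSet.has(id)),
    unmount: current.filter((id) => !nextSet.has(id)),
  };
}

// Chapter 1 declares a door and a crate; chapter 2 swaps the crate for a lift.
// Result: mount = ["lift"], unmount = ["crate"]
const { mount, unmount } = diffScene(["door", "crate"], ["door", "lift"]);
console.log(mount, unmount);
```

With actual React, this diff never has to be written by hand: rendering `<Door />` and `<Lift />` instead of `<Door />` and `<Crate />` triggers exactly these additions and removals.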
Input and Physics
I intentionally stuck to a 2D abstraction for in-game physics, similar to the pseudo-3D first-person games of the 90s. This felt like a good creative constraint to help focus on narrative rather than simulation: full 3D physics often adds no extra value, while bringing in strange movement glitches and “jank”.
For this I used Box2D, controlling 3D shapes on a top-down plane.
First-person mouselook and the WASD keys were connected to the player’s collision body in Box2D via a simple per-frame linear impulse. Combined with heavy linear damping on the body, this reproduced a satisfying first-person movement mechanic, again similar to iconic 90s first-person games – a sense of inertia combined with reliable “start torque” and a constant maximum movement speed.
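The constant maximum speed falls out of the math: the per-frame impulse and the damping loss balance at a terminal velocity. A minimal self-contained sketch of this feedback loop (the `Body2D`/`stepPlayer` names are illustrative stand-ins, not the project’s real code or the Box2D API):

```typescript
// Minimal model of impulse-driven movement with linear damping,
// mirroring how Box2D applies damping: v *= 1 / (1 + dt * damping).
interface Body2D {
  vx: number;
  vy: number;
  mass: number;
}

function stepPlayer(
  body: Body2D,
  inputX: number, // -1..1, A/D keys rotated by mouselook yaw
  inputY: number, // -1..1, W/S keys
  impulse: number, // impulse magnitude applied each frame
  damping: number, // linear damping coefficient
  dt: number
): void {
  // Impulse changes velocity by J / m ...
  body.vx += (inputX * impulse) / body.mass;
  body.vy += (inputY * impulse) / body.mass;
  // ... then damping bleeds some of it off.
  const factor = 1 / (1 + dt * damping);
  body.vx *= factor;
  body.vy *= factor;
}

// Holding "forward" converges to a terminal speed where the per-frame
// impulse exactly offsets the damping loss: (v + J/m) * f = v  =>  v = 6
// for these illustrative numbers.
const body: Body2D = { vx: 0, vy: 0, mass: 1 };
for (let i = 0; i < 600; i++) stepPlayer(body, 1, 0, 0.5, 5, 1 / 60);
console.log(body.vx.toFixed(2)); // → 6.00
```

Tuning the impulse magnitude against the damping coefficient is what dials in the “start torque” vs. top speed trade-off; releasing the keys leaves only the damping term, producing the gliding stop.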
Level Geometry and Lightmapping
Classic game engines like Quake and Unreal used constructive solid geometry (CSG) to define the level shape – for both display and physics collision boundaries. I started out using JSCADv2 modeling primitives to emulate this approach, but switched to three-csg-ts because the latter supported per-face materials.
The level is modeled using subtractive room “brushes”, which produces reasonably robust geometry. Physics collision walls are computed from the CSG shape by projecting any horizontal faces onto the XY plane and computing a total union polygon shape. Using a Box2D ChainShape, plus continuous “bullet mode” on the first-person controller body, proved a reliable way to keep the player inside level bounds even at high movement speeds.
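One piece of that wall-extraction step can be sketched in isolation: picking out the horizontal faces of the CSG output and flattening them onto the XY plane. The sketch below assumes a Z-up modeling space and uses stand-in `Vec3`/`Tri` types rather than the actual three-csg-ts structures; the subsequent polygon-union pass is omitted.

```typescript
// Stand-in geometry types (the real mesh data comes from three-csg-ts).
type Vec3 = [number, number, number];
type Tri = [Vec3, Vec3, Vec3];

// Unit normal of a triangle via the cross product of two edges.
function triNormal([a, b, c]: Tri): Vec3 {
  const u: Vec3 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v: Vec3 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const n: Vec3 = [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}

// Keep triangles whose normal points along +/-Z (floors/ceilings) and
// drop the Z coordinate; the later union pass merges these footprints
// into the level's collision outline.
function horizontalFootprints(
  tris: Tri[],
  tolerance = 0.99
): [number, number][][] {
  return tris
    .filter((t) => Math.abs(triNormal(t)[2]) >= tolerance)
    .map((t) => t.map(([x, y]) => [x, y] as [number, number]));
}

// A flat floor triangle passes; a vertical wall triangle is rejected.
const floor: Tri = [[0, 0, 0], [4, 0, 0], [0, 4, 0]];
const wall: Tri = [[0, 0, 0], [4, 0, 0], [0, 0, 3]];
console.log(horizontalFootprints([floor, wall]).length); // → 1
```

Unioning the resulting footprints (e.g. with a polygon clipping library) then yields the closed outline that gets fed into the Box2D ChainShape.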
For lightmapping, I built on my prior work with @react-three/lightmap. The ability to compute the lightmap directly in-browser proved to be a huge productivity boost. The computed radiosity is not very high-fidelity, but it reminded me of the soft “wax-like” light bounce of 2001’s Max Payne – an effect I wanted to hold on to.
Lightmap pixelation and “striation” were reduced by increasing the probe render target size. Even with lightmap UV2 texels at low resolution (e.g. 0.5m or 1m of effective “physical size” per texel), the lighting worked well as a visual cue towards “realism”, while the bake ran in under 1 second for a small level.
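The texel budget behind those numbers is easy to eyeball. A quick back-of-envelope helper (the function and the room dimensions are illustrative, not the project’s actual settings) shows why such coarse physical texel sizes keep the atlas – and hence the bake time – tiny:

```typescript
// How many lightmap texels a rectangular surface needs at a given
// physical texel size (meters per texel).
function texelsFor(widthM: number, heightM: number, texelSizeM: number): number {
  return Math.ceil(widthM / texelSizeM) * Math.ceil(heightM / texelSizeM);
}

// A 10m x 10m floor at 0.5m per texel is only a 20x20 patch:
console.log(texelsFor(10, 10, 0.5)); // → 400
// ...and at 1m per texel it shrinks to 10x10:
console.log(texelsFor(10, 10, 1)); // → 100
```

At a few hundred texels per major surface, even a whole small level fits comfortably in a modest UV2 atlas, which is what keeps the in-browser bake in the sub-second range.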
Interactivity and Animation
I did not want to bother with mesh/character animation yet, but it was important to at least have basic moving level props, like doors. Because I was using React, I could rely on an existing animation library – react-spring – to automate prop movement.
I implemented simple elevator doors by using a Box2D sensor as a trigger, and then animating door meshes directly with react-spring. The door position was then fed back into Box2D collision logic via a kinematic body. It might seem redundant to integrate a second physics simulator in addition to Box2D, but it allowed me to stick to the “right tool for the job” for a simple transform transition.
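The mechanism can be sketched without either library: a damped spring (in the spirit of react-spring’s physics-based interpolation, not its real API) animates the door offset, and each frame the resulting velocity is what would be handed to the Box2D kinematic body (e.g. via `SetLinearVelocity`) so collisions stay in sync with the visuals. All names and constants below are illustrative.

```typescript
// Damped spring state driving the door's slide offset in meters.
interface Spring {
  value: number;
  velocity: number;
  target: number;
}

// One semi-implicit Euler step of a damped spring; stiffness/damping play
// roughly the role of react-spring's tension/friction config.
function stepSpring(s: Spring, stiffness: number, damping: number, dt: number): void {
  const accel = stiffness * (s.target - s.value) - damping * s.velocity;
  s.velocity += accel * dt;
  s.value += s.velocity * dt;
}

// Sensor trigger "opens" the door by setting the target; then step each
// frame. In the real setup, s.velocity is what the kinematic Box2D body
// would be driven with, so pushed objects react correctly.
const door: Spring = { value: 0, velocity: 0, target: 1.2 }; // slide 1.2m open
for (let i = 0; i < 300; i++) {
  stepSpring(door, 120, 22, 1 / 60);
}
console.log(door.value.toFixed(2)); // → 1.20 once settled
```

The appeal of the spring over a fixed-duration tween is that retriggering mid-motion (player steps off and back onto the sensor) just changes the target, and the animation stays continuous in both position and velocity.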
The demo level implements about 20-40 seconds of gameplay. On top of the features above, it adds “chapter transitions”: new content is generated and baked while the player is inside an elevator, and on load the player is “teleported” into the newly generated section of the level.
What appeals to me about the React-based workflow, as opposed to e.g. Unity, is that the focus shifts to narrative intent. Rather than herding a simulated “soup of objects” in one shared scene, game logic defined in React explicitly spells out what should be present in the virtual world at a given moment, and nothing else. This kind of tight declarative lifecycle control feels like an effective way to define story scripting and levels – up to a certain degree of complexity, of course!
Obviously, for larger projects – i.e. “real” games and not short sketches – the workflow needs are very different, and team skillset is focused on industry-standard tools. But for me, as a developer with existing Web proficiency, this was an exciting experiment to attempt.