Mallet Pinball: Behind the scenes

About a year ago I wrote an article about the early days of the pinball table; now, after several mothballed months, it’s done, and here is how it works.

The basic control setup

A GVMachineController actor is on the map and a Kismet node makes sure that the player takes control of it when the level loads. After that, all input is processed by the machine, which then adjusts the linked actors (changes position, rotation, etc.). For example, here is how the right flippers are set up:

Machine name is just a user-friendly way of telling multiple controllers apart. The Machine components list is for all the pieces of a machine which can be controlled separately (i.e., linked to a different button or axis). In this case there are four of them: the left flipper, the right flippers, the mallets and the ball. We’ll get to why the ball is there, but first let’s see the guts of the “right flippers” component:

The Input Source is now set to “User”, which enables live input. The other option is “Matinee”, in which case a recorded performance is replayed. Then come the supported input devices, where the actual buttons and axes are set.

With the Allow Recording flag set to true, the incoming input events will be dumped into a text file in JSON format. That file can be converted into a matinee to drive the machine.
Seamless Resume is not relevant right now, so let’s move on to the Outputs list: each item in there defines an actor property which will be controlled by the input chosen earlier. This way multiple actors and/or multiple properties can be changed by a single device; in this case the right Control key will rotate both right-hand-side flippers.
The first output is the lower of the two:

In the Machine Part property we reference an interpolating actor and specify that we want to work with its Rotation property. (Other supported actor types are light, vehicle, constraint, material instance, forcefield and thruster.)

Button Value Mappings: Here we define the produced values for the two states of the button: what numbers we should output when the button is pressed or released. They are XYZ vectors, just like everything else control-related: every input device and actor property is treated as a vector3 so they can be mixed and matched. (Although this does not necessarily mean that all members of the vector “do” something. For example, a joystick will only change the X and Y values; Z will stay at 0.)

Since a rotation property was chosen earlier, the XYZ members correspond to Pitch, Yaw and Roll. Now it’s easy to see what’s happening: when the button is released the rotation is not modified, while pressing the button changes Yaw by 40 degrees. The time it takes to blend between the two states is defined in the next two Blend To… properties.

Button Component Mapping defines which member of the target actor’s property is controlled by which member of the output we generate. In this case the X and Z values are not used (nothing will change there in the target actor); only the generated Y (Yaw from 0 to 40 degrees) will be applied.

And finally, Button Output Mode determines how the output values are applied to the target property’s existing values. Here it’s set to “Add”, so the Pressed Output’s 40 degrees will be added to whatever angle the flipper was placed at on the map. Other modes include “Multiply” and “Direct”. The latter replaces the original value with the new one, in which case the Button Component Mapping comes in handy: using “None” on certain members makes sure that those values won’t be modified at all, instead of becoming the default 0 set in the Released/Pressed outputs.
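To make the whole chain concrete, here is roughly how such an output gets evaluated, written out as a little Python sketch rather than the actual UnrealScript: blend between the Released and Pressed vectors, route the members through the component mapping, then apply the output mode. (The 15-degree placement angle is just an example.)

    def blend_state(released, pressed, blend_alpha):
        """Interpolate between the Released and Pressed output vectors;
        blend_alpha runs 0 -> 1 over the 'Blend To Pressed' time (and back
        down over 'Blend To Released')."""
        return tuple(r + (p - r) * blend_alpha for r, p in zip(released, pressed))

    def apply_output(target, generated, component_mapping, mode):
        """Write the generated vector into the target property.
        component_mapping names, per target member, which generated member
        to use ('X', 'Y', 'Z') or None to leave that member untouched;
        mode is 'Add', 'Multiply' or 'Direct'."""
        index = {"X": 0, "Y": 1, "Z": 2}
        result = list(target)
        for i, source in enumerate(component_mapping):
            if source is None:                     # 'None': keep the original value
                continue
            value = generated[index[source]]
            if mode == "Add":
                result[i] = target[i] + value
            elif mode == "Multiply":
                result[i] = target[i] * value
            elif mode == "Direct":
                result[i] = value
        return tuple(result)

    # Lower right flipper, roughly as described above: the pressed state blends
    # Yaw towards 40 degrees, which is then added to whatever rotation the
    # flipper was given on the map (15 degrees here is a made-up placement).
    released = (0.0, 0.0, 0.0)
    pressed = (0.0, 40.0, 0.0)               # (Pitch, Yaw, Roll)
    placed_rotation = (0.0, 15.0, 0.0)

    generated = blend_state(released, pressed, blend_alpha=1.0)   # fully pressed
    print(apply_output(placed_rotation, generated,
                       component_mapping=(None, "Y", None), mode="Add"))
    # -> (0.0, 55.0, 0.0)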

And that concludes the lower right flipper’s setup in the “Right Flippers” machine component. The other output in there is for the top right flipper: the basics are the same, only the Pressed Output value is different, to produce a slightly different rotation amount.

The next machine component deals with the two mallets. I won’t get into their setup, but the idea is pretty much the same as with the buttons: take input (mouse movement), massage it until the produced output values are appropriate, and add them to the mallet actors’ positions. The two mallets are two items in the Outputs list, with the only differences between them being the referenced actors and the range of motion.
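Roughly, that massaging amounts to this (a Python sketch again, with made-up ranges and positions):

    def move_mallet(placed_location, mouse_travel, axis_range):
        """Clamp the accumulated mouse travel to the mallet's range of motion
        and add it to the position the mallet was placed at on the map."""
        clamped = tuple(max(-r, min(r, m)) for m, r in zip(mouse_travel, axis_range))
        return tuple(p + c for p, c in zip(placed_location, clamped))

    mouse_travel = (35.0, -10.0, 0.0)        # accumulated mouse deltas (made up)

    # Two outputs, differing only in the referenced actor and the range of motion.
    left_mallet  = move_mallet((0.0, -200.0, 0.0), mouse_travel, axis_range=(30.0, 30.0, 0.0))
    right_mallet = move_mallet((0.0,  200.0, 0.0), mouse_travel, axis_range=(50.0, 50.0, 0.0))
    print(left_mallet, right_mallet)         # the left one hits its 30-unit limit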

The final component is the ball, which might seem weird since it’s controlled by physics, not the user. The reason it’s included is the Allow Recording flag, which makes sure that the ball’s motion gets recorded.
The other reason one might want a rigid body here is that user input can be used to apply force or torque to the actor.

The interactive table elements

There are three classic interactive pieces on the table: the purple bumpers, the green drop targets and the two slingshots above the flippers.

The bumpers have rings moving up and down around the mushroom-shaped base, but those are just non-colliding decorations; the ball is repelled by an NxForceFieldRadial.

The ball is sensed through the touch event of an invisible, non-colliding cylinder around the base: when a KActor passes through it, the ring-moving matinee is played and the force field is turned on for a split second.
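The actual logic is a handful of Kismet nodes, but it amounts to something like this sketch (the stand-in classes and the on-time are placeholders):

    import time

    class ForceField:
        """Stand-in for the NxForceFieldRadial actor."""
        def __init__(self):
            self.enabled = False

    class RingMatinee:
        """Stand-in for the ring-moving matinee."""
        def play(self):
            print("rings moving")

    def on_bumper_touch(ring_matinee, force_field, on_time=0.1):
        """Touch event of the invisible, non-colliding cylinder around the base:
        play the cosmetic ring animation and repel the ball for a split second."""
        ring_matinee.play()
        force_field.enabled = True      # the radial force field pushes the ball away
        time.sleep(on_time)             # a Delay node in the real Kismet sequence
        force_field.enabled = False

    on_bumper_touch(RingMatinee(), ForceField())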

Let’s move on to the drop targets: they are animated movers with a Kismet subsequence for each of them:

The animation starts when the InterpActor takes damage caused by the ball. (The ball is our GVKactor class, which has speed-dependent damage functionality.) The “Hit” sequence output is activated, which increases a counter outside this subsequence. That counter is linked to all four drop targets, so its value reaching 4 indicates that all targets are down. In that case some points are awarded and the drop targets are reset.

Parallel to relaying the impulse to the output, we also start playing a matinee. It contains both the dropping and rising (reset) motions as one curve. However, using an event track, the matinee pauses itself after the target has dropped: that’s the “Target Down” output on the matinee node. (The NOP nodes don’t do anything; they are just there for link-management purposes.)

So after starting, the matinee pauses itself and stays paused until an impulse arrives through the “Reset” sequence input. (Since the mover is almost fully covered by the floor, there is no way the ball could trigger it, so there are no safeguards against that scenario.)
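Put together, the four subsequences and the shared counter behave roughly like this sketch; the real thing is Kismet, and the point value here is made up:

    class DropTarget:
        def __init__(self, name):
            self.name = name
            self.down = False

        def hit(self):
            """Take Damage event: the target drops and its matinee pauses."""
            self.down = True

        def reset(self):
            """'Reset' input: the matinee resumes and the target rises again."""
            self.down = False

    class DropTargetBank:
        """The counter shared by all four drop-target subsequences."""
        def __init__(self, targets, award_points):
            self.targets = targets
            self.award_points = award_points
            self.score = 0

        def on_hit(self, target):
            target.hit()
            # the counter reaching 4 means every target is down
            if sum(t.down for t in self.targets) == len(self.targets):
                self.score += self.award_points
                for t in self.targets:
                    t.reset()

    bank = DropTargetBank([DropTarget(f"target {i}") for i in range(4)],
                          award_points=1000)       # made-up score value
    for t in list(bank.targets):
        bank.on_hit(t)
    print(bank.score)                              # 1000, and all targets are back up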

Originally the drop targets (and also the bumpers) were prefabs, but the prefab system is broken in so many ways that it was simpler to just copy-paste my way through this.

The slingshots are also activated by damage, but for some reason it’s even less reliable here than in the case of the drop targets, as the video clearly shows. However, when it does work, a matinee rotates two invisible, colliding cylinders into that triangle shape so they smack the ball. (Otherwise they are pulled back into the body of the slingshot.)

The matinee also changes the “MorphAmount” scalar parameter in the related material instances, which takes care of the cosmetic changes to the mesh using the World Position Offset surface property. The vector displacement is stored as vertex colors: in Luxology modo a morph map is converted to an RGB vmap by a script. In the Unreal material the offset values are unpacked to the proper range, then mesh scaling, orientation and mirroring are factored in. Since the polygon normals also change in the morphed state, a second normal map is blended in.
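The unpacking itself is simple arithmetic, something along these lines (the maximum offset range is whatever the export script uses; 8 units here is just an example):

    MAX_OFFSET = 8.0   # example export range; the modo script defines the real one

    def morph_offset(vertex_color, morph_amount, scale=1.0):
        """Unpack a vertex color (0..1 per channel) back into a signed
        displacement vector and scale it by the MorphAmount parameter.
        Mesh scaling / mirroring would be folded into 'scale' per axis."""
        return tuple((c * 2.0 - 1.0) * MAX_OFFSET * scale * morph_amount
                     for c in vertex_color)

    print(morph_offset((0.5, 0.75, 0.25), morph_amount=1.0))   # (0.0, 4.0, -4.0)
    print(morph_offset((0.5, 0.75, 0.25), morph_amount=0.0))   # rest state: all zeros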

Special effects

Pinball games usually feature a shiny steel ball but I wanted something a bit different: an almost totally diffuse material without any sharp reflections, like unpolished aluminium.
First I tried turning on a light environment on the ball, but it wasn’t updated fast enough: the approximated lighting couldn’t catch up to the ever-changing lighting conditions.

That left me with a choice: either place a lot of small, dynamic lights or use a dynamic cubemap reflection to somehow light the ball. The former option has several disadvantages: multiple dynamic lights affecting the ball could really hurt performance, and it would take a lot of manual work to recreate the static environment as dynamic lights and keep them in sync with any subsequent changes to the map.

Using a reflection had its problems (no shadows, cubemap blurring is not straightforward), but it involved much less manual labor and the performance aspects were easier to fine-tune, so I went with this option.

The ball carries a SceneCaptureCubeMapActor which updates a 64×64 (per cubemap side) texture target 20 times a second. I planned to extend the scene capture class so that the frame rate would vary based on the distance to the camera, but the overall performance was reasonable, so I never implemented it.
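For what it’s worth, that distance-based refresh would have boiled down to something like this (rates and thresholds made up):

    def capture_interval(distance_to_camera):
        """How often the cubemap capture would refresh based on how far the
        ball is from the camera; purely illustrative thresholds. The shipped
        version simply refreshes 20 times a second regardless of distance."""
        if distance_to_camera < 500.0:       # close up: the full 20 Hz
            return 1.0 / 20.0
        if distance_to_camera < 1500.0:      # mid range: 10 Hz is plenty
            return 1.0 / 10.0
        return 1.0 / 5.0                     # far away: errors are invisible anyway

    print(capture_interval(300.0), capture_interval(2000.0))   # 0.05 0.2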

The far plane is set so that nothing beyond the immediate vicinity of the table gets rendered. The rest is filled in by another cubemap, captured only once when the map starts, which only shows the environment further away (the skydome and other decorations).

The first step of blurring the reflection is done through a post process referenced in the capture actor: depth of field is applied with a focus distance and radius of 0. That blur is easy to fine-tune and much more efficient than the brute-force method possible in the material editor. Unfortunately it produces seams at the edges of the cube, and that limits the usefulness of this technique. However, with some tweaking it does provide a good enough base which can be processed further in a material.

In the material the cubemap is blurred by rotating it on each axis and averaging the passes. A single scalar parameter defines the angle of rotation, and that value is used to rotate in the positive and negative directions, then to do the same with half that angle, all that on the three axes, which adds up to 12 passes. At 6 passes the ball looked “spotty”, while 24 didn’t make much of a difference at common view distances.
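In the material this is a pile of rotation and lerp nodes; the underlying math looks roughly like this, where the cubemap is just a callable from a direction to a color and the angle is the scalar parameter mentioned above:

    import math

    def rotate(v, axis, angle):
        """Rotate vector v around the X, Y or Z axis by 'angle' radians."""
        x, y, z = v
        c, s = math.cos(angle), math.sin(angle)
        if axis == "x":
            return (x, y * c - z * s, y * s + z * c)
        if axis == "y":
            return (x * c + z * s, y, -x * s + z * c)
        return (x * c - y * s, x * s + y * c, z)

    def blurred_sample(cubemap, direction, angle):
        """Average 12 cubemap lookups: +/- angle and +/- angle/2 on each of the
        three axes. 'cubemap' is any callable mapping a direction to a color."""
        samples = []
        for axis in ("x", "y", "z"):
            for a in (angle, -angle, angle * 0.5, -angle * 0.5):
                samples.append(cubemap(rotate(direction, axis, a)))
        return tuple(sum(channel) / len(samples) for channel in zip(*samples))

    # Toy cubemap: brightness depends on how much the direction points up.
    fake_cubemap = lambda d: (max(d[2], 0.0),) * 3
    print(blurred_sample(fake_cubemap, (0.0, 0.0, 1.0), math.radians(10.0)))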

By the way, for the sampling of the cubemap the surface normal is used instead of the reflection vector. This makes the surroundings “stick” to the surface and not “slide” on it.

The final touch is a screen-space noise which gets rid of the remaining black lines. The strength of the noise (a scalar parameter) defines the amount by which the ball’s surface normals are jittered. The noise is animated, and its strength decreases as the surface gets further away from the viewer, which provides a more even blurring and a somewhat sanded look.

The material has other optional features like dynamic range adjustment for the cubemap, contrast tweaking, emissive map for heating the ball and so on. The most expensive combination costs 194 instructions but that’s fine considering that the ball is reasonably small on the screen.

The cubemap is reused in the dark ball as a true reflection. The slight blur and the subsequent black seams are there but not very apparent.

However, there was something missing from these captured cubemaps: specular highlights, i.e., the reflections of light actors. First I added lens flares, but they had two problems: they made the scene too busy, and the sprites were oriented toward the main camera even during the render-to-texture process.
Next I tried to place white spheres and exclude them from the main render pass so they would only show up in reflections. As it turns out, that’s currently not possible, so I ended up with a less-than-elegant solution: one-sided spherical caps (so they can’t be seen from above) with a lens flare texture.

The rails are also reflective, but since they are thin and long they use four static cubemaps instead of one. The effect is somewhat subtle, so to make it more apparent I replaced the rails with a cuboid:

Each cubemap has a spherical falloff defined in the material instance: radius, hardness and position can be set, the latter being the coordinate the cubemap was captured from. The variable radius allows uneven placement of the reflection areas, so interesting places can be sampled at the ideal location.
The cubemap resolution is kept pretty low to avoid texture aliasing on the thin bars of the rails.
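The idea behind the radius/hardness/position parameters is roughly this (the exact curve isn’t the point, just how the three parameters interact; positions and radii are made up):

    import math

    def cubemap_weight(surface_pos, capture_pos, radius, hardness):
        """Spherical falloff for one of the rail cubemaps: full weight near the
        capture position, fading out to zero at 'radius'; 'hardness' (0..1)
        controls how much of the sphere stays at full weight."""
        d = math.dist(surface_pos, capture_pos)
        inner = radius * hardness
        if d <= inner:
            return 1.0
        if d >= radius:
            return 0.0
        return 1.0 - (d - inner) / (radius - inner)

    # Each of the four rail cubemaps gets a weight like this and the sampled
    # colors are blended accordingly:
    print(cubemap_weight(surface_pos=(160.0, 40.0, 10.0),
                         capture_pos=(100.0, 40.0, 10.0),
                         radius=80.0, hardness=0.5))   # 0.5: halfway down the falloff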

The final reflective element is the table: it uses a SceneCaptureReflectActor with a post process blurring the reflection a bit, in a similar way as before. However, I wanted to give the counter a hard, smooth, glass-like look, so there I sharpened the reflection with simple posterization.
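Posterization here simply means snapping the reflection color to a handful of levels:

    from math import floor

    def posterize(value, levels):
        """Snap a 0..1 channel to a fixed number of levels; applied to the
        blurred reflection this reads as hard, glass-like banding."""
        return floor(value * levels) / levels

    print([posterize(v, 4) for v in (0.1, 0.3, 0.55, 0.9)])   # [0.0, 0.25, 0.5, 0.75]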

I mentioned the counter before, but here is a quick rundown of its features: it shows the input number in an arbitrary numeral system and with an arbitrary number of digits*. The material can work either as a digital or an analog (rolling) counter, although here I only used the former mode.
(* In practice, floating-point precision problems make it impossible to display certain values above 999,999.)
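The digit extraction is ordinary floor/modulo math, done per character in the material; the idea in a few lines:

    def counter_digits(value, base=10, digits=6):
        """Split an integer into 'digits' digits in an arbitrary numeral system,
        most significant first -- the same floor/modulo math the counter material
        does per character (on floats, which is where the precision limit above
        999,999 comes from)."""
        return [(value // base ** i) % base for i in reversed(range(digits))]

    print(counter_digits(12345))                      # [0, 1, 2, 3, 4, 5]
    print(counter_digits(255, base=16, digits=4))     # [0, 0, 15, 15]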

The first row of the display is split into three parts: the first and last two digits are just there for decoration and never change, while the middle six digits show the score.
The second row follows a similar structure: the two characters at the ends are for simple animations and the center part is for the messages. Both rows use a non-numeric font, so the input numbers show up as different shapes. The limited selection of letters explains the displayed words and expressions.

By the way, the border around the display (and all the other borders on the table) uses the 9-slice material function. The other decals are contained in a single texture atlas as several small distance-field textures, so experimenting with different visual effects was simple.

The ice ball is supposed to leave decals behind when colliding, but they’re barely visible in replay mode due to problems I discussed earlier, so here is a closeup:

Melting ice decal.

The melting also utilizes a distance field, and the coverage parameter is animated by a time-varying material instance.
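The melt boils down to sliding a threshold across the stored distance values, roughly like this (with the coverage parameter standing in as that threshold, and the sample numbers made up):

    def decal_opacity(distance_field_value, coverage, softness=0.05):
        """Alpha of the ice decal at one pixel: the stored distance-field value is
        compared against a moving threshold with a small soft edge (a smoothstep).
        As the animated coverage value rises, fewer and fewer pixels pass the
        threshold and the decal erodes -- melts -- away."""
        edge0, edge1 = coverage - softness, coverage + softness
        t = min(1.0, max(0.0, (distance_field_value - edge0) / (edge1 - edge0)))
        return t * t * (3.0 - 2.0 * t)      # smoothstep

    # One pixel with a mid-range distance-field value, threshold animated over time:
    for coverage in (0.3, 0.6, 0.9):
        print(round(decal_opacity(0.6, coverage), 2))   # 1.0, 0.5, 0.0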

And that’s about it. Please feel free to contact me if you have any questions!