Wednesday, November 14, 2012

Generic European Driving World


Our most recent military project is a tank training simulation. As with most of our vehicle training simulations, we have built hardware that closely resembles the internal cabin of the real vehicle (to the point where it feels as if you're in the real tank). For this particular sim the driver compartment is mounted on a three degree of freedom motion platform. Since we're simulating multiple episcopes, external cameras, thermal imaging, night vision and so on, we use multiple 'nodes' (i.e. multiple PCs connected via a network), with different graphics nodes controlling the visuals of different screens, and a core dynamics node running the physics engine, controlling vehicle internal logic and general driving dynamics. This is common practice for us; our sim framework is geared toward this kind of distributed architecture.

I can't really provide too much information about the vehicle aspect of the simulation (to avoid non-disclosure clause issues, I prefer to be cautious when it comes to military projects). However, what I can talk about is the generic European world we have built for this project.




My role was lead software developer for the world team, as well as technical lead for implementing new sim features to meet the requirements of the world. This is the largest and most complex world we've built, so getting started on the project was a daunting task. There were many new, high-risk features and a limited time frame, and so the initial R&D phase required a fair bit of planning and brainstorming.


Some of the features and tasks required for this project were:
  • Taking fuller advantage of our dynamic time and weather system. I ported this code across from my own engine, Transcendence, a few years back, but we haven't been using it to its full capacity until now.
  • Support for season transitions. We now support summer and winter.
  • Support for snow precipitation. Previously the only precipitation model supported was rain. Besides the visual aspect (implemented via particle systems), this had repercussions for our dynamics team (driving surfaces are affected by precipitation, for example, by becoming more slippery).



  • A procedural terrain texturing system. The clients wanted vast tracts of drivable terrain, and so we needed to provide high fidelity, detailed terrain throughout the world. The world was far too massive for the artists to texture it by hand in the given timeframe, and so I coded a terrain shader that does this procedurally (with some degree of user control via what I call the 'control map'). We were very satisfied with the outcome; the terrain looks good.
  • Support for snow-covered terrain during winter. I modified the aforementioned terrain shader to dynamically alter the amount of snow on the terrain based on the surface normal of the landscape at any given point (a small sketch of this blend follows below). This gives a more realistic look, with upward-facing surfaces receiving more snow than steeply-sloped surfaces. So, for example, cliff faces are rocky rather than snowy, but plains and mountain peaks are covered in snow.
  • Support for dry snow, wet snow and icy roads.
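
To make the normal-based blend concrete, here is a minimal sketch of the idea in C# (the names and threshold values are my own; the actual logic lives in the terrain shader):

```csharp
using System;

// Hypothetical illustration of the normal-based snow blend described above.
// The real version runs in the terrain shader; this just shows the math.
public static class SnowBlend
{
    // normalY: the Y (up) component of the normalised surface normal.
    // rockyBelow / snowyAbove: thresholds on normalY where the blend starts and ends.
    public static float SnowAmount(float normalY, float rockyBelow = 0.5f, float snowyAbove = 0.9f)
    {
        // Fully rocky on steep slopes, fully snowy on flat ground, blended in between.
        float t = (normalY - rockyBelow) / (snowyAbove - rockyBelow);
        return Math.Max(0f, Math.Min(1f, t));
    }
}
```

The shader then blends between the rock and snow texture layers using this factor.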


  • A complete re-working of the way we construct pathing data. The world was just too massive for it to be feasible to manually construct pathing (this was the case for the Durban driving world as well). For this particular project the art department used a third-party city construction toolkit to build the roads. The framework allows the exporting of the spline information used to construct the roads, but unfortunately provides no documentation about the formatting of this information, so I had to figure it out (this entailed a fair bit of trial-and-error and guesswork). This turned out to be a success, and it allowed me to procedurally generate all pathing data from the spline information (I basically wrote a configurable command-line application to convert their splines to our pathing format; see the sketch below).
  • The previous change (as well as the scope of the world) required various modifications to our traffic system (specifically, I needed to re-work the pathing algorithms to no longer rely on pre-computed paths).
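
As a rough illustration of the spline-to-pathing conversion, here is a hedged C# sketch. It assumes the exported control points have already been parsed into vectors, and uses Catmull-Rom interpolation as a stand-in for the toolkit's actual (undocumented) spline type:

```csharp
using System.Collections.Generic;
using System.Numerics;

// Hypothetical sketch: sample each road spline at fixed intervals to
// produce an ordered list of path nodes for the traffic system.
public static class SplineToPath
{
    // Catmull-Rom interpolation between p1 and p2 (p0 and p3 provide tangents).
    static Vector3 CatmullRom(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
    {
        float t2 = t * t, t3 = t2 * t;
        return 0.5f * ((2f * p1) + (-p0 + p2) * t
            + (2f * p0 - 5f * p1 + 4f * p2 - p3) * t2
            + (-p0 + 3f * p1 - 3f * p2 + p3) * t3);
    }

    public static List<Vector3> Sample(IReadOnlyList<Vector3> controlPoints, int samplesPerSegment)
    {
        var nodes = new List<Vector3>();
        for (int i = 0; i + 3 < controlPoints.Count; i++)
            for (int s = 0; s < samplesPerSegment; s++)
                nodes.Add(CatmullRom(controlPoints[i], controlPoints[i + 1],
                                     controlPoints[i + 2], controlPoints[i + 3],
                                     s / (float)samplesPerSegment));
        return nodes;
    }
}
```

The actual tool emits our full pathing format rather than bare positions.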


  • Implementation of various miscellaneous world features (for example, see the train in the image below).
  • Implementation of a set of exercises and associated scoring checks (too many to discuss in any reasonable amount of space).



Sunday, September 30, 2012

Shroomsters


I've been working on a casual game, targeted at iPhone and Android, called Shroomsters. I was initially inspired by an old game I loved as a kid, Wonder Boy 3: The Dragon's Trap (I re-played it recently). However, as development progressed, it morphed into something quite different. At some stage I thought it would be interesting to reverse the situation from Mario Brothers and have mushrooms as the good guys against evil Italian plumbers. So the main protagonists are mushrooms, though the evil plumbers concept never materialized.



When I was in primary school and high school, I spent a lot of time day-dreaming and drawing, though after school I focused on other things and neglected drawing and illustration altogether. For this project I decided to do the art myself, and so I picked up illustration again (this time in Photoshop, occasionally scanning in hand-drawn art and re-creating it on PC). Although the art style is very simplistic, I'm quite happy with the end result.



Similarly, music is something I was passionate about but have neglected in recent years, so I decided to compose the soundtrack myself. I deliberately went for an old-school chiptunes feel, inspired by the music of the old 8-bit and 16-bit platform games and arcade games of that era. Examples of games whose music I particularly liked are Sonic the Hedgehog, Wonder Boy 3 and Ghouls 'n Ghosts.



The game features four campaigns, each with its own theme. These are differentiated by their visual backdrops, audio tracks and play styles (for example, some favour puzzle-type play, others favour the good old-fashioned left-to-right platformer style). Additionally, each campaign features a different (though sometimes overlapping) set of enemy monsters. The campaigns are called 'Mushroom Dawn', 'Ice Crags', 'Enchanted Garden' and 'Apocalypse City'.

The game was developed using the Unity engine. All programming was done in C#.


Tuesday, August 9, 2011

LHD Mining Vehicle Project

I am currently working on a mining vehicle sim. The vehicle is called an LHD (load-haul-dump vehicle). The company has done quite a few of these in the past, and each project often involves some degree of repetition. As such, we have identified various common components so that we can design a generic LHD framework as we develop this project, moving common functionality into generic classes (which project-specific classes will inherit from) as well as providing interfaces for project-specific classes to implement. Much of the design work is now out of the way, leaving internal logic, scoring and faults to be implemented on my side.


Sunday, March 20, 2011

Truck Simulation Using Durban Driving World

The project I'm currently working on is a truck driving training simulation. My role is to develop the world in such a way that AI vehicles follow the rules of the road, and so that we can evaluate driver performance via a fairly extensive collection of scoring tests that penalize the driver for failing to adhere to those rules. We are attempting to implement tests for each of the rules specified in the K53. The world being built is to serve as a generic driving world for future projects of this nature, but is based on Durban city.

The artificial intelligence engine is an extension of the system that we developed for a military project called CJOps a few years ago. That particular project was based in a rural environment, whereas this simulation is based in the bustling CBD of an urban environment. Significant additions were thus required in order to meet the criteria for this project.



AI vehicles and pedestrians are semi-autonomous. Their navigation behaviour requires that routes and paths be laid down in our in-house editor. These carry information such as traffic flow direction, speed limits, spawn points and the placement of world features such as stop streets, yield signs, traffic circles, traffic lights and intersections. Additional information such as traffic density and the specific types of vehicles that appear in different parts of the world is also specified in our editor and read in by the sim.

The computer-controlled vehicles and pedestrians choose their own objectives and destinations, follow routes as necessary, perform collision avoidance behaviour and react to one another in a realistic manner. Pedestrians will move out of the way of oncoming traffic and will wait until it is safe to cross streets. One of the modifications previously required was to limit the extent to which pedestrians avoided vehicles -- they were too effective at this, and we were concerned that this would provide negative training, as the driver of the sim could drive fairly recklessly without hitting pedestrians. Ironically, we had to dumb them down to rectify this.



Collision avoidance behaviour is implemented using predictive collision zones and vector math. By predictive collision zones I mean a collection of collision boxes owned by each AI vehicle that extend outward ahead of the vehicle by a distance determined by the current speed of the vehicle. These turn to follow the path of the vehicle rather than stretching directly ahead. As such, they basically tell you where the vehicle will be N seconds from now. When these intersect those of another vehicle, we can determine whether a collision is going to occur and have the vehicles behave appropriately (for example, slowing down, yielding or stopping).
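
As a sketch of how such a chain of boxes might be built (all names are hypothetical; the real system uses our in-house collision classes):

```csharp
using System.Collections.Generic;
using System.Numerics;

// Hypothetical sketch of the predictive collision zones: a chain of boxes
// laid along the vehicle's upcoming path, reaching further at higher speeds.
public struct CollisionZone
{
    public Vector3 Centre;
    public Vector3 Forward;     // orientation of this box, following the path
    public Vector3 HalfExtents;
}

public static class PredictiveZones
{
    // pathAhead: points along the vehicle's path, starting at the vehicle.
    // lookAheadSeconds: how far into the future the chain should reach.
    public static List<CollisionZone> Build(IReadOnlyList<Vector3> pathAhead,
                                            float speed, float lookAheadSeconds,
                                            Vector3 boxHalfExtents)
    {
        float reach = speed * lookAheadSeconds;  // total distance to cover
        var zones = new List<CollisionZone>();
        float covered = 0f;
        for (int i = 1; i < pathAhead.Count && covered < reach; i++)
        {
            Vector3 a = pathAhead[i - 1], b = pathAhead[i];
            Vector3 dir = Vector3.Normalize(b - a);  // assumes distinct points
            zones.Add(new CollisionZone
            {
                Centre = (a + b) * 0.5f,
                Forward = dir,          // boxes turn to follow the path
                HalfExtents = boxHalfExtents
            });
            covered += Vector3.Distance(a, b);
        }
        return zones;
    }
}
```

Testing these boxes against another vehicle's chain tells us roughly where and when the two paths will cross.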

Much information can also be gleaned using vector math (using dot product and cross product operations on various vectors such as forward vectors, right vectors and the vectors from one entity to another). For example, if we assume that vehicles drive on the left side of the road (as is the case in South Africa), then whenever a pedestrian crosses the road, vehicles will approach from the right hand side. By acquiring a list of all nearby vehicles and checking the dot product of the pedestrian's right vector and the forward vector of each vehicle, taking the movement speed and distance of the vehicle into consideration, we can determine whether the vehicle is approaching the pedestrian and likely to hit it. In this instance (assuming both vectors are normalised), if the dot product of the pedestrian's right vector and the vehicle's forward vector is close to -1 (with some threshold defining 'close to'), then the vehicle is approaching from the pedestrian's right hand side. This kind of logic is, for example, used by pedestrians to know whether it is safe to cross a road.
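
A minimal C# sketch of that test, assuming System.Numerics vectors and made-up threshold values:

```csharp
using System.Numerics;

public static class ApproachCheck
{
    // Returns true if the vehicle is coming at the pedestrian from the
    // pedestrian's right-hand side and is close and fast enough to matter.
    // Direction vectors are assumed normalised; thresholds are placeholders.
    public static bool ApproachingFromRight(Vector3 pedestrianRight, Vector3 pedestrianPos,
                                            Vector3 vehicleForward, Vector3 vehiclePos,
                                            float vehicleSpeed,
                                            float dotThreshold = -0.8f,
                                            float dangerDistance = 30f,
                                            float minSpeed = 0.5f)
    {
        // Dot product near -1: the vehicle's forward vector opposes the
        // pedestrian's right vector, i.e. it approaches from the right.
        if (Vector3.Dot(pedestrianRight, vehicleForward) > dotThreshold) return false;

        // Ignore distant or (near-)stationary vehicles.
        Vector3 toPedestrian = pedestrianPos - vehiclePos;
        if (toPedestrian.Length() > dangerDistance || vehicleSpeed < minSpeed) return false;

        // Finally, confirm the vehicle is actually heading toward the pedestrian.
        return Vector3.Dot(Vector3.Normalize(toPedestrian), vehicleForward) > 0f;
    }
}
```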



Of course, in order to ensure the AI engine performs well, world partitioning is required. Rather than using an octree or a quadtree we chose to use a dynamic grid-based system. Since the world is very large, the dynamic grid is re-calculated periodically based on the main vehicle's position. This grid allows us, for example, to filter out large numbers of vehicles when acquiring lists of nearby vehicles.
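
A simple sketch of such a grid (hypothetical names; the real system differs in detail):

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

// Hypothetical sketch of the dynamic grid: cells keyed relative to an origin
// that is periodically re-centred on the main vehicle, so nearby-entity
// queries touch only a handful of cells instead of the whole world.
public class DynamicGrid<T>
{
    readonly float cellSize;
    readonly Dictionary<(int, int), List<T>> cells = new Dictionary<(int, int), List<T>>();
    Vector3 origin;

    public DynamicGrid(float cellSize) { this.cellSize = cellSize; }

    (int, int) CellOf(Vector3 pos) =>
        ((int)MathF.Floor((pos.X - origin.X) / cellSize),
         (int)MathF.Floor((pos.Z - origin.Z) / cellSize));

    // Periodically re-centre the grid on the main vehicle; entities are
    // re-inserted afterwards by the AI update loop.
    public void Recentre(Vector3 mainVehiclePos) { origin = mainVehiclePos; cells.Clear(); }

    public void Insert(T entity, Vector3 pos)
    {
        var key = CellOf(pos);
        if (!cells.TryGetValue(key, out var list)) cells[key] = list = new List<T>();
        list.Add(entity);
    }

    // Gather entities in the 3x3 block of cells around a position.
    public IEnumerable<T> Nearby(Vector3 pos)
    {
        var (cx, cz) = CellOf(pos);
        for (int x = cx - 1; x <= cx + 1; x++)
            for (int z = cz - 1; z <= cz + 1; z++)
                if (cells.TryGetValue((x, z), out var list))
                    foreach (var e in list) yield return e;
    }
}
```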

In order for the logic governing the behaviour of computer-controlled entities and the logic for evaluating the driver's performance to work as intended, the world must be built to certain specifications, both on the artists' side (in the way the terrain geometry is modelled and the naming convention of materials) and from within the editor (much of the behaviour is determined by the placement of various zone types).

Painted lines on the road such as dotted lines, solid lines, parking lines, the lines at stop streets and yield signs and so on all use specific material names. Separate materials are also mapped to each lane. Ray-casting downwards allows us to check the terrain height at each wheel, as well as the material at that point. This allows us to determine when vehicles change lanes, cross solid white lines and so on. The same data used by the AI vehicles is also available to the main vehicle for scoring checks.
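
As an illustration, a line-crossing check might look something like this (the ray-cast API and the material naming used here are made up for the sketch):

```csharp
// Hypothetical sketch of the per-wheel downward ray-cast described above.
public struct SurfaceHit
{
    public float Height;
    public string MaterialName;
}

public static class LaneChecks
{
    // Supplied by the engine: cast a ray straight down and report what it hits.
    public delegate bool RaycastDown(float x, float z, out SurfaceHit hit);

    // Returns true if any wheel is currently over a solid white line.
    public static bool CrossingSolidLine(RaycastDown raycast, (float x, float z)[] wheelPositions)
    {
        foreach (var (x, z) in wheelPositions)
            if (raycast(x, z, out var hit) && hit.MaterialName.StartsWith("line_solid"))
                return true;  // material naming convention is assumed, see above
        return false;
    }
}
```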


Tuesday, September 28, 2010

Military Sim Promotional Images

This blog needs some prettying up, so I thought I'd post some work-related images. I work for a company that builds and develops mining and military training simulations. Part of my job is to maintain and enhance the company's in-house 3D simulation engine, but most of my time is spent on the military side of things. These are some promotional images we have lying around; I really like the look of these particular shots and thought they deserved some exposure.




Sunday, September 26, 2010

Implementing Particle Systems in Transcendence.

This weekend I added a particle system framework to Transcendence Engine (my own 3D games and simulations engine, which I intend to use for future 3D projects). I already had a sprite / billboard class in the engine, and so one would think that implementing particles would be as simple as creating many of these and controlling their behaviour at run-time based on various parameters read in from some data file.

However, this would have been a very inefficient solution because it would mean that each particle is rendered in a separate draw call. Once we reach a few hundred particles we would incur significant driver overhead and our graphics engine would quickly become CPU limited, preventing us from leveraging the power of the GPU.

The solution instead was to create a dynamic mesh class, which has a dynamic vertex buffer (but static index buffer), allowing us to modify the vertex data on the fly. In this way, the entire particle system is treated as a single object and is rendered using a single draw call instead of potentially many hundreds of draw calls.
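
A rough C# sketch of that layout (the vertex format and names are my own; the real class wraps the graphics API's buffer objects):

```csharp
using System.Numerics;

// Hypothetical sketch of the dynamic mesh idea: one vertex array rewritten
// every frame (the dynamic vertex buffer) and one index array built once
// (the static index buffer), so the whole batch is a single draw call.
public class ParticleBatch
{
    public struct Vertex { public Vector3 Position; public Vector2 Uv; public float Alpha; }

    public readonly Vertex[] Vertices;   // uploaded to the dynamic VB each frame
    public readonly ushort[] Indices;    // built once: two triangles per quad

    public ParticleBatch(int maxParticles)
    {
        Vertices = new Vertex[maxParticles * 4];
        Indices = new ushort[maxParticles * 6];
        for (int i = 0; i < maxParticles; i++)
        {
            int v = i * 4, n = i * 6;
            Indices[n + 0] = (ushort)(v + 0); Indices[n + 1] = (ushort)(v + 1);
            Indices[n + 2] = (ushort)(v + 2); Indices[n + 3] = (ushort)(v + 0);
            Indices[n + 4] = (ushort)(v + 2); Indices[n + 5] = (ushort)(v + 3);
        }
    }

    // Each frame: write four camera-facing corners per live particle, then
    // upload Vertices to the GPU and issue one indexed draw call.
    public void WriteQuad(int particle, Vector3 centre, Vector3 camRight, Vector3 camUp,
                          float halfSize, float alpha)
    {
        int v = particle * 4;
        Vector3 r = camRight * halfSize, u = camUp * halfSize;
        Vertices[v + 0] = new Vertex { Position = centre - r - u, Uv = new Vector2(0, 1), Alpha = alpha };
        Vertices[v + 1] = new Vertex { Position = centre - r + u, Uv = new Vector2(0, 0), Alpha = alpha };
        Vertices[v + 2] = new Vertex { Position = centre + r + u, Uv = new Vector2(1, 0), Alpha = alpha };
        Vertices[v + 3] = new Vertex { Position = centre + r - u, Uv = new Vector2(1, 1), Alpha = alpha };
    }
}
```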

I spent a fair bit of time on designing this part of the system, because having a well-designed dynamic mesh class is hugely advantageous to other areas of the engine as well. For example, my ocean surfaces are implemented as flat, well-tessellated grids whose vertices are deformed in the vertex program based on the summation of sine waves. This looks pretty decent by older standards, but does not hold up against some of the uber-realistic water shaders we see in demos and cutting edge games today. Using sine waves is limiting -- ideally you want Perlin noise. This is doable in vertex shaders if you use vertex texture sampling (and I do intend to explore this route). However, performing the vertex deformation in the application itself rather than in the shaders is advantageous because (a) vertex texture sampling is not always available, and (b) your application has full knowledge of what is happening in the scene and can thus have the water surface respond to other stimuli (for example, barrels or crates being dropped into the water). In short, a dynamic mesh class has many uses beyond being used only for particle systems, and so some careful design and planning was warranted.
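
The sine summation itself is simple; here is a hedged sketch of the height function (the wave parameterisation is my own, and the real version runs in the vertex program):

```csharp
using System;

// Hypothetical sketch of the sine-wave ocean deformation: the height of each
// grid vertex is a sum of sine waves with different directions, wavelengths,
// amplitudes and speeds.
public static class OceanWaves
{
    public struct Wave { public float Amplitude, Wavelength, Speed, DirX, DirZ; }

    public static float Height(float x, float z, float time, Wave[] waves)
    {
        float h = 0f;
        foreach (var w in waves)
        {
            float k = 2f * MathF.PI / w.Wavelength;  // wave number
            float phase = k * (x * w.DirX + z * w.DirZ) + time * w.Speed;
            h += w.Amplitude * MathF.Sin(phase);
        }
        return h;
    }
}
```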

Having implemented and tested my dynamic mesh class I was ready to move on to using it to implement a particle system class. This part was pretty straightforward. I've worked on this sort of thing at both Digital Extremes and ThoroughTec, and have also worked with Ogre's particle systems a lot; most particle system classes tend to expose a similar set of parameters.

Mine is at a fairly early stage and does not support anything too fancy. Currently the parameters that control particle system behaviour are (a sketch of these as a template class follows the list):
- particle life time
- particle fade in time
- particle fade out time
- initial particle dimensions
- scale rate
- emission frequency
- number of texture divisions
- initial velocity in object space
- acceleration in object space
- acceleration in world space
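
A sketch of these parameters as a template class (the field names are mine, not the engine's actual ones):

```csharp
using System.Numerics;

// Hypothetical sketch of the template parameters listed above, as they might
// be read in from a particle system definition file.
public class ParticleSystemParams
{
    public float LifeTime;            // seconds a particle exists
    public float FadeInTime;          // particles always fade in from transparent
    public float FadeOutTime;         // ...and fade out again before dying
    public Vector2 InitialDimensions; // starting width/height of the quad
    public float ScaleRate;           // additive units per second, not a multiplier
    public float EmissionFrequency;   // seconds between emissions (not particles/sec)
    public int TextureDivisions;      // NxN sub-images within the particle texture
    public Vector3 InitialVelocityObjectSpace;
    public Vector3 AccelerationObjectSpace;  // relative to the emitter's orientation
    public Vector3 AccelerationWorldSpace;   // e.g. gravity, wind
}
```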

I chose to have explicit fade in and fade out times, assuming that particles will always start off fully transparent and fade in, and will also always fade out again at the end (any other behaviour always tends to look jarring). My scale rate is additive rather than multiplicative because I prefer a stable, constant change in size rather than having particles shrink drastically the smaller they are, or quickly balloon out when they become large. Most particle systems I've worked with expose an emission rate rather than an emission frequency (for example, emit 10 particles per second rather than emit a particle every 0.1 seconds). I decided that an emission frequency is a little more intuitive, perhaps not to us programmer types, but to artists. For example, a particle emission rate of 0.5 means emit a particle every two seconds, but may confuse anybody who is not mathematically inclined (does it mean we emit half a particle every second?). I'm not trying to insult the intelligence of the intended users here -- I just feel that if there is an alternative that is clearer and less likely to cause confusion, that is the better option.

The last point is the velocity and acceleration. Ordinarily you would just need a single acceleration value in world space to allow for forces like gravity and wind. This would allow, for example, smoke or dust to sail across the scene as if being carried by the wind, or water emitted from a fountain to eventually slow and fall back to the ground.

However, I chose to add an additional parameter here, differentiating between acceleration in object space and in world space. Having an acceleration that works in object space (by this I mean that it works relative to the position and orientation of the emitter) allows for interesting particle behaviours that work as intended even when you rotate or move the particle system as a whole. For example, I have used it for an explosion effect where the particles, after rushing outward for a while, suddenly reverse direction and rush back inward, giving an imploding effect instead. Not particularly realistic, but great for fantasy or sci-fi games.
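
A minimal sketch of the per-particle integration combining the two spaces (System.Numerics types; names are mine):

```csharp
using System.Numerics;

// Hypothetical sketch: the object-space acceleration is rotated by the
// emitter's current orientation before being applied, so the effect stays
// correct when the emitter moves or rotates.
public static class ParticleIntegrator
{
    public static void Step(ref Vector3 position, ref Vector3 velocity,
                            Quaternion emitterOrientation,
                            Vector3 accelObjectSpace, Vector3 accelWorldSpace,
                            float dt)
    {
        // Transform the object-space acceleration into world space, then add
        // the world-space acceleration (gravity, wind, etc.).
        Vector3 accel = Vector3.Transform(accelObjectSpace, emitterOrientation)
                      + accelWorldSpace;
        velocity += accel * dt;
        position += velocity * dt;
    }
}
```

For the implosion effect, the object-space acceleration simply points back toward the emitter, opposing the particles' initial outward velocity.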





Another important part of implementing the particle system framework was integrating it into Transcendence Engine's resource system. Transcendence supports hot-loading, which in our case means either that (a) if the relevant option is enabled, modifying a resource is picked up by the engine immediately and the results are instantly visible in-game, without you having to re-start, or (b) if the relevant option is not enabled (for performance reasons), you simply instruct the engine to check for resource changes by selecting an option in a menu or hitting a hotkey. It is very gratifying to tweak a few textures, re-size a mesh, alter a line in a shader, then alt-tab back to the engine (which was running the whole time), hit F11 and watch all of the changes instantly take effect.

It seemed to me that tweaking particle parameters was another situation where instant feedback would be very advantageous, and so from the start I planned it so that it would fit into Transcendence's resource framework. The particle system itself is not regarded as a resource (since multiple instances of the same type may exist). Instead, I provide a separate class called a ParticleSystemTemplate, which inherits from the Resource class, provides its own resource handler, and thus gets all of this behaviour for free. Each time a particle system is created, it asks the particle resource handler for a pointer to the particle system template associated with the given file name. If we modify that file while the engine is running, the changes are detected and the resource handler instructs the particle system template to re-load itself. The particle system class, in its update function, detects that a dirty flag has been set on the particle system template, and then re-constructs itself.
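
In sketch form the template/instance split looks something like this. The dirty flag is shown here as a version counter, a simple variant that still behaves correctly when several instances share one template; the class and method names are mine, not Transcendence's actual API:

```csharp
// Hypothetical sketch of hot-reloading via a shared template.
public class ParticleSystemTemplate /* : Resource */
{
    public int Version { get; private set; }

    // Called by the resource handler when the source file changes on disk.
    public void Reload(string fileName)
    {
        // ...re-parse emission, lifetime, acceleration etc. from fileName...
        Version++;   // signals every live instance that it is now out of date
    }
}

public class ParticleSystemInstance
{
    readonly ParticleSystemTemplate template;
    int seenVersion;

    public ParticleSystemInstance(ParticleSystemTemplate template)
    {
        this.template = template;
        seenVersion = template.Version;
    }

    public void Update(float dt)
    {
        // If the template was hot-reloaded, rebuild before the normal update.
        if (seenVersion != template.Version)
        {
            Reconstruct();
            seenVersion = template.Version;
        }
        // ...normal particle simulation...
    }

    void Reconstruct() { /* re-read all parameters from the template */ }
}
```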

While developing the particle system framework I became aware of some shortcomings in my graphics object class hierarchy. The intention of this class hierarchy is to allow new graphics object types to be created and inserted into the hierarchy, and to have them automatically support frustum culling, cast and receive shadows, have their transform matrices handled, their materials applied and their shader parameters updated. A good few hours of re-factoring were required to make the system work as intended, but in the end it turned out well. Particle systems are currently culled using axis-aligned bounding boxes (which are updated on the fly). I do not currently intend to provide support for individual particle culling (I do not believe it would provide performance benefits). As for casting and receiving shadows, this is currently untested, but in theory, once the shaders are correctly set up, it should just work.

At this point the technical work for my particle systems is close to complete. The end result is a particle system that is efficient (fewer draw calls, fits into the culling framework) and which supports hot-loading.

In future I intend to implement soft particles in Transcendence. Soft particles are a technique that improves the visual appeal of particle systems by having them become transparent at points where they intersect other surfaces. For example, without support for soft particles, smoke rising through a vent on the ground would not look quite right, because the hard, straight line where the particle intersects the geometry representing the vent would be very obvious and jarring. Soft particles work by having the pixel shader sample the scene's depth buffer so it can compare its own depth to the existing scene depth. If the two results are very similar the pixel is made transparent. The end result is that particles no longer have that unrealistic hard-edged appearance when intersecting other scene geometry. Another great benefit of soft particle support is that other surfaces (most notably water) can be rendered in the soft particle layer. I implemented soft particles in ThoroughTec's in-house engine a few months ago. Although the technique works really well for dust and smoke, the most noticeable visual improvement was to our water, which now has a soft transition as it meets the shore. The terrain for our current project has an extensive network of rivers, as well as oceans and lakes. The lack of hard edges makes the water look significantly more realistic and has improved the overall look of the terrain.
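
The core of the fade is a single depth comparison. A hedged sketch of the math in plain C# (in practice this runs in the pixel shader, and fadeDistance is a tweakable value I've made up here):

```csharp
using System;

// Hypothetical sketch of the soft-particle fade factor.
public static class SoftParticles
{
    // sceneDepth: depth already in the buffer at this pixel (view-space units).
    // particleDepth: depth of the particle fragment being shaded.
    // fadeDistance: depth separation over which the fade occurs.
    public static float FadeFactor(float sceneDepth, float particleDepth, float fadeDistance)
    {
        // 0 where the particle touches existing geometry, 1 once it is a full
        // fadeDistance in front of it; multiply the particle's alpha by this.
        float t = (sceneDepth - particleDepth) / fadeDistance;
        return Math.Max(0f, Math.Min(1f, t));
    }
}
```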

There is a moderate amount of work required before Transcendence Engine is ready for this, but it should be pretty straightforward to repeat the process.

Implementing thermal imaging.

Over the last two days I've been working on simulating the thermal imaging sensor (TIS) for a military training simulator project. It supports two modes: white hot and black hot. In white hot mode hotter objects are brighter. The sky tends to be dark grey to black, the terrain varies from dark grey to white (for hot sand and roads during the day), and people and vehicles tend to be light grey to white. Black hot is just the opposite (hotter objects are darker), which meant I only really needed to implement white hot and add a final step to my post-processing shader that inverts the image when black hot is enabled.





Previously I had implemented night vision and incorporated it into our post-fx pipeline. This entailed adding a custom high-contrast bright pass filter, a green filter (tweaked in various ways to match the reference videos) and a grainy effect (similar to film grain). The excessive contrast works well with the bloom part of the pipeline to give a realistic, dramatic blooming effect when parts of the rendered image become brighter (for example, when weapons are fired).

I used the same part of the post-fx pipeline to implement thermal imaging, but applied a greyscale filter instead, and used significantly less contrast. At this stage the result was a grainy greyscale image, but there was no differentiation between hot and cold objects.

Looking at the reference videos I observed that there is significantly more contrast on the thermal imaging camera during the day than at night. I implemented this in the application code itself rather than in shaders. We have a dynamic time of day and weather system which allows for any time of day and weather combination, each with its own set of diffuse, ambient, fog and sky parameters. When thermal imaging is enabled I intercept the calls that set these parameters and supply an alternative set of parameters that match the reference videos. Fortunately, I had reference videos showing TIS enabled during transitions from day to night and vice versa, so I could run the video alongside our simulator, dramatically increase the time-lapse rate in our simulator, and observe the results as I tweaked the values. As for the sky, I modified the dynamic skydome shader so that it bypasses the part where it derives colours for the various regions of the sky based on time of day and weather, and instead outputs a single colour based on time of day.
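
In sketch form, the interception might look like this (the structure and the placeholder values here are mine; the real parameter set and the values tuned against the reference videos differ):

```csharp
// Hypothetical sketch of swapping lighting parameters when the thermal
// sensor is active, with more contrast by day than by night.
public struct LightingParams
{
    public float DiffuseIntensity, AmbientIntensity, FogDensity;
}

public class LightingDispatcher
{
    public bool ThermalImagingEnabled;

    // timeOfDay01: 0..1 over a full day, with midday around 0.5.
    LightingParams ThermalOverride(float timeOfDay01)
    {
        bool day = timeOfDay01 > 0.25f && timeOfDay01 < 0.75f;
        return new LightingParams
        {
            DiffuseIntensity = day ? 1.4f : 0.6f,  // placeholder values
            AmbientIntensity = day ? 0.5f : 0.3f,
            FogDensity = 0.02f
        };
    }

    // Sits between the time-of-day system and the renderer.
    public LightingParams Resolve(LightingParams normal, float timeOfDay01) =>
        ThermalImagingEnabled ? ThermalOverride(timeOfDay01) : normal;
}
```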

In the reference videos the foliage tends to range from dark grey to black (it would seem that plants aren't particularly warm). This was easy enough: I simply intercepted the calls to the SpeedTree API and modified the lighting and fog parameters to give the desired look.

At this stage the scene looked very close to the reference videos for the most part, except for hot objects -- we needed some way to make infantry and vehicles stand out white against the dark backdrop. Initially I was going to do this with shaders -- either in the specific shaders for the infantry and vehicles or via post-processing. However, in the real world various vehicles have very specific heat signatures. For example, some tanks are specifically designed so that they have low heat signatures, so that they don't stand out when TIS is enabled. Similarly, many tanks, trucks and even infantry show very specific heat patterns. For example, a tank may appear dark everywhere except around its engine, which glows white, allowing it to be uniquely identified by a knowledgeable observer. As a result, a generic solution such as this would not suffice.

To obtain realistic results we needed the artists to provide an additional set of customized textures, one for each vehicle and infantry unit type. When thermal imaging is enabled, the graphics engine is instructed to swap out the default set of diffuse maps for an alternative set of heat maps. The result is a final rendered image that is very close to the real thing.