Friday, June 21, 2013

Voxel Systems on a Distributed Architecture

One of our current projects is a continuous miner -- a mining machine designed to cut coal and soft minerals. The operator will, to some extent, be free to dig / cut where he pleases, which meant we needed to develop new technology to support that. At a tech meeting a few months back we concluded that voxels would be a good fit for these requirements, and so for the past few months I have been working on developing a voxel framework and integrating it into our engine. For this particular project there will be three separate voxel systems in the world, each representing a 'diggable' / destroyable part of the world.




We were a bit wary starting off, knowing that developing voxel technology isn't an undertaking to be taken lightly, especially given time constraints and some relatively unique aspects of our architecture.

One of the challenges I've faced is ensuring that the voxel systems work with our recording / playback system. A feature of our sims is the ability to store and play back exercises, which works by storing the state of the system at short intervals into a database. Voxels require a massive amount of data to represent, and saving it all to disk many times a second simply wasn't feasible.

In addition to this, we have a distributed architecture, whereby the core sim logic is performed on a dynamics node, which then pumps out network packets to multiple graphics nodes. To clarify: we have one PC acting as a server, receiving the input from the driver and handling the main logic and physics. The state of the sim is handled on this main server and is propagated to various client PCs via network packets. Each of these client PCs is a graphics node, each rendering to its own screen (the operator will typically be surrounded by several screens showing the 3D world).

Again, the amount of data required for a voxel representation of even part of the world is much larger than most people realize. This data set needs to exist on each node. As the vehicle digs into the rock (i.e. as voxels are destroyed), the data set changes, and these changes need to be propagated to the client machines so that the visual representation is updated as well.

My initial implementation was to go the typical sparse voxel octree route. However, the fact that I needed to maintain the same data set over several nodes made this option less attractive than a slightly simpler, flat representation. I chose instead to partition the entire system into a fixed number of 'clumps'. So, for example, you might create a voxel system that is 10m x 10m x 10m, specify that each clump should be 1m x 1m x 1m, and that each voxel should be 10cm x 10cm x 10cm. In this instance, you would have 1000 voxel clumps, and each of those clumps would consist of 1000 voxels.
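To make that layout concrete, here is a minimal sketch of how such a flat clump partition might be indexed, using the 10m / 1m / 10cm example above. The struct and names are illustrative only, not our engine's actual code.

```cpp
#include <cstdint>
#include <vector>

struct VoxelSystem
{
    // Example from the post: a 10m cube, 1m clumps, 10cm voxels.
    static const int kClumpsPerAxis = 10;   // 10 x 10 x 10 = 1000 clumps
    static const int kVoxelsPerAxis = 10;   // per clump: 10 x 10 x 10 = 1000 voxels

    struct Clump
    {
        uint8_t voxels[kVoxelsPerAxis * kVoxelsPerAxis * kVoxelsPerAxis]; // 1 byte per voxel
    };

    std::vector<Clump> clumps;

    VoxelSystem() : clumps(kClumpsPerAxis * kClumpsPerAxis * kClumpsPerAxis) {}

    // Map a global voxel coordinate to its clump, then to the voxel within that clump.
    uint8_t& voxelAt(int x, int y, int z)
    {
        int cx = x / kVoxelsPerAxis, cy = y / kVoxelsPerAxis, cz = z / kVoxelsPerAxis;
        int vx = x % kVoxelsPerAxis, vy = y % kVoxelsPerAxis, vz = z % kVoxelsPerAxis;
        Clump& c = clumps[(cz * kClumpsPerAxis + cy) * kClumpsPerAxis + cx];
        return c.voxels[(vz * kVoxelsPerAxis + vy) * kVoxelsPerAxis + vx];
    }
};
```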

Rather than transmitting the entire data set per network packet update (and similarly, rather than saving the entire data set to the database each time), I transmit only a single voxel clump. Currently, each voxel is represented as a single byte. So for each network update I would be sending across (and storing), in the example described above, 1000 bytes, which is not too bad.
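As a rough illustration (the actual packet layout in our engine is not shown here), a per-clump update amounts to little more than a clump identifier plus its kilobyte of voxel bytes:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical wire format for a single-clump update: which clump changed,
// followed by its raw voxel data (1000 voxels at 1 byte each).
struct ClumpUpdatePacket
{
    uint32_t clumpIndex;
    uint8_t  voxels[1000];
};

// Fill a packet from a clump's current state before sending it or writing it
// to the recording database.
inline ClumpUpdatePacket makeClumpUpdate(uint32_t index, const uint8_t* clumpVoxels)
{
    ClumpUpdatePacket packet;
    packet.clumpIndex = index;
    std::memcpy(packet.voxels, clumpVoxels, sizeof(packet.voxels));
    return packet;
}
```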



I was considering bit-packing them so that each voxel was one bit instead (which would mean an eighth of the memory). Unfortunately, voxels require more than a binary state, and so this isn't feasible. One would think that a binary state is sufficient (after all, each voxel represents either solid space or empty space). But it's a bit more complicated than that, since I need to keep track of other information that allows me to generate renderable geometry representing only the shell (that is, the outer layer of voxels). So a voxel's state can be one of the following (a rough sketch follows the list):

- empty space
- solid space adjacent to empty space
- solid space surrounded by solid space
- solid space at one of the outer edges of the system
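Here is a sketch of those four states and the sort of neighbour check that could decide between them. The enum values, the isSolid / inBounds helpers, and the priority between the boundary and shell cases are all assumptions of mine, not the engine's actual logic.

```cpp
#include <cstdint>

enum VoxelState : uint8_t
{
    kEmpty         = 0,  // empty space
    kSolidShell    = 1,  // solid space adjacent to empty space
    kSolidInterior = 2,  // solid space surrounded by solid space
    kSolidBoundary = 3   // solid space at one of the system's outer edges
};

bool isSolid(int x, int y, int z);   // assumed: true if the voxel at (x, y, z) is solid
bool inBounds(int x, int y, int z);  // assumed: true if (x, y, z) lies inside the system

// Classify a solid voxel by examining its six face neighbours.
VoxelState classifySolidVoxel(int x, int y, int z)
{
    static const int offsets[6][3] = {
        {  1, 0, 0 }, { -1, 0, 0 }, { 0,  1, 0 },
        { 0, -1, 0 }, { 0, 0,  1 }, { 0, 0, -1 }
    };
    for (int i = 0; i < 6; ++i)
    {
        int nx = x + offsets[i][0], ny = y + offsets[i][1], nz = z + offsets[i][2];
        if (!inBounds(nx, ny, nz))
            return kSolidBoundary;   // touches the outside of the system
        if (!isSolid(nx, ny, nz))
            return kSolidShell;      // adjacent to empty space: part of the renderable shell
    }
    return kSolidInterior;           // fully surrounded, never rendered
}
```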

One of the issues I have faced so far is that, as you dig into the rock, the changes need to be sent over the network, and you're only sending information for a single clump at a time. If you destroy many voxels very quickly it takes a while for all of that information to be propagated, so you see the visual representation sort of fizzling and eating away at itself more slowly than it should. In the actual sim this shouldn't be a problem, for two reasons -- firstly, you can only dig through rock at a fairly slow rate; secondly, your vision will be obscured by the massive machinery, dust etc.
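The lag falls out of the one-clump-per-update design, which you can picture as a queue of modified clumps drained one entry per network tick. The sketch below is my own simplification (sendClumpUpdate is a placeholder, and a real implementation would avoid queueing the same clump twice), not the engine's code.

```cpp
#include <cstdint>
#include <deque>

void sendClumpUpdate(uint32_t clumpIndex);  // assumed: sends ~1000 bytes of voxel data

std::deque<uint32_t> dirtyClumps;           // clumps whose voxels changed since their last send

void markClumpDirty(uint32_t clumpIndex)
{
    dirtyClumps.push_back(clumpIndex);
}

// Called once per network update tick on the dynamics node: only one clump
// goes out, so a burst of digging takes several ticks to reach the clients.
void sendNextClumpUpdate()
{
    if (dirtyClumps.empty())
        return;
    uint32_t index = dirtyClumps.front();
    dirtyClumps.pop_front();
    sendClumpUpdate(index);
}
```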

Another problem is the vehicle physics. One of the main ways our physics engine handles collision detection and response is with collision lines. We perform intersection tests of various programmer-defined lines against the world geometry, and the dynamics team bases the vehicle's behaviour on the reactions to those. I had to implement line collision tests for the voxel system. Although there are many, many optimizations that can be (and have been) made, in the end there is always going to be some unavoidable brute-force 'is this line intersecting this box' logic going on. So far, though, we seem to be OK. We won't really know if the current optimizations are enough to yield suitable performance until we're a bit further in.
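For reference, the brute-force core of such a test is typically a segment-vs-axis-aligned-box check like the standard slab method below. This is a generic sketch with made-up types, not our engine's API.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns true if the segment from p0 to p1 intersects the box [boxMin, boxMax].
bool segmentIntersectsBox(const Vec3& p0, const Vec3& p1,
                          const Vec3& boxMin, const Vec3& boxMax)
{
    const float start[3] = { p0.x, p0.y, p0.z };
    const float end[3]   = { p1.x, p1.y, p1.z };
    const float bmin[3]  = { boxMin.x, boxMin.y, boxMin.z };
    const float bmax[3]  = { boxMax.x, boxMax.y, boxMax.z };

    float tEnter = 0.0f, tExit = 1.0f;  // parametric range of the segment still inside all slabs
    for (int axis = 0; axis < 3; ++axis)
    {
        float dir = end[axis] - start[axis];
        if (std::fabs(dir) < 1e-8f)
        {
            // Segment is parallel to this slab: it must already lie within it.
            if (start[axis] < bmin[axis] || start[axis] > bmax[axis])
                return false;
            continue;
        }
        float t0 = (bmin[axis] - start[axis]) / dir;
        float t1 = (bmax[axis] - start[axis]) / dir;
        if (t0 > t1) std::swap(t0, t1);
        tEnter = std::max(tEnter, t0);
        tExit  = std::min(tExit, t1);
        if (tEnter > tExit)
            return false;               // the slab intervals no longer overlap: no hit
    }
    return true;
}
```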