
Saturday, July 4, 2015

Export your creations

We just completed a new iteration on the FBX export feature. This new version is able to bake textures along with the geometry. Check it out in this video:


The feature seems rather simple to the user; under the hood, however, there is a massive amount of trickery going on.

When you look at a Voxel Farm scene, a lot of what you see is computed in real time. The texturing of individual pixels happens on the GPU, where the different attributes that make up each voxel material are evaluated on the fly. If you are exporting to a static medium like an FBX file, you cannot have any dynamic elements computed on the fly. We had no choice but to bake a texture for each mesh fragment.

The first step is to unwrap the geometry and flatten it so it fits a square 2D surface. Here we compute UV coordinates for each triangle. The challenge is fitting all the triangles into the square while minimizing wasted space and texture distortion.
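To give a rough idea of what the packing step involves, here is a minimal C++ sketch. It assumes each chart (a connected group of unwrapped triangles) has already been reduced to its 2D bounding box, and it packs with a simple shelf strategy; take it as an illustration of the constraint rather than the actual algorithm we use:

```cpp
#include <algorithm>
#include <vector>

// A chart is a group of unwrapped triangles, reduced here to its 2D bounds.
struct Chart {
    float w, h;   // size in texels
    float x, y;   // position assigned in the atlas
};

// Shelf packing: sort charts by height, then fill the square row by row.
// Returns false if the charts do not fit at this atlas size.
bool packCharts(std::vector<Chart>& charts, float atlasSize)
{
    std::sort(charts.begin(), charts.end(),
              [](const Chart& a, const Chart& b) { return a.h > b.h; });

    float cursorX = 0.0f, cursorY = 0.0f, shelfH = 0.0f;
    for (Chart& c : charts)
    {
        if (cursorX + c.w > atlasSize)   // row full: start a new shelf
        {
            cursorX = 0.0f;
            cursorY += shelfH;
            shelfH = 0.0f;
        }
        if (cursorY + c.h > atlasSize)   // out of vertical space
            return false;
        c.x = cursorX;
        c.y = cursorY;
        cursorX += c.w;
        shelfH = std::max(shelfH, c.h);
    }
    return true;
}
```

A real packer would also rotate charts and track free rectangles to cut down on waste, but even this naive version makes the trade-off visible: tighter packing means more texture resolution per triangle.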

Here is an example of how a terrain chunk is unwrapped into a collection of triangles carefully packed into a square:


The image also shows that different texture channels like diffuse and normal can then be written into the final images. That is the second and last step, but there is an interesting twist here.
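Before getting to the twist, here is a simplified sketch of what that baking loop looks like. The evaluateMaterial call is a hypothetical stand-in for the per-pixel work the GPU normally does at render time; the bake calls the equivalent on the CPU for every texel covered by a triangle:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };
struct Vertex { Vec3 pos; Vec3 nrm; Vec2 uv; };

// Hypothetical stand-in for the per-pixel material evaluation the GPU
// performs at render time.
void evaluateMaterial(const Vec3& pos, const Vec3& nrm,
                      Vec3& outDiffuse, Vec3& outNormal);

static Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c,
                  float w0, float w1, float w2)
{
    return { w0 * a.x + w1 * b.x + w2 * c.x,
             w0 * a.y + w1 * b.y + w2 * c.y,
             w0 * a.z + w1 * b.z + w2 * c.z };
}

// Rasterize one triangle in UV space, writing every covered texel into
// the diffuse and normal channel images of the atlas.
void bakeTriangle(const Vertex v[3], int atlasSize,
                  std::vector<Vec3>& diffuseImg,
                  std::vector<Vec3>& normalImg)
{
    const Vec2 a = v[0].uv, b = v[1].uv, c = v[2].uv;

    // Texel bounding box of the triangle, clamped to the atlas
    int minX = std::max(0, (int)std::floor(std::min({a.u, b.u, c.u}) * atlasSize));
    int maxX = std::min(atlasSize - 1, (int)std::ceil(std::max({a.u, b.u, c.u}) * atlasSize));
    int minY = std::max(0, (int)std::floor(std::min({a.v, b.v, c.v}) * atlasSize));
    int maxY = std::min(atlasSize - 1, (int)std::ceil(std::max({a.v, b.v, c.v}) * atlasSize));

    float den = (b.v - c.v) * (a.u - c.u) + (c.u - b.u) * (a.v - c.v);
    if (den == 0.0f)
        return; // degenerate triangle

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            // Barycentric coordinates of the texel center
            Vec2 p = { (x + 0.5f) / atlasSize, (y + 0.5f) / atlasSize };
            float w0 = ((b.v - c.v) * (p.u - c.u) + (c.u - b.u) * (p.v - c.v)) / den;
            float w1 = ((c.v - a.v) * (p.u - c.u) + (a.u - c.u) * (p.v - c.v)) / den;
            float w2 = 1.0f - w0 - w1;
            if (w0 < 0 || w1 < 0 || w2 < 0)
                continue; // texel center is outside the triangle

            // Reconstruct the surface attributes at this texel...
            Vec3 pos = lerp3(v[0].pos, v[1].pos, v[2].pos, w0, w1, w2);
            Vec3 nrm = lerp3(v[0].nrm, v[1].nrm, v[2].nrm, w0, w1, w2);

            // ...and evaluate the material just as the shader would
            evaluateMaterial(pos, nrm,
                             diffuseImg[y * atlasSize + x],
                             normalImg[y * atlasSize + x]);
        }
}
```

A production bake also needs a few texels of gutter padding around each chart so bilinear filtering does not bleed between neighbors.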

Since we are creating a texture for the mesh anyway, it would be a good opportunity to include features not present in the geometry at the current level of detail. For instance, consider these tree stumps that are rendered using geometry at the highest level of detail:


Each subsequent level of detail has lower resolution. If we go up five levels, the geometric resolution won't be fine enough for these stumps to register: since each level roughly halves the resolution, the same triangle budget now has to cover about 32 times the extent. Wherever there was a stump, we may now get just a section of a much larger triangle.

Now, for the FBX export we are allocating unique texture space for these triangles. At this level of detail the texture resolution may still be enough for the stumps to register. So instead of evaluating the texture for the low-resolution geometry, we project a higher-resolution model of the same space onto the low-detail geometry. Here you can see the results:


Note how a single triangle can contain the projected image of a tree stump.
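In simplified form, that projection step can be pictured like this. Both raycastHighLOD and evaluateMaterial are hypothetical stand-ins; the sketch only illustrates the idea of sampling the detailed surface instead of the coarse one:

```cpp
struct Vec3 { float x, y, z; };

// Stand-in declarations: a ray query against the high-resolution mesh of
// the same cell, and the coarse material evaluation used as a fallback.
bool raycastHighLOD(const Vec3& origin, const Vec3& dir, float maxDist,
                    Vec3& hitDiffuse, Vec3& hitNormal);
void evaluateMaterial(const Vec3& pos, const Vec3& nrm,
                      Vec3& outDiffuse, Vec3& outNormal);

// For each texel of the coarse mesh, sample the detailed surface nearby
// instead of the coarse triangle's own material.
void bakeTexelWithDetailTransfer(const Vec3& pos, const Vec3& nrm,
                                 float searchDist,
                                 Vec3& outDiffuse, Vec3& outNormal)
{
    // Start a little above the coarse surface and cast back down along
    // the normal, so features that rise above it (like the stumps) are hit.
    Vec3 origin = { pos.x + nrm.x * searchDist,
                    pos.y + nrm.y * searchDist,
                    pos.z + nrm.z * searchDist };
    Vec3 down = { -nrm.x, -nrm.y, -nrm.z };

    if (!raycastHighLOD(origin, down, 2.0f * searchDist,
                        outDiffuse, outNormal))
    {
        // No detailed surface within range: keep the coarse material
        evaluateMaterial(pos, nrm, outDiffuse, outNormal);
    }
}
```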

This process is still very CPU-intensive, as we need to compute higher-resolution versions of the low-resolution cells. This iteration was mostly about getting the feature working and available to users. We will be optimizing it in the near future.

The algorithms used here are included in the SDK and engine source code. This sort of technique is called "Detail Transfer" or "Detail Recovery". It is also the cornerstone of a much better-looking LOD system, as very rich voxel/procedural content can be captured and projected on top of fairly simple geometry.

Friday, June 26, 2015

Castle by the lake

Here is a castle made using our default castle grammar set. It was captured from a new version of the Voxel Studio renderer we will be rolling out soon. The rendering is still a work in progress (the texture LOD transitions are not blended at all, the material scale is off, etc.), so some imagination on your side is required. Also, the castle is only partially complete, as we intend to fill the little island.


There are no mesh props here; everything on screen comes from voxels. There are also no handmade voxel edits; this is strictly the output of grammars. Note that the same few grammars are applied over and over. For instance, curved walls are created with the same wall grammar used for the straight walls; they just run in a curved scope.
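The grammar system itself is too large to show here, but a toy sketch can illustrate the scope idea: the grammar emits modules in local scope coordinates, and it is the scope's mapping to world space, straight or curved, that bends the result. Everything in this sketch is illustrative, not the actual grammar runtime:

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

struct Vec2 { float x, y; };

// A scope maps a local parametric coordinate t in [0,1] to world space.
// The grammar never knows whether the scope is straight or curved.
using Scope = std::function<Vec2(float)>;

// Toy "wall grammar": subdivide the scope into equal segments and emit
// one wall module at the center of each.
void runWallGrammar(const Scope& scope, int segments)
{
    for (int i = 0; i < segments; ++i)
    {
        float t = (i + 0.5f) / segments;
        Vec2 p = scope(t);
        std::printf("wall module %d at (%.2f, %.2f)\n", i, p.x, p.y);
    }
}

int main()
{
    // Straight scope: a line from (0,0) to (10,0)
    Scope straight = [](float t) { return Vec2{ 10.0f * t, 0.0f }; };

    // Curved scope: a quarter arc of radius 10 around the origin
    Scope curved = [](float t) {
        float a = t * 3.14159265f * 0.5f;
        return Vec2{ 10.0f * std::cos(a), 10.0f * std::sin(a) };
    };

    runWallGrammar(straight, 8);  // straight wall
    runWallGrammar(curved, 8);    // same grammar, bent by the scope
}
```

Running both scopes produces the same module sequence; only the world positions differ.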


This is a scene we are creating to test some cool new systems we are developing to handle high-density content.

Sunday, May 31, 2015

Evolution of Procedural

This post is mostly about how I feel about procedural generation and where we will be going next.

A couple of months ago we released Voxel Farm and Voxel Studio. Many of you have already played with them, and we often get the same questions: why so much focus on artist input? What happened to building entire worlds with the click of a button?

The short answer is that "classic" real-time procedural generation is bad and should be avoided. But if you stop reading here you will get the wrong idea, so I really encourage you to get to my final point.

While you can customize our engine to produce any procedural object you may think of, our current out-of-the-box components favor artist-generated content. This is what our current workflow looks like:


It can be read like this: the world is the result of multiple user-supplied data layers that are shuffled together. Variety is introduced at the mixer level, so there is a significant procedural aspect to this, but at the same time the final output is limited to combinations of the samples a human provided as input files.
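As a toy illustration of that idea, the sketch below mixes two stand-in height layers using a seeded hash as the shuffle source. The layer functions and the mixer are hypothetical simplifications; the point is only that variety comes from where the mixer places each layer, while every output value remains a combination of the authored inputs:

```cpp
#include <cmath>
#include <cstdint>

// Cheap deterministic hash noise in [0,1], used as the "shuffle" source.
float hashNoise(int x, int y, uint32_t seed)
{
    uint32_t h = seed;
    h ^= (uint32_t)x * 374761393u;
    h ^= (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFFF) / (float)0xFFFFFF;
}

// User-supplied layers: in the real pipeline these would be authored
// data layers; here they are stand-in functions.
float layerHills(float x, float y) { return 20.0f * std::sin(x * 0.01f) * std::sin(y * 0.01f); }
float layerDunes(float x, float y) { return  5.0f * std::sin(x * 0.05f + y * 0.02f); }

// Mixer: the weight field decides where each layer dominates, but every
// output value is still a combination of the authored inputs.
float mixLayers(float x, float y, uint32_t seed)
{
    float w = hashNoise((int)(x / 64), (int)(y / 64), seed); // coarse regions
    return w * layerHills(x, y) + (1.0f - w) * layerDunes(x, y);
}
```

A production mixer would interpolate weights between regions to avoid seams, but even this toy shows why the output is varied yet bounded by the inputs.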

This approach is fast enough to allow real-time generation, and at the same time it can produce results interesting and varied enough to keep humans engaged for a while. The output can be incredibly good, in fact as good as the talent of the human who created the input files. But therein lies the problem: you still need to provide reasonably good input to the system.

This is the first piece of bad news. Procedural generation is not a replacement for talent. If you lack artistic talent, procedural generation is not likely to help much. You can amplify an initial set into a much larger set, but you can't turn a bad set into a good one. A microphone won't make you a good singer.

The second piece of bad news is the one you should worry about: procedural generation has a cost. I have posted about this before. You cannot make something out of nothing; entropy matters. It takes serious effort for a team of human creators to come up with interesting scenes. In the same way, you must pay a similar price (in energy or time) when you synthesize something from scratch. As a rule of thumb, the complexity of the system you can generate is proportional to the amount of energy you spend. If you think you can generate entire planets on the fly on a console or PC, you are in for some serious disappointment.

I will be very specific about this. Procedural content based on local mathematical functions like Perlin noise, Voronoi, etc. cannot hold our interest; it is guaranteed to produce boring content. Procedural content based on simulation and AI can rival what nature and humans create, but it is not fast enough for real-time generation.

Is there any point in pursuing real-time procedural generation? Absolutely, but you have to take it for what it is. As I see it, the only available path at this point is to have large sets of interesting content that can be used to produce even larger sets, but there are limits to how much you can grow the content before it becomes predictable.

For real-time generation, our goal will be to help people produce better base content sets. These could be produced on the fly, assuming the application allows some time and energy for it.

Here is where we are going next:


Hopefully that explains why we chose to start with a system that can closely obey the instructions of an artist. Our goal is to replace human work by automating the artist, not by automating the subject. We have not taken a detour; I believe this is the only way.