Long time no posts.
I haven't been keeping up with the projects I started. At first they seem fun and exciting, but I always run into limitations in the setup, plus the grind of just making stuff without any real challenge... It ends up being something I don't want to commit to.
So right now I'm just messing around with some ideas to see what comes out. No commitment to a bigger project, just some time to try new things.
This week I've been working on procedurally generated terrain.
In the past there were some big limitations in how I approached this, because the game world had to contain the whole map, from the micro to the macro. I had to choose a scale somewhere in between, which meant I couldn't have really large maps or really small details.
I think I've found a way around that.
Below you can see two types of map data coexisting on top of each other. The wireframe is the collision data, used for physics and for clicking on the map to move characters around. The grey plane is the visual representation of the map.
What's important is that they don't have to be the same size. The two types of data are completely separate from each other. The collision data can be low resolution, and it can cover the whole game world or be loaded in chunks.
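A rough sketch of how I'm picturing the collision side, just to make the idea concrete (the class, chunk size and cell size here are placeholders, not actual game code): a coarse grid of heights stored per chunk, which physics and click-to-move queries can sample.

```python
import math

CHUNK_SIZE = 32   # collision cells per chunk side (placeholder value)
CELL_SIZE = 2.0   # world units per collision cell (placeholder value)

class CollisionTerrain:
    """Coarse heightfield for physics and click-to-move, stored in chunks."""

    def __init__(self):
        # (chunk_x, chunk_z) -> flat list of CHUNK_SIZE * CHUNK_SIZE heights
        self.chunks = {}

    def load_chunk(self, cx, cz, heights):
        """Heights come from wherever the game world data lives (disk, a generator...)."""
        assert len(heights) == CHUNK_SIZE * CHUNK_SIZE
        self.chunks[(cx, cz)] = heights

    def _cell_height(self, ix, iz):
        cx, cz = ix // CHUNK_SIZE, iz // CHUNK_SIZE
        chunk = self.chunks.get((cx, cz))
        if chunk is None:
            return 0.0  # unloaded chunk: treat as flat ground
        lx, lz = ix % CHUNK_SIZE, iz % CHUNK_SIZE
        return chunk[lz * CHUNK_SIZE + lx]

    def height_at(self, x, z):
        """Bilinear sample of the coarse heightfield at a world position."""
        gx, gz = x / CELL_SIZE, z / CELL_SIZE
        ix, iz = math.floor(gx), math.floor(gz)
        fx, fz = gx - ix, gz - iz
        h00 = self._cell_height(ix, iz)
        h10 = self._cell_height(ix + 1, iz)
        h01 = self._cell_height(ix, iz + 1)
        h11 = self._cell_height(ix + 1, iz + 1)
        top = h00 * (1 - fx) + h10 * fx
        bottom = h01 * (1 - fx) + h11 * fx
        return top * (1 - fz) + bottom * fz
```

Because this grid is so coarse, the whole world's collision data stays cheap no matter how dense the visuals end up being.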
The visual representation is all done with shaders. It can be rendered super fast, and it only needs to be on a plane big enough to fit the viewport. This means the vertices can be displaced not only by the game world data (the heightmap) but also by rocks, bumps and hollows, cracks, dunes or whatever else is needed to bring the visual texture of the ground to life. It can be very high res, because it's all done on the GPU, so there's no need to worry about poking verts around.
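The displacement itself is simple enough to sketch on the CPU; in the real thing this math would live in a vertex shader, with the heightmap as a texture lookup. The functions and amplitudes below are purely illustrative, not the actual shader:

```python
import math

def heightmap_sample(world_x, world_z):
    """Stand-in for the game world heightmap (a texture fetch in the shader)."""
    return 10.0 * math.sin(world_x * 0.01) * math.cos(world_z * 0.01)

def detail_noise(world_x, world_z):
    """Stand-in for high-frequency detail: bumps, hollows, cracks, dunes."""
    return 0.2 * math.sin(world_x * 3.1) * math.sin(world_z * 2.7)

def displace_vertex(plane_x, plane_z, camera_x, camera_z):
    """What the vertex shader would do to each vertex of the viewport-sized plane.

    The plane is just a flat, dense grid that follows the camera; a vertex only
    becomes "terrain" once the heightmap and detail offsets are applied.
    """
    world_x = camera_x + plane_x
    world_z = camera_z + plane_z
    y = heightmap_sample(world_x, world_z) + detail_noise(world_x, world_z)
    return (world_x, y, world_z)

# Example: displace a small 4x4 patch of the plane near a camera position.
if __name__ == "__main__":
    for gx in range(4):
        for gz in range(4):
            print(displace_vertex(gx * 1.0, gz * 1.0, camera_x=100.0, camera_z=250.0))
```

The nice part is that the detail layer never has to exist as data anywhere; it's just extra offsets computed per vertex while drawing.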
Hopefully I'll show some experiments on this idea later this week.