  1. #31
    Join Date
    Dec 2008
    Location
    Iceland
    Posts
    99
    Dude! I like!
    /me is drooling over those pictures of yours!

  2. #32
    Join Date
    Oct 2006
    Posts
    1,548
    So I take it you're building the "outer shell" of each 8x8x8 block, as opposed to rendering each of the 512 cubes?

    Did you try rendering each cube? I'm curious about the results. Rebuilding meshes can be rather expensive, but with the optimization of only rebuilding the ones that are changing, it could be barely noticeable.

    My first thought was to leverage the geometry shader. I'm not sure if you have that available to you, but you could have each draw call be a new 8x8x8 block, where you use a static buffer of 8x8x8 vertices, each vertex then expanded out to a cube in the geometry shader, with the information for all cubes within the block packed tightly into a single shader variable. This way there would be no mesh rebuilding. Furthermore, because you have the information of the surrounding cubes, you can create only the faces that are required, i.e. those that aren't blocked by something else.

    This idea could then be further modified by occlusion culling each 8x8x8 shell before rendering, or maybe using a simple z pass first.

    I don't really understand your ray casting issue. Why do you need the exact triangle and normal? I would have thought you'd only need to know which cube is being selected, in which case a simple AABB intersection would do. Or is this for your mesh rebuilding algorithm?
    Last edited by wforl; 11-30-2010 at 04:12 AM.

  3. #33
    Join Date
    Oct 2004
    Posts
    3,140
    wooooow this looks so cooooool! Can you add some glow maps for things like lava and special minerals?

  4. #34
    Join Date
    Nov 2003
    Posts
    1,191
    WOW, this will surely be really useful stuff to learn. Thumbs up.
    I guess this will be member sponsor only?
    Well, that doesn't matter too much to me; I will be joining as a member sponsor soon anyway.

    Keep it up, but do catch some sleep now and then

  5. #35
    Join Date
    Mar 2002
    Location
    Long Island
    Posts
    1,673
    Very nice that you are able to not only optimize the code but also get it to hold a lot more blocks. You're encountering some very interesting problems, and I'm sure I and a lot of others can't wait to see how this all came about and was overcome.
    Can't wait for more videos...
    Please let me know when we are going to have some fun so I can pay attention.


  6. #36
    Join Date
    Dec 2002
    Location
    Virginia Beach, VA
    Posts
    861
    Quote Originally Posted by wforl View Post
    So I take it you're building the "outer shell" of each 8x8x8 block, as opposed to rendering each of the 512 cubes?
    That is correct.

    Did you try rendering each cube? I'm curious about the results. Rebuilding meshes can be rather expensive, but with the optimization of only rebuilding the ones that are changing, it could be barely noticeable.
    Yes, I did both: as instantiated prefabs and as a custom mesh. Instantiate allowed about 30k blocks before performance started to dip below 30 fps, and the custom-built mesh about 2-3 times that, since I wasn't instantiating and destroying a bunch of blocks all the time.

    My first thought was to leverage the geometry shader. I'm not sure if you have that available to you, but you could have each draw call be a new 8x8x8 block, where you use a static buffer of 8x8x8 vertices, each vertex then expanded out to a cube in the geometry shader, with the information for all cubes within the block packed tightly into a single shader variable. This way there would be no mesh rebuilding. Furthermore, because you have the information of the surrounding cubes, you can create only the faces that are required, i.e. those that aren't blocked by something else.
    I had thought about it, but as that requires quite a bit of CG shader language know-how, that idea has been put on the back burner for now. I wanted to keep things as simple as possible to start. I also think there might be a problem once you start sending the vertices for millions of blocks every frame (each block has 24 vertices, so that's 24 million verts for every million blocks; I don't think even the best video cards can handle that at 200 fps).

    And yes, I'm up in the millions of blocks now. Last night before calling it quits I finally tracked down how to kill the memory leak that is caused every time you create a new custom mesh. What a pain in the @$$ to figure out how to work around that little flaw in Unity. The good thing is that my last run yielded a world of 3,763,200 blocks, and it looks like it could be pushed another million or two before hitting the memory limit. The downside is I'm once again coming up against the limits of the GPU, with the frame rate dropping down to about 40 fps now. So if I'm going to push it any further, there's going to be more optimization to be done.
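    For anyone who runs into the same leak: the workaround, in rough outline, is to create the mesh once and keep reusing it instead of newing one up on every rebuild. This is only a minimal sketch, not my actual chunk code; the class name and the Rebuild signature are just for illustration.

    [code]
    using UnityEngine;
    using System.Collections.Generic;

    [RequireComponent(typeof(MeshFilter))]
    public class ChunkMeshHolder : MonoBehaviour
    {
        Mesh mesh;   // one mesh reused for the life of the chunk

        void Awake()
        {
            // Create the mesh once. Making a brand-new Mesh on every rebuild leaks,
            // because runtime-created meshes are never garbage collected unless you
            // Destroy() them yourself.
            mesh = new Mesh();
            GetComponent<MeshFilter>().sharedMesh = mesh;
        }

        public void Rebuild(List<Vector3> verts, List<Vector2> uvs, List<int> tris)
        {
            mesh.Clear();                    // wipe the old geometry in place
            mesh.vertices = verts.ToArray();
            mesh.uv = uvs.ToArray();
            mesh.triangles = tris.ToArray();
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();
        }

        void OnDestroy()
        {
            Destroy(mesh);                   // release it when the chunk goes away
        }
    }
    [/code]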

    This idea could then be further modified by occlusion culling each 8x8x8 shell before rendering, or maybe using a simple z pass first.
    The next step, if I do decide to play around with a custom shader for this project, would be to take the meshes as they're built now, pass them off to the GPU, and let it handle back-face removal and possibly z-depth. That way I can get the best of both worlds: not sending 90-million-plus vertices to the graphics card every frame while still getting the benefit of reducing the rendered triangles by about half.

    I don't really understand your ray casting issue. Why do you need the exact triangle and normal? I would have thought you'd only need to know which cube is being selected, in which case a simple AABB intersection would do. Or is this for your mesh rebuilding algorithm?
    It would if the ray were intersecting a single block. But there are no blocks. What there is is a chunk that holds the visible triangles representing a stack of blocks, currently set at 8x8x8. That means there is one object that has one mesh that has one collider. The ray cast only tells me whether a chunk was hit, where it was hit, and which triangle it hit. The entire game has no concept that there are blocks at all. So to get the origin of a pretend box, it has to be done all through calculation. Though I've adjusted the way things are calculated since that post, so I no longer need to calculate the triangle midpoint. Instead I'm using a snapping method: take the ray's hit point, snap it to a grid, then offset that position by the normal of the hit surface.
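    Roughly what that snapping looks like, as a minimal sketch only (it assumes 1-unit blocks centered on integer coordinates; the class and field names are placeholders, not my actual code):

    [code]
    using UnityEngine;

    public class BlockPicker : MonoBehaviour
    {
        public Camera cam;   // camera used to cast a ray from the mouse position

        void Update()
        {
            if (!Input.GetMouseButtonDown(0)) return;

            Ray ray = cam.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (!Physics.Raycast(ray, out hit)) return;   // hits the chunk's mesh collider

            // Nudge a hair along the negative normal so the point sits unambiguously
            // inside the block that was hit, then snap to the grid.
            Vector3 p = hit.point - hit.normal * 0.01f;
            Vector3 hitBlock = new Vector3(Mathf.Round(p.x), Mathf.Round(p.y), Mathf.Round(p.z));

            // Offsetting by the face normal gives the empty cell in front of the face,
            // i.e. where a new block would be placed.
            Vector3 placeBlock = hitBlock + hit.normal;

            Debug.Log("hit block " + hitBlock + ", place block " + placeBlock);
        }
    }
    [/code]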

  7. #37
    Join Date
    Oct 2006
    Posts
    1,548
    I had thought about it, but as that requires quite a bit of CG shader language know-how, that idea has been put on the back burner for now. I wanted to keep things as simple as possible to start. I also think there might be a problem once you start sending the vertices for millions of blocks every frame (each block has 24 vertices, so that's 24 million verts for every million blocks; I don't think even the best video cards can handle that at 200 fps).
    When you say block, are you referring to the 8x8x8 3D array of cubes, or a block as in each singular cube? Otherwise, where do you get 24 vertices from? It would be 512.

    The idea was that you wouldn't be making that many calls, as the vast majority of them would be culled away.

    Not 100% sure how you're doing it. Looking forward to the videos, though. When do you plan to release them?

  8. #38
    Join Date
    Dec 2002
    Location
    Virginia Beach, VA
    Posts
    861
    Quote Originally Posted by wforl View Post
    When you say block, are you referring to the 8x8x8 3D array of cubes, or a block as in each singular cube? Otherwise, where do you get 24 vertices from? It would be 512.

    The idea was that you wouldn't be making that many calls, as the vast majority of them would be culled away.

    Not 100% sure how you're doing it. Looking forward to the videos, though. When do you plan to release them?
    I'm using the term block in place of cube. Either is correct, but I've just been calling them blocks because others have been calling theirs cubes. A single block, or cube if you must, is made up of 6 sides, 12 triangles, 24 vertices, 24 UV points, and 36 triangle index points. Each block has to have unique points for each of its faces due to the way lighting is calculated. I tried using just 8 vertices and Unity smooth-shaded the block, which is definitely not desirable. After spending a good bit of time looking into this, it turns out the only way to get nice crisp hard edges is to define each face with its own vertices.
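    To make that concrete, here's a minimal sketch of how one face gets its own four vertices (the helper name, parameters, and UV layout are just for illustration, not lifted from my actual code):

    [code]
    using UnityEngine;
    using System.Collections.Generic;

    public static class CubeFaces
    {
        // Adds one face of a unit cube as 4 brand-new vertices and 2 triangles.
        // The 4 verts are NOT shared with any other face, so every face keeps its
        // own normal and you get hard edges instead of smooth shading.
        public static void AddFace(List<Vector3> verts, List<Vector2> uvs, List<int> tris,
                                   Vector3 origin, Vector3 up, Vector3 right)
        {
            int start = verts.Count;

            verts.Add(origin);
            verts.Add(origin + up);
            verts.Add(origin + up + right);
            verts.Add(origin + right);

            uvs.Add(new Vector2(0f, 0f));
            uvs.Add(new Vector2(0f, 1f));
            uvs.Add(new Vector2(1f, 1f));
            uvs.Add(new Vector2(1f, 0f));

            // Two triangles = 6 index entries per face; 6 faces x 6 = 36 per cube.
            tris.Add(start); tris.Add(start + 1); tris.Add(start + 2);
            tris.Add(start); tris.Add(start + 2); tris.Add(start + 3);
        }
    }
    [/code]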

    Now, a chunk is a grouping of blocks (cubes). It is modeled in memory (instantiated classes) and also has a GameObject associated with it (actually 2, but that's a different story). A chunk can hold a variable number of blocks, which is decided prior to runtime; currently I've been playing with 8x8x8 and 16x16x16. Regardless of the size you choose, a chunk will always have the same number of blocks in height, width, and depth. The script attached to the GameObject is the one responsible for looking at the list of blocks and determining which vertices, UVs, and triangles need to be assigned to the mesh at that time. This is persistent from the time the chunk is loaded into memory and only changes when a) a block is removed, b) a block is added, or c) something changes in a neighboring chunk that may affect it.
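    In very rough outline the rebuild step looks something like the sketch below, reusing the AddFace helper sketched above. It's only a sketch: the solid array stands in for my real block data, cells outside the chunk are treated as air here whereas the real code has to look at the neighboring chunk (case c above), and I've left the per-face winding order out for brevity.

    [code]
    using UnityEngine;
    using System.Collections.Generic;

    public class ChunkBuilder : MonoBehaviour
    {
        public const int Size = 8;                            // blocks per side (8x8x8 here)
        public bool[,,] solid = new bool[Size, Size, Size];   // stand-in for the real block data

        // A face is visible when the neighbouring cell is empty (or outside the chunk).
        bool FaceVisible(int x, int y, int z)
        {
            if (x < 0 || y < 0 || z < 0 || x >= Size || y >= Size || z >= Size) return true;
            return !solid[x, y, z];
        }

        public void RebuildMesh(List<Vector3> verts, List<Vector2> uvs, List<int> tris)
        {
            verts.Clear(); uvs.Clear(); tris.Clear();

            for (int x = 0; x < Size; x++)
            for (int y = 0; y < Size; y++)
            for (int z = 0; z < Size; z++)
            {
                if (!solid[x, y, z]) continue;
                Vector3 o = new Vector3(x, y, z);

                // Only emit the faces whose neighbour is air; interior faces are skipped.
                if (FaceVisible(x + 1, y, z)) CubeFaces.AddFace(verts, uvs, tris, o + Vector3.right,   Vector3.up,      Vector3.forward);
                if (FaceVisible(x - 1, y, z)) CubeFaces.AddFace(verts, uvs, tris, o,                   Vector3.up,      Vector3.forward);
                if (FaceVisible(x, y + 1, z)) CubeFaces.AddFace(verts, uvs, tris, o + Vector3.up,      Vector3.forward, Vector3.right);
                if (FaceVisible(x, y - 1, z)) CubeFaces.AddFace(verts, uvs, tris, o,                   Vector3.forward, Vector3.right);
                if (FaceVisible(x, y, z + 1)) CubeFaces.AddFace(verts, uvs, tris, o + Vector3.forward, Vector3.up,      Vector3.right);
                if (FaceVisible(x, y, z - 1)) CubeFaces.AddFace(verts, uvs, tris, o,                   Vector3.up,      Vector3.right);
            }
        }
    }
    [/code]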

    This list of vertices, UVs, and triangles is what gets passed to the video card, either behind the scenes or manually if the custom shader route is taken. So either we can pass in a list of 7,077,888 per chunk every frame (assuming a grid of 8x8x8) and let the GPU figure out what to do with it, or we can pass it a list a fraction of that size if we pre-cull it, which only has to be recalculated a few times a second at most and only for the chunks that have changed.

    I would imagine the process could be streamlined to send in only a list of the block positions (or even better, just the positions of the blocks not made of air) and build the vertex, UV, and triangle lists directly in the shader, but that would be beyond my abilities at the moment, at least until I get a chance to sit down and learn Unity's ShaderLab and the CG shader language.

    Still, at some point I would love to offload some of the work to the GPU, since that is what it's made for, but time is always a limiting factor.
    Last edited by chronos78; 11-30-2010 at 02:08 PM.

  9. #39
    Join Date
    Oct 2006
    Posts
    1,548
    Ah ok.

    I was passing each "chunk" down to the GPU as a separate draw call, where no mesh is required, just a shader parameter that tightly packs an nxnxn listing of data. The geometry shader then uses this data to build the chunk shell.

    If the world is then stored in a 3D array of nxnxn *chunks*, then any chunk that hasn't been tampered with, or is all air, is NULL and thus doesn't require a draw call or cost anything.
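    Something along these lines, as a bare sketch (the DrawChunk call just stands in for however the per-chunk draw is actually issued, and the world size is a placeholder):

    [code]
    // Sparse world sketch: a 3D array of chunk references where untouched or
    // all-air chunks stay null and are simply skipped.
    public class World
    {
        public const int WorldSize = 32;      // chunks per axis, placeholder value
        Chunk[,,] chunks = new Chunk[WorldSize, WorldSize, WorldSize];

        public void Draw()
        {
            for (int x = 0; x < WorldSize; x++)
            for (int y = 0; y < WorldSize; y++)
            for (int z = 0; z < WorldSize; z++)
            {
                Chunk c = chunks[x, y, z];
                if (c == null) continue;      // all air or untouched: no draw call, no cost
                c.DrawChunk();                // pack the nxnxn block data into a shader parameter and draw
            }
        }
    }

    public class Chunk
    {
        public void DrawChunk() { /* issue the per-chunk draw call here */ }
    }
    [/code]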
    Last edited by wforl; 11-30-2010 at 02:16 PM.

  10. #40
    Join Date
    Dec 2002
    Location
    Virginia Beach, VA
    Posts
    861
    You run into the problem, though, that in the end we still have to have a mesh created. If not, then there is nothing for the rays to hit when we click the mouse and nothing for the character controller to collide with, which results in not being able to add or remove blocks and the player falling through the world. Unless we then pass the processed data back out of the GPU to assemble a mesh to be used as the mesh collider, at which point we still have to pass millions of pieces of information through the graphics pipeline, and this would definitely become a Pro-only project.
    Last edited by chronos78; 11-30-2010 at 02:27 PM.
