Is reading single pixels of a Texture's Depthbuffer possible?


  • Is reading single pixels of a Texture's Depthbuffer possible?

    I was thinking about a better way to "block" my lens flare effect, and I thought that reading the value of the depth buffer at the position where my sun sprite is on screen would be an elegant way. (If the depth value is higher, or lower, whichever means nearer, then I know something else is in front of the sun sprite.) I tried copying the depth buffer to a raster surface with

    Code:
    FModelTexture.Retrieve(TSceneTextureType.Depth).Save(LSurface, 0, ZeroPoint2i, ZeroIntRect);
    PixelValue := LSurface.Pixels[PosX, PosY];
    But all I get are zeros (or, if I clear the surface first, the color I cleared with). I used the same pixel format for the raster surface as I used for the depth format of the model texture.

    I also tried to read the "color" values of the texture (raster surface changed to RGBA8 and the scene texture type to "color") to see if that works, but that also gives me zeros at every position.
    To be sure I was using the correct method, I saved a regular TTexture to the surface, and NOW I was getting actual color values at position X/Y.

    Is reading values of the depth buffer possible in another way?

  • #2
    The code for retrieving the Depth Buffer is correct, but copying it to system memory involves quite serious performance costs. Also, remember that the Depth Buffer stores normalized depth values, so you will have to convert those to real values before testing (the conversion depends on whether you are using an orthographic or perspective projection; if you need these formulas, please let me know).

    An approach that I could think of could be the following:

    1. Make sure your Depth Buffer uses TPixelFormat.D32F format.

    2. If you are using multisampling, you need to resolve the Depth Buffer to a non-multisampled texture (use TTexture.Copy); you can set its format to TPixelFormat.R32F.

    3. Ideally, I would downsample your Depth Buffer to 1/4th of its size (or maybe even to something like 1/16th!), so there is less information to retrieve from the GPU.

    4. For reading, always use the Depth Buffer from a previous frame, not the current one. For this, you can create two auxiliary textures: copy the current (downsampled) Depth Buffer to one texture while reading values from the other; at the end of the frame, exchange the two textures. This technique is called ping-ponging.
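The ping-pong scheme in step 4 can be sketched as follows. This is a minimal CPU-side illustration only; the two floats stand in for the two auxiliary depth textures, and `DepthPingPong` is a made-up name for this sketch, not part of the Afterwarp API:

```cpp
#include <array>
#include <cassert>

// Two depth snapshots: each frame writes into one while reading the other.
struct DepthPingPong {
    std::array<float, 2> snapshots{};  // stand-ins for the two auxiliary textures
    int writeIndex = 0;

    // Copy this frame's (downsampled) depth into the write target.
    void captureFrame(float depthValue) { snapshots[writeIndex] = depthValue; }

    // Read the value captured on the *previous* frame (no stall on current frame).
    float previousFrameDepth() const { return snapshots[1 - writeIndex]; }

    // At the end of the frame, exchange the roles of the two textures.
    void endFrame() { writeIndex = 1 - writeIndex; }
};
```

The point of the swap is that the CPU never waits on the texture the GPU is currently writing; it only ever reads the snapshot finished one frame earlier.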

    In Afterwarp v2, there is a downscaler module that downsamples and resolves a multisampled texture at the same time; however, it is relatively easy to create one externally. Please try the aforementioned approach without downsampling, and if you succeed in reading values, I'll post downsampler code for you to try.

    Also, please try enabling debug mode, e.g.:
    Code:
      FDevice := DeviceInit(TDeviceBackend.Default, HandleToWindowControlHandle(Handle), FDisplaySize,
        TPixelFormat.RGBA8, TPixelFormat.D16, 8, DeviceAttributes([TDeviceAttribute.Debug]));
    For Direct3D, this would tell you if you are doing something wrong (e.g. trying to read a multisampled depth buffer). On Windows 10, to enable the Direct3D 11 debug layer, you'll need to install the Direct3D 11 developer runtime. The easiest way to do this is to just install the VS2019 Remote Debugging tools.



    • #3
      I must admit that I don't really have an idea what normalized depth values means; I googled it and found something about the range 0-1?

      For clarification about the performance issue: the part that takes a lot of horsepower is when pixel data is transferred from a texture to a raster surface and back, right? So if I only copy a small part of a texture to a surface, it's fine? I would only have to check the exact origin point (maybe 3-4 additional pixels very near it) to see if the "sun" is blocked.



      • #4
        A normalized depth value means that instead of real depth (Z) values, you actually get values in the range [0, 1], which correspond to the Near/Far planes of the projection matrix.
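For reference, one common set of conversion formulas, assuming near/far plane distances $n$ and $f$ and a Direct3D-style projection that maps depth to $[0, 1]$ (conventions vary by API and projection setup, so check your own matrices):

```latex
% Orthographic projection: depth is linear in eye-space z
d = \frac{z - n}{f - n}
\quad\Longrightarrow\quad
z = n + d\,(f - n)

% Perspective projection (Direct3D-style, d \in [0, 1]): depth is hyperbolic in z
d = \frac{f\,(z - n)}{z\,(f - n)}
\quad\Longrightarrow\quad
z = \frac{n\,f}{f - d\,(f - n)}
```

As a sanity check, both formulas give $d = 0$ at $z = n$ and $d = 1$ at $z = f$; in the perspective case most of the $[0, 1]$ range is spent near the near plane.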

        Regarding performance: the GPU and the whole graphics pipeline are optimized for rendering and a general data flow from CPU to GPU. Reading from the GPU means you'll have to wait for rendering to finish, then use the system bus to transfer that data, and then wait for that transfer to finish as well.

        If you only need to detect whether the light source is occluded by some object, maybe you could use the same mechanism as Object Picking, but with a ray going from the light source to the camera's position; alternatively, you could "pick" from the light's 2D position as if the mouse were there - if an object is found, then the light is occluded. This would only work if the light source is considered to be at a somewhat infinite distance, or at least with the presumption that no objects can be behind the light source.
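The ray-based occlusion idea can be sketched with a generic slab-method ray-vs-bounding-box test. This is illustrative geometry code only, not Afterwarp's Object Picking API; `Vec3` and `rayIntersectsAABB` are made-up names for this sketch:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Slab-method intersection of the segment origin + t*dir, t in [0, 1],
// against an axis-aligned box. With dir = cameraPos - lightPos, a hit
// means some object's bounds occlude the light.
bool rayIntersectsAABB(Vec3 origin, Vec3 dir, Vec3 boxMin, Vec3 boxMax) {
    float tMin = 0.0f, tMax = 1.0f;  // restrict to the light->camera segment
    const float o[3]  = {origin.x, origin.y, origin.z};
    const float d[3]  = {dir.x,    dir.y,    dir.z};
    const float lo[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float hi[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int i = 0; i < 3; ++i) {
        if (d[i] == 0.0f) {                   // segment parallel to this slab
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
            continue;
        }
        float t1 = (lo[i] - o[i]) / d[i];
        float t2 = (hi[i] - o[i]) / d[i];
        tMin = std::max(tMin, std::min(t1, t2));
        tMax = std::min(tMax, std::max(t1, t2));
        if (tMin > tMax) return false;        // slab intervals do not overlap
    }
    return true;
}
```

Testing every object's box this way is exactly why coarse boxes can "block" a light that is still visible through a hole in the mesh, which is where the voxel-based picking discussed below in the thread becomes useful.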

        P.S. I haven't yet ported the ObjectPicking example from C++ to Pascal, but I'll see if I can do it over this weekend, as it should be fairly easy.



        • #5
          Originally posted by lifepower View Post
          Regarding performance: GPU and whole graphics pipeline is optimized for rendering and general data flow from CPU to GPU. Reading from GPU means you'll have to wait for rendering to finish, then use system bus to transfer that data and you'll have to wait for this to finish.
          OK, so in general it is best to avoid any GPU-to-CPU transfers.

          If you only need to detect whether light source is occluded by some object, maybe you could use the same mechanism for Object Picking, but use a ray going from light's source to the camera's position; alternatively, you could "pick" from light's 2D position as if mouse would be there - if an object is found, then the light is occluded. This would only work if light source is considered somewhat at infinite distance, or at least with presumption that no objects can be behind light source.
          P.S. I haven't yet ported ObjectPicking example from C++ to Pascal, but if I'll see if can do it over this weekend as it should be fairly easy.
          That was my first thought, too. My understanding of rays is that they check for intersections with simplified bounding boxes or planes, so more complex meshes might result in "blocking" the light source while you can still fully see it. I guess one has to carefully craft those boxes around complex meshes so that they "block" accurately enough.
          Any additional example in Pascal is much appreciated. I was able to get something out of every one so far.

          I will ignore this specific problem for now and continue adding other stuff. I need to take care of bounding boxes anyway for implementing gravity and wall collision (so real walking around, jumping, etc.). Once I have a little playable test game established, I will do some FPS testing on my oldest notebook (from 2012) and see if there is enough power left to try the depth buffer approach.

          I've said it a thousand times, but thanks again for your patience and detailed explanations.
          Last edited by Zimond; 01-25-2020, 05:32 PM.



          • #6
            Originally posted by Zimond View Post
            That was my first thought, too. My understanding of Rays is that they check for interceptions with simplified bounding boxes or planes so more complex meshes might result in "blocking" the light source while you can still fully see it.
            For this reason, Afterwarp can do Object Picking using either OOBBs or the object's voxel representation, so when you put the mouse cursor on a hole inside an object, you'll be able to pick other objects through it. If you run the ObjectPicking example from the C++ precompiled samples, you can try this: rotate the camera so that some object with holes is in front (e.g. the chair), then pick other objects through a hole.

            Originally posted by Zimond View Post
            Any additional example in pascal is much appreciated. I was able to get something out of every one so far
            I am currently investigating the WIC issue reported by DraculaLin, and after that I will port the ObjectPicking example to Delphi/Lazarus and post it. In fact, I am very curious whether the technique will work for light-source occlusion, so I'll try that idea too.

            For custom meshes, you'll have to use Voxelize tool included in the package to generate voxel representations (e.g. "Voxelize.exe -v 4 MyMesh.obj MyMesh.voxel"), to make Object Picking more accurate. You can then visualize these voxels with ModelViewer too.

            P.S. I have attached an example of what a voxel mesh looks like - please notice the holes in it; Object Picking properly detects them. In fact, when picking with voxels, you'll get the exact position where the ray intersects the mesh, which could be useful in some scenarios.

            Originally posted by Zimond View Post
            I said it a thousands times but, Thanks again for your patience and detailed explanations.
            You are welcome. I really value all the questions and suggestions; they are also very useful for the additional Afterwarp "Getting Started" guide that I'm working on.

            Attached Files



            • #7
              Oooh, that's what the voxel stuff is about? Freaking awesome. Using this for light-source occlusion would pretty much be directly usable for hit detection of projectiles. I never thought I would be able to create a simple first-person engine (already dreaming about an arena shooter with crazy weapons; that seems way too ambitious for me now, but then again, so did 2D point-and-click adventures in the 90s).



              • #8
                I have attached an ObjectPicking example for FreePascal/Lazarus, modified from the original version that comes with the distribution. This one loads the skull mesh that you mentioned in another post and performs a light-source hit test based on the technique described above.

                You can rotate the camera to put the light source behind one of the skulls - in this case, it'll hide the light spot. It'll also detect a semi-transparent skull and, in that case, draw a semi-transparent glow. Please note that the detection of semi-transparent objects only works if a single such object is blocking the light source; with multiple objects, it'll return the object closest to the view, so it probably won't work properly (it'll depend on whether a semi-transparent or a solid object is closest to you).

                You can try to replace the skull with an object that has holes in it; just remember to create its voxel representation first and look at it in ModelViewer to make sure it has sufficient detail.
                Attached Files



                • #9
                  Will check it out in the next few days. Looks great; thank you once more.
