Has disabling Backface culling changed?

  • Has disabling Backface culling changed?

    Hi there.

    I wanted to re-enable the ability to render cubes with transparent textures.

    For that, I use this render state in the Cube demo of Afterwarp 300:

    LRenderingState := FDevice.RenderingState;
    LRenderingState.CullFace := TTriangleFace.None;
    LRenderingState.BlendColor.Source := TBlendFactor.SourceAlpha;
    LRenderingState.BlendColor.Dest := TBlendFactor.InvSourceAlpha;
    LRenderingState.BlendColor.Op := TBlendOp.Add;
    LRenderingState.BlendAlpha.Source := TBlendFactor.One;
    LRenderingState.BlendAlpha.Dest := TBlendFactor.One;
    LRenderingState.BlendAlpha.Op := TBlendOp.Add;
    LRenderingState.States := LRenderingState.States or TRenderingState.State.BlendEnable or TRenderingState.State.AlphaToCoverage;
    FDevice.RenderingState := LRenderingState;

    While this does still enable the transparent areas of the texture, the back faces still seem to be culled.

    Pic 1: Afterwarp 300 Cube demo, no render state changed
    Pic 2: same demo with the above render state change
    Pic 3: my demo from Afterwarp 205, also with the same render state change

  • #2
    In previous versions, the Delphi/Pascal Cube example wasn't using a depth pre-pass. In the latest version, it has been updated to use a depth pre-pass, similar to the C and C# examples.

    Therefore, I would suggest disabling the depth pre-pass: comment out the appropriate block and make sure to move the texture cabinet occlusion filter command so it executes after the main rendering pass.

    Rendering with a depth pre-pass is the more common approach in Afterwarp, as it improves performance both by reducing overdraw and by allowing ambient occlusion to execute in parallel while other processing takes place (otherwise it is computed after the main pass). However, the depth/normals scene does not expose a texturing interface, so you can't render texture "holes" in the depth pre-pass.

    An update is coming with some features that could not make it into the first v3 release; I'll try to make sure that texturing for the depth/normals scene is exposed. This would also allow rendering shadows with holes using textures, as in your approach.
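    The overdraw saving described above can be sketched in a toy model: without a pre-pass, every fragment that passes a less-than depth test gets shaded (and may later be overwritten); with a pre-pass, the main pass tests for "equal" against the pre-computed depth, so only the visible fragment is shaded. All names here are illustrative, not Afterwarp API.

```python
def shaded_without_prepass(fragments):
    """fragments: depths arriving at one pixel, in draw order."""
    shaded = 0
    depth = float("inf")
    for z in fragments:
        if z < depth:          # standard less-than depth test
            depth = z
            shaded += 1        # fragment is shaded, may later be overdrawn
    return shaded

def shaded_with_prepass(fragments):
    nearest = min(fragments)   # pass 1: depth only, no shading
    # pass 2: depth test set to EQUAL, so only the visible fragment is shaded
    return sum(1 for z in fragments if z == nearest)

# Back-to-front worst case: each arriving fragment is nearer than the last.
fragments = [5.0, 4.0, 3.0, 2.0, 1.0]
print(shaded_without_prepass(fragments))  # 5 shading invocations
print(shaded_with_prepass(fragments))     # 1 shading invocation
```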



    • #3
      Nice, that worked. And thanks for the explanation - I wondered what this new step was for. It would be perfect to be able to use these "holes" in the shadow pass.
      I am currently (once more) trying to reboot my adventure engine. The second picture shows the current state. Basically, at first I wanted to get my foot back in the door: getting used again to the current state of Delphi and Afterwarp, being able to load graphics, sounds and OBJ files, making it DPI-aware from the beginning, and having a project structure.
      It auto-loads normal and parallax maps if available.

      My goal is some kind of diorama visual where rooms are 3D but characters and complex objects are 2D sprites (planes), so that lighting and shadows are real instead of prerendered background graphics. For these planes, rendering shadows with transparent PNGs would of course be great ^^ But there is no hurry; it will take many months before I get to that state.

      The 4th pic is my old engine.
      Last edited by Zimond; 02-08-2024, 01:26 AM.



      • #4
        Regarding the update that is being worked on: it includes major improvements in both the native Wavefront OBJ loader and the material support system in scene meshes. Basically, when you load a mesh, everything is loaded for you, including materials and textures, which is very easy to integrate into existing code, and everything works "out of the box", including bump maps, etc. Voxelization uses a novel technique and is lightning fast now, so it will be available as a runtime option - this is useful for accurate object selection, for "walking" on surfaces (especially now with the first-person view camera) and for visibility detection. These features are required by a very important client, so they are being developed intensively and exhaustively.

        I mention this because just a couple of days ago, while testing some of the new functionality, I defined a simple surface in an OBJ file, set the materials to use a fence texture, and it became immediately obvious that for the "holes" it would be useful to define the albedo texture for depth/normals as well, to get better shadows - practically at the same time you made this post. So I'll be addressing that shortly. For curiosity's sake, I've attached a screenshot of the new "ModelViewer" tool, which is a replacement for Voxelize: for the surface, I've used an albedo texture, an ambient map, and a height map both as a bump map (which gets converted to a normal map behind the scenes for you, no need for external tools) and as a displacement map.
        Attached Files



        • #5
          So the MTL file will be fully used and automatically applied, including textures? That is awesome. And internal voxelization is great too. Thanks again for all the work.



          • #6
            That is right, the MTL file is fully loaded alongside its textures, which are loaded and initialized into the appropriate "Texture" objects, but you still get to pass the material to the scene and call the drawing function. The idea is that you can control what parameters are passed and how, in case you want to modify the appearance. Also, MTL, even with third-party vendor extensions, does not define all the parameters you can specify in Afterwarp's scene material (e.g. shadow and ambient occlusion strengths, bloom factor, etc.) - these will be available through special "pragmas" in MTL, but you can also specify them directly in rendering code. Later, a similar approach will be adopted for other 3D mesh formats loaded through Assimp.
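            A purely hypothetical sketch of how such pragmas could work - the actual syntax Afterwarp will use has not been published. This assumes the extra parameters ride in MTL comment lines shaped like "# pragma <name> <value>", which ordinary OBJ/MTL loaders would silently ignore; all pragma names below are invented for illustration.

```python
def parse_mtl_pragmas(text):
    """Collect hypothetical '# pragma <name> <value>' lines from MTL text."""
    extras = {}
    for line in text.splitlines():
        parts = line.strip().split()
        if len(parts) == 4 and parts[:2] == ["#", "pragma"]:
            extras[parts[2]] = float(parts[3])
    return extras

mtl = """newmtl fence
Kd 0.8 0.8 0.8
map_Kd fence.png
# pragma shadow_strength 0.75
# pragma bloom_factor 0.2
"""
print(parse_mtl_pragmas(mtl))  # {'shadow_strength': 0.75, 'bloom_factor': 0.2}
```

            Keeping the extensions inside comments means the file stays valid for every other MTL consumer.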



            • #7
              Oh, another thing. Right now you cannot access submeshes in any way, right? Let's say you have a furniture model with a drawer in it, the drawer is a defined submesh, and you want to move it separately to pull it out. Of course it is always possible to simply split the submeshes into several OBJ files to solve this, so this is only a convenience thing. But would it be possible to add something like TScenemesh.Mesh.DrawSubmesh(name/id)? This way you could render each submesh individually and change parameters in between, like transformation or material.



              • #8
                When you load a mesh from a file, either into a mesh buffer or directly into scene meshes, a "TMeshMetaTags" collection is provided. Each "tag" contains the name of the corresponding object, min/max boundaries, first vertex, vertex count, first index and index count. If you want to render a particular tag, you can issue a draw call specifying its first index and index count (these parameters are always passed to the Draw functions; by default they are set to zero, which means the entire mesh).

                Splitting is also possible - the Voxelize tool does this and provides an XML file with information on each sub-object; it also detects instances (same object, different position). So you can either use the tags directly (the more efficient approach) or do the split with Voxelize and use the information from the XML file.
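                A minimal sketch of the tag idea: each tag names a sub-object and records the (first index, index count) range it occupies in the shared index buffer, and a draw call restricted to that range renders just that sub-object. The MeshTag class and the index data below are illustrative stand-ins, not the Afterwarp types.

```python
from dataclasses import dataclass

@dataclass
class MeshTag:
    name: str          # name of the sub-object from the source file
    first_index: int   # offset into the shared index buffer
    index_count: int   # number of indices belonging to this sub-object

def indices_for_tag(index_buffer, tag):
    # A draw call given (first_index, index_count) consumes exactly this
    # slice; (0, 0) would mean "draw the entire mesh".
    return index_buffer[tag.first_index : tag.first_index + tag.index_count]

# Two sub-objects sharing one index buffer (two quads, two triangles each).
index_buffer = [0, 1, 2,  2, 1, 3,  4, 5, 6,  6, 5, 7]
tags = [MeshTag("body",   0, 6),
        MeshTag("drawer", 6, 6)]

drawer = next(t for t in tags if t.name == "drawer")
print(indices_for_tag(index_buffer, drawer))  # [4, 5, 6, 6, 5, 7]
```

                Between two such restricted draw calls you could change the transformation or material, which is exactly the drawer use case above.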



                • #9
                  I'll keep that in mind. 👍 Thanks again



                  • #10
                    Originally posted by lifepower View Post
                    However, the depth/normals scene does not expose a texturing interface, so you can't render texture "holes" in the depth pre-pass.
                    Just to make sure: am I guessing right that a projected SDF canvas also doesn't work while a depth pre-pass is active, for the same reasons?



                    • #11
                      A depth pre-pass is, as the name implies, a separate pass that produces the depth buffers, which are then used for ambient occlusion and to speed up the main pass. This means the AO calculation can be performed at the same time as, for instance, the lighting calculations. While rendering the main pass, depth testing is set to "equal", so only the pixels that are actually visible are processed.

                      This has nothing to do with the canvas and its projected rendering. However, to avoid Z-fighting, you might need to adjust depth testing to "equal or less" and/or enable depth bias, as in the Chart3D example (see its RenderText function).
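                      The Z-fighting fix can be illustrated numerically: an overlay drawn at (almost) the same depth as the surface beneath it fails a strict less-than test because of tiny interpolation differences, but passes once the comparison is relaxed to "less or equal" and a small bias pulls it nearer. The values and the helper below are illustrative only; real depth-buffer precision and bias behavior differ by API.

```python
def depth_test(z_new, z_stored, func, bias=0.0):
    """Toy depth test: bias pulls the incoming fragment slightly nearer."""
    z = z_new - bias
    if func == "less":
        return z < z_stored
    if func == "less_equal":
        return z <= z_stored
    raise ValueError(func)

surface_z = 0.500000
overlay_z = 0.500000012   # same geometry, tiny interpolation error

print(depth_test(overlay_z, surface_z, "less"))                   # False: overlay is lost
print(depth_test(overlay_z, surface_z, "less_equal", bias=1e-6))  # True: overlay wins
```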

