
Manipulate Meshbuffer vertices


  • Manipulate Meshbuffer vertices

    Hey there.
    I am trying to figure out how I can read and manipulate single vertices from a mesh buffer.

    [Attached image: disctexture.jpg]
    What I want to do is figure out a way to generate TexCoords for meshes like a disc so that they resemble a plane's. At least I think I understand how I need to calculate the texture coordinates. Anyway, that's why I would like to know how to read and write single vertices.
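
    To illustrate the idea, here is a minimal sketch of such a planar mapping, using the TMeshBuffer accessors discussed later in this thread. It assumes the disc is centered at the origin in the XZ plane, that vertex positions expose X/Y/Z fields, and that `ARadius` matches the radius used to create the disc; the procedure name itself is hypothetical.

    ```pascal
    // Sketch only: planar-projects each vertex's X/Z position into the
    // [0, 1] texture space, as if the disc were a flat plane.
    procedure PlanarMapDisc(const AMeshBuffer: TMeshBuffer; const ARadius: Single);
    var
      LVertex: PMeshBufferEntry;
      I: Integer;
    begin
      LVertex := AMeshBuffer.Vertices;
      for I := 0 to AMeshBuffer.VertexCount - 1 do
      begin
        // Map [-ARadius, +ARadius] to [0, 1] on both axes.
        LVertex.TexCoord := Point2f(
          (LVertex.Position.X / ARadius) * 0.5 + 0.5,
          (LVertex.Position.Z / ARadius) * 0.5 + 0.5);
        Inc(LVertex);
      end;
    end;
    ```

    Because each vertex is handled independently, the iteration order does not matter here.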

  • #2
    This looks like a texture coordinate generation bug in the "Disc" function. It will be fixed in the Afterwarp experimental build, but as I'm currently in Italy, I can't publish an update until I get back.

    Meanwhile, as you asked, you can modify vertices manually by accessing TMeshBuffer.Vertices and TMeshBuffer.Indices properties.



    • #3
      Originally posted by lifepower View Post
      This looks like a texture coordinate generation bug in "Disc" function. It'll be fixed in Afterwarp experimental build, but as I'm in Italy currently, I can't publish an update until I get back.

      Meanwhile, as you asked, you can modify vertices manually by accessing TMeshBuffer.Vertices and TMeshBuffer.Indices properties.
      Again, sorry if this is something that should be obvious to a more skilled coder xD, but how do I iterate through all the vertices or select a particular one? TMeshBuffer.Vertices gives me one vertex, I assume the first.
      And no rush, I can wait; I work super slowly anyway.
      Hope you have a good time in Italy.



      • #4
        Thanks! The API has changed slightly in an experimental build (the properties are now called "VerticesPtr" and "IndicesPtr", respectively), so please keep that in mind when you update later on.

        Meanwhile, you can access it like this:

        Code:
        var
          LVertex: PMeshBufferEntry;
          LIndex: PInteger;
          I: Integer;
        begin
          // LMeshBuffer is assumed to be an existing TMeshBuffer instance.
          LVertex := LMeshBuffer.Vertices;
        
          for I := 0 to LMeshBuffer.VertexCount - 1 do
          begin
            // Alternatively:
            // LVertex := Pointer(NativeUInt(LMeshBuffer.Vertices) + Cardinal(I) * SizeOf(TMeshBufferEntry));
        
            LVertex.Position := LVertex.Position + Vector3f(0.0, 10.0, 0.0); // change vertex position
            LVertex.TexCoord := Point2f(0.0, 0.0); // texture coordinates
        
            Inc(LVertex);
          end;
          
          LIndex := LMeshBuffer.Indices;
        
          for I := 0 to LMeshBuffer.IndexCount - 1 do
          begin
            // Alternatively:
            // LIndex := Pointer(NativeUInt(LMeshBuffer.Indices) + Cardinal(I) * SizeOf(Integer));
        
            // Do something with "LIndex^"
        
            Inc(LIndex);
          end;
        end;



        • #5
          Ah, I see. You take the pointer to the first entry and jump to the one you want by adding a multiple of the entry size. Thanks. Definitely something that I wasn't aware of.

          One more question for technical understanding: the indices are merely a construction order for the vertices that specifies, for example, whether a cylinder is created with inside or outside faces. Is that correct?
          For manipulating texture coordinates, or for adding modifiers like bending or twisting a model, I can just go through all the vertices, since the order is not important for that? And again, thank you very much for taking the time to answer my questions. Without your help I would never have been able to get into development of an actual 3D engine.



          • #6
            Yes, the mesh buffer is laid out linearly in memory, and "Vertices" points to the first vertex. In an upcoming update, the API (only the high-level Pascal wrapper) has been slightly improved so that you get properties to access each element individually.

            The mesh is defined by an array of vertices and an array of indices. The array of indices defines individual triangles, so its length is always a multiple of 3. Each index references a particular vertex. So, for example, a very basic definition of a cube would have 8 vertices and 36 indices (12 triangles). The order in which indices are specified is important: it defines each triangle's winding order, which is used for backface culling. If you want, for example, to make a very big sphere visible from the "inside", you would need to invert its winding order (by exchanging two indices in every group of three) and also separately negate the normals. TMeshBuffer provides helper functions for each of these operations.
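
            As a sketch of what those helpers do internally, the manual version looks roughly like this. It reuses the pointer-walking pattern from post #4; the "Normal" field name and unary minus on it are assumptions, and TMeshBuffer's own helper functions should be preferred in practice.

            ```pascal
            // Sketch only: inverts the winding order by swapping the last two
            // indices of every triangle, then negates the vertex normals.
            procedure InvertWinding(const AMeshBuffer: TMeshBuffer);
            var
              LIndices, LFirst, LSecond: PInteger;
              LVertex: PMeshBufferEntry;
              LTemp, I: Integer;
            begin
              LIndices := AMeshBuffer.Indices;
              // IndexCount is always a multiple of 3: one triple per triangle.
              for I := 0 to (AMeshBuffer.IndexCount div 3) - 1 do
              begin
                LFirst := LIndices;
                Inc(LFirst);          // second index of the triangle
                LSecond := LFirst;
                Inc(LSecond);         // third index of the triangle
                LTemp := LFirst^;
                LFirst^ := LSecond^;  // (a, b, c) becomes (a, c, b)
                LSecond^ := LTemp;
                Inc(LIndices, 3);     // advance to the next triangle
              end;
              // Negate each vertex normal so lighting matches the flipped faces.
              LVertex := AMeshBuffer.Vertices;
              for I := 0 to AMeshBuffer.VertexCount - 1 do
              begin
                LVertex.Normal := -LVertex.Normal;
                Inc(LVertex);
              end;
            end;
            ```

            Swapping any two indices of a triple flips that triangle's winding; swapping the last two is simply the conventional choice.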

            Speaking of bending and twisting, the framework already allows doing this to a cylinder, which can be used to create different types of "connections" between round and rounded-square cylinders and tubes. An experimental build also enables bending and twisting for the "extrude" function (a very powerful primitive that creates a mesh from a 2D path, which can contain holes and further sub-paths inside the holes).
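
            As a minimal sketch of why vertex order does not matter for such modifiers, here is a hypothetical "twist" that rotates each vertex around the Y axis by an angle proportional to its height. Each vertex is transformed independently, so any iteration order works; the procedure name and angle parameter are assumptions.

            ```pascal
            // Sketch only: twists the mesh around the Y axis.
            procedure TwistMesh(const AMeshBuffer: TMeshBuffer;
              const AAnglePerUnit: Single);
            var
              LVertex: PMeshBufferEntry;
              LAngle, LSin, LCos, LX, LZ: Single;
              I: Integer;
            begin
              LVertex := AMeshBuffer.Vertices;
              for I := 0 to AMeshBuffer.VertexCount - 1 do
              begin
                // Rotation angle grows linearly with the vertex height.
                LAngle := LVertex.Position.Y * AAnglePerUnit;
                LSin := Sin(LAngle);
                LCos := Cos(LAngle);
                LX := LVertex.Position.X;
                LZ := LVertex.Position.Z;
                LVertex.Position.X := LX * LCos - LZ * LSin;
                LVertex.Position.Z := LX * LSin + LZ * LCos;
                Inc(LVertex);
              end;
            end;
            ```

            Note that for correct lighting the normals would need the same per-vertex rotation applied as well, which this sketch omits.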

            And no worries, I'm glad I was helpful.



            • #7
              I was able to flatten the disc texture correctly (number 1)

              While trying out other mesh functions, I saw that the rounded cube is also currently less useful with textures on it. The texture is stretched around the 4 sides (number 2), and its normals (I guess?) are weird as well. Number 3 is the same cube, rotated 90°. Using a normal map or parallax map results in a completely dark cube (number 4).

              Not important right now, just wanted to mention it.

              I also tried a cube mapping function by selecting the side depending on the normal of each vertex, but this rather ended in chaos, so I abandoned it for now.
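
              For reference, the approach described (often called box or triplanar-style projection) can be sketched like this, reusing the field names from earlier in the thread; the "Normal" field and the scale factor are assumptions.

              ```pascal
              // Sketch only: picks the dominant axis of each vertex normal
              // and projects the other two position components into UV space.
              procedure BoxMapVertices(const AMeshBuffer: TMeshBuffer;
                const AScale: Single);
              var
                LVertex: PMeshBufferEntry;
                LAbsX, LAbsY, LAbsZ: Single;
                I: Integer;
              begin
                LVertex := AMeshBuffer.Vertices;
                for I := 0 to AMeshBuffer.VertexCount - 1 do
                begin
                  LAbsX := Abs(LVertex.Normal.X);
                  LAbsY := Abs(LVertex.Normal.Y);
                  LAbsZ := Abs(LVertex.Normal.Z);
                  if (LAbsX >= LAbsY) and (LAbsX >= LAbsZ) then
                    // Faces pointing along X: project Z and Y.
                    LVertex.TexCoord := Point2f(LVertex.Position.Z * AScale,
                      LVertex.Position.Y * AScale)
                  else if LAbsY >= LAbsZ then
                    // Top and bottom faces: project X and Z.
                    LVertex.TexCoord := Point2f(LVertex.Position.X * AScale,
                      LVertex.Position.Z * AScale)
                  else
                    // Front and back faces: project X and Y.
                    LVertex.TexCoord := Point2f(LVertex.Position.X * AScale,
                      LVertex.Position.Y * AScale);
                  Inc(LVertex);
                end;
              end;
              ```

              One likely source of the "chaos": on smooth or rounded meshes, vertices shared between faces have averaged normals, so neighboring triangles can pick different projection axes and produce seams.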
              Attached Files



              • #8
                A rounded cube created using the super-sphere type of primitive is more difficult for texture mapping because it is technically a sphere that is stretched to form what looks like a rounded cube. There is a "CubeRound" function, which actually creates a rounded cube using planes, half-cylinders and half-spheres, so it should work correctly with texture mapping. However, it currently has only a couple of hard-coded shapes for different "roundness" parameters. Refactoring it into a fully configurable function is still on the TODO list.

                "Cube mapping" term actually refers to a different mechanism, where you use cube-mapped textures with 6 faces each. Afterwarp has supported this for many years, sadly there is no example showing how to use it. Making something useful out of it (e.g. for sky) in an example is still in TODO list as well.

                The technique you are referring to, however, is "UV unwrapping". Tools like Blender are able to do it. It would be interesting to make such a runtime implementation, but there are already too many things on the TODO list.



                • #9
                  Originally posted by lifepower View Post
                  but there are already too many things in the TODO list.
                  I definitely know that feeling. As always, take your time.
