To answer some of the questions here: the reason this has not been used before is that the technique requires being able to access the quad definitions (i.e. which 4 vertices make up each quad) within the GPU.
Up until recently with Mesh Shaders, there's really just been no good way to send this data to the GPU and read back the barycentric coordinates you need in the fragment shader for each pixel.
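For a sense of what the per-pixel math can look like once the fragment shader does know the four corners: one classic building block is inverse bilinear interpolation, which recovers the quad's (u, v) for a point and then blends the four corner attributes with it. Here's a minimal CPU-side C++ sketch for a convex, planar quad in 2D (corners q0..q3 in order around the quad, made-up numbers) - a generic illustration of the idea, not the article's actual shader code:

    #include <cmath>
    #include <cstdio>

    struct Vec2 { float x, y; };
    static Vec2  sub(Vec2 a, Vec2 b)    { return {a.x - b.x, a.y - b.y}; }
    static float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }
    static float lerp(float a, float b, float t) { return a + t * (b - a); }

    // Invert the bilinear map p = mix(mix(q0,q1,u), mix(q3,q2,u), v) for a
    // convex planar quad by solving a quadratic in v (classic approach).
    static void invBilinear(Vec2 p, const Vec2 q[4], float* u, float* v) {
        Vec2 e = sub(q[1], q[0]);                    // edge q0 -> q1
        Vec2 f = sub(q[3], q[0]);                    // edge q0 -> q3
        Vec2 g = {q[0].x - q[1].x + q[2].x - q[3].x, // bilinear "twist" term
                  q[0].y - q[1].y + q[2].y - q[3].y};
        Vec2 h = sub(p, q[0]);

        float k2 = cross2(g, f);
        float k1 = cross2(e, f) + cross2(h, g);
        float k0 = cross2(h, e);

        if (std::fabs(k2) < 1e-6f) {                 // parallelogram: linear case
            *v = -k0 / k1;
        } else {                                     // p inside => discriminant >= 0
            float w = std::sqrt(k1 * k1 - 4.0f * k0 * k2);
            *v = (-k1 - w) / (2.0f * k2);
            if (*v < 0.0f || *v > 1.0f) *v = (-k1 + w) / (2.0f * k2);
        }
        // Recover u from h = u*e + v*f + u*v*g, using the better-conditioned axis.
        float dx = e.x + g.x * (*v), dy = e.y + g.y * (*v);
        *u = (std::fabs(dx) > std::fabs(dy)) ? (h.x - f.x * (*v)) / dx
                                             : (h.y - f.y * (*v)) / dy;
    }

    int main() {
        Vec2  quad[4] = {{0, 0}, {3, 0}, {2, 2}, {0, 2}};  // a trapezoid
        float attr[4] = {0.f, 1.f, 1.f, 0.f};              // e.g. texture u at corners
        float u, v;
        invBilinear({1.2f, 1.0f}, quad, &u, &v);
        float a = lerp(lerp(attr[0], attr[1], u), lerp(attr[3], attr[2], u), v);
        std::printf("uv = (%.3f, %.3f), attribute = %.3f\n", u, v, a);  // (0.48, 0.50)
    }

The math per fragment is small; the hard part, as noted above, is getting the quad's four corners (or stable barycentrics plus the quad topology) down to the fragment shader at all.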
The article offers several options for supporting older GPUs, like geometry shaders and tessellation shaders. This is good, but these are really, at best, Terrible Hacks(tm). Proof that old extensions can be contorted into doing this is not proof of reasonable performance!
Notably, geometry shaders are notorious for bad performance, so the fact that they list them as a viable strategy for older devices makes it pretty clear they aren't thinking much about performance, just possible compatibility.
Still, I think this is very cool, and now that GPUs are becoming much more of a generic computing device with the ability to execute arbitrary code on random buffers, I think we are nearly at the point of being able to break from the triangle and fix this! We hit this triangulation issue several times on the last project, and it's a real pain.
This is one of those things that feels like a broken/half-assed/oversimplified implementation got completely proliferated into the world a long time ago and it took several years for the right person to do a full-depth mathematical analysis to reveal what we should've been doing all along. Similar to antialiasing and sharpening, texture filtering, color spaces and gamma correction, etc.
It reminded me of this article specifically: https://bgolus.medium.com/the-best-darn-grid-shader-yet-727f...
The fact that triangles have proliferated is not due to half-assery. Hardware can rasterize them very quickly, and a triangle can have only one normal vector. Quads can be non-planar. It's true that quads are nice for humans and artists though!
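To make the one-normal-vector point concrete: as soon as one corner of a quad leaves the plane of the other three, its two triangles no longer agree on a normal, so "the quad's normal" stops being well defined. A tiny C++ check with made-up coordinates:

    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                                a.z * b.x - a.x * b.z,
                                                a.x * b.y - a.y * b.x}; }

    int main() {
        // A quad whose third corner is lifted slightly out of the plane of the others.
        Vec3 q0{0, 0, 0}, q1{1, 0, 0}, q2{1, 1, 0.3}, q3{0, 1, 0};

        // Split along q0-q2: each triangle has one well-defined normal...
        Vec3 n0 = cross(sub(q1, q0), sub(q2, q0));
        Vec3 n1 = cross(sub(q2, q0), sub(q3, q0));

        // ...but the two normals differ: (0, -0.3, 1) vs (-0.3, 0, 1).
        std::printf("tri(q0,q1,q2) normal: (%g, %g, %g)\n", n0.x, n0.y, n0.z);
        std::printf("tri(q0,q2,q3) normal: (%g, %g, %g)\n", n1.x, n1.y, n1.z);
    }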
As an aside, Catmull-Clark subdivision has been around since 1978, which, as a first step, breaks an arbitrary polyhedron into a mesh of quadrilaterals.
It's not so much that triangles are the primitive, as much as our logic for combining multiple triangles into a mesh and texturing, lighting, and deforming them in continuous ways clearly has some gaps. It's definitely not an easy problem and it's a fun exercise to see how various silicon innovations unlocked increasingly accurate solutions, and what corners needed to be cut to hit 30fps back in the day.
Yeah, I don't think triangles will go away anytime soon. And, sometimes they're even preferred in certain cases by artists (think creases on jeans).
For someone who wrote textured triangles on a 386:
First rule of computer graphics: lie
Second rule of computer graphics: lie
Third rule of computer graphics: lie
It in no way replaces triangles, and very few will use it, for good reason.
Why?
In many cases modern renderers use triangles only a few pixels in size; you won't see a C1 discontinuity at that size.
All the outer edges of a quad still have C1 discontinuities against neighboring quads; all it fixes is the internal diagonal (a quick numeric sketch of that kink follows below).
It has performance & complexity overhead
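For the internal-diagonal point: interpolation across each triangle is affine, so its gradient is constant per triangle, and for anything that isn't a parallelogram the two triangles' gradients disagree along the shared diagonal - that's the kink. A small C++ sketch with made-up numbers (a trapezoid, with the corner attribute chosen to be the quad's own u coordinate):

    #include <cstdio>

    struct Vec2 { double x, y; };

    // Gradient of the affine interpolant over triangle (p0,p1,p2) with attribute
    // values (f0,f1,f2): solve the 2x2 linear system via Cramer's rule.
    static Vec2 affineGradient(Vec2 p0, Vec2 p1, Vec2 p2,
                               double f0, double f1, double f2) {
        Vec2 d1{p1.x - p0.x, p1.y - p0.y};
        Vec2 d2{p2.x - p0.x, p2.y - p0.y};
        double r1 = f1 - f0, r2 = f2 - f0;
        double det = d1.x * d2.y - d1.y * d2.x;
        return {(r1 * d2.y - r2 * d1.y) / det, (d1.x * r2 - d2.x * r1) / det};
    }

    int main() {
        // Trapezoid q0..q3, corner attribute = quad u coordinate (0,1,1,0),
        // triangulated along the diagonal q0-q2.
        Vec2 q0{0, 0}, q1{3, 0}, q2{2, 2}, q3{0, 2};
        Vec2 gA = affineGradient(q0, q1, q2, 0, 1, 1);
        Vec2 gB = affineGradient(q0, q2, q3, 0, 1, 0);

        // Two different constant gradients => a C1 kink along the diagonal q0-q2.
        std::printf("grad on tri(q0,q1,q2): (%.3f, %.3f)\n", gA.x, gA.y);  // (0.333, 0.167)
        std::printf("grad on tri(q0,q2,q3): (%.3f, %.3f)\n", gB.x, gB.y);  // (0.500, 0.000)
    }

The same reasoning applies at every quad-to-quad edge, which is why only the interior diagonal is actually fixed by per-quad interpolation.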
It's quite astonishing how complicated it is to draw lines in 3D graphics. As a novice it was a little unbelievable that the primitives for drawing lines were effectively limited to a solid, one-pixel-wide screen-space line. Want to draw a 2-pixel-wide line? Do it yourself with triangles.
Ironically, back in the OpenGL 2.0 days, it was a lot easier to do things like this.
Well, technically the API is still available pretty much everywhere (be it directly or via a wrapper library) and most hardware still has support for drawing lines, so it is still easy these days to do things like this too :-P.
I'm using it all the time when I want to draw lines in 3D.
(though as far as lines and OpenGL are concerned, I remember reading ages ago that not even SGI's implementation had full support for everything OpenGL was supposedly able to do)
The API still exists, but in most implementations things like line styles and thickness are no longer supported.
Not sure about line styles, but so far I haven't seen any implementation that doesn't support line thickness (though I only use OpenGL on desktop).
For most workflows this is a non-issue. When texturing a triangle mesh, the distortions are baked into the texture map, so no seams are visible at the quad diagonals.
This seems to happen really often! I think I remember another one, about color blending being done in the wrong gamma space on GPUs?
/? Barycentric
From "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
Phase from second order amplitude FWIU
Very interesting! This reminds me of how stumped I was learning about UV unwrapping for texturing. Even simple models are difficult to unwrap into easily editable textures. "Why can't I just draw on the model?"
Blender has a few plugins these days that make it a lot easier --- one that impressed me was Mio3 UV: https://extensions.blender.org/add-ons/mio3-uv/
You can draw on a model: https://youtu.be/WjS_zNQNVlw
I am definitely not an expert in 3D graphics... but this looks like such an astonishingly simple and effective method, it makes me question why this wasn't already thought of and picked up?
I get that with fixed-pipeline GPUs you do what the hardware and driver make you do, but with the advent of programmable pipelines, you'd think improving stuff like this would be among the first things people do?
Anyway, gotta run and implement this in my toy Metal renderer.
You want triangles in general; they behave way better (think, for example, of computing intersections).
Also, debugging colors on a 3D surface is not an easy task (debugging in general in 3D is not easy). So if the rendering is nice and seems correct, you tend to think it is.
And if it wasn't, and you didn't encounter something that bothers you, it doesn't matter that much; after all, what is important is the final rendering style, not that everything is perfectly accurate.
It's not really that simple; barycentric coordinate access is relatively recent. It involves asking the rasterizer for information and transforming that information into barycentric coordinates, and the correspondence of barycentric coordinates to vertices is unstable without further hardware support or further shader trickery. In the case of AMD GPUs, it's only RDNA2 and later that have hardware support for stable barycentrics.
And you're right that this has been thought of. There are other approaches for bilinearly interpolating quads that have been historically used, but they require passing extra data through the vertex shader and thus often splitting vertices.
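For anyone following along at home, the coordinates in question are just normalized signed areas, so the "transforming into barycentric coordinates" step boils down to something like this 2D C++ sketch (illustrative only - the real per-fragment paths and the vertex-correspondence stability issues mentioned above are a separate story):

    #include <cstdio>

    struct Vec2 { double x, y; };

    // Twice the signed area of triangle (a, b, c); positive for counter-clockwise order.
    static double signedArea2(Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // Barycentric coordinates of p with respect to triangle (a, b, c): each weight
    // is the area of the sub-triangle opposite its vertex over the full area.
    static void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c, double w[3]) {
        double area = signedArea2(a, b, c);
        w[0] = signedArea2(p, b, c) / area;   // weight of a
        w[1] = signedArea2(a, p, c) / area;   // weight of b
        w[2] = signedArea2(a, b, p) / area;   // weight of c  (w0 + w1 + w2 == 1)
    }

    int main() {
        Vec2 a{0, 0}, b{4, 0}, c{0, 4}, p{1, 1};
        double w[3];
        barycentric(p, a, b, c, w);
        std::printf("w = (%.3f, %.3f, %.3f)\n", w[0], w[1], w[2]);  // (0.5, 0.25, 0.25)
    }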
Because when working with a textured asset, these seam distortions simply don't occur. The inverse of the distortion is baked into the texture map of the asset. So the distortion between a triangle's world-space size vs. its texture-space size cancels out exactly, and everything looks correct.
Okay, so the same idea that I spitballed in a sibling thread:
> Btw. wouldn't it be possible in modern pipelines to remap or "disfigure" the texture when converting to triangles, so that it counters the bias accordingly? Ah, but that bakes in the shape of the quad, so it can't be modified runtime or it will get distorted again, right.
How does that work with animated meshes?
If you animate the mesh in a non-affine way, then seams are inevitable. The quad-rendering technique described in the article wouldn't save you from that either: seams would appear where two quads meet.
Because there is no reason to not use triangles.
Look at prideout's reply in the thread, the argument about having just one normal vector and the fact they can only describe one plane is huge. Unless you want more edge cases to deal with (hehe, pun intended), you're better off sticking to tris.
I know about the advantages and uniqueness properties of triangles. However, if the article is correct that artists prefer using quads when editing (I know absolutely nothing about 3D editing and didn't know that; I thought triangles were universal these days), something is clearly missing from the pipeline if neatly mapping textures to quads and then converting to triangles ends up messing up the interpolation.
Maybe we can continue to describe the geometry in triangles, but use an additional "virtual fourth handle point" (maybe expressed in barycentric coords of the other three, so everything is automatically planar) for stretching the interpolation in the expected way.
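As a quick sanity check of the "automatically planar" part (purely illustrative numbers, not a worked-out scheme): any point built as a barycentric combination of three points lies exactly in their plane, whatever the weights, as long as they sum to 1.

    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3   sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3   cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                                  a.z * b.x - a.x * b.z,
                                                  a.x * b.y - a.y * b.x}; }
    static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    int main() {
        Vec3 p0{0, 0, 0}, p1{2, 0, 1}, p2{0, 3, 1};  // three arbitrary points
        double b0 = -0.5, b1 = 0.75, b2 = 0.75;      // barycentric weights, sum to 1

        // The "virtual fourth handle" as a barycentric combination of the other three.
        Vec3 p3{b0 * p0.x + b1 * p1.x + b2 * p2.x,
                b0 * p0.y + b1 * p1.y + b2 * p2.y,
                b0 * p0.z + b1 * p1.z + b2 * p2.z};

        // Its offset from the plane of (p0, p1, p2) is exactly zero (up to rounding).
        Vec3 n = cross(sub(p1, p0), sub(p2, p0));
        std::printf("plane offset of p3: %g\n", dot(n, sub(p3, p0)));  // prints 0
    }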
Anyway, I'm just getting started with Metal, and this provided for a nice theme for experimentation.
Modern GPUs - and really GPUs for about 10 years - are stupidly fast when it comes to rasterizing triangles, so artists simply work with quads (and polygons of more than four vertices - when Blender added "n-gons" back in the day many artists rejoiced) and they just subdivide them if they want neater-looking interpolation before triangulating them. But a lot of detail nowadays comes from geometry anyway.
This could have been nice ~15 years ago, when much lower poly counts and much heavier reliance on textures for detail were common. But at the same time, GPUs were also much slower, so this approach would be slower too.
So in practice this is really a niche approach for when you want to have a large quad that isn't "perfectly" mapped to an underlying texture and you can't subdivide the quad. Nowadays such use cases are more likely to come from more procedural sides (or perhaps effects) of the engine than artists making assets for the game.
It might be useful for an artist wanting to use a modern engine running on a modern PC to make a low-poly retro-styled game though.
Interesting - low-poly (and 2D) are exactly where my interests are.
Btw. wouldn't it be possible in modern pipelines to remap or "disfigure" the texture when converting to triangles, so that it counters the bias accordingly? Ah, but that bakes in the shape of the quad, so it can't be modified runtime or it will get distorted again, right.
Artists usually prefer quads, but it really depends on what they're doing. It's also not unusual for them to willingly create a model of mostly quads with tris in the mix at the same time.
In the end, the representation of a quad in a 3D modelling editor doesn't really matter. Be it an actual quad or 2 tris. Because any sane artist wants every quad to have 1 normal vec; if they want 2 on a quad, they will happily use 2 tris instead.
It is a non-issue.
(btw I have done plenty of 3D art for games, I have also made my own vk renderer, I don't see a problem with using tris in the end, even if "visual quads" do de-clutter my work and do help reasoning about the flow of the model at hand)
When you unwrap UVs the interpolation issue is taken care of. The UV map accounts for it.
It's only for tessellation, like subdivision surfaces, where something like this would be nifty. That is to say, this is mainly nifty for the DCC tool's own renderer, letting it render slightly faster.
This actually seems quite easy to implement. Any thoughts on the performance hit a program takes when going this route instead of using one of the workarounds?
Is this really new? Will it become an option in Unity, Unreal and the like? The results seem convincing!
(deleted)
I think you can if you align the deflector with the tetryon field and feed all power from the warp core directly into the Heisenberg compensators.