virtualritz 12 hours ago

Curiously, what people commonly refer to as 'Wavefront OBJ' is merely a tiny subset of that format, i.e. the part dealing with polygons.

The format also supports e.g. higher-order curves and surfaces, and apps like Maya or Rhino3D can read and write OBJ files containing such data. [1]

Writing a parser for the polygon subset also comes with some caveats.

If your target is a GPU you probably need to care about robust triangulation of n-gons and about converting per-face-per-vertex data to per-vertex data on disconnected triangles.

Vice versa, if you are feeding data to an offline renderer, you absolutely want to preserve such information.
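For the GPU path, a minimal sketch of the first step might look like this (hypothetical helper, assuming convex faces and a flat array of corner indices; concave faces need a real ear-clipping pass instead):

```c
#include <stddef.h>

/* Fan-triangulate one convex n-gon given its corner indices.
   Emits 3*(n-2) indices into out (sized by the caller) and
   returns the number of triangles produced. */
static size_t fan_triangulate(const int *face, size_t n, int *out)
{
    size_t tris = 0;
    for (size_t i = 1; i + 1 < n; i++) {
        out[3 * tris + 0] = face[0];
        out[3 * tris + 1] = face[i];
        out[3 * tris + 2] = face[i + 1];
        tris++;
    }
    return tris;
}
```

The second step, splitting per-face-per-vertex attributes, then amounts to emitting one unique GPU vertex per distinct (position, normal, uv) index triple rather than per position index.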

I believe the tobj Rust crate is one of the few OBJ importers that handles all edge cases. [2] If you think it doesn't, let me know and I will fix that.

This is surprising for people familiar with the requirements of only one of offline rendering or GPU rendering.

I.e. if you write an OBJ reader meant to serve both, this can become a challenge; see e.g. an issue I opened here. [3]

1. https://paulbourke.net/dataformats/obj/

2. https://docs.rs/tobj/latest/tobj/struct.LoadOptions.html

3. https://github.com/assimp/assimp/issues/3677

  • the__alchemist 8 hours ago

    How does this compare to the `obj` crate? I'm assuming that doesn't handle cases beyond the common one well? I ask because I have a 3D rendering/GUI application lib in Rust (the `graphics` crate), and for OBJ files, I thinly wrap that to turn it into a mesh.

    In my own applications, it hasn't come up, as I've been mostly using primitives and dynamically-generated meshes, but am wondering if I should switch.

  • grandempire 9 hours ago

    > robust triangulation of n-gons and making per-face-per-vertex data per-vertex on disconnected triangles.

    This is a simple post-process step after parsing.

suspended_state 9 hours ago

That's very nice work, and many interesting concepts introduced in the post (for example, arenas, length bounded strings, the Cut struct).

One caveat though:

> If the OBJ source cannot fit in memory, then the model won’t fit in memory.

I don't think this is true: the textual representation of a (single-precision) float is typically equal to or larger than its 4-byte binary representation, which is the floating-point type the renderer given later in the post uses. The numbers in the cube example are unlikely to occur in real-world files, where one would probably expect more than 2 digits of decimal precision. That said, for double-precision floats it might be true in many scenarios, but I would not make it a cardinal rule either way.
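As a quick sanity check of that size argument, a float printed with enough digits to round-trip exactly ("%.9g", up to 9 significant digits) is usually longer than its 4 binary bytes. A tiny helper (hypothetical, just for illustration):

```c
#include <stdio.h>

/* Length of the round-trip text form of a float: "%.9g" keeps up
   to 9 significant digits, enough to recover the exact binary
   value when the text is parsed back. */
static int float_text_len(float f)
{
    char buf[32];
    return snprintf(buf, sizeof buf, "%.9g", f);
}
```

A full-precision value like 0.123456789f prints as 11 characters versus sizeof(float) == 4, though a short value like "1" goes the other way, which is why this is a heuristic rather than a rule.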

This corner cut fits within the objective of the post, which, imho, isn't to make the most efficient program, but provide a great foundation in C to build upon.

  • dahart 8 hours ago

    The sentence you quoted must be true because the input file and the output binary model both need to fit in memory at the same time.

    • suspended_state 6 hours ago

      I guess the following statement would be true: if the process cannot load the whole file in memory and allocate memory for the model at the same time, then it won't be able to run successfully. Strictly speaking, the sentence I quoted doesn't follow from that. This is just me quibbling, though, because the intended meaning was most likely what you said.

pixelesque 14 hours ago

As someone who has written multiple OBJ readers over the years, this is interesting, but it notably seems to ignore texture coordinates (UV coords) and doesn't support object groups.

Also, OBJ material support is an absolute nightmare if you ever try to support it: there's technically a sort of original standard (made around 30 years ago, so understandably somewhat messy given how far materials and physically-based shading have come in the meantime), but different DCCs do vastly different things, especially for texture paths and things like specular/roughness...

  • milesrout 11 hours ago

    I think it doesn't support 'vt' because the techniques are adequately demonstrated just with faces and normals, so it would be more code without serving any pedagogical purpose. The author would, I think, not suggest you copy this code and try to use it as a library or something, but that you should develop the skillset to be able to write code like this when you need it.

pjmlp 11 hours ago

The usual rite of passage into 3D programming in the old days, adding all the things that OpenGL doesn't do out of the box like in other 3D frameworks, naturally the 3D assets loading code was OBJ based.

Nowadays you can have the same fun by rewriting the previous sentence using Vulkan instead of OpenGL, and glTF instead of OBJ.

  • rossant 9 hours ago

    "The same fun" but also likely orders of magnitude more effort (and headaches).

    • pjmlp 8 hours ago

      Indeed, at least now there is an SDK as starting point.

tylermw 8 hours ago

Note that there's a great C99/C++ single-header library, tinyobjloader, that provides robust (in my experience) and full-featured OBJ loading, including triangulation of n-gons and full material parsing.

https://github.com/tinyobjloader/tinyobjloader

It's fairly mature and handles many of the parsing footguns you'll inevitably run into trying to write your own OBJ parser.

animal531 8 hours ago

This is one of those things where, for literally every 3D tool you test it against, you're going to find new edge cases that break the code.

bvrmn 10 hours ago

> Str substring(Str s, ptrdiff_t i)

The function has a quite questionable implementation. It fails miserably for strings with length < i.
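For illustration, a clamped variant (hypothetical names, assuming the post's Str layout of pointer plus length) stays defined for any i:

```c
#include <stddef.h>

typedef struct {
    char     *data;
    ptrdiff_t len;
} Str;

/* Clamped substring: an out-of-range i yields an empty tail
   instead of forming an out-of-bounds pointer (which is UB in C
   even if never dereferenced). An assert(i >= 0 && i <= s.len)
   would be the alternative if callers must uphold the
   precondition instead. */
static Str substring_clamped(Str s, ptrdiff_t i)
{
    if (i < 0)     i = 0;
    if (i > s.len) i = s.len;
    s.data += i;
    s.len  -= i;
    return s;
}
```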

  • Joker_vD 10 hours ago

    Only because every other Str-accepting function uses "s.len" instead of "s.len > 0" as the "is s non-empty" test.

    Still, this function is called only once, and in that call, its i argument is always <= length, so it's perfectly fine (it's only UB if you actually pass it a bad argument).

    • bvrmn 10 hours ago

      > Still, this function is called only once, and in that call, its i argument is always <= length, so it's perfectly fine (it's only UB if you actually pass it a bad argument).

      This very mindset is a source of bugs and vulnerabilities. The author gets high marks from me on safety and "make it hard to use wrong", so it's quite surprising to see such code.

      • grandempire 9 hours ago

        Satisfying preconditions is a requirement to make functioning programs.

        The insanity would be assuming that every function is valid for the Cartesian product of all possible arguments.

        What he probably needs is an assert.

        • Joker_vD 8 hours ago

          > Satisfying preconditions is a requirement to make functioning programs.

          > The insanity would be assuming that every function is valid for the Cartesian product of all possible of its arguments.

          Would it? That reminds me of a recent post on HN about proving the long (binary) division algorithm with Hoare logic. It uses the "d > 0" precondition and proves that, indeed, the algorithm arrives at the required postcondition. However, the algorithm still terminates and produces something even when d == 0. What does it compute in this case? Is it useful? Should such questions even be considered?

          • grandempire 8 hours ago

            > Should such questions even be considered?

            Yes, a better understanding of the problem gives you a better understanding of the preconditions. Always ask if you have that right and weaken accordingly.

        • bvrmn 8 hours ago

          For this particular case it's trivial to fix the substring function and extend the possible inputs. Your proposition seems to be "do nothing because it's futile". That's simply wrong.

          • grandempire 8 hours ago

            Will that make the function more useful?

            In general you can write better code when you can make assumptions.

            Code that handles every possibility is filled with error-prone branching that duplicates effort at every function.

            • bvrmn 10 minutes ago

              It would reduce the number of assumptions, especially ones living only in your head. Generally that's a good thing, isn't it? Literally a large portion of C bugs are due to broken assumptions. WTF man?

      • UncleEntity 10 hours ago

        Reminds me of the time I was chastised for adding a NULL check to keep <program> from segfaulting by the dev responsible for said segfault because crashing without even as much as a warning was "intended behavior". IIRC this was over reading a file from disk and just assuming it existed.

writebetterc 12 hours ago

This way of writing programs is also quite a lot faster than depending on fgets/getline and the like. The integer and float parsing is probably slow, though.
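For what it's worth, the hand-rolled number parsers such loaders use are often of this shape (a hypothetical sketch; real ones add sign and overflow handling):

```c
#include <stddef.h>

/* Minimal unsigned decimal parser: consumes digits from s (length
   n), stores the value in *out, and returns the number of bytes
   consumed. Skipping strtol's locale lookups and errno traffic is
   where hand-rolled parsers usually win. No overflow check here. */
static size_t parse_uint(const char *s, size_t n, unsigned long *out)
{
    size_t i = 0;
    unsigned long v = 0;
    while (i < n && s[i] >= '0' && s[i] <= '9') {
        v = v * 10 + (unsigned long)(s[i] - '0');
        i++;
    }
    *out = v;
    return i;
}
```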

My question is: Does the author actually use Windows XP?

  • claytonaalves 12 hours ago

    > Does the author actually use Windows XP?

    I've switched to XP (from Windows 7, on a VM) and the performance is astounding even on limited hardware settings. No bloatware, just good old Win32 x86.

    • MSFT_Edging 10 hours ago

      I recently pulled an old laptop out of the closet with a mostly stock image of XP to play with an old device and it felt so snappy.

      It's sad how bloated things have gotten.

  • kilpikaarna 10 hours ago

    > My question is: Does the author actually use Windows XP?

    Significant overlap between the types of people who use WinXP and write 3D file format importers in C, I think! Though I prefer 7 myself.

kleiba 14 hours ago

Who can do it as a one liner with a regex?

  • creaktive 13 hours ago

    Been there, done that... Not worth it

turnsout 12 hours ago

This sent me down a rabbit hole reading about the author's style of having an "Arena allocator," [0] which was fascinating. I often did something similar when writing ANSI C back in the day—allocate a big enough chunk of memory to operate, and do your own bookkeeping. But his Arena implementation looks more flexible and robust.

  [0]: https://nullprogram.com/blog/2023/09/27/
xyzsparetimexyz 13 hours ago

[flagged]

  • jbreckmckye 12 hours ago

    I did some OBJ processing the other day, it's an easy format for working with PlayStation 1 3D models.

maccard 11 hours ago

I think this article serves as a perfect example of why we should consider moving on from C. The first third of this article is "how to do memory allocation and work with strings".

The bit about OBJ parsing is neat, though.

  • writebetterc 11 hours ago

    Why isn't the conclusion "There is a far better way of using C, which the stdlib doesn't promote... But could"? The fact of the matter is that any sufficiently large C codebase will do this stuff anyway, it's not a language issue.

    Good Rust code will also care about memory allocations to the same degree as the C code, the difference is that Rust will help you out in making sure your thinking is correct. My experience is that good systems programming has thinking about memory allocations not as an annoying side issue, but as a main concern.

  • flohofwoe 11 hours ago

    If you even remotely care about performance you'll need to take care of such details in any language, and some high-level 'managed' languages actually make that harder than C, because you need to work around or even against built-in language features.

    • maccard 9 hours ago

      I've spent my entire career working in C++ writing low level code for video games (and a decent chunk of it writing backend services for said games, and the glue between the two).

      If you want to talk about performance, you better come armed with numbers. If you don't, you're not writing "high performance" code.

  • fsloth 10 hours ago

    A large part of high performance programming using any language is about memory management.

    For stuff that you run only for yourself and _always_ executes in a blink of an eye I do agree.

    • maccard 9 hours ago

      I've spent my career ping ponging between writing fast low level code for games, and online systems. If you want to talk about high performance code, benchmarks are a requirement. There's no numbers here. It only talks about "Robust", which OP defines as:

      > By robust I mean no undefined behavior for any input, valid or invalid; no out of bounds accesses, no signed overflows. Input is otherwise not validated. Invalid input may load as valid by chance, which will render as either garbage or nothing.

      Robust is a baseline for high performance programming.

  • tocariimaa 6 hours ago

    Whenever an article by this author gets posted on HN, there's a Rust fanatic saying his code doesn't work and how wrong he is for committing the sin of using C in the current year.

    • maccard 5 hours ago

      I didn't mention rust. I think he's used enough features of C++ to warrant using it instead.