Boring homework post
Feb. 7th, 2023 10:20 am

Eck's _Introduction to Computer Graphics_ notes that later major versions of OpenGL are harder to use but faster, and so introduces OpenGL 1.1 first, but in JavaScript. The catch is that WebGL (GL for JS) is based on OpenGL ES 1 or 2 (writing this from memory, probably with errors), which in turn derives from OpenGL 2 and/or 3. So Eck created a JS API shim that makes the subset of OpenGL 1.1 he teaches available on top of WebGL/OpenGL 2/3, just so he could teach OpenGL 1.1 with JavaScript in your browser.
For context, OpenGL was for decades basically C only, maybe with some Fortran bindings on some expensive Unix workstations, until a Cambrian explosion about a decade ago when other languages started getting lots of really good bindings to C libraries, to take advantage of all the work that had been done for C.
I absolutely could have used a real SGI O2 workstation to do this homework and then published it online using Eck's shim and the Emscripten C-to-JS compiler. In fact, it's a shame that I didn't. GL 1.1 being so much less fickle would have more than made up for the extra time involved in setting that nonsense up. As it is, most of the time I've spent on this has just been trying to figure out, for the 300th time, why some surface isn't being rendered. There's a lot of Simon Says. Oh, you didn't compute the vertex normals, so I didn't do it. Oh, this time you were supposed to compute the surface normals, so I didn't render your thing. Oh, you didn't tell the camera to update after changing something, so I just started ignoring it. Oh, you didn't tell the scene to update. Don't forget to make all your surfaces double sided unless you can guarantee they're drawn counterclockwise from the camera's point of view. And most of the available light types each don't work with most of the available surface types, so even though it looks like there are thousands of possible combinations, there are only about a dozen, and we forgot to document them, so you'll get this info from a "why doesn't my thing render" StackOverflow thread. It seems like anything anyone is doing uses a couple of light types, a couple of surface types, some fog effect, and lots of texture maps and shaders. Want shadows? Turn them on in the camera, on each object that can cast a shadow, and on each object you want to see a shadow on, cuz if you don't want a shadow on some surface, not setting it there is faster! And then my nearly-10-year-old integrated GPU core with shared memory, built into an Intel mobile CPU, is slower than snot anyway, even with everything fun and cool turned off. I was a bit angry before that I could never get video games to work on this machine, but now I understand. Drawing five small stupid triangles is a serious workout.
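The normals and winding-order gotchas come down to one bit of math: renderers use the face normal (a cross product of two edge vectors) both for lighting and for back-face culling. A minimal sketch in plain JavaScript (the helper names are my own, not any particular library's API) of computing a face normal and using its sign to tell whether a triangle is wound counterclockwise as seen by a camera looking down the -z axis:

```javascript
// Compute the (unnormalized) normal of triangle a-b-c via the cross
// product of two edge vectors. Vertices are [x, y, z] arrays.
function faceNormal(a, b, c) {
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  return [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
}

// For a camera on +z looking toward -z, a triangle is front-facing
// (counterclockwise on screen) when its normal's z component is positive.
function isFrontFacing(a, b, c) {
  return faceNormal(a, b, c)[2] > 0;
}

// CCW triangle in the x-y plane: front-facing, gets drawn.
console.log(isFrontFacing([0, 0, 0], [1, 0, 0], [0, 1, 0])); // true
// Same triangle with two vertices swapped (CW): back-facing, so a
// single-sided surface is culled -- "why doesn't my thing render".
console.log(isFrontFacing([0, 0, 0], [0, 1, 0], [1, 0, 0])); // false
```

"Double sided" just tells the renderer to skip that culling test, which is why it hides winding mistakes at the cost of drawing both faces.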
No wonder people completely shun integrated-graphics 3D. I'm going to need to figure out how to turn the resolution way down if we have to do anything more complicated, which seems likely.
But dabbling about with 3D is fun. Doing this on a 68000 at 320x resolution with integer math would be way, way faster and allow for better effects (I started on a 6502, using docs written for the 68000). It's hard not to dive all the way in and learn everything. Doing integer 3D on the CPU, things are highly optimized: you don't do a full 3D rotation if you're just rotating about one axis, and you set things up to avoid two- and three-axis rotations, or pre-compute your rotations and store them in a table. GL doesn't let you do that (that I know of yet... maybe shaders can?), so it seems like everyone instead heavily optimizes what they do and how they do it in other ways. The law of leaky abstractions strikes every time. Any general-purpose interface is going to be generally useless or generally slow or both. That this one is just kind of useless and kind of slow is relatively impressive. When things get pushed to the shaders, you're basically just back to writing C-like code to do stuff, using an OpenGL backdoor to bypass OpenGL, and that's how OpenGL is fast. But APIs that get out of the way are good too, I guess, or at least that nullifies badness without necessarily leaving you with any reliable abstractions.
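That precomputed-table trick fits in a few lines. Here's a sketch in plain JavaScript of the old-school version (my own constants and function names, assuming 256 angle steps and sine values pre-scaled by 2^14), the kind of fixed-point single-axis rotation integer rasterizers did on the CPU:

```javascript
// Fixed-point single-axis rotation, demoscene style: 256 angle steps,
// sine values pre-scaled by 2^14 and stored in a table, so the inner
// loop is all integer multiplies, adds, and shifts -- no trig, no floats.
const STEPS = 256;
const SCALE = 1 << 14;
const SIN = new Int32Array(STEPS);
for (let i = 0; i < STEPS; i++) {
  SIN[i] = Math.round(Math.sin((i * 2 * Math.PI) / STEPS) * SCALE);
}
// cos is just sin shifted a quarter turn; the mask wraps the index.
const COS = (i) => SIN[(i + STEPS / 4) & (STEPS - 1)];

// Rotate integer point (x, y) about the z axis by angle index `a`.
// No full 3x3 matrix: z passes through untouched.
function rotateZ(x, y, a) {
  const s = SIN[a & (STEPS - 1)];
  const c = COS(a & (STEPS - 1));
  return [(x * c - y * s) >> 14, (x * s + y * c) >> 14];
}

console.log(rotateZ(1000, 0, 64)); // quarter turn: [0, 1000]
```

The only "slow" part, `Math.sin`, runs once at startup to fill the table; after that every rotation is table lookups and integer arithmetic, which is exactly the kind of special-casing a general-purpose matrix pipeline doesn't let you express.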