
I wonder why people haven't coalesced around HLSL; I vastly prefer defining vertices as structs and having cbuffers over declaring every individual uniform or vertex variable with its specific location. It makes much more sense to me to define structs with semantics than to say location(X) for each variable or address uniforms by name/string. I often see GLSL-to-HLSL transpilers, but not vice versa, which is weird because HLSL is the more ergonomic language to me.
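For anyone unfamiliar with the contrast being drawn, here's a minimal sketch (all names hypothetical):

```hlsl
// HLSL: vertex inputs grouped in a struct, bound by semantic
struct VSInput
{
    float3 position : POSITION;
    float2 uv       : TEXCOORD0;
};

// Constants grouped in a cbuffer and addressed as plain struct fields
cbuffer PerFrame : register(b0)
{
    float4x4 viewProj;
};

// The GLSL equivalent declares each input with an explicit location:
//   layout(location = 0) in vec3 position;
//   layout(location = 1) in vec2 uv;
//   layout(binding = 0) uniform PerFrame { mat4 viewProj; };
```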


There's no need for transpilers these days; you can just compile HLSL to SPIR-V bytecode with dxc.

https://github.com/microsoft/DirectXShaderCompiler
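A typical invocation looks roughly like this (file names hypothetical; check dxc's own docs for the full flag set):

```
dxc -T vs_6_0 -E main -spirv shader.hlsl -Fo shader.spv
```

`-T` selects the target profile, `-E` the entry point, and `-spirv` switches the backend from DXIL to SPIR-V.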


I would even advise authoring the shaders directly in SPIR-V, with a basic SSA checker and a text-to-binary translator, and doing so very conservatively to avoid driver incompatibilities (i.e. nothing fancy).

I wonder if there is an open-source HLSL-to-SPIR-V compiler written in simple, plain C... but I am ready to do a bit more work to avoid depending on a complex HLSL compiler.
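If you do go the hand-authoring route, Khronos's SPIRV-Tools (C++ with a C API, so not quite plain C) already covers the text-to-binary and validation steps; a sketch:

```
spirv-as shader.spvasm -o shader.spv   # assemble the textual SPIR-V form
spirv-val shader.spv                   # check the module is well-formed
```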


The quantity and kind of SPIR-V issues DirectXShaderCompiler has make me believe its SPIR-V backend is not really used in serious settings.

Microsoft also directly states in the README that the SPIR-V support is a community contribution, which seems a deliberate choice of words.

I'd love to be proven wrong, though


Can WebGL run SPIR-V? That was my main issue


I'm pretty sure you have to transpile SPIR-V shaders to GLSL first. WebGPU has its own Rust-inspired language as well.
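For the WebGL path, SPIRV-Cross is the usual tool for the SPIR-V-to-GLSL step; something along these lines (exact version flags depend on your target):

```
spirv-cross shader.spv --es --version 300 --output shader.glsl
```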


It's worth noting that SPIR-V isn't compiled for the target ISA, and it's not even guaranteed to be in any particular optimized form; so when the GPU driver loads it, it may or may not decide to spend a lot of time running optimization passes before translating it to the actual machine code the GPU needs.

In contrast, Metal shaders can be pre-compiled to the actual ARM binary code the GPU runs.

And DirectX shader bytecode, DXIL, is (poorly) defined to be a low-level IR that LLVM spits out right before it would be translated to machine code, rather than a high-level IR like SPIR-V is. i.e. it is guaranteed to be in an optimized form already, so drivers do not expect to run any optimizations at load time.

SPIR-V seems a bit of a mess here, because you don't really know what the target GPU is going to do. How much time the driver spends optimizing the SPIR-V at load time varies between mobile and non-mobile GPUs, depending basically on what each GPU manufacturer felt behaved best given how game developers actually distribute optimized or unoptimized SPIR-V.
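One common mitigation is to run Khronos's spirv-opt offline, so drivers receive modules that are already in a reasonably optimized form regardless of what they do at load time; for example:

```
spirv-opt -O shader.spv -o shader.opt.spv
```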

Valve even maintains a fancy database of (SPIR-V, GPU driver) pairings which map to _actual_ precompiled shaders for all games distributed on their platform, so that they aren't affected by this.

Whew, what a mess shaders are.


Which is really how the experience with portable APIs from Khronos goes: most people who claim they are portable have never really done any serious cross-platform, cross-device programming with them.

At the end of the day there are so many "if this, if that, load this extension, load that extension, do this workaround, do that workaround" that it might as well be another API for all practical purposes.


> I wonder why people haven't coalesced around HLSL

I mean, people kind of have, because Unity uses it (and Unreal too I think).

> I vastly prefer defining vertices as structs

I agree, and note that WGSL lets you do this.
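A hedged sketch of what that looks like in WGSL (field names made up):

```wgsl
struct VertexInput {
    @location(0) position: vec3<f32>,
    @location(1) uv: vec2<f32>,
}

@vertex
fn vs_main(in: VertexInput) -> @builtin(position) vec4<f32> {
    return vec4<f32>(in.position, 1.0);
}
```

Inputs are still grouped in a struct, but the bindings are explicit `@location` attributes rather than a fixed list of semantic names.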

> It makes much more sense to me to define structs with semantics over saying location(X) for each var or addressing uniforms by name/string.

I feel like semantics are a straitjacket, as though the compiler is forcing me to choose from a predefined list of names instead of letting me choose my own. Having to use TEXCOORDn for things that aren't texture coordinates is weird.
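A typical (hypothetical) example of the mismatch:

```hlsl
struct VSOutput
{
    float4 position  : SV_Position;
    float3 worldPos  : TEXCOORD0;  // not a texture coordinate
    float  fogFactor : TEXCOORD1;  // also not a texture coordinate
};
```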

> I often see GLSL to HLSL transpilers, but not vice-versa, which is weird because HLSL is the more ergonomic language to me

Microsoft and Khronos support this now [1]. Microsoft's dxc can compile HLSL to SPIR-V, which can then be decompiled to GLSL with SPIRV-Cross.

In general, I prefer GLSL to HLSL because GLSL doesn't use semantics, and because of the differences in implicit conversions between scalars and vectors (this continually bites me when writing HLSL). But I highly suspect people just prefer what they started with, and I perfectly understand people who have the opposite opinion. In any case, WGSL feels like the best of both worlds, and a few newer engines like Bevy are adopting it.

[1]: https://www.khronos.org/blog/hlsl-first-class-vulkan-shading...


They have: HLSL has practically won the shading-language wars in the games industry, even on consoles; the PlayStation and Switch shading languages are heavily inspired by HLSL.

Also, as I noted in another comment, Khronos is basically ceding to Microsoft's HLSL the role of de facto shading language for Vulkan, as neither they nor anyone else has the monetary resources or the interest to keep improving GLSL as a language.

However, given the improvements in MSL and HLSL 2021, alongside SYCL and CUDA, eventually C++ will take that spot, and I doubt this is an area where any of the wannabe C++ replacements can do better.


Note that WGSL is closer to Rust than to C++, syntactically speaking anyway, and it's pretty much guaranteed to get traction as it'll be the only way to talk to the GPU using a modern API on the Web.


I doubt it, everyone is generating WGSL from their engine toolchains.

I don't know anyone who is happy with the idea of WGSL, and the amount of work it has generated.

Other than the WebGPU folks, that is.


I'm happy with it. It's a nice improvement over HLSL/GLSL in Bevy.



