Code Generation
Backends and lowering: Cranelift, native, LLVM, GPU, and debug info.
Sounio supports multiple code generation backends so it can provide:
- fast local execution (JIT)
- native artifacts (object files / shared libraries)
- GPU kernels (PTX/SPIR-V/Metal)
- debug information for diagnostics and tooling
Where It Lives
- `crates/souc/src/codegen/` — lowering and backend-specific generators
- `crates/souc/src/backend/` — runtime glue (native runtime, effect dispatch, GPU bridge)
- `crates/runtime/` — runtime support crate
Backends (Overview)
At the architecture level:
```
HLIR / MIR
  -> backend selection
  -> native artifacts (ELF/Mach-O/PE) or JIT
  -> GPU kernels (PTX/SPIR-V/MSL)
  -> debug info + source maps
```
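The selection step above can be sketched as a simple dispatch over build-mode flags. This is an illustrative sketch only: the enum variants, `GpuTarget`, and `select_backend` are hypothetical names, not souc's actual API, and the priority order shown is an assumption.

```rust
// Hypothetical backend-selection sketch; names and policy are
// illustrative, not souc's real code.

#[derive(Debug, Clone, Copy, PartialEq)]
enum GpuTarget {
    Ptx,   // NVIDIA CUDA
    SpirV, // Vulkan/OpenCL-style ecosystems
    Msl,   // Apple Metal shading language
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum Backend {
    CraneliftJit,   // fast local execution
    Native,         // ELF/Mach-O/PE without an LLVM dependency
    Llvm,           // optimized, portable AOT (feature-gated)
    Gpu(GpuTarget), // kernel lowering
}

/// Pick a backend from build-mode flags (illustrative policy only).
fn select_backend(jit: bool, want_llvm: bool, gpu: Option<GpuTarget>) -> Backend {
    if let Some(target) = gpu {
        Backend::Gpu(target)
    } else if jit {
        Backend::CraneliftJit
    } else if want_llvm {
        Backend::Llvm
    } else {
        Backend::Native
    }
}

fn main() {
    assert_eq!(select_backend(true, false, None), Backend::CraneliftJit);
    assert_eq!(
        select_backend(false, false, Some(GpuTarget::Ptx)),
        Backend::Gpu(GpuTarget::Ptx)
    );
    println!("{:?}", select_backend(false, false, None));
}
```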
Cranelift
Cranelift is used for fast compilation and JIT-style workflows.
Entry points commonly live under:
- `crates/souc/src/codegen/cranelift.rs`
- `crates/souc/src/codegen/mir_cranelift.rs`
Typical flow:
```
MIR (SSA)
  -> MIR optimizations (optional)
  -> Cranelift IR (CLIF)
  -> machine code (JIT) / object output (AOT, when supported)
```
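The MIR-to-CLIF step can be illustrated with a toy SSA instruction set lowered to CLIF-like text. Everything here is hypothetical: `MirInst` and `lower_to_clif_text` are invented for the sketch and do not reflect souc's actual MIR, though the emitted `iconst`/`iadd`/`return` forms mirror real CLIF opcode names.

```rust
// Toy SSA "MIR" lowered to CLIF-like text. Illustrative only.

#[derive(Debug)]
enum MirInst {
    Const(u32, i64),    // vN = integer constant
    Add(u32, u32, u32), // vN = vA + vB
    Ret(u32),           // return vN
}

fn lower_to_clif_text(insts: &[MirInst]) -> String {
    let mut out = String::from("block0:\n");
    for inst in insts {
        match inst {
            MirInst::Const(d, imm) => out.push_str(&format!("  v{d} = iconst.i64 {imm}\n")),
            MirInst::Add(d, a, b) => out.push_str(&format!("  v{d} = iadd v{a}, v{b}\n")),
            MirInst::Ret(v) => out.push_str(&format!("  return v{v}\n")),
        }
    }
    out
}

fn main() {
    let mir = [
        MirInst::Const(0, 2),
        MirInst::Const(1, 40),
        MirInst::Add(2, 0, 1),
        MirInst::Ret(2),
    ];
    print!("{}", lower_to_clif_text(&mir));
}
```

In the real pipeline this text form would instead be built through Cranelift's in-memory IR builders before machine-code emission.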
Native (no LLVM)
There is a “native” backend path intended to avoid an LLVM dependency, emitting platform-specific artifacts (ELF/Mach-O/PE) through the compiler’s native runtime glue.
See:
- `crates/souc/src/backend/native/`
- `crates/souc/src/backend/native/elf.rs` (Linux) and platform-specific modules
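A no-LLVM backend has to emit object-file bytes directly. As a minimal sketch, the following builds only the 16-byte `e_ident` prefix of an ELF64 header, the first thing a file like `elf.rs` must write; the function name is hypothetical and this is nowhere near a complete object file.

```rust
// Sketch: the 16-byte e_ident prefix of an ELF64 little-endian header.
// Field values follow the ELF specification; the function is illustrative.

fn elf64_ident_little_endian() -> [u8; 16] {
    let mut ident = [0u8; 16];
    ident[0..4].copy_from_slice(&[0x7f, b'E', b'L', b'F']); // EI_MAG: magic bytes
    ident[4] = 2; // EI_CLASS: ELFCLASS64 (64-bit)
    ident[5] = 1; // EI_DATA: ELFDATA2LSB (little-endian)
    ident[6] = 1; // EI_VERSION: EV_CURRENT
    // bytes 7..16 stay zero: EI_OSABI "none" + ABI version + padding
    ident
}

fn main() {
    let ident = elf64_ident_little_endian();
    assert_eq!(&ident[0..4], b"\x7fELF");
    println!("{:02x?}", ident);
}
```

Mach-O and PE emitters follow the same pattern with their own header layouts.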
LLVM (optional)
LLVM support is feature-gated and used for optimized and portable AOT compilation.
See:
- `crates/souc/src/codegen/llvm/`
GPU
GPU code generation is implemented under:
- `crates/souc/src/codegen/gpu/`
The compiler can lower “kernel-like” code into GPU IR forms. Actual execution depends on the runtime and available toolchains (e.g., CUDA toolkit).
Targets commonly referenced in the codebase:
| Target | Notes |
|---|---|
| PTX | NVIDIA CUDA |
| SPIR-V | Vulkan/OpenCL-style ecosystems |
| MSL | Apple Metal shading language |
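For the PTX target, the end product of lowering is ultimately text in NVIDIA's PTX assembly format. The sketch below emits a do-nothing PTX module for a named entry point; the function and kernel name are hypothetical, and the version/target directives are just plausible values, not what souc actually emits.

```rust
// Illustrative only: a minimal PTX module as text. Real lowering goes
// through GPU IR; directive values here are assumptions for the sketch.

fn emit_ptx_stub(kernel_name: &str) -> String {
    format!(
        ".version 7.0\n\
         .target sm_70\n\
         .address_size 64\n\
         .visible .entry {kernel_name}()\n\
         {{\n\
         \tret;\n\
         }}\n"
    )
}

fn main() {
    let ptx = emit_ptx_stub("add_one");
    assert!(ptx.contains(".visible .entry add_one()"));
    print!("{ptx}");
}
```

SPIR-V, by contrast, is a binary format, so that path would assemble words rather than text; MSL is C++-like source consumed by Apple's Metal compiler.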
Debug Info
Debug info and source mapping live under:
- `crates/souc/src/codegen/debug/`
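The core structure behind both debug info and source maps is a table mapping machine-code offsets back to source locations. A minimal sketch, with entirely hypothetical type and function names:

```rust
// Hypothetical line table: maps emitted-code offsets to source positions.
// Names are illustrative, not souc's actual debug-info types.

#[derive(Debug, Clone, Copy)]
struct LineEntry {
    code_offset: usize, // byte offset into emitted machine code
    line: u32,          // 1-based source line
    col: u32,           // 1-based source column
}

/// Resolve an offset to the last entry at or before it.
/// Assumes `table` is sorted by `code_offset`.
fn lookup(table: &[LineEntry], offset: usize) -> Option<(u32, u32)> {
    table
        .iter()
        .take_while(|e| e.code_offset <= offset)
        .last()
        .map(|e| (e.line, e.col))
}

fn main() {
    let table = [
        LineEntry { code_offset: 0, line: 1, col: 1 },
        LineEntry { code_offset: 8, line: 2, col: 5 },
        LineEntry { code_offset: 20, line: 3, col: 5 },
    ];
    assert_eq!(lookup(&table, 10), Some((2, 5)));
    assert_eq!(lookup(&table, 0), Some((1, 1)));
    println!("{:?}", lookup(&table, 25));
}
```

Real backends encode this same mapping into DWARF line programs (native/LLVM paths) or JSON-style source maps, but the lookup semantics are the same.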