I've been writing C++ for a decade, and I think it's time to make some changes. So I kicked off my Rust learning recently.
I made my version of the hello world triangle, and it's safe to say I can work with Rust now!
Next I'll detail how it's done, and hopefully this helps people who just want to start a D3D12 implementation with Rust too.
Since my Rust is still at a rookie level, the coding style might not be the best. Feel free to point out anything you think is a problem or could be improved.
Git Link: https://github.com/EasyJellySniper/RustD3D12
Environment setup
There are many tutorial resources about Rust already. Simply use whatever you're familiar with for the code management.
I'm currently on cargo version 1.80.0 (376290515 2024-07-16), and using VSCode as my IDE.
The implementation relies on the Rust for Windows crate. Here is the list of my dependencies (Cargo.toml):
[package]
name = "RustD3D12"
version = "0.1.0"
edition = "2021"
[dependencies]
libc = "0.2.158"
windows-core = "0.58.0"
windows-sys = "0.59.0"
[dependencies.windows]
version = "0.58.0"
features = [
"Win32",
"Win32_System_LibraryLoader",
"Win32_UI_WindowsAndMessaging",
"Win32_Graphics_Gdi",
"Win32_Graphics_Direct3D12",
"Win32_Graphics_Direct3D",
"Win32_Graphics_Dxgi",
"Win32_Graphics_Dxgi_Common",
"Win32_System_Threading",
"Win32_Security",
"Win32_Graphics_Direct3D_Fxc",
]
These are mainly for Win32 functions, and I installed libc for some memory operations.
There are also D3D examples in the Rust for Windows GitHub repository, which are good starting points too. I didn't use them as I prefer exploring by myself.
Note that you probably need to install the Windows SDK too.
Game loop setup
Starting with my main.rs: the first step is always to create the main window.
Unlike a C++ Win32 application, which provides a wWinMain entry point with an HINSTANCE, I don't think there is an equivalent on the Rust side.
So I get my HINSTANCE from GetModuleHandleW() instead.
let app_instance : HINSTANCE = GetModuleHandleW(None).unwrap().into();
let app_class_name = PCWSTR::from_raw(w!("Rust D3D12"));
// set up the WNDCLASSEXW struct; mem::size_of is the Rust version of C++'s sizeof
let app_class = WNDCLASSEXW
{
cbSize : mem::size_of::<WNDCLASSEXW>().try_into().unwrap(),
style : CS_HREDRAW | CS_VREDRAW,
cbClsExtra : 0,
cbWndExtra : 0,
hInstance : app_instance,
hIcon : HICON::default(),
hCursor : LoadCursorW(None, IDC_ARROW).unwrap(),
hbrBackground : GetSysColorBrush(COLOR_WINDOWFRAME),
lpszMenuName : PCWSTR::null(),
lpszClassName : app_class_name,
hIconSm : HICON::default(),
lpfnWndProc : Some(wnd_proc),
};
RegisterClassExW(&app_class);
let render_width : u32 = 1920;
let render_height : u32 = 1080;
let app_window = CreateWindowExW(WINDOW_EX_STYLE::default(), app_class_name, PCWSTR::from_raw(w!("Rust D3D12"))
, WS_OVERLAPPED | WS_MINIMIZEBOX | WS_SYSMENU, 0, 0, render_width as i32, render_height as i32, None, None, app_instance, None).unwrap();
I don't need a menu or icons at the moment, so they're either set to a default value or null. The w! macro from the windows-sys crate can be useful when you want to build a PCWSTR.
And here is the wnd_proc function:
unsafe extern "system" fn wnd_proc(h_wnd : HWND, message : u32, w_param : WPARAM, l_param : LPARAM) -> LRESULT
{
match message
{
WM_DESTROY =>
{
PostQuitMessage(0);
LRESULT::default()
}
_ => return DefWindowProcW(h_wnd,message,w_param,l_param),
}
}
match is the Rust version of switch; simply process the messages based on your needs.
After the window creation, it proceeds to the graphics device initialization and the game loop.
// initialize graphic device
if !graphic_device::initialize_d3d12(app_window, render_width, render_height)
{
return;
}
// initialize demo resources
hello_world_triangle::create_pipeline();
// show the window and enter the game loop after window and graphic device are created.
let _ = ShowWindow(app_window, SW_SHOW);
let mut msg = MSG::default();
while msg.message != WM_QUIT
{
if PeekMessageW(&mut msg, None, 0, 0, PM_REMOVE).as_bool()
{
let _ = TranslateMessage(&msg);
DispatchMessageW(&msg);
}
else
{
graphic_device::update();
hello_world_triangle::render(render_width, render_height);
// present and wait for the GPU fence. this is just for demo and not the best way to do it.
// a ring-buffer workflow for frame resources is the way to go for better CPU-GPU efficiency.
graphic_device::present();
graphic_device::wait_for_gpu();
}
}
graphic_device::shutdown();
D3D12 Initialization
Next is the classic graphics device initialization, of course. It starts by listing the available adapters and selecting a compatible one to create the D3D12 device in my graphic_device.rs.
I cache most D3D12 interfaces as Option<> so I can call the is_some() function to check whether they're valid.
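For context, these cached interfaces live in static mut globals at module scope, roughly like this (a sketch matching the names used below; accessing them requires unsafe):
static mut GDXGI_FACTORY : Option<IDXGIFactory4> = None;
static mut GD3D12_DEVICE : Option<ID3D12Device> = None;
static mut GDEBUG_INFO_QUEUE : Option<ID3D12InfoQueue> = None;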
fn create_device()
{
unsafe
{
let mut dxgi_factory_flag : DXGI_CREATE_FACTORY_FLAGS = DXGI_CREATE_FACTORY_FLAGS::default();
// enable debug layer
let mut debug_controller : Option<ID3D12Debug> = None;
if let Ok(()) = D3D12GetDebugInterface(&mut debug_controller)
{
debug_controller.as_ref().unwrap().EnableDebugLayer();
dxgi_factory_flag = dxgi_factory_flag | DXGI_CREATE_FACTORY_DEBUG;
}
// create DXGI factory
if let Ok(x) = CreateDXGIFactory2::<IDXGIFactory4>(dxgi_factory_flag)
{
GDXGI_FACTORY = Some(x);
}
// create d3d device after dxgi factory is created
let mut d3d12_device : Option<ID3D12Device> = None;
if GDXGI_FACTORY.is_some()
{
// try adapters from the highest feature level to lowest
let feature_levels : [D3D_FEATURE_LEVEL; 3] = [D3D_FEATURE_LEVEL_12_2, D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0];
let feature_levels_name : [&str;3] = ["12_2","12_1","12_0"];
let mut feature_index = 0;
let mut adapter_index;
'FeatureLevelLoop : loop
{
adapter_index = 0;
loop
{
if let Ok(x) = GDXGI_FACTORY.as_ref().unwrap().EnumAdapters1(adapter_index)
{
let adapter_desc = x.GetDesc1().unwrap();
if (adapter_desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE.0 as u32) > 0
{
// skip software adapters; advance the index first so we don't spin on the same adapter forever
adapter_index = adapter_index + 1;
continue;
}
// whenever an adapter succeeds initialization with the current feature level, jump out
if let Ok(_) = D3D12CreateDevice(&x, feature_levels[feature_index], &mut d3d12_device)
{
println!("Selected adapter for D3D12CreateDevice: {}", String::from_utf16(&adapter_desc.Description).unwrap());
println!("Intialized with feature level: {}", feature_levels_name[feature_index]);
GD3D12_DEVICE = d3d12_device;
break 'FeatureLevelLoop;
}
}
else
{
// jump out when EnumAdapters1 stops returning anything.
break;
}
adapter_index = adapter_index + 1;
}
feature_index = feature_index + 1;
if feature_index >= feature_levels.len()
{
// no adapter supports any of the feature levels, give up
break;
}
}
}
// cache an ID3D12InfoQueue interface if the device creation and the debug layer are ready.
// since Visual Studio Code fails to catch the D3D debug output, I'm going to print it out manually.
if GD3D12_DEVICE.is_some() && debug_controller.is_some()
{
GDEBUG_INFO_QUEUE = Some(GD3D12_DEVICE.as_ref().unwrap().cast().unwrap());
}
}
}
The first thing to do is enabling the D3D debug layer. Any error or warning it reports can be useful.
Unless you're 100% sure the warnings/errors are false positives (e.g. the validation message in my sample is caused by the RTSS overlay), no message should be ignored.
The problem is that VSCode can't seem to print D3D validation errors directly. I've checked OUTPUT, DEBUG CONSOLE, and TERMINAL; none of them show the messages.
So at the end of initialization, I cache an ID3D12InfoQueue interface as well. The .cast() function from the windows-core crate is the equivalent of QueryInterface().
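Here's a minimal sketch of how the cached info queue can be drained and printed each frame. print_debug_messages is an illustrative name, and the exact GetMessage parameter shapes may vary slightly between windows crate versions; this is also one of the places where libc comes in handy for raw allocations:
unsafe fn print_debug_messages(info_queue : &ID3D12InfoQueue)
{
    for i in 0..info_queue.GetNumStoredMessages()
    {
        // the first call only asks for the required byte size, the second fills the message
        let mut message_size : usize = 0;
        if info_queue.GetMessage(i, None, &mut message_size).is_ok()
        {
            let message = libc::malloc(message_size) as *mut D3D12_MESSAGE;
            if info_queue.GetMessage(i, Some(message), &mut message_size).is_ok()
            {
                // pDescription is an ANSI string
                let desc = std::ffi::CStr::from_ptr((*message).pDescription.0 as *const i8);
                println!("[D3D12] {}", desc.to_string_lossy());
            }
            libc::free(message as *mut libc::c_void);
        }
    }
    info_queue.ClearStoredMessages();
}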
After the debug layer is set, I create the DXGI factory and try to create a device starting from the highest feature level.
Besides the device creation, I have create_command_buffers(), create_swapchain(), and create_fence() for the other essentials.
I'll skip those here as the post would get too long, but the fence wait is sketched below.
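wait_for_gpu() boils down to the classic signal-and-wait pattern. A sketch, assuming the command queue, fence, fence value, and Win32 event are the ones created during initialization (the names here are illustrative):
unsafe fn wait_for_gpu_sketch(queue : &ID3D12CommandQueue, fence : &ID3D12Fence, fence_value : &mut u64, fence_event : HANDLE)
{
    // signal a new fence value from the GPU side and block the CPU until it's reached
    *fence_value += 1;
    queue.Signal(fence, *fence_value).unwrap();
    if fence.GetCompletedValue() < *fence_value
    {
        fence.SetEventOnCompletion(*fence_value, fence_event).unwrap();
        WaitForSingleObject(fence_event, INFINITE);
    }
}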
Hello world triangle setup
This is done in hello_world_triangle.rs. Don't be puzzled that it doesn't seem to set up any vertex buffer at all, because I actually didn't set one up.
I'm TIRED of the traditional hello world stuff and decided to make it a little bit funnier: drawing a full-screen quad, then cutting off the pixels that are outside my defined triangle points.
(Oh, don't do this in a real-world application lol, drawing a full-screen quad just to cut it up isn't efficient at all.)
The first important thing to do is creating the ID3D12PipelineState interface. This decides the graphics state of a draw call operation.
pub fn create_pipeline()
{
unsafe
{
let device = graphic_device::get_device();
// create a root signature with a pixel-only 32-bit constant.
let root_parameter_constant = D3D12_ROOT_PARAMETER
{
ParameterType : D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS,
Anonymous : D3D12_ROOT_PARAMETER_0
{
// nested initializer for the union structure
Constants : D3D12_ROOT_CONSTANTS
{
ShaderRegister : 0,
RegisterSpace : 0,
Num32BitValues : 1,
}
},
ShaderVisibility : D3D12_SHADER_VISIBILITY_PIXEL,
..D3D12_ROOT_PARAMETER::default()
};
let root_signature_desc = D3D12_ROOT_SIGNATURE_DESC
{
NumParameters : 1,
pParameters : &root_parameter_constant,
..D3D12_ROOT_SIGNATURE_DESC::default()
};
let mut root_signature_blob : Option<ID3DBlob> = None;
let _ = D3D12SerializeRootSignature(&root_signature_desc, D3D_ROOT_SIGNATURE_VERSION_1, &mut root_signature_blob, None);
// convert the ID3DBlob::GetBufferPointer() to *const u8 with std::slice::from_raw_parts()
let root_blob_data = std::slice::from_raw_parts(root_signature_blob.as_ref().unwrap().GetBufferPointer() as *const u8, root_signature_blob.as_ref().unwrap().GetBufferSize());
if let Ok(x) = device.CreateRootSignature::<ID3D12RootSignature>(0, root_blob_data)
{
GHELLO_ROOT_SIGNATURE = Some(x);
}
if GHELLO_ROOT_SIGNATURE.is_none()
{
println!("Error during root signature creation!");
return;
}
// compile shaders with D3DCompileFromFile just for demo purpose, as it uses old FXC compiler
// in real world application, you might want to use DirectXShaderCompiler binary for 6.0 shader models and above
// fs::canonicalize() to get absolute path
// PathBuf::from() to establish a path structure
let shader_file_name = string_to_pcwstr(fs::canonicalize(PathBuf::from("./shaders/hello_world_triangle.hlsl")).unwrap().display().to_string());
let compile_flag = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
let mut vs_blob : Option<ID3DBlob> = None;
let mut ps_blob : Option<ID3DBlob> = None;
// if compile error message is needed, setup ID3DBlob for the last parameter as well, I skip it for now
let _ = D3DCompileFromFile(shader_file_name, None, None, s!("HelloWorldVS"), s!("vs_5_1"), compile_flag, 0, &mut vs_blob, None);
let _ = D3DCompileFromFile(shader_file_name, None, None, s!("HelloWorldPS"), s!("ps_5_1"), compile_flag, 0, &mut ps_blob, None);
if vs_blob.is_none() || ps_blob.is_none()
{
println!("Error during shader creation!");
return;
}
// setup byte code structure
let vs_bytecode = D3D12_SHADER_BYTECODE
{
pShaderBytecode : vs_blob.as_ref().unwrap().GetBufferPointer(),
BytecodeLength : vs_blob.as_ref().unwrap().GetBufferSize(),
};
let ps_bytecode = D3D12_SHADER_BYTECODE
{
pShaderBytecode : ps_blob.as_ref().unwrap().GetBufferPointer(),
BytecodeLength : ps_blob.as_ref().unwrap().GetBufferSize(),
};
// setup an overlay rasterizer; build the struct in one shot instead of default-then-overwrite
let overlay_rasterize_state = D3D12_RASTERIZER_DESC
{
FillMode : D3D12_FILL_MODE_SOLID,
CullMode : D3D12_CULL_MODE_NONE,
..D3D12_RASTERIZER_DESC::default()
};
// setup color write mask for render target
let render_target_blend_desc = D3D12_RENDER_TARGET_BLEND_DESC
{
RenderTargetWriteMask : D3D12_COLOR_WRITE_ENABLE_ALL.0 as u8,
..D3D12_RENDER_TARGET_BLEND_DESC::default()
};
// setup the RTV format array; unused slots must be DXGI_FORMAT_UNKNOWN
let mut rtv_format_list = [DXGI_FORMAT_UNKNOWN; 8];
rtv_format_list[0] = graphic_device::get_back_buffer_format();
// create pipeline state
let pso_desc = D3D12_GRAPHICS_PIPELINE_STATE_DESC
{
// pRootSignature is somehow implemented as ManuallyDrop, so just set one up for it
pRootSignature : ManuallyDrop::new(GHELLO_ROOT_SIGNATURE.clone()),
VS : vs_bytecode,
PS : ps_bytecode,
RasterizerState : overlay_rasterize_state,
BlendState : D3D12_BLEND_DESC
{
RenderTarget : [render_target_blend_desc; 8],
..D3D12_BLEND_DESC::default()
},
DepthStencilState : D3D12_DEPTH_STENCIL_DESC::default(),
SampleMask : u32::MAX,
PrimitiveTopologyType : D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE,
NumRenderTargets : 1,
RTVFormats : rtv_format_list,
SampleDesc : DXGI_SAMPLE_DESC
{
Count : 1,
Quality : 0,
},
..D3D12_GRAPHICS_PIPELINE_STATE_DESC::default()
};
if let Ok(x) = device.CreateGraphicsPipelineState::<ID3D12PipelineState>(&pso_desc)
{
GOVERLAY_STATE = Some(x);
}
if GOVERLAY_STATE.is_none()
{
println!("Error during pipeline state creation!");
}
// store the start time
GSTART_TIME = Some(std::time::SystemTime::now());
}
}
Ouch, the code is a bit long. I start by setting up the root signature for my hello world shader.
I pass a time parameter for animating the triangle, so a D3D12_ROOT_CONSTANTS is necessary.
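Setting that constant at draw time is then a one-liner; a sketch assuming the command list is recording (the bit-cast via to_bits() is how a float travels through the u32 root constant slot):
unsafe fn set_time_constant(command_list : &ID3D12GraphicsCommandList)
{
    // elapsed seconds, bit-cast into the single 32-bit root constant at root parameter 0
    let elapsed = GSTART_TIME.unwrap().elapsed().unwrap().as_secs_f32();
    command_list.SetGraphicsRoot32BitConstant(0, elapsed.to_bits(), 0);
}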
std::slice::from_raw_parts() is useful for converting an ID3DBlob data pointer to a *const u8 slice, so it can be passed to CreateRootSignature.
Next is the shader compilation. The key point is dealing with the file path, which is a tricky thing to do in Rust. After organizing my file path with PathBuf::from() and fs::canonicalize(), I convert it to a PCWSTR with a customized function, string_to_pcwstr. It simply mirrors the same logic as the w! macro, but I made it a function instead of a literal macro.
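A minimal sketch of such a conversion (the UTF-16 buffer must outlive the returned PCWSTR, so this sketch simply leaks it for the demo's lifetime):
fn string_to_pcwstr(s : String) -> PCWSTR
{
    // encode as UTF-16 and append the null terminator, like w! does at compile time
    let mut wide : Vec<u16> = s.encode_utf16().collect();
    wide.push(0);
    // leak the buffer so the pointer stays valid after this function returns
    PCWSTR::from_raw(Box::leak(wide.into_boxed_slice()).as_ptr())
}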
At last, fill in all the elements of the D3D12_GRAPHICS_PIPELINE_STATE_DESC structure and call CreateGraphicsPipelineState to finish the state creation.
Oh, if you're interested in the shader file, it's hello_world_triangle.hlsl. It checks whether a pixel is inside the triangle I defined via the barycentric coordinate formula.
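For illustration, here's the same inside-test mirrored in Rust (the shader does this per pixel with its own vertex positions):
// P is inside triangle ABC exactly when all three barycentric weights are non-negative
fn inside_triangle(p : [f32; 2], a : [f32; 2], b : [f32; 2], c : [f32; 2]) -> bool
{
    let det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1]);
    let u = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det;
    let v = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det;
    let w = 1.0 - u - v;
    u >= 0.0 && v >= 0.0 && w >= 0.0
}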
Render the triangle
Refer to the render() function in hello_world_triangle.rs.
After the long road through all those initializations and setups, this part is fairly simple.
Reset the command buffers, bind the graphics state and root signature, transition the render target and clear it, set up the viewport and scissor rect… blah blah blah.
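Condensed, that flow looks roughly like this. It's a sketch, not the exact code: all the handles are assumed to have been created during initialization, and the closing transition back to the present state plus the ExecuteCommandLists call are elided:
unsafe fn render_sketch(command_allocator : &ID3D12CommandAllocator, command_list : &ID3D12GraphicsCommandList,
    pso : &ID3D12PipelineState, root_signature : &ID3D12RootSignature, back_buffer : &ID3D12Resource,
    rtv_handle : D3D12_CPU_DESCRIPTOR_HANDLE, width : u32, height : u32, time_bits : u32)
{
    // reset the command buffers for this frame
    command_allocator.Reset().unwrap();
    command_list.Reset(command_allocator, pso).unwrap();
    // bind the root signature and the time constant
    command_list.SetGraphicsRootSignature(root_signature);
    command_list.SetGraphicsRoot32BitConstant(0, time_bits, 0);
    // transition the back buffer from present to render target
    let barrier = D3D12_RESOURCE_BARRIER
    {
        Type : D3D12_RESOURCE_BARRIER_TYPE_TRANSITION,
        Anonymous : D3D12_RESOURCE_BARRIER_0
        {
            Transition : ManuallyDrop::new(D3D12_RESOURCE_TRANSITION_BARRIER
            {
                pResource : ManuallyDrop::new(Some(back_buffer.clone())),
                StateBefore : D3D12_RESOURCE_STATE_PRESENT,
                StateAfter : D3D12_RESOURCE_STATE_RENDER_TARGET,
                Subresource : D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
            }),
        },
        ..D3D12_RESOURCE_BARRIER::default()
    };
    command_list.ResourceBarrier(&[barrier]);
    // bind and clear the render target, then set the viewport and scissor rect
    command_list.OMSetRenderTargets(1, Some(&rtv_handle), false, None);
    command_list.ClearRenderTargetView(rtv_handle, &[0.0, 0.0, 0.0, 1.0], None);
    command_list.RSSetViewports(&[D3D12_VIEWPORT { Width : width as f32, Height : height as f32, MaxDepth : 1.0, ..Default::default() }]);
    command_list.RSSetScissorRects(&[RECT { right : width as i32, bottom : height as i32, ..Default::default() }]);
    // draw the full-screen quad as a 4-vertex triangle strip, no vertex buffer needed
    command_list.IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    command_list.DrawInstanced(4, 1, 0, 0);
    // transition back to the present state and Close() happen here in the real code
}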
Summary
That's it! Hopefully this helps people who just want to start a D3D12 implementation with Rust.
If you're already an experienced C++ programmer, it's mostly about getting used to the Rust syntax/concepts and the windows crate.