Mobile Nerf sRGB and readme fix (#1020)
* Override swapchain settings to use UNORM, since the MLP data was trained with already gamma-corrected values; also remove wrongly added text from the README file

* Perform the sRGB-to-linear conversion in the shaders instead of using a UNORM swapchain

* Fix formatting and spacing in the shaders

* Update copyright year

* Fix README copyright year
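
The second bullet above is the key change: the MLP weights were trained against gamma-corrected (sRGB-encoded) colors, so writing the network output straight to an sRGB swapchain image would apply the sRGB encoding a second time on store. The shader diffs below therefore linearize the value before it is written. As a minimal, standalone illustration of the idea (a hypothetical fragment shader, not code from this commit):

#version 450

layout(location = 0) out vec4 o_color;

// Standard per-channel sRGB-to-linear conversion.
float srgb_to_linear(float v)
{
    return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

vec3 srgb_to_linear(vec3 v)
{
    return vec3(srgb_to_linear(v.r), srgb_to_linear(v.g), srgb_to_linear(v.b));
}

void main()
{
    // Placeholder for the gamma-encoded color a trained MLP would produce.
    vec3 gamma_encoded = vec3(0.5, 0.25, 0.75);

    // An *_SRGB render target re-applies the encoding on store, so write
    // linear values here to avoid encoding the color twice.
    o_color = vec4(srgb_to_linear(gamma_encoded), 1.0);
}
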
RodrigoHolztrattner-QuIC committed May 6, 2024
1 parent 234bc13 commit d99ede4
Showing 7 changed files with 132 additions and 23 deletions.
samples/general/mobile_nerf/README.adoc (10 changes: 1 addition & 9 deletions)
@@ -1,5 +1,5 @@
////
-- Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved
+- Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved
-
- SPDX-License-Identifier: Apache-2.0
-
@@ -28,14 +28,6 @@ It's based on its original https://github.com/google-research/jax3d/tree/main/ja
This is a different version from traditional NeRF rendering, which normally requires tracing rays (usually done via ray-marching) and querying a MLP multiple times for each ray. These many queries result in non-interactive frame rates on most of the GPUs.
The mobile version uses the rasterization pipeline to render the final image; this is done via a triangle mesh and a feature texture, where each of its visible pixels are run through a small MLP (executed in the fragment shader) that converts the feature data and view direction to the corresponding output pixel color. This technique enables interactive FPS even on mobile GPUs (thus the name).

-just rendering the standard NeRF in realtime is not feasible on commodity hardware. Rendering
-views from a trained NeRF requires querying a multi-layer
-perception (MLP) hundreds of times per ray. It requires
-about 100 teraflops to render a single 800∗800 frame, which
-results in a best-case rendering time of 10 seconds per frame
-on a NVIDIA RTX 2080 GPU with full GPU utilization.
-
-
== Description: [https://mobile-nerf.github.io/[Mobile Nerf]]
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views.
However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware.
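
The README excerpt above summarizes how the mobile variant renders: rasterize a triangle mesh, fetch per-pixel feature vectors from textures, and run a small MLP in the fragment shader that maps the features and the view direction to an output color. A minimal single-pass sketch of that structure, with hypothetical bindings and a trivial stand-in for the learned network (the sample's real shaders appear in the diffs below):

#version 450

layout(binding = 0) uniform sampler2D feature_texture_0;
layout(binding = 1) uniform sampler2D feature_texture_1;

layout(location = 0) in vec2 in_uv;
layout(location = 1) in vec3 in_view_dir;

layout(location = 0) out vec4 o_color;

// Trivial stand-in for the trained MLP; the real network is the
// evaluateNetwork() function shown in the shader diffs below.
vec3 tiny_mlp(vec4 f0, vec4 f1, vec3 view_dir)
{
    vec3 mixed = 0.5 * f0.rgb + 0.5 * f1.rgb + 0.1 * normalize(view_dir);
    return clamp(mixed, 0.0, 1.0);
}

void main()
{
    // Per-pixel features baked into the textures when the scene was trained.
    vec4 f0 = texture(feature_texture_0, in_uv);
    vec4 f1 = texture(feature_texture_1, in_uv);

    // The tiny MLP turns (features, view direction) into the final color.
    o_color = vec4(tiny_mlp(f0, f1, in_view_dir), 1.0);
}
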
shaders/mobile_nerf/merged.frag (24 changes: 22 additions & 2 deletions)
@@ -1,4 +1,4 @@
-/* Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+/* Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*
@@ -152,6 +152,26 @@ vec3 evaluateNetwork( vec4 f0, vec4 f1, vec4 viewdir)
return vec3(result * viewdir.a+(1.0-viewdir.a));
}

//////////////////////////////////////////////////////////////
// MLP was trained with gamma-corrected values //
// convert to linear so sRGB conversion isn't applied twice //
//////////////////////////////////////////////////////////////

float Convert_sRGB_ToLinear(float value)
{
return value <= 0.04045
? value / 12.92
: pow((value + 0.055) / 1.055, 2.4);
}

vec3 Convert_sRGB_ToLinear(vec3 value)
{
return vec3(Convert_sRGB_ToLinear(value.x), Convert_sRGB_ToLinear(value.y), Convert_sRGB_ToLinear(value.z));
}

//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////

void main(void)
{
@@ -174,7 +194,7 @@ void main(void)

// Original

-o_color.rgb = evaluateNetwork(feature_0,feature_1,rayDirection);
+o_color.rgb = Convert_sRGB_ToLinear(evaluateNetwork(feature_0,feature_1,rayDirection));
//o_color.rgb = feature_0.rgb;
o_color.a = 1.0;
}
shaders/mobile_nerf/merged_morpheus.frag (24 changes: 22 additions & 2 deletions)
@@ -1,4 +1,4 @@
-/* Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+/* Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*
@@ -152,6 +152,26 @@ vec3 evaluateNetwork( vec4 f0, vec4 f1, vec4 viewdir)
return vec3(result * viewdir.a+(1.0-viewdir.a));
}

//////////////////////////////////////////////////////////////
// MLP was trained with gamma-corrected values //
// convert to linear so sRGB conversion isn't applied twice //
//////////////////////////////////////////////////////////////

float Convert_sRGB_ToLinear(float value)
{
return value <= 0.04045
? value / 12.92
: pow((value + 0.055) / 1.055, 2.4);
}

vec3 Convert_sRGB_ToLinear(vec3 value)
{
return vec3(Convert_sRGB_ToLinear(value.x), Convert_sRGB_ToLinear(value.y), Convert_sRGB_ToLinear(value.z));
}

//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////

void main(void)
{
@@ -173,7 +193,7 @@ void main(void)
rayDirection.a = rayDirection.a*2.0-1.0;

// Original
-o_color.rgb = evaluateNetwork(feature_0,feature_1,rayDirection);
+o_color.rgb = Convert_sRGB_ToLinear(evaluateNetwork(feature_0,feature_1,rayDirection));
// o_color.rgb = feature_0.rgb;
o_color.a = 1.0;
}
shaders/mobile_nerf/mlp.frag (22 changes: 20 additions & 2 deletions)
@@ -1,4 +1,4 @@
-/* Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+/* Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*
@@ -196,8 +196,26 @@ vec3 evaluateNetwork( vec4 f0, vec4 f1, vec4 viewdir) {
}


//////////////////////////////////////////////////////////////
// MLP was trained with gamma-corrected values //
// convert to linear so sRGB conversion isn't applied twice //
//////////////////////////////////////////////////////////////

float Convert_sRGB_ToLinear(float value)
{
return value <= 0.04045
? value / 12.92
: pow((value + 0.055) / 1.055, 2.4);
}

vec3 Convert_sRGB_ToLinear(vec3 value)
{
return vec3(Convert_sRGB_ToLinear(value.x), Convert_sRGB_ToLinear(value.y), Convert_sRGB_ToLinear(value.z));
}

//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////

void main(void)
{
@@ -214,7 +232,7 @@ void main(void)

// Original

-o_color.rgb = evaluateNetwork(feature_0,feature_1,rayDirection);
+o_color.rgb = Convert_sRGB_ToLinear(evaluateNetwork(feature_0,feature_1,rayDirection));
//o_color.rgb = feature_0.rgb;
o_color.a = 1.0;

shaders/mobile_nerf/mlp_morpheus.frag (21 changes: 19 additions & 2 deletions)
@@ -1,4 +1,4 @@
-/* Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+/* Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*
@@ -195,9 +195,26 @@ vec3 evaluateNetwork( vec4 f0, vec4 f1, vec4 viewdir) {
return vec3(result * viewdir.a+(1.0-viewdir.a));
}

//////////////////////////////////////////////////////////////
// MLP was trained with gamma-corrected values //
// convert to linear so sRGB conversion isn't applied twice //
//////////////////////////////////////////////////////////////

float Convert_sRGB_ToLinear(float value)
{
return value <= 0.04045
? value / 12.92
: pow((value + 0.055) / 1.055, 2.4);
}

vec3 Convert_sRGB_ToLinear(vec3 value)
{
return vec3(Convert_sRGB_ToLinear(value.x), Convert_sRGB_ToLinear(value.y), Convert_sRGB_ToLinear(value.z));
}

//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////

void main(void)
{
@@ -214,7 +231,7 @@ void main(void)

// Original

-o_color.rgb = evaluateNetwork(feature_0,feature_1,rayDirection);
+o_color.rgb = Convert_sRGB_ToLinear(evaluateNetwork(feature_0,feature_1,rayDirection));
//o_color.rgb = feature_0.rgb;
o_color.a = 1.0;

shaders/mobile_nerf/raster.frag (27 changes: 24 additions & 3 deletions)
@@ -1,4 +1,4 @@
-/* Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+/* Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*
@@ -38,14 +38,35 @@ layout(location = 2) out vec4 rayDirectionOut;
layout(binding = 0) uniform sampler2D textureInput_0;
layout(binding = 1) uniform sampler2D textureInput_1;

//////////////////////////////////////////////////////////////
// MLP was trained with gamma-corrected values //
// convert to linear so sRGB conversion isn't applied twice //
//////////////////////////////////////////////////////////////

float Convert_sRGB_ToLinear(float value)
{
return value <= 0.04045
? value / 12.92
: pow((value + 0.055) / 1.055, 2.4);
}

vec3 Convert_sRGB_ToLinear(vec3 value)
{
return vec3(Convert_sRGB_ToLinear(value.x), Convert_sRGB_ToLinear(value.y), Convert_sRGB_ToLinear(value.z));
}

//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////

void main(void)
{
vec2 flipped = vec2( texCoord_frag.x, 1.0 - texCoord_frag.y );
vec4 pixel_0 = texture(textureInput_0, flipped);
if (pixel_0.r == 0.0) discard;
vec4 pixel_1 = texture(textureInput_1, flipped);
-o_color_0 = pixel_0;
-o_color_1 = pixel_1;
+o_color_0 = vec4(Convert_sRGB_ToLinear(pixel_0.xyz), pixel_0.w);
+o_color_1 = vec4(Convert_sRGB_ToLinear(pixel_1.xyz), pixel_1.w);

rayDirectionOut.rgb = normalize(rayDirectionIn);
rayDirectionOut.a = 1.0f;
shaders/mobile_nerf/raster_morpheus.frag (27 changes: 24 additions & 3 deletions)
@@ -1,4 +1,4 @@
-/* Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+/* Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*
@@ -38,14 +38,35 @@ layout(location = 2) out vec4 rayDirectionOut;
layout(binding = 0) uniform sampler2D textureInput_0;
layout(binding = 1) uniform sampler2D textureInput_1;

//////////////////////////////////////////////////////////////
// MLP was trained with gamma-corrected values //
// convert to linear so sRGB conversion isn't applied twice //
//////////////////////////////////////////////////////////////

float Convert_sRGB_ToLinear(float value)
{
return value <= 0.04045
? value / 12.92
: pow((value + 0.055) / 1.055, 2.4);
}

vec3 Convert_sRGB_ToLinear(vec3 value)
{
return vec3(Convert_sRGB_ToLinear(value.x), Convert_sRGB_ToLinear(value.y), Convert_sRGB_ToLinear(value.z));
}

//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////

void main(void)
{
vec2 flipped = vec2( texCoord_frag.x, 1.0 - texCoord_frag.y );
vec4 pixel_0 = texture(textureInput_0, flipped);
// if (pixel_0.r == 0.0) discard;
vec4 pixel_1 = texture(textureInput_1, flipped);
-o_color_0 = pixel_0;
-o_color_1 = pixel_1;
+o_color_0 = vec4(Convert_sRGB_ToLinear(pixel_0.xyz), pixel_0.w);
+o_color_1 = vec4(Convert_sRGB_ToLinear(pixel_1.xyz), pixel_1.w);

rayDirectionOut.rgb = normalize(rayDirectionIn);
rayDirectionOut.a = 1.0f;
