WebGPURenderer: Premultiply in sRGB color space#33329

Draft
WestLangley wants to merge 1 commit into mrdoob:dev from WestLangley:dev-premultiply_in_sRGB

Conversation

@WestLangley
Collaborator

Fixes #33104

The browser compositor performs a Porter-Duff source-over blend of the renderer output with the HTML canvas, blending in sRGB color space. In fact, the compositor expects color values to be premultiplied in sRGB space.

WebGPURenderer, on the other hand, generates premultiplied RGB values in linear-sRGB color space, and then converts the result to sRGB color space.

WebGPURenderer's workflow is proper, but it is not compatible with what the compositor expects, and the mismatch produces visible artifacts as described in #33104.
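For illustration (this is not code from the PR), the mismatch can be reproduced in a few lines of plain JavaScript; `linearToSRGB` below is the standard sRGB transfer function, and the color values are arbitrary:

```javascript
// Demonstrates the mismatch: premultiplying alpha in linear-sRGB and then
// encoding to sRGB does NOT give the same pixel value as encoding first and
// premultiplying in sRGB, which is what the browser compositor expects.

// standard sRGB OETF (linear -> sRGB)
function linearToSRGB( c ) {

	return ( c <= 0.0031308 ) ? c * 12.92 : 1.055 * Math.pow( c, 1 / 2.4 ) - 0.055;

}

const color = 0.5; // linear-sRGB channel value
const alpha = 0.5; // coverage

// what WebGPURenderer currently produces: premultiply in linear, then encode
const premultipliedInLinear = linearToSRGB( color * alpha );

// what the compositor expects: encode first, then premultiply in sRGB
const premultipliedInSRGB = linearToSRGB( color ) * alpha;

console.log( premultipliedInLinear.toFixed( 4 ) ); // ~0.5371
console.log( premultipliedInSRGB.toFixed( 4 ) );   // ~0.3677
```

The compositor un-premultiplies (or blends) assuming the second convention, so feeding it the first produces the overly bright fringes reported in #33104.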

This is a proposal for discussion only.

/ping @gkjohnson
/ping @donmccurdy

@github-actions

github-actions bot commented Apr 4, 2026

📦 Bundle size

Full ESM build, minified and gzipped.

| Build | Before (min / gz) | After (min / gz) | Diff (min / gz) |
|---|---|---|---|
| WebGL | 360.57 kB / 85.58 kB | 360.57 kB / 85.58 kB | +0 B / +0 B |
| WebGPU | 635.53 kB / 176.41 kB | 635.64 kB / 176.45 kB | +107 B / +43 B |
| WebGPU Nodes | 633.65 kB / 176.12 kB | 633.76 kB / 176.16 kB | +107 B / +43 B |

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies.

| Build | Before (min / gz) | After (min / gz) | Diff (min / gz) |
|---|---|---|---|
| WebGL | 492.95 kB / 120.2 kB | 492.95 kB / 120.2 kB | +0 B / +0 B |
| WebGPU | 707.71 kB / 191.31 kB | 707.82 kB / 191.36 kB | +107 B / +48 B |
| WebGPU Nodes | 656.93 kB / 178.57 kB | 657.04 kB / 178.65 kB | +107 B / +82 B |

@gkjohnson
Collaborator

Thanks @WestLangley - this looks correct to me. The only question may be whether the unpremultiply should happen before or after tone mapping but I'm not sure if there's strong rationale for either in this case.

I'm thinking the WebGLRenderer OutputPass should be modified, as well.

@Mugen87
Collaborator

Mugen87 commented Apr 4, 2026

Can we execute this new code only when a transparent clear color is used? In this way, we won't break that many examples.

Alternatively, how about making this code path configurable with a renderer flag (disabled by default)? I am not comfortable applying such a breaking change without an opt-in/opt-out solution. The property would control in which color space the premultiplication happens (linear-sRGB vs sRGB).

@gkjohnson
Collaborator

I wouldn't have expected this change to break many tests, but perhaps my mental model of the WebGPURenderer pipeline is incorrect. WebGPURenderer renders to an internal render target in linear-sRGB color space, correct? And then that buffer is rendered directly to the canvas, applying tone mapping and color space conversion, is that right? If so, only that final full-screen render would be affected by this change. Unless sRGB render targets are being used elsewhere, or in the demos? Is there something else I'm missing?

@Mugen87
Collaborator

Mugen87 commented Apr 4, 2026

Unless sRGB render targets are being used elsewhere or in the demos?

Not that I am aware of.

The default render output pass happens here and it's indeed a full screen pass that renders into the default framebuffer/canvas:

```js
/**
 * The output pass performs tone mapping and color space conversion.
 *
 * @private
 * @param {RenderTarget} renderTarget - The current render target.
 */
_renderOutput( renderTarget ) {

	const cacheKey = this._nodes.getOutputCacheKey();

	let quadData = this._quadCache.get( renderTarget.texture );
	let quad;

	if ( quadData === undefined ) {

		quad = new QuadMesh( new NodeMaterial() );
		quad.name = 'Output Color Transform';
		quad.material.name = 'outputColorTransform';
		quad.material.fragmentNode = this._nodes.getOutputNode( renderTarget.texture );

		quadData = {
			quad,
			cacheKey
		};

		this._quadCache.set( renderTarget.texture, quadData );

		// dispose logic
		const dispose = () => {

			quad.material.dispose();
			this._quadCache.delete( renderTarget.texture );
			renderTarget.texture.removeEventListener( 'dispose', dispose );

		};

		renderTarget.texture.addEventListener( 'dispose', dispose );

	} else {

		quad = quadData.quad;

		if ( quadData.cacheKey !== cacheKey ) {

			quad.material.fragmentNode = this._nodes.getOutputNode( renderTarget.texture );
			quad.material.needsUpdate = true;
			quadData.cacheKey = cacheKey;

		}

	}

	// a clear operation clears the intermediate renderTarget texture, but should not update the screen canvas.
	const currentAutoClear = this.autoClear;
	const currentXR = this.xr.enabled;

	this.autoClear = false;
	this.xr.enabled = false;

	this._renderScene( quad, quad.camera, false );

	this.autoClear = currentAutoClear;
	this.xr.enabled = currentXR;

}
```

However, when you configure an instance of RenderPipeline for a more complex render pipeline with post processing, you might use the renderOutput() TSL function to apply tone mapping or color space conversion at a specific point in your pipeline. That is for example required if you want to apply a LUT or when using FXAA. The implementation of renderOutput() is here.

```js
setup( { context } ) {

	let outputNode = this.colorNode || context.color;

	// tone mapping
	const toneMapping = ( this._toneMapping !== null ? this._toneMapping : context.toneMapping ) || NoToneMapping;
	const outputColorSpace = ( this.outputColorSpace !== null ? this.outputColorSpace : context.outputColorSpace ) || NoColorSpace;

	if ( toneMapping !== NoToneMapping ) {

		outputNode = outputNode.toneMapping( toneMapping );

	}

	// working to output color space
	if ( outputColorSpace !== NoColorSpace && outputColorSpace !== ColorManagement.workingColorSpace ) {

		outputNode = outputNode.workingToColorSpace( outputColorSpace );

	}

	return outputNode;

}
```

When using renderOutput(), the default code path from the first snippet is not taken (since pipeline.outputColorTransform is set to false).

```js
// ignore default output color transform ( toneMapping and outputColorSpace )
// use renderOutput() for control the sequence
renderPipeline.outputColorTransform = false;
```

@WestLangley
Collaborator Author

Can we execute this new code only when a transparent clear color is used? In this way, we won't break that many examples.

WebGPURenderer is violating the browser contract, so this change should always be implemented, if we choose to do this at all.

It is unfortunate that this leads to so much breakage. It would be a good idea to understand why.

Alternatively, how about making this code path configurable with a renderer flag?

IMO, that is not something the user should have to set. It should just work.

@WestLangley
Collaborator Author

@gkjohnson I think we are going to need a compelling reason for this change to be made. Can you provide before and after screenshots of some real use cases as justification -- other than the simple tests we have already investigated?

@sunag Every WebGPU example should have tone mapping enabled and tone mapping exposure explicitly set. Adding exposure to the GUI can be helpful, if you wish.

There is a lot of breakage, even though, in some sense, the changes in this PR are more correct. For the purpose of studying this PR, do not compare with an HDR example lacking tone mapping.

@mrdoob
Owner

mrdoob commented Apr 6, 2026

@WestLangley

Can we execute this new code only when a transparent clear color is used? In this way, we won't break that many examples.

WebGPURenderer is violating the browser contract, so this change should always be implemented, if we choose to do this at all.

Agreed. The sooner we fix this, the better.

@gkjohnson
Collaborator

gkjohnson commented Apr 6, 2026

Can you provide before and after screenshots of some real use cases

Here is a simple example with spheres and a standard material, along with screenshots showing the before and after appearance:

[Screenshots: w/ Background · w/ Transparent BG · w/ Fix]

And from the path tracer with a ray jitter to emulate depth of field blurring (not fully implemented yet) - notice the "glowing" fringes at the edge of the blur:

[Screenshots: w/ Background · w/ Transparent BG · w/ Fix]

Basically anything with partial transparency rendering to a transparent background will be "wrong", though it may be more or less noticeable depending on the setup. If it's not going to be fixed, then alpha support might as well be removed from WebGPURenderer until browsers provide a way to pass linear colors to the final buffer.

I understand this isn't necessarily an easy fix to stomach, though.

@Mugen87

The default render output pass happens here and it's indeed a full screen pass that renders into the default framebuffer/canvas:

Do you mind explaining how, or pointing me to where, postprocessing effects are handled? 🙏 In theory this change shouldn't be noticeable if every pixel in the final buffer has 1.0 alpha, since the premultiply would then be a no-op for every pixel. But if post processing effects are being "layered" INTO the final sRGB buffer as separate passes, then we would see a difference. E.g. if the final scene color is first rendered into the final sRGB buffer, and then a partially transparent bloom pass is rendered on top of it, we would be seeing sRGB-space blending in the bloom composite even though we're not ultimately passing partially transparent pixels to the browser via the canvas.

If this is the case a few options come to mind (though none feel like easy or obvious solutions to take):

  1. Render all post-processing passes and compositing into a linear-sRGB render target first. Then render that final composited buffer into the canvas, applying the sRGB conversion and the premultiplication fix in a single pass. This is a bit heavy-handed, though, since it requires an extra full-screen pass and an additional screen-sized render target.

  2. Merge all post processing passes (or as many as possible) into a single shader (similar to what the postprocessing package does), in which case all blending is explicitly handled by the shader code and can be done linearly even if the target is sRGB. With nodes, perhaps this becomes simpler to do.

  3. If browsers universally supported a "linear-srgb" output color space then we could just pass premultiplied linear-srgb colors to the canvas and let it handle the compositing from there. But I'm not sure how realistic this is.
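For reference, the difference between compositing in linear space and in sRGB space, which underlies the artifacts discussed above, can be sketched in plain JavaScript (illustrative values, not three.js code):

```javascript
// Porter-Duff source-over of the same fragment over the same backdrop,
// once blended in linear space and once in sRGB-encoded space.

// standard sRGB OETF (linear -> sRGB)
function linearToSRGB( c ) {

	return ( c <= 0.0031308 ) ? c * 12.92 : 1.055 * Math.pow( c, 1 / 2.4 ) - 0.055;

}

// source-over with a premultiplied source: out = src + dst * ( 1 - srcAlpha )
function over( srcPremultiplied, srcAlpha, dst ) {

	return srcPremultiplied + dst * ( 1 - srcAlpha );

}

const srcLinear = 1.0, srcAlpha = 0.5; // bright, half-transparent fragment
const dstLinear = 0.2;                 // dark gray backdrop

// blend in linear space, then encode the composite (physically based)
const blendedLinear = linearToSRGB( over( srcLinear * srcAlpha, srcAlpha, dstLinear ) );

// encode both first, then blend in sRGB space (what the compositor does)
const blendedSRGB = over( linearToSRGB( srcLinear ) * srcAlpha, srcAlpha, linearToSRGB( dstLinear ) );

console.log( blendedLinear.toFixed( 4 ) ); // ~0.7977
console.log( blendedSRGB.toFixed( 4 ) );   // ~0.7423
```

Neither convention is "more correct" at the canvas boundary; the point of this PR is that the compositor blends the second way, so the renderer's output must match it.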

@Mugen87
Collaborator

Mugen87 commented Apr 6, 2026

Merge all post processing passes (or as many as possible) into a single shader (similar to what the postprocessing package does) in which case all blending is explicitly handled by the shader code

We are already doing this, thanks to TSL and the node material. A single UBER shader won't work, by the way; some effects require separate passes.

Do you mind explaining how or pointing me to where postprocessing effects are handled?

A good starting point is the RenderPipeline module. This module has an outputNode property that is always set in our post processing demos. The node that is assigned to this property represents the render pipeline/post processing stack as a node composition. Think of it like composing a node material with nodes. The nodes that are part of this composition render/update their effects in their own modules, usually in their updateBefore() lifecycle method.

RenderPipeline also uses renderOutput(), the TSL function that is responsible for the tone mapping and color space conversion.

```js
outputNode = renderOutput( outputNode, toneMapping, outputColorSpace );
```

However, depending on your render pipeline, color space conversion does not necessarily come last in the effect chain, see below:

Render all post-processing passes and compositing into a target linear sRGB target first. Then render that final composited buffer into the canvas with the sRGB-conversion and premultiplication fix step in a single pass.

How would this work for effects that require sRGB input like FXAA or LUT?

@Mugen87
Collaborator

Mugen87 commented Apr 6, 2026

But if post processing effects are being "layered" INTO the final sRGB buffer each as separate passes then we would see a difference.

There should only be a single render into the default framebuffer. You can see this at the RenderPipeline.render() method which only does a single render() call.

```js
this._quadMesh.render( renderer );
```

Effects like Bloom which have internal renders also render in internal render targets (FP16 linear-sRGB in this case).

@Mugen87
Collaborator

Mugen87 commented Apr 6, 2026

@sunag I'm not sure, but I could imagine we see so much breakage because we currently merge renderOutput() into whatever render pass comes last in the effect chain (the result of merging effects to reduce the pass count). At least for debugging, could we modify renderOutput() to force an isolated render pass?

@gkjohnson
Collaborator

@Mugen87 I'm seeing that clamping the "outputNode.a" value to [0, 1] at the beginning of the ColorSpaceNode operations is fixing the problem for at least this ocean example:

```js
if ( ColorManagement.getTransfer( target ) === SRGBTransfer ) {

	outputNode = vec4( outputNode.rgb, clamp( outputNode.a, 0.0, 1.0 ) );

	// ...
```

[Screenshots: dev branch · this branch · this PR w/ clamping]

I suspect that at least part of the issue is that some of these effects (like bloom pass used in webgpu_ocean) are not outputting an alpha value in a valid [0, 1] range.

@Mugen87
Collaborator

Mugen87 commented Apr 6, 2026

The ocean demo is quite complex since it has the water and sky components. How does webgpu_postprocessing_bloom look with your patch?

@gkjohnson
Collaborator

The ocean demo is quite complex since it has the water and sky components. How does webgpu_postprocessing_bloom look with your patch?

I haven't tested it, but I did note that removing "bloomPass" from the ocean demo (before the clamping fix) also addressed the issue, and there was no difference between this branch and dev. So my assumption is that the bloom pass is the issue here.

@sunag
Collaborator

sunag commented Apr 6, 2026

... At least for debugging, could we modify renderOutput() to force an isolated render pass?

@Mugen87 Are you thinking of something like rtt( node ).renderOutput()?

```js
renderPipeline = new THREE.RenderPipeline( renderer );
renderPipeline.outputColorTransform = false;

// ...

renderPipeline.outputNode = rtt( scenePassColor.add( bloomPass ) ).renderOutput();
```

This should isolate the rtt( node ) node into a new texture.

@Mugen87
Collaborator

Mugen87 commented Apr 6, 2026

Great! This is indeed ideal for testing!



Development

Successfully merging this pull request may close these issues.

Renderer: Blending with transparent background is incorrect.

5 participants