
linux-pipewire: Fix render technique in captures again#12424

Open
PancakeTAS wants to merge 1 commit into obsproject:master from PancakeTAS:fix/pipewirev3

Conversation

@PancakeTAS
Contributor

Description

Fixes #12412
Supersedes #12422

Changes the linux-pipewire render code to use the correct rendering techniques

Motivation and Context

Adding an effect to the pipewire capture source leads to a decrease in gamma.
While unsure about why exactly this happens, this PR copies the function for finding the right render technique from the Windows dc-capture and correctly handles the only non-linear format supported so far.

How Has This Been Tested?

Tested each combination of:

  • 10-bit vs 8-bit in my compositor settings
  • 16-bit P416 vs 8-bit NV12 in obs settings
  • Rec. 709 vs Rec. 2100 (PQ)
  • With "Render Delay" effect vs without. (As from the original issue)

Additional testing on a machine with hardware cursor support is required, as well as on traditional 8-bit displays (I do not trust Hyprland, better safe than sorry!)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code has been run through clang-format.
  • I have read the contributing document.
  • My code is not on the master branch.
  • The code has been tested.
  • All commit messages are properly formatted and commits squashed where appropriate.
  • I have included updates to all appropriate documentation.

@PatTheMav
Member

While unsure about why exactly this happens

It probably happens because the current code will potentially de-gamma twice:

  • Once because the DrawSrgbDecompress shader is used, which will convert from sRGB gamma into linear gamma
  • A second time because gs_effect_set_texture_srgb enables OpenGL to automatically convert every RGB value sampled from a texture into linear gamma

The functions gs_framebuffer_srgb_enabled, gs_effect_set_texture_srgb, and gs_effect_set_texture need to be used correctly in relation to the fragment shader selected, as they have explicit effects on the colours:

  • gs_framebuffer_srgb_enabled - will enable/disable automatic conversion of linear RGB values returned by the fragment shader into sRGB gamma encoded values
  • gs_effect_set_texture_srgb - will enable automatic conversion of sRGB gamma encoded values returned by texture samplers into linear RGB values
  • gs_effect_set_texture - will not do any conversions, RGB values returned from the sampler are interpreted as "already using linear gamma"

By default OBS Studio assumes that the video output texture uses either sRGB (or sRGB-alike) gamma-encoded values or PQ/HLG-encoded values for HDR video. So the only scenario in which the frame buffer should not be "sRGB-enabled" is when the fragment shader writes RGB values that are already encoded with sRGB gamma (so the values are written "straight" into the render target).

Likewise, if a sampled texture is known to have RGB values with linear gamma, you'd use the plain method to set the texture, as the fragment shader can work with the RGB values as-is.

@tytan652
Collaborator

tytan652 commented Jul 22, 2025

Hyprland is not an implementation I would use as a primary testing ground (since they stopped using wlroots).

It's Mutter (GNOME) or KWin (Plasma) that should serve as primary testing ground.

@tytan652 tytan652 added the kind/bug, Seeking Testers, and platform/linux labels Jul 22, 2025
@PancakeTAS
Contributor Author

@PatTheMav Thanks for your explanation! So essentially what I'm doing is converting to and from twice, where I could be skipping any conversion entirely. I'll dig through my code again and try to implement what you said.

@PancakeTAS
Contributor Author

Let me try to document this here. The assumption I was making is that when the compositor provides a 10-bit texture, it would not be gamma-encoded sRGB. That was incorrect. Instead, I believe I had flipped my understanding of which method converts gamma to linear and which does the reverse.

As far as I understand it now, and it seems to actually work on my system, the source image is always in gamma-encoded sRGB. When OBS is set to an 8-bit based format, it too is in a gamma-encoded colorspace. All one has to do is use "Draw", with framebuffer srgb disabled and the normal image set method.
When OBS is switched into a 10-bit or 16-bit format, it uses a linear color space for the framebuffer and so "DrawSrgbDecompress" must be used instead.
As for changing the colorspace to HDR PQ or HLG, no tonemapping is implemented anyways, so that doesn't even need to be handled differently at all (which was another wrong assumption I had).

I would like to understand when the OBS colorspace is changed to scRGB, because changing to any of the HDR colorspaces does not actually do that. I understand from the comment that "GS_CS_709_EXTENDED" is something used only on macOS, where this pipewire capture doesn't run anyway, but "GS_CS_709_SCRGB" says it is Linux HDR, which would imply it is enabled when switching to an HDR color space. Am I wrong?

Also, I would like to ask about this code sample. Is there any reason to store and revert the framebuffer srgb state? I first saw this in the Windows code and it already confused me.

const bool previous = gs_framebuffer_srgb_enabled();
gs_enable_framebuffer_srgb(false);

@tytan652
Collaborator

I'll let Pat answer your questions, but regarding this part:

but "GS_CS_709_SCRGB" says it is Linux HDR

When this line of code was added, Linux (as in Wayland) did not have much HDR support.
The Wayland color management protocol was still in development; I think some assumptions were made that may not match today's situation.

Collaborator

@tytan652 tytan652 left a comment


It is really preferred that variables and pointers are declared at the start of the function and, if wanted/needed, initialized later.

Edit: I know you were inspired by OBS's own code, but it's from a time when this kind of preference was not enforced much.

Comment thread plugins/linux-pipewire/pipewire.c Outdated
Comment on lines +1314 to +1315
enum gs_color_space color_space = gs_get_color_space();
const bool gamma_space = color_space == GS_CS_SRGB;
Collaborator


Move declarations to the start of the function.

Comment thread plugins/linux-pipewire/pipewire.c Outdated
enum gs_color_space color_space = gs_get_color_space();
const bool gamma_space = color_space == GS_CS_SRGB;

const char *tech_name = "Draw";
Collaborator


Move declaration to the start of the function.

Contributor Author


Move declaration to the start of the function.

Removed gamma_space entirely actually and embedded it into the if.
Is it okay to use the else for initializing tech_name, or should I initialize it at declaration?

Collaborator


Keep the initialization to "Draw" at the start.

Comment thread plugins/linux-pipewire/pipewire.c Outdated
Comment on lines 1324 to 1325

const bool linear_srgb = gs_get_linear_srgb();
const bool previous = gs_framebuffer_srgb_enabled();
Collaborator


Move declaration of previous to the start of the function (and if you want, initialize it later).

Comment thread plugins/linux-pipewire/pipewire.c Outdated
if (!gamma_space)
tech_name = "DrawSrgbDecompress";

gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);
Collaborator


Move declaration to the start of the function and keep the initialization here.

Suggested change
gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);
tech = gs_effect_get_technique(effect, tech_name);

@tytan652
Collaborator

tytan652 commented Jul 22, 2025

With this third commit (you'll have to squash into one commit and force-push later), it seems that the render delay issue is no longer there.

Edit: The second commit potentially combined DrawSrgbDecompress with gs_effect_set_texture_srgb, which was not good even if it "worked".

@tytan652 tytan652 added this to the OBS Studio 31.1 milestone Jul 22, 2025
@tytan652 tytan652 requested a review from kkartaltepe July 22, 2025 17:13
@PancakeTAS PancakeTAS requested a review from tytan652 July 22, 2025 17:43
Collaborator

@tytan652 tytan652 left a comment


Declarations have been put at the function start, and it does fix the Render Delay issue encountered on my machine.

But I can't review whether the texture manipulations are correct.

@RytoEX RytoEX requested a review from GeorgesStavracas July 22, 2025 19:12
@RytoEX RytoEX self-assigned this Jul 22, 2025
@PatTheMav
Member

PatTheMav commented Jul 23, 2025

Let me try to document this here. The assumption I was making is that when the compositor provides a 10-bit texture, it would not be gamma-encoded sRGB. That was incorrect. Instead, I believe I had flipped my understanding of what method does gamma-to-linear and reverse.

You can't make that assumption based on the texture format alone. A2RGB10 (or BGR10A2) would be suited for colour spaces like D65-P3 or any other similar space that has a wider gamut than sRGB. Usually those colour spaces still use the sRGB gamma curve because they are effectively SDR formats.

However you might also get HDR10 data with either PQ or HLG transfer functions, as both require only 10-bit colour and using a 32-bit format might be more efficient than a 64-bit format.

So you can't assume that "10-bit" means the colours are encoded using sRGB gamma, HLG, or PQ. Ideally the compositor metadata would tell you explicitly.

As far as I understand it now, and it seems to actually work on my system, the source image is always in gamma-encoded sRGB. When OBS is set to an 8-bit based format, it too is in a gamma-encoded colorspace.

Kinda correct - OBS Studio expects the colours written to the render target (the main output texture) to always use sRGB gamma encoding for sRGB, Rec.709, and Rec.601 output modes.

All one has to do is use "Draw", with framebuffer srgb disabled and the normal image set method.

That's not always correct, and admittedly the way this is supposed to work is not very intuitive but was designed this way to avoid breaking functionality of existing plugins and sources:

  • The basic idea is that OBS can "tell" a source to behave in a sRGB-aware way via the state returned by gs_get_linear_srgb.
    • If that is the case, it usually means that the source should enable automatic sRGB gamma encoding on the framebuffer
    • This leads to the requirement that the fragment shader has to return RGB colours with linear gamma (it should not encode sRGB gamma itself)
    • In turn this means that the data provided to the fragment shader needs to use linear gamma
    • If the source of colour data is a texture, the correct command needs to be used:
      • gs_effect_set_texture_srgb if the texture's colour data should potentially be automatically decoded from sRGB gamma (only valid for textures using GL_SRGB8 or GL_SRGB8_ALPHA8).
      • gs_effect_set_texture if no automatic decoding from sRGB gamma should take place for textures using GL_SRGB8 or GL_SRGB8_ALPHA8
    • So a source needs to ensure that the current flow is upheld:
      source texture -> linear RGB colour data -> fragment shader -> linear RGB colour data -> render target

Important

If a texture uses a colour format that does not explicitly use sRGB gamma, but it is known (by convention) that the colour data in the texture uses it, then an appropriate fragment shader has to be used that decodes the sampled colour values "manually"

  • Any source that is not sRGB-aware would effectively ignore this new state variable:
    • The source would set gs_effect_set_texture for any texture
    • The source would instead maybe choose a different fragment shader like DrawSrgbDecompress to explicitly decode from sRGB gamma in the shader itself, but has to be sure about the gamma curve used to generate the colour values in the texture

So there can be scenarios where gs_get_linear_srgb will return false and thus even an sRGB-aware source will mostly behave like a "legacy" source: The fragment shader will usually read and write all colour values without automatic gamma encoding or decoding, which can make sense for cases like a source actually rendering into an intermediary render target when OBS does not expect any blending to happen.

When OBS is switched into a 10-bit or 16-bit format, it uses a linear color space for the framebuffer and so "DrawSrgbDecompress" must be used instead. As for changing the colorspace to HDR PQ or HLG, no tonemapping is implemented anyways, so that doesn't even need to be handled differently at all (which was another wrong assumption I had).

As I mentioned above, DrawSrgbDecompress should probably not be used at all, instead the correct texture call needs to be used to allow the graphics API to automatically decode sRGB gamma from a texture if necessary. DrawNonlinearAlpha is a different story, because it handles cases of premultiplied alpha IIRC.

I was wrong about this - texture formats like GL_RGB10_A2 will not trigger OpenGL's automatic gamma decoding even though the 10-bit colour values might actually have sRGB gamma applied to them and thus the fragment shader has to decode them explicitly.

I would like to understand when the obs colorspace is changed to scRGB because changing to any of the HDR colorspaces does not actually do that. I understand from the comment that "GS_CS_709_EXTENDED" is something used only on MacOS where this pipewire capture doesn't run anyways, but "GS_CS_709_SCRGB" says it is Linux HDR, which would imply it is enabled when switching to an HDR color space, am I wrong?

GS_CS_709_SCRGB is meant for use with DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709 on Windows, as the 1.0 colour value is supposed to represent "80 nits" brightness. The idea is to use Rec 709 colour values to be fully compatible with SDR colour, but any HDR colour goes "beyond" that value (scRGB uses negative values for that). OBS uses 300 nits as "paper white" by default, so 1.0 at 300 nits has to be converted into the appropriate colour value at 80 nits for this format.

GS_CS_709_EXTENDED is similar, in that it also uses Rec 709 colour values, but HDR colours just go beyond 1.0 - the colour values (and gamma curve) are simply extended beyond the SDR endpoint. macOS uses this for EDR, where 1.0 is mapped to the current brightness setting of the display and brighter colours are dynamically mapped to use the "brightness headroom" of the display.

Those values are typically used by the main preview and other projectors and updated when Qt calls the "move" or "display change" callbacks.

HDR content in OBS uses GL_RGBA16F which also allows colour values to just go "beyond 1.0", which are then converted back into colours ranging from 0.0 to 1.0 using the Rec 2100 colour space with either HLG or PQ transfer functions.

Member

@PatTheMav PatTheMav left a comment


One important addendum to what I commented already:

  • OpenGL will only use the automatic sRGB gamma encoding if the render target's GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING attachment reports GL_SRGB, otherwise it will essentially "ignore" the setting
  • Direct3D uses resource views and the implementation manages a pair of "linear" and "non-linear" views, which can point to the same view if the associated colour format does not have an sRGB companion

The same is true for shader resource views (D3D) and the use of GL_DECODE_EXT (OpenGL) for textures used in shaders.

This means that if a render target and/or shader resource uses a floating-point format, OBS will still report that sRGB to linear encoding/decoding should take place, and most code will naively use gs_enable_framebuffer_srgb and gs_effect_set_texture_srgb accordingly, but will "luck out" in this case because OpenGL and D3D11 will magically ignore this and use linear gamma throughout.

Comment thread plugins/linux-pipewire/pipewire.c Outdated
Comment on lines +1320 to +1321
if (color_space != GS_CS_SRGB)
tech_name = "DrawSrgbDecompress";
Member

@PatTheMav PatTheMav Jul 23, 2025


As I mentioned in my comment, don't use DrawSrgbDecompress. Either set a texture via gs_effect_set_texture_srgb if its colour values use sRGB gamma or use gs_effect_set_texture if the colour values use linear gamma or should not be decoded at all.

This is not entirely correct, see below.

Contributor Author


I'm genuinely so confused. I've reread your explanation probably five times by now.

  • I changed the render technique back to "Draw".
  • Then, I get gs_get_linear_srgb(), which as far as I understand means the source is sRGB-aware and should provide linear image data. It was true in my scenario.
  • Since the incoming texture is gamma-encoded, I use gs_effect_set_texture_srgb in combination with gs_enable_framebuffer_srgb(1) which I understand should degamma the texture, making it linear.

Now the texture is white-ish, which means it is... gamma-encoded and displayed on the obs surface.. which is also gamma-encoded? That sounds like I did a pro-gamma (or whatever the opposite of degamma is). No matter what I change the obs settings to, it stays white. I'm really confused.

Instead when I set gs_enable_framebuffer_srgb(0), which again, as far as I understand effectively nullifies gs_effect_set_texture_srgb and just sets it normally. When I do this, I get the correct texture in 8-bit OBS formats but incorrect texture in 10-bit and 16-bit formats. This one makes sense. I don't understand the previous behavior though.

Member


Instead when I set gs_enable_framebuffer_srgb(0), which again, as far as I understand effectively nullifies gs_effect_set_texture_srgb and just sets it normally. When I do this, I get the correct texture in 8-bit OBS formats but incorrect texture in 10-bit and 16-bit formats. This one makes sense. I don't understand the previous behavior though.

Maybe I misunderstand what you meant, but those two function calls change the input and output behaviour independently.


As OBS "wants" colours to use sRGB gamma in the video output texture, let's take a single pixel with the values (.735, .735, .735).

When a fragment shader outputs the linear colour (.2, .2, .2) and framebuffer_srgb is set to false, then that colour is written straight into the render target, which might look "too dark".

If it's set to true however, OpenGL will first decode the render target's colour into (.5, .5, .5), before blending it with (.2, .2, .2) (which by default will just result in (.2, .2, .2)), and OpenGL will then automatically convert this into (.485, .485, .485), which is the value written into the render target.


If a common 8-bit texture has a pixel with the colour (.906, .906, .906) and uses a colour format with sRGB gamma, then using gs_effect_set_texture_srgb will ensure that the colour value used inside the shader code is decoded to (.8, .8, .8). The Draw shader will typically return that value, which is then used for blending in the render target, the result of which then follows the steps above.

If you use gs_effect_set_texture however, (.906, .906, .906) will be used and if framebuffer_srgb is set to true, that pixel will then be encoded into (.957, .957, .957) after linear blending, and thus be "too bright".

OBS' default 8-bit colour formats GS_BGRX, GS_RGBA, and GS_BGRA are all converted into such sRGB-tagged OpenGL formats (GL_SRGB8_ALPHA8 and GL_SRGB8 respectively), so OpenGL will automatically "decode" them but only if you also use gs_effect_set_texture_srgb.

Important

It took me a bit to wrap my head around this, so I was wrong before:

A format like GS_R10G10B10A2 is converted into GL_RGB10_A2 which is not automatically decoded. So if you know that you actually get wide-gamut sRGB-encoded SDR colour (e.g. when using the D65-P3 colour space), you have to "manually" decode the sampled colour values in the shader (either via DrawSrgbDecompress, or DrawD65P3 for this specific case).


So in theory you could use framebuffer_srgb set to false and gs_effect_set_texture, which would read (.906, .906, .906) from the texture, return it from the fragment shader, and OpenGL will blend it with (.735, .735, .735) and as long as the user has set up no actual blending, (.906, .906, .906) will be written into the render target, which is a happy accident and will be correct.

But as soon as the user sets up any other blending mode, OpenGL will blend (.735, .735, .735) and (.906, .906, .906), which will result in a far too bright result, because the correct values for blending should be (.5, .5, .5) and (.8, .8, .8), with the final colour being encoded into sRGB gamma again (which will also not happen).


The important wrinkle here (specifically for OpenGL, as that's important for this plugin) is that:

  • OpenGL will automatically decode gamma for textures using GL_SRGB8_ALPHA8 and GL_SRGB8 only, if a texture has any other format, no decoding takes place.
    • All gs_effect_set_texture does is tell OpenGL to "skip" decoding of textures, even if they use GL_SRGB8_ALPHA8 or GL_SRGB8
  • OpenGL will not automatically encode gamma for render targets even if they use GL_SRGB8_ALPHA8 and GL_SRGB8
    • This behaviour needs to be enabled explicitly via glEnable(GL_FRAMEBUFFER_SRGB) (that's what device_enable_framebuffer_srgb(true) does)

I'm also learning this while investigating these calls and the specific behaviour of OpenGL, so no worries - we'll get this across the finish line.

Contributor Author


@PatTheMav
Sorry for the long wait, I was writing some exams for university.
Let's try to get this to work the right way then.

Here's my current understanding:
We essentially cannot use gs_effect_set_texture_srgb without hacky workarounds, because not every possible format is sRGB tagged, meaning in our case 10-bit formats will not be decoded from their gamma format.
As the source is tagged as sRGB-aware, when the state variable gs_get_linear_srgb is set, the source needs to output linear color values and so DrawSrgbDecompress must be used.
If gs_get_linear_srgb is set to false, then we have to fall back to Draw because gamma-encoded color values are expected. Is that correct so far?
On top of this, we would explicitly disable the automatic color conversion done by the sRGB framebuffer so that OpenGL doesn't try to degamma the color values a second time when an sRGB-tagged format is provided.

Coincidentally, I believe this was my original code actually; let me try rewriting it.

Contributor Author


That's not it at all, heh.
Colors are too dark in 8-bit OBS color formats and normal in 10-bit or 16-bit formats.
With a render delay filter enabled they are correct in 8-bit formats but too bright in 10-bit and 16-bit formats.

The thing that still puzzles me the most is why the implementation that's currently pushed in this pull request here works just fine. It doesn't line up with your explanation at all... I'm really confused

Contributor Author


I tried adding a color correction filter to see if what you're saying causes a new issue, and now I am even more confused.

The filter.. does not do anything. I can add a render delay filter before the color correction filter and it works, but without it the filter just straight up does not do anything anymore.

I'd really love to fix this issue and implement the pipewire capture properly, but after getting more and more input I'm just getting more and more confused, and I don't even understand what I'm supposed to do anymore.


Perhaps it's best to take a step back and make sure we're on the same page, step-by-step.

From my understanding it works something like this:

  1. It starts with an input texture
  2. You then either set it via set_texture or set_texture_srgb. When using the latter, any format that is tagged as sRGB is converted into linear, with all other formats being left untouched.
  3. Now the specified shader is applied, which in our case is Draw or DrawSrgbDecompress, where the latter always decompresses gamma color into linear.
  4. Finally, if enable_framebuffer_srgb is true, then it will do the opposite and turn linear color into a gamma color, as that is what OBS expects.

There's a good chance that I am far, far off in my understanding and hopefully you can clear it up. But assuming what I said is actually correct, then this workflow here will work:

  1. The input texture may be non-sRGB tagged, but sRGB gamma-encoded.
  2. We need to use set_texture, because set_texture_srgb would lead to inconsistent results.
  3. As the input into the fragment shader is now gamma, rather than linear, DrawSrgbDecompress should be used such that it outputs linear color.
  4. If the source is sRGB-aware and assuming no filter is set, then I should enable the framebuffer sRGB, which will for the last time convert the linear color data into gamma color data, which is what OBS expects.
    As for what I do when sRGB-aware is not set, I'm not sure.

I implemented what I just said and it sort of confirmed some things and caused more confusion in others, for reference here is the pseudo-code:

effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
tech = gs_effect_get_technique(effect, "DrawSrgbDecompress");
gs_enable_framebuffer_srgb(true);
gs_effect_set_texture(image, obs_pw_stream->texture);

First of all:
It doesn't matter what format I choose in my compositor. As you said earlier, only some formats returned from pipewire are sRGB-tagged and because I was using set_texture_srgb originally, I was getting inconsistent results. These are now gone, which leads me to believe that this was the right step and the shader now as expected always gets gamma-encoded input, always outputting linear color data.

Onto the part I do not understand:
The code works as intended when there is no filter; that's already a good start. What I did next was set gs_enable_framebuffer_srgb to false out of curiosity. From my understanding this causes the linear output of the fragment shader to be interpreted as gamma, thus leading to an image that is too bright. That is not what happened.

Instead, for 10-bit and 16-bit formats in OBS, the source rendered correctly as before, but when I switched to an 8-bit format, it was... too dark? Why? That doesn't make any sense to me. If I had to make sense of it: 10-bit and 16-bit formats on the main render target are linear-encoded in OBS, and because the output color from the fragment shader is linear, it's a perfect match. Meanwhile 8-bit formats are gamma-encoded in OBS, so the image appears darker, because linear color data interpreted as gamma color data is actually darker rather than brighter, and my understanding of the gamma curve is flawed 😂

On top of this, when I go back to the pseudo code and test with filters, I get even weirder behavior:

  1. Color correction filter straight up does nothing, when the output is 8-bit, but when it is 10-bit or 16-bit, everything works as expected.
  2. When I put a render delay before color correction, everything does work as expected.
    I think I understand why putting a render delay filter first works, but I have no idea why color correction directly does not.

Member


  1. We need to use set_texture, because set_texture_srgb would lead to inconsistent results.

I'd say it wouldn't do anything, as the texture is not sRGB-tagged - no decoding takes place and if the colours in the texture use sRGB gamma, the sampler will return those colours untouched.

But if you want to ensure that no decoding takes place, regardless of whether the texture is tagged as sRGB or not, you need to use set_texture.

effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
tech = gs_effect_get_technique(effect, "DrawSrgbDecompress");
gs_enable_framebuffer_srgb(true);
gs_effect_set_texture(image, obs_pw_stream->texture);

This should be correct if PipeWire textures always contain colour data with the inverse sRGB transfer function applied, and it returns linear colour from the fragment shader.

Instead, for 10-bit and 16-bit formats in OBS, the source rendered correctly as before

When you select 10-bit or 16-bit video output, OBS will use a 16-bit floating point format with "extended dynamic range" (Rec 709 primaries, HDR colours are "larger than 1.0") internally which will always use linear colour. The render target will thus not use an sRGB image format and OpenGL will not automatically apply the inverse sRGB transfer function to any colour returned by the shader.

Which means that gs_enable_framebuffer_srgb has no effect whatsoever, but it's expected to "just leave it on" because it will always do the right thing depending on the render target format (encode when the render target uses sRGB, do nothing when it doesn't) as you're always expected to return linear colour in the shader.

If I had to make sense of it, 10-bit and 16-bit formats on the main render target are linear-encoded in OBS and because the output color from the fragment shader is linear, it's a perfect match. Meanwhile 8-bit formats are gamma-encoded in OBS, so the image appears darker, because linear color data interpreted as gamma color data is actually darker rather than brighter

That sounds like a correct assessment to me.


As far as the filters are concerned I'd have to check out their render paths. Any filter essentially puts an intermediary render target between your source's render function and the actual video output texture.

Member

So the "rule" is: Your shader should always return linear colour and you should always enable sRGB for the frame buffer, because then:

  • If the render target uses an sRGB image format (8-bit video output format), a colour value of 0.5 is blended as 0.5 and (if no transparencies are involved) written as 0.735 into the render target
  • If the render target uses a linear image format (10-bit and 16-bit video output format), a colour value of 0.5 is blended as 0.5 and written as 0.5 into the render target.

The reason for this is that for HDR output formats you have a choice of different transfer functions (PQ or HLG) which behave differently and thus are open to user choice, whereas sRGB is "close enough" to Rec 709 and Rec 601 that OBS uses them for all 8-bit output formats automatically and as it is the more "natural" way of representing colour for output we can use automatic GPU encoding/decoding for it.

Contributor Author

Gotcha! So I wasn't far off at all. I will push the updated code in a second, and then we'll figure out what's wrong with the filters.

Contributor Author

It looks like the line that makes the filter not work is this:

```c
effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
```

Removing it makes the whole source whiter again, but at least the color correction filter works without slapping a render delay in between. I'm not sure what this line does anyway, but I imagine the color correction filter sets some parameters on the effect passed to the method? Perhaps not.

Either way, what do I do with this information now? Do you have a solution to this, @PatTheMav?

Comment thread on plugins/linux-pipewire/pipewire.c (Outdated)

```c
const bool linear_srgb = gs_get_linear_srgb();
const bool previous = gs_framebuffer_srgb_enabled();
gs_enable_framebuffer_srgb(linear_srgb);
gs_enable_framebuffer_srgb(false);
```
Member

Modern sRGB-aware sources are expected to use sRGB encoding for the render target based on the value returned by gs_get_linear_srgb, because that's OBS' way to communicate to a source whether it expects the source to write colour with sRGB gamma.

As a source cannot know which context it is in (is the render function being called by a scene, a filter, something else), the source has to rely on OBS to tell it what to do.

It's annoying from an API point of view, but it's how this is supposed to work.

@RytoEX
Member

RytoEX commented Jul 24, 2025

As this fix is not yet finalized, I am inclined to merge the revert (#12422) to unbreak SDR (likely the more common use-case) until this can be fixed.

@PancakeTAS
Contributor Author

I hope that won't cause too many merge conflicts, but I think that is a good idea for now as I'm going to be busy these next few days.

@PancakeTAS
Contributor Author

Successful rebase... I think

@PancakeTAS PancakeTAS force-pushed the fix/pipewirev3 branch 2 times, most recently from 4df62ac to 92363ed Compare August 6, 2025 21:27
@PatTheMav PatTheMav removed this from the OBS Studio 31.1 milestone Nov 20, 2025
@PatTheMav
Member

@PancakeTAS Could you refresh my memory on what the state of this PR is, based on all our earlier discussions?

@PancakeTAS
Contributor Author

PancakeTAS commented Jan 20, 2026

@PancakeTAS Could you refresh my memory on what the state of this PR is, based on all our earlier discussions?

Of course! My original pull request fixed a "white-gamma" issue when OBS was configured to use 10-bit or 16-bit color formats. It had the side effect of breaking filters, so it was reverted. This pull request intends to implement the PipeWire rendering properly, such that regardless of the imported PipeWire source, the OBS filter chain and the OBS color configuration, the texture is always rendered correctly.

We were going back and forth on how the color needs to be "de-gamma'd" or passed through, depending on how OBS expects the color to be, and we were able to find an implementation which both makes sense and actually works.

Unfortunately, one issue remained which I had absolutely no idea how to resolve:

  • If OBS is configured to use an 8-bit color format (NV12 / I420 / I444 / BGRA) and the PipeWire source has a color correction filter applied to it, then the color correction filter does not do anything.
  • If OBS uses a 10-bit or 16-bit format, the filter works as expected
  • If a render delay filter is placed before the color correction filter, the filter also works as expected.
  • It does not matter what format is imported from PipeWire, this issue occurs on both 8-bit and 10-bit formats.

My understanding of the OBS rendering stack is still not deep enough, and randomly throwing ideas at it did not produce a solution, hence the stalemate.

Finally: I did a rebase on the PR and confirmed the same issue is still present on the latest master branch.

@PatTheMav
Member

PatTheMav commented Jan 21, 2026

Unfortunately, one issue remained which I had absolutely no idea how to resolve:

  • If OBS is configured to use an 8-bit color format (NV12 / I420 / I444 / BGRA) and the PipeWire source has a color correction filter applied to it, then the color correction filter does not do anything.
  • If OBS uses a 10-bit or 16-bit format, the filter works as expected
  • If a render delay filter is placed before the color correction filter, the filter also works as expected.
  • It does not matter what format is imported from PipeWire, this issue occurs on both 8-bit and 10-bit formats.

Thanks, much appreciated.

The tricky thing with filters is that they don't all behave the same way and some even introduce entirely "wrong" rendering behaviour:

  • By default OBS tells a source what the current render target's colour space is and in turn asks what colour space the source will provide.
  • This is where a source (be it an actual source or a filter) conceptually needs to decide whether it produces its content as-is or whether it wants to do any conversions internally.
    • If the source reports its colour format as-is, OBS will check this against the render target's format and set up an intermediary render target if conversions need to happen.
    • Thus the source will render into the intermediate and once the source is done, OBS will render this intermediate with a conversion shader into the "actual" render target
    • Otherwise the source reports back the colour format used by the render target, so OBS expects zero colour conversions. The source now has to ensure that the output it produces uses that same format.
  • Many filters, when asked this question, will straight-up report the colour format of the source they are wrapping (either the actual source or another filter). This can be very wrong in some circumstances (more on that below).
  • Some filters, but unfortunately not all, will detect if the underlying source uses a colour format that the filter does not support. In that case the filter will call obs_source_skip_video_filter to effectively tell OBS to "skip over them". In this scenario, having reported back the wrapped source's colour format as their own is entirely correct.
  • However some other filters do not detect this and happily take the EDR texture provided by an HDR media source and then apply SDR shader mathematics to them, without any tonemapping, potentially resulting in all kinds of broken results.
    • And this can lead to the mad situation where OBS expects the filter to produce EDR output (because it reported back the colour format of the wrapped source), but actually produces broken SDR content based on EDR input.
    • On Windows where some sources might use Microsoft's scRGB format instead of EDR this can lead to even wilder results because all colour values need to have a luminance multiplier applied to them

IIRC most filters that do not support EDR input will print a single little line between filter preview and filter controls in the property view to notify the user about this and a user will be left wondering "why the filter doesn't do anything". But some filters happily "do something", albeit producing entirely wrong results, but which can be hard to notice depending on input.

@PatTheMav
Member

So the "main" render target can only ever be one of two things:

  • Either an 8-bit SDR texture with sRGB transfer function
  • Or a 16-bit EDR texture with a linear transfer function

This choice is influenced by the "output" settings in OBS. Any output format that requires 10-bit colour will switch OBS' internal render target to the 16-bit variant.

By default OBS expects sources to use linear blending, regardless of EDR or SDR rendering:

| Render target format | Shader output format | Source input format | Process |
| --- | --- | --- | --- |
| 8-bit sRGB | 8-bit linear | 8-bit sRGB | Encode <- Passthrough <- Decode |
| 8-bit sRGB | 8-bit linear | 8-bit linear | Encode <- Passthrough <- Read |
| 8-bit sRGB | 8-bit sRGB | 8-bit linear | Encode <- Encode <- Read ⚠️ |
| 8-bit sRGB | 8-bit sRGB | 8-bit sRGB | Encode <- Encode <- Decode ⚠️ |
| 8-bit sRGB | 16-bit linear | 8-bit linear | Encode+Tonemap <- Passthrough <- Read |
| 8-bit sRGB | 16-bit linear | 8-bit sRGB | Encode+Tonemap <- Passthrough <- Decode |
| 8-bit sRGB | 16-bit linear | 16-bit linear | Encode+Tonemap <- Passthrough <- Read |
| 16-bit linear | 8-bit linear | 8-bit sRGB | Write <- Passthrough <- Decode |
| 16-bit linear | 8-bit linear | 8-bit linear | Write <- Passthrough <- Read |
| 16-bit linear | 8-bit sRGB | 8-bit linear | Write <- Encode <- Read ⚠️ |
| 16-bit linear | 8-bit sRGB | 8-bit sRGB | Write <- Encode <- Decode ⚠️ |
| 16-bit linear | 16-bit linear | 8-bit sRGB | Write <- Passthrough <- Decode |
| 16-bit linear | 16-bit linear | 8-bit linear | Write <- Passthrough <- Read |
| 16-bit linear | 16-bit linear | 16-bit linear | Write <- Passthrough <- Read |

Every row with a warning produces wrong colour: In the first two cases the colour has the sRGB transfer function applied twice (once by the shader, another time by the graphics API when writing into the sRGB render target), in the latter two cases sRGB encoded colour will potentially have the HLG or PQ transfer functions applied later.

Effectively any row that doesn't have "passthrough" in the middle is potentially wrong. The legacy behaviour for sources is as follows:

| Render target format | Shader output format | Source input format | Legacy Process |
| --- | --- | --- | --- |
| 8-bit sRGB | 8-bit sRGB | 8-bit sRGB | Write <- Passthrough <- Read |
| 8-bit sRGB | 8-bit linear | 8-bit linear | Write <- Passthrough <- Read ⚠️ |

The first row is what happens when you disable sRGB on the render target and don't use the _srgb variant of the set_texture function: the sRGB values are sampled as-is and also written as-is. But when a texture actually has no transfer function applied (and the shader does not apply one explicitly), the colours are "too dark" when written into the render target.

@PancakeTAS
Contributor Author

Thanks for the summary! Just to be clear though, the main rendering already works as expected and there aren't any duplicate transfer functions being applied anywhere anymore. The only remaining issue is with that color correction filter.

You mention the filter may report the wrong color formats and potentially run incorrect calculations, leading to incorrect results. In this case however, the filter doesn't run any incorrect calculations; it just does nothing at all.

You also mention the filter may skip itself if it notices it doesn't support the source format, and it sure looks like that might be happening here. Like I said though, this issue happens in a standard 8-bit, sRGB-only setup, and the color correction filter should definitely support this. It doesn't really make any sense why it would start working after the target format is changed either, does it?

Something is really wrong here..

@Warchamp7 Warchamp7 added this to the OBS Studio 32.2 milestone May 6, 2026

Labels

kind/bug: Categorizes issue or PR as related to a bug.
platform/linux: Categorizes issue or PR as affecting Linux specifically.
Seeking Testers: Build artifacts on CI.


Development

Successfully merging this pull request may close these issues.

Render Delay filter on Pipewire Screen Capture causes gamma shift

5 participants