linux-pipewire: Fix render technique in captures again #12424
PancakeTAS wants to merge 1 commit into obsproject:master from
Conversation
It probably happens because the current code will potentially de-gamma twice:
The functions
By default OBS Studio assumes that the video output texture uses either sRGB (or sRGB-alike) gamma-encoded values or PQ/HLG-encoded values for HDR video. So the scenario in which the frame buffer should not be "sRGB-enabled" is when the fragment shader writes RGB values that are already encoded with sRGB gamma (so the values are written "straight" into the render target). Likewise, if a sampled texture is known to have RGB values with linear gamma, you'd use the plain method to set the texture, as the fragment shader can work with the RGB values as-is.
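Condensed into code, those two cases read roughly like this (a hedged sketch; image and tex are placeholder names, not the plugin's actual variables):

/* Case 1: the fragment shader already writes sRGB-encoded values, so keep
 * automatic encoding on the render target switched off. */
gs_enable_framebuffer_srgb(false);

/* Case 2: the sampled texture already holds linear values, so bind it with the
 * plain setter and no sRGB decode happens while sampling. */
gs_effect_set_texture(image, tex);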
Hyprland is not an implementation I would use as a primary testing ground (since they stopped using wlroots). It's Mutter (GNOME) or KWin (Plasma) that should serve as the primary testing ground.
@PatTheMav Thanks for your explanation! So essentially what I'm doing is converting to and from twice, where I could be skipping any conversion entirely. I'll dig through my code again and try to implement what you said.
Let me try to document this here. The assumption I was making is that when the compositor provides a 10-bit texture, it would not be gamma-encoded sRGB. That was incorrect. Instead, I believe I had flipped my understanding of which method does gamma-to-linear and which does the reverse. As far as I understand it now, and it seems to actually work on my system, the source image is always gamma-encoded sRGB. When OBS is set to an 8-bit format, it too is in a gamma-encoded colorspace. All one has to do is use "Draw", with framebuffer srgb disabled and the normal image set method.
I would like to understand when the obs colorspace is changed to scRGB, because changing to any of the HDR colorspaces does not actually do that. I understand from the comment that "GS_CS_709_EXTENDED" is something used only on macOS, where this pipewire capture doesn't run anyways, but "GS_CS_709_SCRGB" says it is Linux HDR, which would imply it is enabled when switching to an HDR color space, am I wrong?
Also, I would like to ask about this code sample. Is there any reason to store and revert the framebuffer srgb state? I first saw this in the Windows code and it already confused me.
const bool previous = gs_framebuffer_srgb_enabled();
gs_enable_framebuffer_srgb(false);
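Expanded for context, the save/restore pattern around a draw usually looks something like this (a generic sketch of the convention, not verbatim from any OBS source file):

const bool previous = gs_framebuffer_srgb_enabled(); /* remember the caller's state */
gs_enable_framebuffer_srgb(false);                   /* draw with automatic sRGB encoding off */

/* ... set effect parameters and run the technique passes ... */

gs_enable_framebuffer_srgb(previous);                /* restore whatever was enabled before */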
I'll let Pat answer your questions, but for that part:
When this line of code was added, Linux (as in Wayland) did not have much in the way of HDR support.
It is really preferred that variables and pointers are declared at the start of the function and, if wanted/needed, initialized later.
Edit: I know you got inspired by OBS' own code, but it's from a time when this kind of preference was not as strictly enforced.
enum gs_color_space color_space = gs_get_color_space();
const bool gamma_space = color_space == GS_CS_SRGB;
Move the declarations to the start of the function.
const char *tech_name = "Draw";
Move the declaration to the start of the function.
Removed gamma_space entirely actually and embedded it into the if.
Is it okay to use the else for initializing tech_name, or should I initialize it at declaration?
Keep the initialization to "Draw" at the start.
const bool linear_srgb = gs_get_linear_srgb();
const bool previous = gs_framebuffer_srgb_enabled();
Move the declaration of previous to the start of the function (and, if you want, initialize it later).
if (!gamma_space)
	tech_name = "DrawSrgbDecompress";
gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);
Move the declaration to the start of the function and keep the initialization here.
- gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);
+ tech = gs_effect_get_technique(effect, tech_name);
With this third commit (you'll have to squash as one commit and force-push later), it seems that the render delay issue is no longer there. Edit: The second had potential
Force-pushed from b7aa181 to 7ddc1bd
Force-pushed from 7ddc1bd to 92c55b8
tytan652 left a comment
Declarations have been put at the function start; it does fix the Render Delay issue seen on my machine.
But I can't review whether the texture manipulations are good.
You can't make that assumption based on the texture format alone. A2RGB10 (or BGR10A2) would either be suited for colour spaces like D65-P3 or any other similar space that has a wider gamut than sRGB. Usually those colour spaces still use the sRGB gamma curve because they are effectively SDR formats. However you might also get HDR10 data with either PQ or HLG transfer functions, as both require only 10-bit colour and using a 32-bit format might be more efficient than a 64-bit format. So you can't assume that "10-bit" means the colours are encoded using sRGB gamma, or HLG, or PQ, either way. Ideally the compositor metadata would tell you explicitly.
Kinda correct - OBS Studio expects the colours written to the render target (the main output texture) to always use sRGB gamma encoding for sRGB, Rec.709, and Rec.601 output modes.
That's not always correct, and admittedly the way this is supposed to work is not very intuitive but was designed this way to avoid breaking functionality of existing plugins and sources:
Important: If a texture uses a colour format that does not explicitly use sRGB gamma, but it is known (by convention) that the colour data in the texture uses it, then an appropriate fragment shader has to be used that decodes the sampled colour values "manually".
So there can be scenarios where
I was wrong about this - texture formats like
Those values are typically used by the main preview and other projectors and updated when Qt calls the "move" or "display change" callbacks. HDR content in OBS uses
PatTheMav left a comment
One important addendum to what I commented already:
- OpenGL will only use the automatic sRGB gamma encoding if the render target's GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING attachment reports GL_SRGB, otherwise it will essentially "ignore" the setting
- Direct3D uses resource views and the implementation manages a pair of "linear" and "non-linear" views, which can point to the same view if the associated colour format does not have an sRGB companion
The same is true for shader resource views (D3D) and the use of GL_DECODE_EXT (OpenGL) for textures used in shaders.
This means that if a render target and/or shader resource uses a floating-point format, OBS will still report that sRGB to linear encoding/decoding should take place, and most code will naively use gs_enable_framebuffer_srgb and gs_effect_set_texture_srgb accordingly, but will "luck out" in this case because OpenGL and D3D11 will magically ignore this and use linear gamma throughout.
if (color_space != GS_CS_SRGB)
	tech_name = "DrawSrgbDecompress";
As I mentioned in my comment, don't use DrawSrgbDecompress. Either set a texture via gs_effect_set_texture_srgb if its colour values use sRGB gamma or use gs_effect_set_texture if the colour values use linear gamma or should not be decoded at all.
This is not entirely correct, see below.
I'm genuinely so confused. I've reread your explanation probably five times by now.
- I changed the render technique back to "Draw".
- Then, I get gs_get_linear_srgb(), which as far as I understand means the source is sRGB-aware and should provide linear image data. It was true in my scenario.
- Since the incoming texture is gamma-encoded, I use gs_effect_set_texture_srgb in combination with gs_enable_framebuffer_srgb(1), which I understand should degamma the texture, making it linear.
Now the texture is white-ish, which means it is... gamma-encoded and displayed on the obs surface.. which is also gamma-encoded? That sounds like I did a pro-gamma (or whatever the opposite of degamma is). No matter what I change the obs settings to, it stays white. I'm really confused.
Instead when I set gs_enable_framebuffer_srgb(0), which again, as far as I understand effectively nullifies gs_effect_set_texture_srgb and just sets it normally. When I do this, I get the correct texture in 8-bit OBS formats but incorrect texture in 10-bit and 16-bit formats. This one makes sense. I don't understand the previous behavior though.
Instead when I set gs_enable_framebuffer_srgb(0), which again, as far as I understand effectively nullifies gs_effect_set_texture_srgb and just sets it normally. When I do this, I get the correct texture in 8-bit OBS formats but incorrect texture in 10-bit and 16-bit formats. This one makes sense. I don't understand the previous behavior though.
Maybe I misunderstand what you meant, but those two function calls change the input and output behaviour independently.
As OBS "wants" colours to use sRGB gamma in the video output texture, let's take a single pixel with the values (.735, .735, .735).
When a fragment shader outputs the linear colour (.2, .2, .2) and framebuffer_srgb is set to false, then that colour is written straight into the render target, which might look "too dark".
If it's set to true however, OpenGL will first decode the render target's colour into (.5, .5, .5), before blending it with (.2, .2, .2) (which by default will just result in (.2, .2, .2)), and OpenGL will then automatically convert this into (.485, .485, .485), which is the value written into the render target.
If a common 8-bit texture has a pixel with the colour (.906, .906, .906) and uses a colour format with sRGB gamma, then using gs_effect_set_texture_srgb will ensure that the colour value used inside the shader code is decoded to (.8, .8, .8). The Draw shader will typically return that value, which is then used for blending in the render target, the result of which then follows the steps above.
If you use gs_effect_set_texture however, (.906, .906, .906) will be used and if framebuffer_srgb is set to true, that pixel will then be encoded into (.957, .957, .957) after linear blending, and thus be "too bright".
OBS' default 8-bit colour formats GS_BGRX, GS_RGBA, and GS_BGRA are all converted into such sRGB-tagged OpenGL formats (GL_SRGB8_ALPHA8 and GL_SRGB8 respectively), so OpenGL will automatically "decode" them but only if you also use gs_effect_set_texture_srgb.
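As a side note (my own sketch, not code from this thread): the numbers in this example fall straight out of the standard sRGB transfer functions, so you can reproduce 0.735, 0.485 and 0.906 from 0.5, 0.2 and 0.8 with something like:

#include <math.h>

/* Linear -> sRGB-encoded value, e.g. 0.5 -> ~0.735, 0.2 -> ~0.485, 0.8 -> ~0.906 */
static float srgb_encode(float linear)
{
	return (linear <= 0.0031308f) ? linear * 12.92f : 1.055f * powf(linear, 1.0f / 2.4f) - 0.055f;
}

/* sRGB-encoded -> linear value, e.g. ~0.906 -> 0.8, ~0.735 -> 0.5 */
static float srgb_decode(float encoded)
{
	return (encoded <= 0.04045f) ? encoded / 12.92f : powf((encoded + 0.055f) / 1.055f, 2.4f);
}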
Important
It took me a bit to wrap my head around this, so I was wrong before:
A format like GS_R10G10B10A2 is converted into GL_RGB10_A2 which is not automatically decoded. So if you know that you actually get wide-gamut sRGB-encoded SDR colour (e.g. when using the D65-P3 colour space), you have to "manually" decode the sampled colour values in the shader (either via DrawSrgbDecompress, or DrawD65P3 for this specific case).
So in theory you could use framebuffer_srgb set to false and gs_effect_set_texture, which would read (.906, .906, .906) from the texture, return it from the fragment shader, and OpenGL will blend it with (.735, .735, .735) and as long as the user has set up no actual blending, (.906, .906, .906) will be written into the render target, which is a happy accident and will be correct.
But as soon as the user sets up any other blending mode, OpenGL will blend (.735, .735, .735) and (.906, .906, .906), which will result in a far too bright result, because the correct values for blending should be (.5, .5, .5) and (.8, .8, .8), with the final colour being encoded into sRGB gamma again (which will also not happen).
The important wrinkle here (specifically for OpenGL, as that's important for this plugin) is that:
- OpenGL will automatically decode gamma for textures using GL_SRGB8_ALPHA8 and GL_SRGB8 only; if a texture has any other format, no decoding takes place.
  - All gs_effect_set_texture does is tell OpenGL to "skip" decoding of textures even if they use GL_SRGB8_ALPHA8 and GL_SRGB8
- OpenGL will not automatically encode gamma for render targets even if they use GL_SRGB8_ALPHA8 and GL_SRGB8
  - This behaviour needs to be enabled explicitly via gl_enable(GL_FRAMEBUFFER_SRGB) (that's what device_enable_framebuffer_srgb(true) does) - see the sketch below
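A hedged sketch of the raw OpenGL calls behind those two points (using the EXT_texture_sRGB_decode extension; illustrative only, not libobs code):

/* Sampling side: automatic decoding only applies to GL_SRGB8 / GL_SRGB8_ALPHA8
 * textures and can be skipped per texture - roughly what the plain texture
 * setter asks for. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SRGB_DECODE_EXT, GL_SKIP_DECODE_EXT);

/* Render-target side: encoding into sRGB attachments is off by default and must
 * be enabled explicitly - this is what device_enable_framebuffer_srgb(true) does. */
glEnable(GL_FRAMEBUFFER_SRGB);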
I'm also learning this while investigating these calls and the specific behaviour of OpenGL, so no worries - we'll get this across the finish line.
@PatTheMav
Sorry for the long wait, I was writing some exams for university.
Let's try to get this to work the right way then.
Here's my current understanding:
We essentially cannot use gs_effect_set_texture_srgb without hacky workarounds, because not every possible format is sRGB tagged, meaning in our case 10-bit formats will not be decoded from their gamma format.
As the source is tagged as sRGB-aware, when the state variable gs_get_linear_srgb is set, the source needs to output linear color values and so DrawSrgbDecompress must be used.
If gs_get_linear_srgb is set to false, then we have to fall back to Draw because gamma-encoded color values are expected. Is that correct so far?
On top of this, we would explicitly disable the automatic color conversion done by the sRGB framebuffer so that OpenGL doesn't try to degamma the color values a second time when an sRGB-tagged format is provided.
Coincidentally, I believe this was my original code actually; let me try rewriting it.
That's not it at all, heh.
Colors are too dark in 8-bit OBS color formats and normal in 10-bit or 16-bit formats.
With a render delay filter enabled they are correct in 8-bit formats but too bright in 10-bit and 16-bit formats.
The thing that still puzzles me the most is why the implementation that's currently pushed in this pull request here works just fine. It doesn't line up with your explanation at all... I'm really confused
I tried adding a color correction filter to see if what you're saying causes a new issue, and now I am even more confused.
The filter.. does not do anything. I can add a render delay filter before the color correction filter and it works, but without it the filter just straight up does not do anything anymore.
I'd really love to fix this issue and implement the pipewire capture properly, but after getting more and more input I'm just being more and more confused and I don't even understand what I'm supposed to do anymore.
Perhaps it's best to take a step back and make sure we're on the same page, step-by-step.
From my understanding it works something like this:
- It starts with an input texture
- You then either set it via set_texture or set_texture_srgb. When using the latter, any format that is tagged as sRGB is converted into linear, with all other formats being left untouched.
- Now the specified shader is applied, which in our case is Draw or DrawSrgbDecompress, where the latter always decompresses gamma color into linear.
- Finally, if enable_framebuffer_srgb is true, then it will do the opposite and turn linear color into gamma color, as that is what OBS expects.
There's a good chance that I am far, far off in my understanding and hopefully you can clear it up. But assuming what I said is actually correct, then this workflow here will work:
- The input texture may be non-sRGB tagged, but sRGB gamma-encoded.
- We need to use set_texture, because set_texture_srgb would lead to inconsistent results.
- As the input into the fragment shader is now gamma, rather than linear, DrawSrgbDecompress should be used such that it outputs linear color.
- If the source is sRGB-aware and assuming no filter is set, then I should enable the framebuffer sRGB, which will for the last time convert the linear color data into gamma color data, which is what OBS expects.
As for what I do when sRGB-aware is not set, I'm not sure.
I implemented what I just said and it sort of confirmed some things and caused more confusion in others, for reference here is the pseudo-code:
effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
tech = gs_effect_get_technique(effect, "DrawSrgbDecompress");
gs_enable_framebuffer_srgb(true);
gs_effect_set_texture(image, obs_pw_stream->texture);
First of all:
It doesn't matter what format I choose in my compositor. As you said earlier, only some formats returned from pipewire are sRGB-tagged and because I was using set_texture_srgb originally, I was getting inconsistent results. These are now gone, which leads me to believe that this was the right step and the shader now as expected always gets gamma-encoded input, always outputting linear color data.
Onto the part I do not understand:
The code works as intended when there is no filter; that's already a good start. What I did now is set gs_enable_framebuffer_srgb to false out of curiosity. From my understanding this causes the linear output of the fragment shader to be interpreted as gamma, thus leading to an image that is too bright. That is not what happened. Instead, for 10-bit and 16-bit formats in OBS, the source rendered correctly as before, but when I switched to an 8-bit format, it was... too dark? Why? That doesn't make any sense to me.
If I had to make sense of it: 10-bit and 16-bit formats on the main render target are linear-encoded in OBS, and because the output color from the fragment shader is linear, it's a perfect match. Meanwhile 8-bit formats are gamma-encoded in OBS, so the image appears darker, because linear color data interpreted as gamma color data is actually darker rather than brighter, and my understanding of the gamma curve is flawed 😂
On top of this, when I go back to the pseudo code and test with filters, I get even weirder behavior:
- The color correction filter straight up does nothing when the output is 8-bit, but when it is 10-bit or 16-bit, everything works as expected.
- When I put a render delay before color correction, everything does work as expected.
I think I understand why putting a render delay filter before it works, but I have no idea why color correction on its own does not.
- We need to use set_texture, because set_texture_srgb would lead to inconsistent results.
I'd say it wouldn't do anything, as the texture is not sRGB-tagged - no decoding takes place and if the colours in the texture use sRGB gamma, the sampler will return those colours untouched.
But if you want to ensure that no decoding takes place, regardless of whether the texture is tagged as sRGB or not, you need to use set_texture.
effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
tech = gs_effect_get_technique(effect, "DrawSrgbDecompress");
gs_enable_framebuffer_srgb(true);
gs_effect_set_texture(image, obs_pw_stream->texture);
This should be correct if PipeWire textures always contain colour data with the inverse sRGB transfer function applied and returns linear colour from the fragment shader.
Instead, for 10-bit and 16-bit formats in OBS, the source rendered correctly as before
When you select 10-bit or 16-bit video output, OBS will use a 16-bit floating point format with "extended dynamic range" (Rec 709 primaries, HDR colours are "larger than 1.0") internally which will always use linear colour. The render target will thus not use an sRGB image format and OpenGL will not automatically apply the inverse sRGB transfer function to any colour returned by the shader.
Which means that gs_enable_framebuffer_srgb has no effect whatsoever, but it's expected to "just leave it on" because it will always do the right thing depending on the render target format (encode when the render target uses sRGB, do nothing when it doesn't) as you're always expected to return linear colour in the shader.
If I had to make sense of it, 10-bit and 16-bit formats on the main render target are linear-encoded in OBS and because the output color from the fragment shader is linear, it's a perfect match. Meanwhile 8-bit formats are gamma-encoded in OBS, so the image appears darker, because linear color data interpreted as gamma color data is actually darker rather than brighter
That sounds like a correct assessment to me.
As far as the filters are concerned I'd have to check out their render paths. Any filter essentially puts an intermediary render target between your source's render function and the actual video output texture.
So the "rule" is: Your shader should always return linear colour and you should always enable sRGB for the frame buffer, because then:
- If the render target uses an sRGB image format (8-bit video output format), a colour value of 0.5 is blended as 0.5 and (if no transparencies are involved) written as 0.735 into the render target
- If the render target uses a linear image format (10-bit and 16-bit video output format), a colour value of 0.5 is blended as 0.5 and written as 0.5 into the render target.
The reason for this is that for HDR output formats you have a choice of different transfer functions (PQ or HLG) which behave differently and thus are open to user choice, whereas sRGB is "close enough" to Rec 709 and Rec 601 that OBS uses them for all 8-bit output formats automatically and as it is the more "natural" way of representing colour for output we can use automatic GPU encoding/decoding for it.
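Putting that rule together, a minimal sketch of what a source render callback could look like (my reading of the discussion, with hypothetical names such as render_sketch and the "image" parameter; not the exact code in this PR):

#include <obs-module.h>

static void render_sketch(gs_texture_t *tex)
{
	gs_effect_t *effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
	gs_technique_t *tech = gs_effect_get_technique(effect, "DrawSrgbDecompress");
	const bool previous = gs_framebuffer_srgb_enabled();

	/* Always leave sRGB encoding on: it only takes effect when the render
	 * target actually uses an sRGB format (8-bit video output). */
	gs_enable_framebuffer_srgb(true);

	/* The PipeWire texture is not sRGB-tagged, so bind it without automatic
	 * decoding and let DrawSrgbDecompress linearize the samples instead. */
	gs_effect_set_texture(gs_effect_get_param_by_name(effect, "image"), tex);

	const size_t passes = gs_technique_begin(tech);
	for (size_t i = 0; i < passes; i++) {
		if (gs_technique_begin_pass(tech, i)) {
			gs_draw_sprite(tex, 0, 0, 0);
			gs_technique_end_pass(tech);
		}
	}
	gs_technique_end(tech);

	gs_enable_framebuffer_srgb(previous);
}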
Gotcha! So I wasn't far off at all. I will push the updated code in a second, and then we'll figure out what's wrong with the filters.
It looks like the line that makes the filter not work is this:
effect = obs_get_base_effect(OBS_EFFECT_DEFAULT);
Removing it makes the whole source whiter again, but at least the color correction filter works without slapping a render delay in between. I'm not sure what this line does anyway, but I imagine there are some parameters set in the shader passed to the method from color correction? Perhaps not.
Either way, what do I do with this information now? Do you have a solution to this, @PatTheMav?
const bool linear_srgb = gs_get_linear_srgb();
const bool previous = gs_framebuffer_srgb_enabled();
- gs_enable_framebuffer_srgb(linear_srgb);
+ gs_enable_framebuffer_srgb(false);
Modern sRGB-aware sources are expected to use sRGB encoding for the render target based on the value returned by gs_get_linear_srgb, because that's OBS' way to communicate to a source whether it expects the source to write colour with sRGB gamma.
As a source cannot know which context it is in (is the render function being called by a scene, a filter, something else), the source has to rely on OBS to tell it what to do.
It's annoying from an API point of view, but it's how this is supposed to work.
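In code, the sRGB-aware pattern being described is essentially the gs_get_linear_srgb lines from the diff above (a sketch only, not the change made in this PR):

const bool linear_srgb = gs_get_linear_srgb();
const bool previous = gs_framebuffer_srgb_enabled();

/* Let OBS decide whether this particular draw should write sRGB-encoded colour. */
gs_enable_framebuffer_srgb(linear_srgb);
/* ... bind the texture and run the technique ... */
gs_enable_framebuffer_srgb(previous);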
As this fix is not yet finalized, I am inclined to merge the revert (#12422) to unbreak SDR (likely the more common use case) until this can be fixed.
I hope that won't cause too many merge conflicts, but I think that is a good idea for now, as I'm going to be busy these next few days.
Force-pushed from 92c55b8 to 5a01d9d
Successful rebase... I think
Force-pushed from 4df62ac to 92363ed
@PancakeTAS Could you refresh my memory on what the state of this PR is, based on all our earlier discussions?
Of course! My original pull request fixed a "white-gamma" issue when obs was configured to use 10-bit or 16-bit color formats. My pull request had the side effect of breaking filters, so it was reverted. This pull request intends to implement the pipewire rendering properly, such that regardless of the imported pipewire source, obs filter chain, and obs color configuration, the texture is always rendered properly. We were going back and forth on how the color needs to be "de-gamma'd" or passed through, depending on how obs expects the color to be, and we were able to find an implementation which both makes sense and actually works. Unfortunately, one issue remained which I had absolutely no idea how to resolve:
My understanding of the OBS rendering stack is still not deep enough, and randomly going at it with ideas did not lead to a solution, hence the stalemate. Finally: I did a rebase on the PR and confirmed the same issue is still present on the latest master branch.
Force-pushed from 92363ed to 5cf6cd6
Thanks, much appreciated. The tricky thing with filters is that they don't all behave the same way and some even introduce entirely "wrong" rendering behaviour:
IIRC most filters that do not support EDR input will print a single little line between the filter preview and the filter controls in the properties view to notify the user about this, and a user will be left wondering "why the filter doesn't do anything". But some filters happily "do something", albeit producing entirely wrong results, which can be hard to notice depending on the input.
So the "main" render target can only ever be one of two things:
This choice is influenced by the "output" settings in OBS. Any output format that requires 10-bit colour will switch OBS' internal render target to the 16-bit variant. By default OBS expects sources to use linear blending, regardless of EDR or SDR rendering:
Every row with a warning produces wrong colour: In the first two cases the colour has the sRGB transfer function applied twice (once by the shader, another time by the graphics API when writing into the sRGB render target), in the latter two cases sRGB encoded colour will potentially have the HLG or PQ transfer functions applied later. Effectively any row that doesn't have "passthrough" in the middle is potentially wrong. The legacy behaviour for sources is as follows:
The first row is what happens when you disable sRGB on the render target and don't use the
Thanks for the summary! Just to be clear though, the main rendering already works as expected and there aren't any duplicate transfer functions being applied anywhere anymore. The only remaining issue is with that color correction filter. You mention the filter may report the wrong color formats and potentially run incorrect calculations, leading to incorrect results. In this case however, the filter doesn't run any incorrect calculations; it just does nothing at all. You also mention the filter may skip itself if it notices it doesn't support the source format, and it sure looks like that might be happening here. Like I said though, this issue happens in a standard 8-bit + sRGB-only setup and the color correction filter should definitely support this. It doesn't really make any sense why it would start working after the target format is changed either, does it? Something is really wrong here.
Description
Fixes #12412
Supersedes #12422
Changes the linux-pipewire render code to use the correct rendering techniques
Motivation and Context
Adding an effect to the pipewire capture source leads to a decrease in gamma.
While unsure about why exactly this happens, this PR copies the function for finding the right render technique from the Windows dc-capture and correctly handles the only non-linear format supported so far.
How Has This Been Tested?
Tested each combination of:
Additional testing with a hardware-cursor-capable machine is required. Same for traditional 8-bit displays (I do not trust Hyprland, better be safe!)
Types of changes
Checklist: