6 changes: 2 additions & 4 deletions .github/skills/code-review/SKILL.md
@@ -69,11 +69,9 @@ If an excluded file contains meaningful hand-written changes (e.g. a hand-edited

 Do this pass **before** the mechanical checklist. This is where most real bugs are caught; skipping it is the most common way a review misses a bug.
 
-Record the output of this pass in your response, not just in internal reasoning. For each non-trivial new or changed function, produce a short bullet block in your working output containing the stated intent, the enumerated inputs, any input-to-output mismatches, and any missing test coverage. This record is what you'll draw from when compiling the issue table in Step 6.
+For the branch as a whole, and then for each non-trivial new or changed function, work through the following steps. Record the output in your response (not just in internal reasoning): for each function, produce a short bullet block containing the stated intent, the enumerated inputs, any input-to-output mismatches, and any missing test coverage. This record is what you'll draw from when compiling the issue table in Step 6.
 
-For the branch as a whole, and then for each non-trivial new or changed function:
-
-1. **Identify the stated intent.** Read the commit messages, PR description, any task/issue reference, and the function's name and doc comment. Write down in one sentence what the code claims to do. If no commit message, PR description, or doc comment explains the intent, derive it from the function name and surrounding call sites, and **flag the missing context as a Warning** — a reviewer should not have to guess what the code is for.
+1. **Identify the stated intent.** Read the commit messages, PR description, any task/issue reference, and the function's name and doc comment. Write down in one sentence what the code claims to do. If no commit message, PR description, or doc comment explains the intent, derive it from the function name and surrounding call sites, and **flag the missing context as a Warning** — a reviewer should not have to guess what the code is for. Treat in-code justifications for limitations or drops — comments, warnings, and JSDoc that explain why something is missing, skipped, ignored, or "not supported" ("we drop X because…", "X is not supported here", "for now we only handle Y") — as **claims to verify, not as facts**. If the code feels the need to explain an absence, investigate whether that absence is actually necessary rather than adopting it as part of the stated intent.
 2. **Enumerate representative inputs.** List concrete input shapes the function must handle. Typical shapes to consider: empty, single element, two elements, boundary values at each end of a range, the symmetric counterpart of the obvious case (if the author handled A, did they handle not-A?), and any input that would take the code through a different branch or early return. For union-typed inputs (e.g. `number[] | string`), walk through one concrete value per variant so every branch sees a realistic value — a single input that happens to satisfy a shared guard (like `.length`) will hide bugs where the guard means different things across variants.
 3. **Trace the output for each input.** Walk each input through the implementation and compare the result against the stated intent. If any input produces a result that doesn't match the intent, that is a Critical or Warning issue — even if the code compiles, lints, and every existing test passes.
 4. **Check test coverage against the enumerated inputs.** If a particular input shape matters to the stated intent and no test exercises it, flag that as a Warning. Passing tests only prove the cases the author thought to test.
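To make step 2's union-type warning concrete, here is a hedged TypeScript sketch (the function `firstItem` and its behavior are invented for illustration, not taken from this PR): both variants pass the shared `.length` guard, but "first item" means something different for each.

```typescript
// Hypothetical example for step 2: a union-typed input where a shared
// guard (`.length`) means different things across variants.
function firstItem(input: number[] | string): string {
    if (input.length === 0) {
        return "<empty>";
    }
    // The bug to catch in review: for the string variant, indexing with
    // [0] yields a single character, not the first logical item — so the
    // two variants silently diverge despite passing the same guard.
    return String(input[0]);
}

// Walking one concrete value per variant exposes the mismatch:
console.log(firstItem([42, 7])); // "42"
console.log(firstItem("42,7"));  // "4" — not "42": the variants diverge
```

A single test using only the array variant would pass and hide the string-variant divergence, which is exactly why the step asks for one concrete value per union member.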
333 changes: 327 additions & 6 deletions packages/dev/core/src/Particles/gpuParticleSystem.ts

Large diffs are not rendered by default.

6 changes: 6 additions & 0 deletions packages/dev/core/src/Particles/thinParticleSystem.ts
@@ -1564,6 +1564,12 @@ export class ThinParticleSystem extends BaseParticleSystem implements IDisposabl
 (this.emitter as any).computeWorldMatrix(true);
 }
 
+// Ensure the scene transform matrix is up-to-date so matrix-dependent
+// update steps (notably flow map sampling, which projects world positions
+// into screen space) produce correct results during pre-warm, before any
+// render has had a chance to populate the matrix.
+this._scene?.updateTransformMatrix();
+
 const noiseTextureAsProcedural = this.noiseTexture as ProceduralTexture;
 
 if (noiseTextureAsProcedural && noiseTextureAsProcedural.onGeneratedObservable) {
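The comment in this hunk can be illustrated with a standalone sketch (all names here — `transform`, the stand-in matrices — are invented for illustration, not Babylon.js API): projecting a world position into screen space multiplies it by the scene's view-projection matrix, so if that matrix is still stale (e.g. identity, before any render), every particle lands on the wrong texel during pre-warm.

```typescript
// Hypothetical sketch of why pre-warm needs a fresh transform matrix:
// flow map sampling is a matrix multiply, and a stale matrix silently
// produces a wrong — but valid-looking — position.
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, row-major, multiplying a column vector

function transform(m: Mat4, v: Vec4): Vec4 {
    const out: Vec4 = [0, 0, 0, 0];
    for (let row = 0; row < 4; row++) {
        out[row] =
            m[row * 4 + 0] * v[0] +
            m[row * 4 + 1] * v[1] +
            m[row * 4 + 2] * v[2] +
            m[row * 4 + 3] * v[3];
    }
    return out;
}

const identity: Mat4 = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
// Stand-in "real" view-projection that translates x by -5:
const viewProjection: Mat4 = [1, 0, 0, -5, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

const worldPos: Vec4 = [5, 0, 0, 1];
transform(identity, worldPos);       // [5, 0, 0, 1] — stale matrix, wrong result
transform(viewProjection, worldPos); // [0, 0, 0, 1] — after the matrix is updated
```

This is why the fix calls `this._scene?.updateTransformMatrix()` before the pre-warm loop rather than relying on the first render to populate it.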
13 changes: 12 additions & 1 deletion packages/dev/core/src/Shaders/gpuRenderParticles.vertex.fx
@@ -40,6 +40,9 @@ uniform mat4 invView;

 #ifdef COLORGRADIENTS
 uniform sampler2D colorGradientSampler;
+#ifdef COLORGRADIENTS_COLOR2
+attribute vec4 seed;
+#endif
 #else
 uniform vec4 colorDead;
 attribute vec4 color;
@@ -128,7 +131,15 @@ void main() {
 #endif
     float ratio = min(1.0, age / life);
 #ifdef COLORGRADIENTS
-    vColor = texture2D(colorGradientSampler, vec2(ratio, 0));
+#ifdef COLORGRADIENTS_COLOR2
+    // Sample both rows of the color gradient texture (row 0 = color1, row 1 = color2) at their texel
+    // centers and lerp using the particle's persistent seed.x for a stable per-particle color.
+    vec4 vColor1 = texture2D(colorGradientSampler, vec2(ratio, 0.25));
+    vec4 vColor2 = texture2D(colorGradientSampler, vec2(ratio, 0.75));
+    vColor = mix(vColor1, vColor2, seed.x);
+#else
+    vColor = texture2D(colorGradientSampler, vec2(ratio, 0));
+#endif
 #else
     vColor = color * vec4(1.0 - ratio) + colorDead * vec4(ratio);
 #endif
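The `0.25` and `0.75` V coordinates in this hunk follow from the general texel-center formula: for a texture with `rows` rows, row `i` is centered at `(i + 0.5) / rows`, which keeps bilinear filtering from bleeding one gradient row into the other. A small TypeScript sketch (the helper name is illustrative, not from the PR):

```typescript
// Texel center of row `row` in a texture with `rows` rows. Sampling at
// exactly (row + 0.5) / rows hits the middle of the row, so bilinear
// filtering never blends in the neighboring row.
function texelCenterV(row: number, rows: number): number {
    return (row + 0.5) / rows;
}

texelCenterV(0, 2); // 0.25 — row 0 = color1, as sampled in the shader
texelCenterV(1, 2); // 0.75 — row 1 = color2
```

Sampling at `0` and `0.5` instead would land on row boundaries and produce filtered mixes of the two gradients rather than clean per-row colors.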