6 changes: 5 additions & 1 deletion packages/start-server-core/src/server-functions-handler.ts
@@ -150,7 +150,11 @@ export const handleServerAction = async ({

         let jsonPayload
         if (contentType?.includes('application/json')) {
-          jsonPayload = await request.json()
+          const text = await request.text()
+          if (text.length > MAX_PAYLOAD_SIZE) {
+            throw new Error('Payload too large')
+          }
+          jsonPayload = JSON.parse(text)
Comment on lines +153 to +157
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Does Fetch Request.text() read the entire body before resolving, and can Request.body be consumed incrementally as a ReadableStream<Uint8Array> via getReader()?

💡 Result:

Yes, Request.text() reads the entire body before resolving. It implements the Body mixin, which consumes the full ReadableStream by extracting all bytes until end-of-stream, then decodes them to a UTF-8 string. The spec's consume-body algorithm sets the disturbed flag and reads "from request's body until it returns end-of-stream" in parallel, resolving only after the complete body is processed. And yes, Request.body can be consumed incrementally as a ReadableStream<Uint8Array> via getReader(): Request.body is a ReadableStream (or null for GET/HEAD), and getReader() locks it so that Uint8Array chunks can be read sequentially with reader.read() in a loop until done === true. Calling text() or getReader() disturbs/locks the body, preventing reuse without cloning.
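The behavior described in this answer can be sketched in a standalone snippet (assuming a WHATWG-fetch runtime such as Node 18+; `countBodyBytes` is a name invented here purely for illustration):

```typescript
// Illustration only: consume Request.body incrementally via getReader(),
// instead of buffering everything with Request.text().
async function countBodyBytes(req: Request): Promise<number> {
  const reader = req.body!.getReader() // locks (disturbs) the body
  let total = 0
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    if (value) total += value.byteLength // each chunk is a Uint8Array
  }
  return total
}

countBodyBytes(
  new Request('https://example.test', { method: 'POST', body: 'hello world' }),
).then((n) => console.log(n)) // logs 11
```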

🏁 Script executed:

# Check if the file exists and read lines 150-160 for context
if [ -f "packages/start-server-core/src/server-functions-handler.ts" ]; then
  echo "=== File exists. Reading lines 150-160 ==="
  sed -n '150,160p' "packages/start-server-core/src/server-functions-handler.ts" | cat -n
  echo ""
  echo "=== Full context: lines 140-170 ==="
  wc -l "packages/start-server-core/src/server-functions-handler.ts"
else
  echo "File not found at that path. Searching for similar files..."
  find . -name "server-functions-handler.ts" 2>/dev/null
fi

Repository: TanStack/router

Length of output: 634


🏁 Script executed:

# Search for MAX_PAYLOAD_SIZE definition and usage
rg "MAX_PAYLOAD_SIZE" packages/start-server-core/src/ -B 2 -A 2

Repository: TanStack/router

Length of output: 1536


🏁 Script executed:

# Search for readTextWithLimit or similar helper functions
rg "readTextWithLimit|getReader|Request\.body" packages/start-server-core/src/server-functions-handler.ts -B 2 -A 5

Repository: TanStack/router

Length of output: 41


Enforce the limit while streaming, not after buffering the entire body.

await request.text() reads the entire POST body into memory before the size check on line 154, so the large-body DoS this PR targets remains possible. Additionally, text.length counts UTF-16 code units rather than bytes, which does not accurately enforce the MAX_PAYLOAD_SIZE byte limit for non-ASCII JSON.
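A minimal standalone sketch of the code-unit vs. byte mismatch:

```typescript
// '€' (U+20AC) is one UTF-16 code unit but three UTF-8 bytes, so a
// String.length check under-counts the wire size of non-ASCII JSON.
const sample = '€'.repeat(10)
const codeUnits = sample.length
const utf8Bytes = new TextEncoder().encode(sample).byteLength

console.log(codeUnits, utf8Bytes) // logs 10 30
```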

Use streaming to check the byte limit before buffering:

Suggested implementation
-          const text = await request.text()
-          if (text.length > MAX_PAYLOAD_SIZE) {
-            throw new Error('Payload too large')
-          }
+          const text = await readTextWithLimit(request, MAX_PAYLOAD_SIZE)
           jsonPayload = JSON.parse(text)

Add this helper with explicit type annotations:

async function readTextWithLimit(
  request: Request,
  maxBytes: number,
): Promise<string> {
  const reader = request.body?.getReader()
  if (!reader) {
    return ''
  }

  const chunks: Array<Uint8Array> = []
  let received = 0

  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      if (!value) continue

      received += value.byteLength
      if (received > maxBytes) {
        await reader.cancel()
        throw new Error('Payload too large')
      }

      chunks.push(value)
    }
  } finally {
    reader.releaseLock()
  }

  const body = new Uint8Array(received)
  let offset = 0
  for (const chunk of chunks) {
    body.set(chunk, offset)
    offset += chunk.byteLength
  }

  return new TextDecoder().decode(body)
}
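A quick standalone check of the suggested helper's behavior (the helper body is repeated here verbatim so the sketch runs on its own; assumes a WHATWG-fetch runtime such as Node 18+):

```typescript
async function readTextWithLimit(
  request: Request,
  maxBytes: number,
): Promise<string> {
  const reader = request.body?.getReader()
  if (!reader) return ''

  const chunks: Array<Uint8Array> = []
  let received = 0
  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      if (!value) continue
      received += value.byteLength
      if (received > maxBytes) {
        await reader.cancel() // stop pulling further chunks
        throw new Error('Payload too large')
      }
      chunks.push(value)
    }
  } finally {
    reader.releaseLock()
  }

  // Reassemble the accumulated chunks and decode once at the end.
  const body = new Uint8Array(received)
  let offset = 0
  for (const chunk of chunks) {
    body.set(chunk, offset)
    offset += chunk.byteLength
  }
  return new TextDecoder().decode(body)
}

// Under the limit: resolves with the decoded text.
readTextWithLimit(
  new Request('https://example.test', { method: 'POST', body: '{"ok":true}' }),
  1024,
).then((text) => console.log(text)) // logs {"ok":true}

// Over the limit: rejects without buffering the whole body.
readTextWithLimit(
  new Request('https://example.test', { method: 'POST', body: 'x'.repeat(64) }),
  16,
).catch((err) => console.log(err.message)) // logs Payload too large
```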
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/start-server-core/src/server-functions-handler.ts` around lines 153-157, replace the current buffering via `await request.text()` followed by a `text.length` check with a streaming byte-limit reader: add a helper function (e.g. `readTextWithLimit(request: Request, maxBytes: number): Promise<string>`) that reads `request.body.getReader()` in chunks, accumulates the Uint8Array chunks, tracks the received byteLength, cancels the reader and throws 'Payload too large' once the received byte count exceeds MAX_PAYLOAD_SIZE, and returns the decoded text. Then call `const jsonText = await readTextWithLimit(request, MAX_PAYLOAD_SIZE)` and use `JSON.parse(jsonText)` where jsonPayload is set, replacing the existing `await request.text()` + length check, so the limit is enforced on bytes during streaming rather than on UTF-16 code units after buffering.

}

const payload = jsonPayload ? parsePayload(jsonPayload) : {}