
Make PreallocatedBuffersPerPool runtime-configurable #14

Open
lixmal wants to merge 5 commits into master from configurable-pool-size

Conversation

lixmal commented Apr 22, 2026

Makes WireGuard's per-Device memory knobs runtime-configurable for embedded scenarios that create many Devices in one process.

  • Convert PreallocatedBuffersPerPool from const to var on all platforms. Defaults preserved: 0 on default/windows, 4096 on android, 1024 on ios.
  • Add SetPreallocatedBuffersPerPool(n uint32) to set the default for newly-created Devices.
  • Add (*Device).SetPreallocatedBuffersPerPool(n uint32) and (*WaitPool).SetMax(n uint32) to retune live Devices in place. Waiters are broadcast so they re-check against the new cap (see the WaitPool sketch after this list).
  • Add MaxBatchSizeOverride global and (*Device).SetMaxBatchSize(n int) to control the per-Device batch size used by RoutineReceiveIncoming and RoutineReadFromTUN. Each of those goroutines eagerly allocates batch message buffers for its lifetime, so this knob bounds the steady-state per-Device buffer footprint. Zero means "no override" (Devices fall back to max(bind.BatchSize(), tun.BatchSize())); zero is NOT unlimited.
  • Device.BatchSize() now honors the override and is used uniformly on both the receive and TUN-read paths (see the BatchSize sketch after this list).
  • Fast path preserved when a pool is constructed with max == 0: Get/Put skip the lock, and SetMax is a no-op. Upstream default behavior is unchanged for callers that never set these knobs.
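
A minimal sketch of the retunable pool, assuming WaitPool keeps its upstream shape (a sync.Pool guarded by a count, a cond, and a max). The SetMax body and the atomic max field are guesses at the approach, not the branch's exact diff:

```go
package pool

import (
	"sync"
	"sync/atomic"
)

// WaitPool modeled on wireguard-go's device.WaitPool; illustrative, not
// the PR's exact code.
type WaitPool struct {
	pool  sync.Pool
	cond  *sync.Cond
	lock  sync.Mutex
	count atomic.Uint32 // buffers currently checked out
	max   atomic.Uint32 // cap; 0 at construction = unbounded fast path
}

func NewWaitPool(max uint32, newFn func() any) *WaitPool {
	p := &WaitPool{pool: sync.Pool{New: newFn}}
	p.max.Store(max)
	p.cond = sync.NewCond(&p.lock)
	return p
}

// Get blocks while a bounded pool is at its cap; unbounded pools skip
// the lock entirely.
func (p *WaitPool) Get() any {
	if p.max.Load() != 0 {
		p.lock.Lock()
		for p.count.Load() >= p.max.Load() {
			p.cond.Wait()
		}
		p.count.Add(1)
		p.lock.Unlock()
	}
	return p.pool.Get()
}

// Put returns a buffer and wakes one waiter; unbounded pools skip the
// accounting.
func (p *WaitPool) Put(x any) {
	p.pool.Put(x)
	if p.max.Load() == 0 {
		return
	}
	p.count.Add(^uint32(0)) // decrement
	p.cond.Signal()
}

// SetMax retunes the cap of a live, bounded pool. Waiters are broadcast
// so each re-checks count against the new max. A pool constructed with
// max == 0 never consults max on its fast path, so SetMax stays a no-op
// there, preserving upstream behavior.
func (p *WaitPool) SetMax(n uint32) {
	if p.max.Load() == 0 || n == 0 {
		return
	}
	p.lock.Lock()
	p.max.Store(n)
	p.lock.Unlock()
	p.cond.Broadcast()
}
```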
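
The batch-size override could look like the following standalone sketch with stand-in types. The field name maxBatchSizeOverride and its atomic storage are assumptions; the max(bind.BatchSize(), tun.BatchSize()) fallback matches upstream Device.BatchSize():

```go
package device

import "sync/atomic"

// Stand-ins for the real wireguard-go types, just enough to show the
// override logic.
type batchSizer interface{ BatchSize() int }

type Device struct {
	maxBatchSizeOverride atomic.Int32 // 0 = no override (NOT unlimited)
	bind                 batchSizer   // stands in for device.net.bind
	tun                  batchSizer   // stands in for device.tun.device
}

// BatchSize returns the batch size used by RoutineReceiveIncoming and
// RoutineReadFromTUN: the override when set, else the larger of the
// bind's and TUN's native batch sizes (upstream behavior).
func (d *Device) BatchSize() int {
	if n := d.maxBatchSizeOverride.Load(); n > 0 {
		return int(n)
	}
	size := d.bind.BatchSize()
	if t := d.tun.BatchSize(); t > size {
		size = t
	}
	return size
}

// SetMaxBatchSize installs the per-Device override; n <= 0 clears it,
// restoring the fallback.
func (d *Device) SetMaxBatchSize(n int) {
	if n < 0 {
		n = 0
	}
	d.maxBatchSizeOverride.Store(int32(n))
}
```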
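
And hypothetical call sites, going by the description above alone (exact signatures on the branch may differ):

```go
package main

import "golang.zx2c4.com/wireguard/device"

// tunePools shows how the knobs would be used together.
func tunePools(devs []*device.Device) {
	// Default applied to Devices created after this call (otherwise 0 on
	// default/windows, 4096 on android, 1024 on ios).
	device.SetPreallocatedBuffersPerPool(512)

	for _, d := range devs {
		// Retune a live Device's pools in place.
		d.SetPreallocatedBuffersPerPool(256)
		// Bound steady-state batch buffers; zero would mean "no
		// override", not unlimited.
		d.SetMaxBatchSize(32)
	}
}

func main() {
	tunePools(nil) // no live Devices here: only the global default is set
}
```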

lixmal changed the title from "device: make PreallocatedBuffersPerPool runtime-configurable" to "Make PreallocatedBuffersPerPool runtime-configurable" on Apr 22, 2026
lixmal commented Apr 22, 2026

@coderabbitai review
