
[WIP] GPU specific kernels for MLEBABecLap#4922

Draft
ankithadas wants to merge 12 commits into AMReX-Codes:development from ankithadas:LinearSolver-EB-GPU-Fused

Conversation

@ankithadas
Contributor

Summary

Additional background

Checklist

The proposed changes:

  • fix a bug or incorrect behavior in AMReX
  • add new capabilities to AMReX
  • changes answers in the test suite to more than roundoff level
  • are likely to significantly affect the results of downstream AMReX users
  • include documentation in the code and/or rst files, if appropriate

@ankithadas ankithadas changed the title GPU specific kernels for MLEBABecLap [WIP] GPU specific kernels for MLEBABecLap Jan 23, 2026
@ankithadas
Contributor Author

For discussion: This PR introduces fairly substantial changes to MLEBABecLap, and I would like feedback on whether it should be split into smaller PRs. The main changes are:

  1. Refactored the linop kernels into a templated form for consistency with other linops.
  2. Removed the Box-based kernels and replaced them with explicit index-based (i, j, k, n) access.
  3. Introduced a multi-array version of EBData, called EBDataArrays.
  4. Added GPU-specific kernels to enable MF fusion.

@WeiqunZhang
Member

Yes, it's a good idea to split this into smaller PRs so that it's easier to test correctness and performance. As for turning the linop kernels into templates, unless it's necessary, I don't think we should do that.

