Make a central utility for rank reductions #112

@inducer

Description

To replace this:

mpi_comm = dcoll.mpi_communicator
if mpi_comm is None:
    return dt_factor * (1 / c)
return (1 / c) * mpi_comm.allreduce(dt_factor, op=MPI.MIN)

and to implement the nodal reductions in a distributed setting. Initially this will just use MPI, but we'll likely need a different abstraction eventually. That abstraction doesn't exist yet, hence the request for this stopgap. Once it does exist, having the reduction in one central place will make it easy to switch over.
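A stopgap along these lines could be a single helper that hides the serial-vs-MPI branch, so callers write the reduction once and the MPI dependency can later be swapped for a different abstraction in one place. A minimal sketch, where the name `global_reduce` and its signature are hypothetical, not an existing grudge API:

```python
# Hypothetical central utility for rank reductions; the name and
# signature are assumptions for illustration only.
def global_reduce(comm, local_value, op):
    """Reduce *local_value* across all ranks of *comm*.

    comm: an mpi4py-style communicator, or None for a serial run.
    op: the reduction operation (e.g. MPI.MIN), forwarded to allreduce.
    """
    if comm is None:
        # Serial run: the local value is already the global value.
        return local_value
    return comm.allreduce(local_value, op=op)
```

The snippet above would then shrink to a single call, e.g. `return (1 / c) * global_reduce(mpi_comm, dt_factor, op=MPI.MIN)`.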

Spotted while reviewing #108.

@thomasgibson Could you put that on your pile?

cc @matthiasdiener
