Description
Is your feature request related to a problem? Please describe.
As of #1192, each process has access to only its local target indices when performing a remapping. This simplifies the logic in `remap!` and isolates setup to `generate_map`. However, each process still stores the entire index vectors, just with 0s at non-local indices.
Here we wish to expand further on this idea by storing only the local segments of the source and target indices, weights, and row indices in the `LinearMap` of each process.
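To make the proposed change concrete, here is a minimal sketch (in Python pseudocode of the Julia logic; the function and variable names are hypothetical, not from the codebase) contrasting the current zero-padded storage with keeping only the local segment:

```python
# Illustrative sketch, NOT the actual ClimaCoupler implementation.
# Today each process keeps the full-length target index vector with 0s at
# non-local entries; the proposal is to keep only the local segment.

def local_segment(full_idxs, elem_pid, my_pid):
    """Keep only the nonzero entries owned by my_pid (hypothetical helper)."""
    return [idx for idx in full_idxs if idx != 0 and elem_pid[idx - 1] == my_pid]

# Example: 6 target elements distributed over 2 processes
# (1-based indexing, as in Julia / TempestRemap).
elem_pid = [1, 1, 1, 2, 2, 2]           # owner pid of each target element

# Current storage on pid 2: full-length vector, zeroed at non-local indices.
zero_padded_on_pid2 = [0, 0, 0, 4, 5, 6]

# Proposed storage on pid 2: only the local segment.
print(local_segment(zero_padded_on_pid2, elem_pid, my_pid=2))  # [4, 5, 6]
```

The memory saving scales with the number of processes, since each process drops the (P-1)/P fraction of entries it never uses.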
Part of SDI #188.
Describe the solution you'd like
We need to change the distributed storage of 4 quantities: `source_local_idxs`, `target_local_idxs`, `weights`, and `row_indices`.
For each element in these vectors, the root process needs to know the pid of the process responsible for that element. We can find this using the `Topology2D.elem_pid` vector. We can then use this to aggregate the information for each process into a matrix, and perform one broadcast step where we send each of these matrices to its respective process.
Specific steps to do this are:
- When looping over the weights vector, check the pid of `et` (the target element) using `elem_pid`, and add `et` and the current weight to the matrix intended for pid `n`.
- At the end of `generate_map`, distribute these matrices to the corresponding processes.
- It is not yet clear how to select source indices and make sure they are sent to the right place; this may require the super-halo. Either way, we need to make sure that the ordering and length of the index and weight vectors allow us to perform the remapping multiplication correctly.
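The first two steps above could look roughly like the following sketch (Python pseudocode of the Julia logic; `bucket_by_pid` and the variable names are illustrative assumptions, not existing code):

```python
# Hedged sketch of the aggregation step on the root process: bucket each
# (target element, weight) pair by owner pid using elem_pid, so that one
# send per process can distribute the buckets at the end of generate_map.
from collections import defaultdict

def bucket_by_pid(row_indices, weights, elem_pid):
    """Group (et, weight) pairs by the pid that owns the target element."""
    buckets = defaultdict(list)          # pid -> list of (et, weight) rows
    for et, w in zip(row_indices, weights):
        owner = elem_pid[et - 1]         # 1-based element indexing assumed
        buckets[owner].append((et, w))
    return dict(buckets)

# Example: 5 weights whose target elements span 2 processes.
elem_pid = [1, 1, 2, 2]                  # owner pid of each target element
row_indices = [1, 2, 3, 3, 4]            # target element of each weight
weights = [0.5, 0.5, 0.25, 0.75, 1.0]

buckets = bucket_by_pid(row_indices, weights, elem_pid)
print(buckets[1])                        # [(1, 0.5), (2, 0.5)]
print(buckets[2])                        # [(3, 0.25), (3, 0.75), (4, 1.0)]
```

In `generate_map`, each bucket would then be sent to its pid (e.g. with point-to-point MPI sends, or a scatter built from these buckets).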
Describe alternatives you've considered
MPI's `scatter` function almost does what we want. The issue here is that the values we need to store all use TempestRemap's indexing, so we aren't sure that the partitioning used by `scatter` will correctly put the values on the processes they need to be on.
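A small example of the concern, with no MPI required (plain Python; the split sizes and ownership pattern are made up for illustration): a counts/displacements-style scatter sends contiguous chunks, but element ownership need not be contiguous in TempestRemap's ordering.

```python
# Sketch of why a naive scatter may misplace values: Scatterv-style
# partitioning delivers contiguous chunks, while elem_pid ownership can be
# interleaved in TempestRemap's element ordering.

def contiguous_chunks(values, nprocs):
    """Even contiguous split, as a counts/displacements scatter would do."""
    size = len(values) // nprocs
    return [values[i * size:(i + 1) * size] for i in range(nprocs)]

elem_pid = [1, 2, 1, 2]                  # interleaved ownership
weights = [0.1, 0.2, 0.3, 0.4]           # weight for each element, in order

# What a contiguous scatter would deliver to pids 1 and 2:
print(contiguous_chunks(weights, 2))     # [[0.1, 0.2], [0.3, 0.4]]

# What ownership actually requires:
wanted = {p: [w for w, pid in zip(weights, elem_pid) if pid == p]
          for p in (1, 2)}
print(wanted)                            # {1: [0.1, 0.3], 2: [0.2, 0.4]}
```

A scatter would only work here if the vectors were first reordered (and sized) according to `elem_pid`, at which point the per-pid matrices described above already do the job.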
Additional context