
Multi-device support meta-thread #918

Open
@crusaderky

Description

This is a tracker of the current state of support for more than one device at once in the Array API, its helper libraries, and the libraries that implement it.

Supporting multiple devices at the same time is typically much more fragile than pinning one of the available devices at interpreter level and using that one exclusively, which generally works as intended.
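
The "pin one device at interpreter level" approach mentioned above is commonly done via `CUDA_VISIBLE_DEVICES`, which CuPy, PyTorch, and JAX all honour. A minimal sketch (the device index `1` is arbitrary):

```python
import os

# Pin a single GPU for the whole interpreter *before* importing any
# GPU library. Afterwards the selected GPU appears as the only
# (default) device, so no per-call device= arguments are needed.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import cupy as cp   # would now see exactly one device
```

This must happen before the GPU library is first imported; setting the variable later has no effect on an already-initialized runtime.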

Array API

array-api-strict

  • Supports three hardcoded devices, "cpu", "device1", and "device2". This is fit for purpose for testing device-propagation bugs in downstream libraries.

array-api-tests

array-api-compat

  • Adds a device parameter to NumPy 1.x, CuPy, PyTorch, and Dask (see below).
  • Implements helper functions device() and to_device() to work around non-compliance in the wrapped libraries.
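
A portable sketch of what such helpers smooth over (in real code you would import `device` and `to_device` from array-api-compat rather than define them; the fallback logic below is an assumption for illustration):

```python
def device(x):
    # Non-compliant arrays (e.g. NumPy 1.x) have no .device attribute;
    # fall back to the only device a CPU library can have.
    return getattr(x, "device", "cpu")

def to_device(x, dev):
    # Compliant arrays expose a .to_device() method; otherwise only a
    # no-op transfer to "cpu" can be honoured.
    if hasattr(x, "to_device"):
        return x.to_device(dev)
    if dev == "cpu":
        return x
    raise ValueError(f"unsupported device: {dev!r}")
```

This lets downstream code stay device-agnostic while the wrapped libraries catch up on compliance.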

array-api-extra

  • Full support and testing for non-default devices, using array-api-strict only. Actual support from real backends depends entirely on the libraries below.

NumPy

  • Supports a single dummy device, "cpu".
  • array-api-compat backports it to NumPy 1.x.

CuPy

  • Non-compliant support for multiple devices.
  • array-api-compat adds a dummy device= parameter to functions.
  • A compatibility layer is being added at the time of writing by [DNM] ENH: CuPy multi-device support array-api-compat#293. [EDIT] It can't work, because array-api-compat can't patch methods.
  • As CuPy doesn't have a "cpu" device, it's impossible to test multi-device operations without access to a dual-GPU host.
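
The "can't patch methods" limitation comes down to a Python restriction: attributes of built-in and C-extension types cannot be monkey-patched, so a compat layer can wrap namespace functions but not methods such as those on an ndarray class. A pure-Python illustration using a built-in type:

```python
# Attempt to add a method to a built-in type, as a compat layer would
# have to do to fix a non-compliant array method. CPython rejects this.
try:
    list.to_device = lambda self, device: self
    patched = True
except TypeError:
    patched = False

print(patched)  # False: the assignment is rejected
```

The same rejection applies to extension-defined array classes, which is why the fix has to land in the wrapped library itself.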

PyTorch

JAX

Dask

  • Dask doesn't have a concept of device.
  • array-api-compat adds stub support that returns "cpu" when wrapping NumPy and a dummy DASK_DEVICE otherwise. Notably, this is stored nowhere and does not survive a round trip (device(to_device(x, d)) == d can fail).
  • This is a non-issue when wrapping NumPy, or when wrapping CuPy with both client and workers mounting a single GPU.
  • Multi-GPU Dask+CuPy support could be achieved by starting separate worker processes on the same host and pinning the GPU at interpreter level. This is extremely inefficient, as it incurs IPC overhead and possibly memory duplication. If a user does so, the client and array-api-compat will never know.
  • dask-cuda may improve the situation (not investigated).
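
The round-trip property that the stub support can violate, device(to_device(x, d)) == d, can be sketched as a check. The duck-typed helpers and the stub class are illustrative assumptions, not array-api-compat's actual implementation:

```python
def device(x):
    return getattr(x, "device", "cpu")

def to_device(x, dev):
    return x.to_device(dev) if hasattr(x, "to_device") else x

def round_trips(x, dev):
    """True iff a transfer to `dev` is actually reflected by device()."""
    return device(to_device(x, dev)) == dev

# A stub that, like the Dask wrapper described above, stores the
# requested device nowhere:
class StubArray:
    device = "DASK_DEVICE"

    def to_device(self, dev):
        return self  # request silently dropped

print(round_trips(StubArray(), "cpu"))  # False: the device did not stick
```

A test suite asserting this invariant would catch such stubs immediately, which is exactly why it fails against the current Dask wrapper.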

SciPy
