Float64 (Double Precision) Support on MPS with PyTorch on Apple Silicon?

Hi everyone,

I'm using PyTorch on an Apple Silicon Mac (M1/M2/etc.), and the goal is to use the MPS backend for GPU acceleration. However, the workflow depends on Float64 (double-precision) floating-point numbers for certain computations, and it runs into this error:

"Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead"

It seems that the MPS backend doesn't currently support Float64 for direct GPU computation.

Questions for the community:

Are there any known workarounds or best practices for handling Float64-dependent operations when using the MPS backend with PyTorch?

For those working with high-precision tasks on Apple Silicon, what strategies are you using to balance performance with the need for Float64?

Offloading to the CPU is an option (a minimal sketch of what that currently looks like is below), but are there specific techniques or libraries within the Apple ecosystem that could streamline this process while keeping performance as close to optimal as possible?

Any insights, tips, or experiences would be appreciated. Thanks in advance,

Jonaid
MacBook Pro M3 Max
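
For reference, here is a minimal sketch of the CPU-offload workaround mentioned above. It assumes an Apple Silicon machine with the MPS backend available; the tensor shapes and the precision-sensitive operation (a matrix inverse) are purely illustrative, not the actual workload.

```python
import torch

# Assumption: running on an Apple Silicon Mac with the MPS backend available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# This is the kind of line that triggers the error above, since MPS has no float64:
#     x = torch.randn(1024, dtype=torch.float64, device="mps")

# Workaround: keep the bulk of the computation in float32 on the GPU and hop
# to the CPU only for the few operations that genuinely need double precision.
x32 = torch.randn(1024, 1024, dtype=torch.float32, device=device)

# Move to the CPU and upcast to float64 for the precision-sensitive step...
x64 = x32.to("cpu", dtype=torch.float64)
gram = x64 @ x64.T + 1e-8 * torch.eye(1024, dtype=torch.float64)
inv64 = torch.linalg.inv(gram)

# ...then downcast back to float32 before returning to the MPS device.
inv32 = inv64.to(device, dtype=torch.float32)
```

This works, but the device hops and dtype conversions add noticeable overhead, which is why pointers to anything that streamlines this pattern would be welcome.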

Answered by DTS Engineer in 856438022

Hello,

Please send us an enhancement request for float64 support in the MPS framework.

Take a look at the MLX framework as well. It looks like there is a request for float64 support there and some explanation of what's required.
