TL;DR: For NumPy on its own, it seems you can follow the instructions here: https://github.com/conda-forge/numpy-feedstock/issues/253. But the problems with e.g. SciPy run deeper.
I agree that the question could be posed more constructively, but I too am curious about these topics.
As a new M1 Max owner, I would love to see the full potential of this awesome hardware exploited in my scientific computing/data analysis workflows. While it's great that NumPy can now benefit from the Accelerate framework if one is happy to compile from source or to install with miniforge using the switch described here, there seem to be deeper problems with SciPy (and, I would expect, with scikit-learn too). SciPy dropped support for Accelerate a while back. One of the main technical blockers at the time (2018) seemed to be that
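For what it's worth, here is a minimal sketch of how I check which BLAS/LAPACK backend a given NumPy build is actually using. The conda-forge selector for the Accelerate variant is the one described in the issue linked above; everything else below is plain NumPy, so treat it as a rough sanity check rather than a definitive benchmark:

```python
# Check which BLAS/LAPACK backend this NumPy build is linked against.
# (Assumes NumPy was installed from conda-forge with the Accelerate
# variant of libblas selected, per the linked feedstock issue.)
import time
import numpy as np

np.show_config()  # look for "accelerate" in the blas/lapack sections

# Crude timing sanity check: a large matmul should be fast if an
# optimized backend is actually being used.
a = np.random.rand(2000, 2000)
t0 = time.perf_counter()
_ = a @ a
print(f"2000x2000 matmul took {time.perf_counter() - t0:.3f} s")
```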
The APIs implemented by the LAPACK and BLAS libraries are outdated by about a decade. Currently the LAPACK version is 3.7.1 vs. Accelerate's 3.2.1 from 2009. This is an issue because Scipy cannot make use of recently introduced functionality in LAPACK (e.g. gh-6831, #7500). Internal LAPACK deprecations create extra maintenance efforts across different versions (e.g., #5266).
It is now 2022 and it is not clear to me whether this remains a major blocker (and whether Apple even plans to implement suitably recent APIs). The ball seems to be mostly in Apple's court as far as I can see, though maybe I am misreading the situation.
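On the LAPACK version question, SciPy exposes the version reported by whatever LAPACK library it is linked against via the `ilaver` wrapper, which makes it easy to see what a given environment actually provides. A minimal sketch, assuming SciPy imports at all in that environment:

```python
# Print the LAPACK version reported by the library SciPy is linked to.
# The SciPy quote above expected 3.7.x, whereas Accelerate historically
# exposed a 3.2.x-era interface.
from scipy.linalg import lapack

major, minor, patch = lapack.ilaver()
print(f"LAPACK {major}.{minor}.{patch}")
```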