
Reply to Writing to depth buffer
Hello, is it still impossible for a Metal fragment shader to write to the depth buffer with the current version of Metal on the M1? The image below shows the use case for my MRIcroGL software (these images show the correct behavior of OpenGL on my M1).

MRIcroGL is a volume raycaster. The vertex shader simply determines the location of the front faces of the cube (left panel). The fragment shader then samples each pixel from the front face of the cube to the back face, accumulating color and opacity from all the voxels it traverses. In OpenGL, I can simply write 'gl_FragDepth' for a ray when it first hits a non-transparent voxel. This has two benefits: first, the depth test for the subsequent crosshairs correctly shows the intersection with the brain, not the cube. Second, we can read the depth buffer, allowing depth picking, where clicking on the brain moves the crosshair to that location (allowing the user to work out the location of features seen on the rendering in the 2D orthogonal slices).

In every other respect, my software works identically when compiled for OpenGL and Metal. However, the inability to write to a depth buffer seems like a deal breaker for this usage scenario.
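A simplified sketch of the kind of GLSL fragment loop described above (hypothetical uniform/varying names; assuming the volume's texture coordinates coincide with the unit cube's model-space coordinates and the default depth range) looks like this:

```glsl
#version 330 core

uniform sampler3D volume;              // hypothetical 3D volume texture
uniform mat4 modelViewProjection;      // hypothetical MVP used to project the hit point
in vec3 rayStart;                      // front-face entry point from the vertex shader
in vec3 rayDir;                        // normalized ray direction
out vec4 fragColor;

const int   MAX_STEPS = 512;
const float STEP_SIZE = 1.0 / 512.0;

void main() {
    vec4 acc = vec4(0.0);
    bool depthWritten = false;
    gl_FragDepth = gl_FragCoord.z;     // default: depth of the cube's front face

    for (int i = 0; i < MAX_STEPS; i++) {
        vec3 pos   = rayStart + rayDir * (float(i) * STEP_SIZE);
        vec4 voxel = texture(volume, pos);

        if (!depthWritten && voxel.a > 0.0) {
            // First non-transparent voxel: project it and overwrite the fragment depth
            vec4 clip = modelViewProjection * vec4(pos, 1.0);
            gl_FragDepth = (clip.z / clip.w) * 0.5 + 0.5;  // NDC z -> [0,1] window depth
            depthWritten = true;
        }

        // Front-to-back compositing
        acc.rgb += (1.0 - acc.a) * voxel.a * voxel.rgb;
        acc.a   += (1.0 - acc.a) * voxel.a;
        if (acc.a >= 0.95) break;      // early ray termination
    }
    fragColor = acc;
}
```

It is that overwrite of gl_FragDepth at the first opaque voxel that both the crosshair depth test and the depth picking rely on.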
Topic: Graphics & Games SubTopic: General
Sep ’21
Reply to How to compile a universal version of Python 3.8 (for both Mac M1 and Intel)
Python is a complex project, so I would suggest you get a pre-compiled version for your platform. A good place to start is the miniforge distribution, which includes a version compiled for arm64 (Apple Silicon): https://github.com/conda-forge/miniforge#download.

However, you may find that many modules do not yet support this architecture (e.g. Pandas), and some that do support it are built using experimental compilers (gcc/gFortran) that may have issues or poor performance. Even core modules like numpy exhibit some bugs on the M1 (https://github.com/numpy/numpy/issues/17964), and some native functions run an order of magnitude slower than the same function run as translated code (https://github.com/numpy/numpy/issues/17989), which suggests low-hanging fruit for optimization. It seems like these issues are getting rapidly resolved, but in the short term I would suggest sticking with the translated Python unless you are explicitly trying to resolve these limitations.
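If you are unsure which build you are currently running, a quick standard-library check (a minimal sketch; it only assumes platform.machine() reports the architecture of the running interpreter) distinguishes a native arm64 Python from a translated x86_64 one:

```python
# Minimal sketch: report whether this interpreter is a native arm64 build
# or an x86_64 build running under Rosetta 2 translation.
import platform

arch = platform.machine()
if arch == "arm64":
    print("Native Apple Silicon (arm64) Python")
elif arch == "x86_64":
    print("Intel/x86_64 Python (translated under Rosetta 2 on an M1)")
else:
    print(f"Other architecture: {arch}")
```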
Dec ’20