I see region support in the MTLBuffer-to-MTLTexture upload calls, but I'd like to upload an entire level of same-sized mips in one call. Calling these routines once for every image x 2048 layers x every mip is a lot of commands. If I could do this once per mip size, that seems more efficient, even if it's a longer command for Metal to complete.
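For context, the pattern this forces today is roughly one blit per layer per mip (a sketch; the staging-buffer layout and the bufferOffsetFor/bytesPerRowFor/bytesPerImageFor helpers are placeholders, not real API):

id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
for (NSUInteger mip = 0; mip < texture.mipmapLevelCount; ++mip) {
    MTLSize mipSize = MTLSizeMake(MAX(texture.width >> mip, 1ul),
                                  MAX(texture.height >> mip, 1ul),
                                  1);
    // one copy per layer, so 2048 of these per mip for a big array texture
    for (NSUInteger layer = 0; layer < texture.arrayLength; ++layer) {
        [blit copyFromBuffer:stagingBuffer
                sourceOffset:bufferOffsetFor(mip, layer)   // placeholder
           sourceBytesPerRow:bytesPerRowFor(mip)           // placeholder
         sourceBytesPerImage:bytesPerImageFor(mip)         // placeholder
                  sourceSize:mipSize
                   toTexture:texture
            destinationSlice:layer
            destinationLevel:mip
           destinationOrigin:MTLOriginMake(0, 0, 0)];
    }
}
[blit endEncoding];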
When we switched from the old build system to the new one, that broke all of our error/warning clickthrough to headers. These are reported as ../../folder/file.h, but clicking on the messages doesn't bring up the offending line. The .cpp files click through fine, since they are full paths.
My understanding is that the old build system resolved these to full paths, and the new build system does not.
What is the solution here?
Our Visual Studio projects work, since VC++ has the /FC compile option to force full paths in all diagnostic messages.
I have a single-windowed app that loads textures. I want to be able to switch textures to anything in the Open Recent menu item list that I add URLs into.
When I launch the app, selecting each recent menu item calls readFromURL once, but subsequent selections of the same item do not. I can see the documents in the document controller accumulating, so that is likely why this callback isn't called again.
If I return NO from readFromURL then it is called every time, but posts a dialog about the file failing to load. I want that behavior, but not the dialog.
I'm not really using the NSDocuments that get created, but I needed this mechanism to get at the URL to load the data. I can't really remove documents from the list, so I just let them accumulate; they basically just store the URL.
What am I supposed to hook as a callback? Is there a switchToDocument callback hook/delegate?
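For reference, the document subclass basically reduces to this (a sketch; textureURL and loadTextureFromURL: are my own viewer code, not AppKit):

- (BOOL)readFromURL:(NSURL *)url ofType:(NSString *)typeName error:(NSError **)outError {
    // just remember the URL; the document itself holds no other data
    self.textureURL = url;

    // hypothetical hook into the viewer to actually load the texture
    [self.viewController loadTextureFromURL:url];

    // returning NO would make this fire on every selection, but posts the error dialog
    return YES;
}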
So the underutilized boolean and counting occlusion queries look ideal for predicated rendering, given that there's no direct API support for it. This should work on the iPhone 5s and above. The idea:
1. Generate a set of indices in an instance buffer.
2. Have each occlusion query write an 8-byte value to that indexed buffer location.
3. Read the buffer location for each instance.
4. In the vertex shader, look up the buffer value at that index and kill (degenerate) all of the instance's vertices if the 8-byte value is 0.
Seems like this should work. It means the instance buffer is dynamically changed on the GPU after it is submitted. The buffer would need to be in "private" storage for GPU writes, but could be "shared" on iOS. Does this seem viable?
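Here's roughly what step 4 would look like in the vertex shader (a sketch in MSL; the buffer indices and reading the 8-byte query result as a uint2 are assumptions on my part):

#include <metal_stdlib>
using namespace metal;

// visibility holds the 8-byte occlusion query results, indexed per instance;
// read as uint2 here to avoid 64-bit integer types in the shader.
vertex float4 instancedVertex(uint vid [[vertex_id]],
                              uint iid [[instance_id]],
                              const device packed_float3* positions  [[buffer(0)]],
                              const device uint2*         visibility [[buffer(1)]],
                              constant float4x4&          viewProj   [[buffer(2)]])
{
    uint2 samples = visibility[iid];
    if ((samples.x | samples.y) == 0) {
        // the query wrote 0 samples: collapse the vertex so the whole instance is culled
        return float4(NAN);
    }
    return viewProj * float4(float3(positions[vid]), 1.0);
}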
A few things that would be useful as extensions to occlusion queries:
- Only write 0, and not any positive values, to knock out instance counts.
- Be able to write 4 bytes instead of 8 to knock out ICB values.
- Be able to submit instanced boxes with a query on each one; the current queries require a scoped draw per box, which is a lot of commands submitted.
How do you get a macOS 10.15 QuickLook extension to work? I added the provider from the Xcode template, and the .appex file is in the app bundle under Contents/PlugIns/foo-thumb.appex.
Can these QuickLook thumbnail providers override the system? If macOS doesn't handle thumbnails for most types of ktx files or any ktx2 files, can I override that with my own QuickLook plugin?
I tried using qlmanage with -m, and all I see are qlgenerators in there; I don't see any .appex files registered or listed. So when I try this on ktx (org.khronos.ktx) and ktx2 (public.ktx2) files, no thumbnail is generated.
qlmanage also just seems to hang without the -x argument. And on the one file that worked, it popped up a thumbnail, but I don't think it was generated by my appex.
sudo qlmanage -t /Users/Foo/tests/Toof-a.ktx -c org.khronos.ktx -x
Testing Quick Look thumbnails with files using server:
/Users/Foo/tests/Toof-a.ktx
- force using content type UTI: org.khronos.ktx
2021-05-28 10:04:52.471 qlmanage[23522:4229132] *** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x984b, name = 'com.apple.tsm.portname'
See /usr/include/servers/bootstrap_defs.h for the error codes.
2021-05-28 10:04:52.477 qlmanage[23522:4229132] *** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x6607, name = 'com.apple.coredrag'
See /usr/include/servers/bootstrap_defs.h for the error codes.
* No thumbnail created for /Users/Foo/tests/Toof-a.ktx
Done producing thumbnails
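For reference, the thumbnail provider entry point in the appex is roughly this (a simplified sketch; drawKTXThumbnailIntoContext stands in for my own decode/draw code):

#import <QuickLookThumbnailing/QuickLookThumbnailing.h>

@implementation ThumbnailProvider

- (void)provideThumbnailForFileRequest:(QLFileThumbnailRequest *)request
                     completionHandler:(void (^)(QLThumbnailReply *, NSError *))handler {
    NSURL *url = request.fileURL;
    CGSize size = request.maximumSize;

    handler([QLThumbnailReply replyWithContextSize:size
                                      drawingBlock:^BOOL(CGContextRef context) {
        // decode the ktx/ktx2 file and draw it into the context; return NO on failure
        return drawKTXThumbnailIntoContext(context, url, size);  // placeholder
    }], nil);
}

@end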
I haven't found any images of the basic primitives used in ModelIO so far. If these primitives are meant to be used by apps as starter content, then they should follow consistent modeling and uv wrapping rules.
1. The sphere has uv coordinates where the u direction wraps counterclockwise. The capsule has the same problem. This is the opposite of the cube, which fits u clockwise on each face. My solution was to flip x with uv.x = 1 - uv.x.
2. The sphere reports 306 vertices on the latest Big Sur, but after vertex 289 the rest of the vertex data is garbage.
3. The uv seam on the sphere is rotated by -45 degrees from the capsule's.
4. I had to flip the bitangent sign too in my app. This seemed to be reversed, but that could be from shader issues.
5. On my MBP 16" I'm getting uv's that are non-zero, but all-zero derivatives. I don't know yet if this follows from issue 2.
Here's part of what I did to fix the mesh. This doesn't include the call to generate tangents and bitangent sign, and then flip the sign.
mdlMesh = [MDLMesh newEllipsoidWithRadii:(vector_float3){0.5, 0.5, 0.5}
                          radialSegments:16
                        verticalSegments:16
                            geometryType:MDLGeometryTypeTriangles
                           inwardNormals:NO
                              hemisphere:NO
                               allocator:_metalAllocator];

// rotate the sphere geometry about Y (see the seam issue above)
float angle = M_PI * 0.5;
float2 cosSin = float2m(cos(angle), sin(angle)); // float2m: local float2 constructor helper

{
    mdlMesh.vertexDescriptor = _mdlVertexDescriptor;

    id<MDLMeshBuffer> pos = mdlMesh.vertexBuffers[BufferIndexMeshPosition];
    MDLMeshBufferMap *posMap = [pos map];
    packed_float3 *posData = (packed_float3 *)posMap.bytes;

    id<MDLMeshBuffer> normals = mdlMesh.vertexBuffers[BufferIndexMeshNormal];
    MDLMeshBufferMap *normalsMap = [normals map];
    packed_float3 *normalData = (packed_float3 *)normalsMap.bytes;

    // vertexCount reports 306, but vertex 289+ are garbage
    uint32_t numVertices = 289; // mdlMesh.vertexCount

    for (uint32_t i = 0; i < numVertices; ++i) {
        {
            auto &pos = posData[i];
            // dumb rotate about Y-axis
            auto copy = pos;
            pos.x = copy.x * cosSin.x - copy.z * cosSin.y;
            pos.z = copy.x * cosSin.y + copy.z * cosSin.x;
        }
        {
            auto &normal = normalData[i];
            auto copy = normal;
            normal.x = copy.x * cosSin.x - copy.z * cosSin.y;
            normal.z = copy.x * cosSin.y + copy.z * cosSin.x;
        }
    }

    // Hack - knock out all bogus vertices on the sphere
    for (uint32_t i = numVertices; i < mdlMesh.vertexCount; ++i) {
        auto &pos = posData[i];
        pos.x = NAN;
    }
}
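The omitted tangent step is roughly the standard ModelIO call (a sketch, not my exact code; I believe the generated tangent is a float4 whose .w carries the bitangent sign, which is the sign I then flip):

// standard ModelIO attribute names; tangents come back as float4 with the sign in .w
[mdlMesh addOrthTanBasisForTextureCoordinateAttributeNamed:MDLVertexAttributeTextureCoordinate
                                       normalAttributeNamed:MDLVertexAttributeNormal
                                      tangentAttributeNamed:MDLVertexAttributeTangent];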
I'm getting "Bad Message 431" trying to access the forums from Chrome, so I'm using Safari instead. This is a thread with a ton of people hitting this problem.
https://developer.apple.com/forums/thread/123657
I tracked it down to a bug in the validation layer. This happens with Apple's own sample code. RG16Unorm and other formats shouldn't be flagged as unsupported on a RDNA1 chip in the 16" MBP.
Please also make sure R16Unorm isn't flagged either. The Metal texture loader used in the sample seems to always pick the wrong format for loaded images (RG for a height-map png?), with no way to control the format.
This is the feedback assistant post on it.
https://feedbackassistant.apple.com/feedback/9540775
I have an NSImageView-based preview appex plugin for macOS, written in Objective-C. It's based on the sample template that you can add to an app.
It makes it all the way through preparePreviewOfFileAtURL: I stuff pixel data into a CGImage, put that into an NSImage, and set that on an NSImageView, which is self.view from the storyboard. I can see all the data via qlmanage -p, since it will print error messages for caveman debugging. I have no idea how to use the "Quick Look Simulator"; there are no docs on it, and it appears to do nothing.
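Roughly, the handler does this (a simplified sketch; createCGImageFromKTX stands in for my own decode code):

- (void)preparePreviewOfFileAtURL:(NSURL *)url
                completionHandler:(void (^)(NSError * _Nullable))handler {
    // decode the texture file into a CGImage (placeholder for my own decode code)
    CGImageRef cgImage = createCGImageFromKTX(url);
    if (!cgImage) {
        handler([NSError errorWithDomain:NSCocoaErrorDomain
                                    code:NSFileReadCorruptFileError
                                userInfo:nil]);
        return;
    }

    // NSZeroSize means "use the CGImage's own size"
    NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
    CGImageRelease(cgImage);

    // self.view is the NSImageView from the storyboard
    ((NSImageView *)self.view).image = image;
    handler(nil);
}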
I set the background color of the NSImageView's layer to red, and it shows up red, so I know the NSImageView is visible, just not the image it points to.
About 30% of the time only the red shows up, and the other 70% of the time nothing shows up in the preview, just a smoky, blurred version of the icons underneath in Finder.
My only recourse is to disable the extension list for the preview appex and let the thumbnailer appex provide the preview. There are no samples of this for macOS, and the default templates aren't a working version of it either.
How do I suppress these strings? They are flooding my logs when I hit pause/resume, and seem to have been left in the debug/release builds. I'm on Xcode 13.1, building for macOS 10.15, but running on an M1 Mac.
2022-02-26 12:56:43.169437-0800 app[96801:1760817] [] [0x1238a0000] CVDisplayLinkStart
2022-02-26 12:56:43.169584-0800 app[96801:1760817] [] [0x1238a0020] CVDisplayLink::start
2022-02-26 12:56:43.169772-0800 app[96801:1761488] [] [0x6000029f6bc0] CVXTime::reset
2022-02-26 12:56:43.926183-0800 app[96801:1760817] [] [0x1238a0000] CVDisplayLinkStop
This must be a bug in NSTableView: when I hide the table view, the NSScroller still shows its scrollbar moving through an empty list and still responds to vertical gestures.
Since this sits over a view that already handles panning, it creates a giant dead zone where the table was displayed; vertical panning stops on that section of the view because the hidden table view is still intercepting it.
All I do is supply an NSTableViewDelegate, and the autohide property is set on the scrollers. A hidden view shouldn't be responding to gestures at all.
I have a scrolling NSTableView whose parent is an isFlipped MTKView. No matter how I set the growth constraints, set frame.y (which still seems to offset from the bottom left), or adjust the height, I can't get a gap at the top. The table then clobbers a HUD I draw there. I want to offset from the top left so this doesn't happen. What is the magic to do this?
All major platforms' pthread APIs except Apple's provide a pthread_setname_np call that takes a pthread_t handle as the first argument; even pthread_getname_np takes a pthread_t handle. But Apple only allows setting the name from within the thread itself. This means std::thread abstractions in C++ can't, or don't, provide a reasonable call to set or change a thread's name from another thread.
pthread_setname_np(pthread_t thread, const char* name);  // standard API
pthread_setname_np(const char* name);                    // Apple's API
Could this be standardized? Given that [NSThread setName:] exists and can perform this function, there must be a way to do it.
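The practical consequence is that the name has to be plumbed into the thread entry point itself, roughly like this (a sketch):

#include <pthread.h>
#include <string>
#include <thread>

// The name has to be set from inside the thread, so it gets captured into the entry point.
std::thread makeNamedThread(std::string name) {
    return std::thread([name = std::move(name)] {
        pthread_setname_np(name.c_str());  // Apple-only signature: names the current thread
        // ... thread work ...
    });
}

// On other platforms the caller could instead do
//   pthread_setname_np(worker.native_handle(), "Worker");
// after construction, which is exactly what Apple's variant doesn't allow.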
I'm trying to update all my projects to C++20. The C++-only projects work fine, but all of the Objective-C++ files suddenly stop compiling. Is this supposed to work? I can't imagine [NSString stringWithUTF8String:foo] should be failing. Setting the project back to C++17 works.
I do have -fmodules and -fcxx-modules for clang modules in some projects. Do those all need to be removed for C++20 modules?
I'm hoping the answer here is that fp16 values get written out to the parameter buffer to save space on TBDR, but that the GPU promotes them back to fp32 for interpolation and then back to fp16 for the receiving fragment shader. That would work around the banding you get when the output and interpolation are done in fp16 math, as on some Android GPUs. I haven't found any documentation on this, even in the PowerVR documentation for their GPUs.
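To be concrete, this is what I mean by fp16 interpolants in MSL terms (a sketch): the varyings are declared as half, and the question is at what precision the hardware stores and interpolates them:

#include <metal_stdlib>
using namespace metal;

struct VSOut {
    float4 position [[position]];
    half2  uv;      // fp16 varying
    half4  color;   // fp16 varying
};

fragment half4 myFragment(VSOut in [[stage_in]]) {
    // in.uv and in.color arrive as half; the precision of the interpolation
    // itself is the open question
    return in.color * half4(in.uv.x, in.uv.y, 1.0h, 1.0h);
}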
Is this a new development? I have a texture viewer that I develop, and Metal is failing to create a texture for ETC2_RGBA when the texture type is 3D; 2D, cube, and 2D array seem to be supported. Are other ETC2 textures preserved as compressed textures in the L1 cache, or are these being decompressed?