All major platform pthread APIs except Apple's provide a pthread_setname_np call that takes a pthread_t handle as the first argument. Even Apple's pthread_getname_np takes a pthread_t handle. But Apple only allows setting the name from within the thread itself. This means std::thread abstractions in C++ can't (or don't) provide a reasonable call to set or change a thread's name from another thread.
pthread_setname_np(pthread_t thread, const char* name); <- standard api
pthread_setname_np(const char* name); <- Apple's api
Could this be standardized? Given that [NSThread setName:] exists and can perform this function, there must be a way to expose the same capability.
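For reference, here is a minimal sketch of the workaround this currently forces on portable code: the new thread has to name itself, since Apple's variant has no pthread_t parameter. The thread name "Worker" is just a placeholder.

#include <pthread.h>
#include <thread>

int main() {
    std::thread worker([] {
        // Apple only lets the current thread name itself; other platforms
        // can name any thread via the pthread_t handle.
#if defined(__APPLE__)
        pthread_setname_np("Worker");
#else
        pthread_setname_np(pthread_self(), "Worker");
#endif
    });
    worker.join();
}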
I'm hoping the answer here is that the fp16 values get written out to the parameter buffer to save space on the TBDR, but that the GPU then promotes them back to fp32 for interpolation, and back to fp16 for the receiving fragment shader. That would work around the banding that occurs when the output and interpolation are done in fp16 math, as on Android. I haven't found any documentation on this, not even in the PowerVR documentation for their GPUs.
Is this a new development? In the texture viewer I develop, Metal fails to create a texture for ETC2_RGBA data when the type is a 3D texture. 2D, cube, and 2D array types seem to be supported. Are the other ETC2 textures preserved as compressed textures in the L1, or are they being decompressed?
I've been looking at C++ build times with Xcode. Apple's STL header for basic_string auto-instantiates basic_string for five types in every file that includes it. I'm trying to use alternate STL libraries like EASTL and FASTL, which have their own string types, but I still have to mix in the following std headers where there are holes:
1. <mutex>, <condition_variable>, and <thread> include <system_error>, which includes <string>.
2. <random> includes <string>.
So these slow the build even if the headers are precompiled, and they pull five versions of basic_string, in char, char8_t, char16_t, char32_t, and wchar_t flavors, into every file that includes them even indirectly through the headers above. I only use string (basic_string<char>) with UTF-8 data anyhow; I don't need the other four types.
How can this be improved?
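One mitigation is a general include-firewall technique, sketched below; this is not an Apple-documented fix, and the file and function names are made up. The idea is to keep the <string>-dragging headers out of widely included headers and confine them to a single translation unit.

// sys_thread.h -- no std headers leak to includers
#pragma once
namespace sys {
    void sleep_ms(int ms);
}

// sys_thread.cpp -- only this TU pays for <thread> -> <system_error> -> <string>
#include "sys_thread.h"
#include <chrono>
#include <thread>

void sys::sleep_ms(int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}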
I have a shader that sets rasterizationEnabled to NO on the MTLRenderPipelineDescriptor. It's essentially using a vertex shader to do compute: the vertex shader reads vertex buffer data and writes out to another buffer from within the same shader.
The problem is that I don't know how to correctly wrap this in a render pass. Creating the render pass from a MTLRenderPassDescriptor complains that no width/height/format or render target/depth is set on the pass. But this pass doesn't depend on rasterization or render textures. What is the correct way to specify the enclosing render pass to Metal?
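One possible answer, sketched here with metal-cpp naming (the renderTargetWidth, renderTargetHeight, and defaultRasterSampleCount properties exist on MTLRenderPassDescriptor; treat the exact calls, the helper name, and the buffer bindings as assumptions): with no attachments, the pass needs an explicit dummy extent and sample count.

#include <Metal/Metal.hpp>

// Hypothetical helper: encode a rasterization-disabled "vertex compute" pass.
void encodeVertexCompute(MTL::CommandBuffer* commandBuffer,
                         MTL::RenderPipelineState* pipelineState,
                         MTL::Buffer* inputBuffer,
                         MTL::Buffer* outputBuffer,
                         NS::UInteger vertexCount)
{
    // With no color/depth attachments, give the pass an explicit dummy extent.
    MTL::RenderPassDescriptor* pass = MTL::RenderPassDescriptor::alloc()->init();
    pass->setRenderTargetWidth(1);
    pass->setRenderTargetHeight(1);
    pass->setDefaultRasterSampleCount(1);

    MTL::RenderCommandEncoder* enc = commandBuffer->renderCommandEncoder(pass);
    enc->setRenderPipelineState(pipelineState);   // pipeline has rasterizationEnabled = false
    enc->setVertexBuffer(inputBuffer, 0, 0);
    enc->setVertexBuffer(outputBuffer, 0, 1);
    enc->drawPrimitives(MTL::PrimitiveTypePoint, NS::UInteger(0), vertexCount);
    enc->endEncoding();
    pass->release();
}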
char str[256] = {};
strcpy(str, "unknown");
Shows up in the debugger as unknown\0\0\0\0\0\0\0\0\0\0\0....
This holds a simple 7-character string, but Xcode insists on displaying all 256 characters, even past the \0. That's the "Default" setting. "Show as c-string" doesn't help either, but it should only display up to the end of the string.
Can this go back to the way it was before?
I never want to run a C++ app when there is a build failure, but Xcode seems to think this is okay to do. If you hit play fast enough, it happens all the time. How can this be fixed?
I want to use half data, but it's unclear how the A-series processors handle interpolating it across the polygon. Adreno/Nvidia don't allow half in shader input/output due to banding. Mali recommends declaring half out of the VS to minimize the parameter buffer, and declaring float in the FS.
Can Apple provide some insight as to best practices here?
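For concreteness, this is the kind of declaration in question, in Metal Shading Language (MSL is C++-based). It's an illustrative sketch, not Apple guidance: the interpolant is declared half in both stages, and the open question is whether the A-series GPU interpolates it at fp16 (risking banding) or promotes it to fp32.

#include <metal_stdlib>
using namespace metal;

struct VSOut {
    float4 position [[position]];
    half4  color;        // fp16 interpolant; interpolation precision is the question
};

vertex VSOut vsMain(uint vid [[vertex_id]]) {
    VSOut out;
    out.position = float4(0.0, 0.0, 0.0, 1.0);
    out.color    = half4(0.5h);
    return out;
}

fragment half4 fsMain(VSOut in [[stage_in]]) {
    return in.color;
}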
This has been broken since Monterey (macOS 12.0). I am running an x64 app under Rosetta 2 and trying to test ProMotion. Is this possibly fixed in macOS 14.0? I see mention of a universal CAMetalDisplayLink finally, so we can also try that, but it won't help with testing on older macOS versions.
https://developer.apple.com/forums/thread/701855?answerId=708409022#708409022
The macOS screen recording tool doesn't appear to support recording HDR content (e.g. in QuickTime Player). This tool can record from the camera using the various YCbCr 422 and 420 formats needed for HEVC and ProRes HDR10 recording, but it doesn't offer any options for recording the screen in HDR.
So that leaves in-game screen recording with AVFoundation. Without any YCbCr formats exposed in the Metal API, how do we use CVPixelBuffer with Metal, and then send these formats off to the video codecs directly? Can we send Rec2020 RGB10A2Unorm data directly? I'd like the fewest conversions possible.
How is this a valid stack trace, with mangled symbol names and missing file/line information? I've already had to demangle the names myself, even though Windows does this for me.
The recommended approach to get file/line seems to be to spawn the "atos" process repeatedly on the symbols to turn them all into file/line. But shouldn't there just be a function call for this, or an option on backtrace_symbols(), since it's looking up the symbol anyway?
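For context, this is roughly the status quo being described (a sketch; the atos invocation in the comment is from memory, and the binary name and load address are placeholders): backtrace_symbols() returns mangled names and addresses only, and file/line means shelling out per address.

#include <execinfo.h>   // backtrace, backtrace_symbols
#include <cstdio>
#include <cstdlib>

void dumpBacktrace() {
    void* frames[64];
    int count = backtrace(frames, 64);
    char** symbols = backtrace_symbols(frames, count);
    for (int i = 0; i < count; ++i) {
        // Mangled name + offset only. For file/line you currently have to run
        // something like: atos -o MyApp -l <loadAddress> <frameAddress>
        printf("%s\n", symbols[i]);
    }
    free(symbols);
}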
I don't get how this external process call would work for iOS, and it seems slow for macOS as well.
Compare this with Windows' CaptureStackBackTrace, where there is then a simple function call via DbgHelp.lib to retrieve the file/line. Am I supposed to somehow use the CoreSymbolicate framework on macOS/iOS?
I have an app hosting a page with a 'drop' operation where the page accepts file drops. But I can't easily intercept that, nor do I seem to be able to block it in JavaScript. Yet WKWebView has no way to disable the drop operation from settings, or to intercept the URL of the dropped file. Either would be useful here.
Trying to create a List that sorts by some criteria and supports searchable, without any samples to go by, is only made more difficult by the text field constantly refocusing itself, so I can't even tab away from it. This is some awful bug in SwiftUI.
class FileSearcher: ObservableObject {
    @Published var searchIsActive = false
    @Published var searchText = ""
    var files: [File] = []
    ...
}
NavigationSplitView() {
}
.searchable(text: $fileSearcher.searchText,
            isPresented: $fileSearcher.searchIsActive,
            placement: .sidebar,
            prompt: "Filter")
I have a webpage that needs to receive keypresses to zoom and scroll. The override below seems to be the only way to prevent the annoying NSBeep from occurring. Return true too often and command keys stop working, so you need to be explicit about which keys you want. There is also no way to block delete/shift+delete from going forward/back in the history using this same mechanism.
func isKeyHandled(_ event: NSEvent) -> Bool {
    // Can't block delete or shift+delete,
    // so the WKWebView goes back/forward through its one-page history.
    // That loses all context for the user.

    // Prevent the super annoying bonk/NSBeep.
    // If we don't check modifier flags (can't check isEmpty since 256 is often set),
    // then cmd+S stops working.
    if !(event.modifierFlags.contains(.command) ||
         event.modifierFlags.contains(.control) ||
         event.modifierFlags.contains(.option)) {
        // wasd
        if event.keyCode == Keycode.w || event.keyCode == Keycode.a ||
           event.keyCode == Keycode.s || event.keyCode == Keycode.d {
            return true
        }
    }
    return false
}

// Apple doesn't want this to be overridden by the user, but key handling
// just doesn't work atop the WKWebView without it. KeyUp/KeyDown
// overrides don't matter, since we need the WKWebView to forward them.
override func performKeyEquivalent(with event: NSEvent) -> Bool {
    if !isKeyHandled(event) {
        return super.performKeyEquivalent(with: event)
    }
    return true
}
The following generates a prior-definition warning from #include <__config>, even though that header wraps the define in an #ifndef. I'm on C++17 with the latest Xcode (14.5). Is this documented anywhere?
-D_LIBCPP_ENABLE_ASSERTIONS=1