Trying to test HDR by running an EDR-capable iOS app on macOS. Both are running on the same M2 MBP.
On macOS, NSScreen.maximumPotentialExtendedDynamicRangeColorComponentValue returns 16. That's what I'd expect on a mini-LED display like this one.
In the iOS app running on macOS, UIScreen.potentialEDRHeadroom reports 1.0. That's not correct.
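For reference, a minimal Swift sketch of the two checks being compared (the wrapper function is just for illustration):

#if canImport(UIKit)
import UIKit
#else
import AppKit
#endif

// Logs the potential EDR headroom on whichever platform this runs on.
// On the Mac, NSScreen reports ~16 for this mini-LED panel; the same
// query through UIScreen in the iOS-on-Mac app reports 1.0.
func logEDRHeadroom() {
#if canImport(UIKit)
    let screen = UIScreen.main
    print("potentialEDRHeadroom:", screen.potentialEDRHeadroom,
          "currentEDRHeadroom:", screen.currentEDRHeadroom)
#else
    if let screen = NSScreen.main {
        print("maximumPotentialEDR:",
              screen.maximumPotentialExtendedDynamicRangeColorComponentValue,
              "maximumEDR:",
              screen.maximumExtendedDynamicRangeColorComponentValue)
    }
#endif
}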
Since this API requires &__dso_handle instead of the standard file/line/func, I had to modify my entire log system to pass it down from the macro call sites.
I have many callbacks that typically just forward data from other C++ libraries, and those never supply a dso_handle. So it's great how this logging system breaks most logger setups, and it doesn't even have a warning level to match fault/error. I have the forwarded threadName, timestamp, etc., and nowhere to store them in os_log. syslog was more powerful and clearer than os_log, but I'm sure it's now too late to head down a more reasonable path.
So I pass the &__dso_handle all the way to the log command and hand it into the macro:
#define my_os_log_with_type(dso, log, type, format, ...) __extension__({ \
    os_log_t _log_tmp = (log); \
    os_log_type_t _type_tmp = (type); \
    if (os_log_type_enabled(_log_tmp, _type_tmp)) { \
        OS_LOG_CALL_WITH_FORMAT(_os_log_impl, \
            ((void*)dso, _log_tmp, _type_tmp), format, ##__VA_ARGS__); \
    } \
})
Logger.mm
// This doesn't work: logging with the dso from the call site still gives no file/line.
my_os_log_with_type(entry.dso, os_log_create("com.foo", entry.tag), logLevel(entry.level), "%{public}s", text);
// This does work, but who wants to jump to the forwarding log implementation?
os_log_with_type(os_log_create("com.foo", entry.tag), logLevel(entry.level), "%{public}s", text);
How is this a valid stack trace, with mangled symbol names and no file/line information? I've already had to demangle the names myself, even though Windows does this for me.
The recommended approach to get file/line seems to be to spawn the "atos" process repeatedly on the symbols to turn them all into file/line. But shouldn't there just be a function call for this, or an option on backtrace_symbols(), since it's looking up the symbol anyway?
I don't get how this external process call would work for iOS, and it seems slow for macOS as well.
Compare this with Windows' CaptureStackBackTrace, where there is then a simple function call via DbgHelp.lib to retrieve the file/line. Am I supposed to somehow use the CoreSymbolication framework on macOS/iOS?
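For reference, the best you get in-process today looks like this (a Swift sketch using Thread.callStackSymbols; backtrace()/backtrace_symbols() in C/C++ give the same shape): load addresses, offsets, and mangled names, but no file/line.

import Foundation

// Dump the current call stack. Each frame string contains the module,
// the frame address, and the (mangled) symbol name plus offset, but
// nothing that maps back to a source file and line.
func dumpStack() {
    for frame in Thread.callStackSymbols {
        print(frame)
    }
}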
I have a UTI for "public.directory" and can drag-drop folders onto my app and open them. I also added this to the Info.plist to say the app supports directories.
But the default "Open" command seems to pop up an NSOpenPanel with folders not selectable. The "Open" button stays disabled.
How do I change this? I tried implementing "openDocument:", but then it lets through any file type, not just the ones in my Info.plist. So I'd like to just use the default implementation, but I need an override for the NSOpenPanel.
- (IBAction)openDocument:(id)sender
{
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    [panel setCanChooseFiles:YES];
    [panel setCanChooseDirectories:YES];
    [panel setAllowsMultipleSelection:NO];
    ...
}
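One possible workaround is to keep the custom openDocument: but restrict the panel to the app's declared types. A Swift sketch, assuming the declared types are known at compile time (the .png entry is a placeholder for the app's real types, and this would live on the document controller or app delegate):

import AppKit
import UniformTypeIdentifiers

@IBAction func openDocument(_ sender: Any?) {
    let panel = NSOpenPanel()
    panel.canChooseFiles = true
    panel.canChooseDirectories = true
    panel.allowsMultipleSelection = false
    // Limit the panel to directories plus the document types from Info.plist
    // (listed here by hand; .png is a stand-in for the real types).
    panel.allowedContentTypes = [.directory, .png]
    panel.begin { response in
        guard response == .OK, let url = panel.url else { return }
        NSDocumentController.shared.openDocument(withContentsOf: url,
                                                 display: true) { _, _, _ in }
    }
}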
My build generates 10 errors, and sometimes it halts the "build and run" as expected.
Most of the time, though, it just runs the previously successful build anyway. It seems like a basic tenet of an IDE not to do this unless I've explicitly enabled that behavior.
Seems that metal-shaderconverter can build a metallib, but I need .air files. Then I link the .air files into a single metallib and metallibdsym file.
HLSL -> dxc -> DXIL -> metal-shaderconverter -> .metallib
But there's no way to link multiple metallibs together into a single metallib, is there?
Visual Studio, Stadia, and JetBrains all support natvis files, while Xcode is still stuck on lldb Python files that even Xcode itself no longer uses to debug data structures. Apple has converted all of its scripts to native code, so there are no samples of how to write complex lldb visualizer rules.
There is already an lldb-eval that can bring in natvis files, so something like this should be brought to Xcode. C++ packages like EASTL only ship with a natvis file, and it's far simpler to write and edit than lldb rules and Python scripting.
I have the following set, but the WKWebView loses the page content I'm updating whenever the user presses the "delete" key. This is in SwiftUI, and I see no way to intercept and block this key.
webView.allowsBackForwardNavigationGestures = false
Isn't the whole point of setting this flag to stop not just "swipe" navigation, but all back/forward support? Also, with the backForwardList being immutable, it's not like I can delete an entry from it.
I have an app hosting a page with a 'drop' operation where the page accepts file drops. But I can't easily intercept that, nor do I seem to be able to block it in JavaScript. Yet WKWebView has no way to disable the drop operation from settings, or to intercept the URL of the dropped file. Either would be useful here.
Trying to create a List that sorts by some criteria and handles searchable, without any samples, is only made more difficult by the text field constantly refocusing itself, so I can't even tab away from it. This is some awful bug in SwiftUI.
class FileSearcher: ObservableObject {
    @Published var searchIsActive = false
    @Published var searchText = ""
    var files: [File] = []
    ...
}
NavigationSplitView() {
}
.searchable(text: $fileSearcher.searchText,
            isPresented: $fileSearcher.searchIsActive,
            placement: .sidebar,
            prompt: "Filter")
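For completeness, a minimal sketch of how these pieces fit together, using the FileSearcher above (the File type and the name-only filter are placeholders; the sort criteria are elided):

import SwiftUI

struct File {
    let name: String
}

struct FileBrowser: View {
    @StateObject private var fileSearcher = FileSearcher()

    // Placeholder filter on the name; the real view also sorts by some criteria.
    private var filteredFiles: [File] {
        guard !fileSearcher.searchText.isEmpty else { return fileSearcher.files }
        return fileSearcher.files.filter {
            $0.name.localizedCaseInsensitiveContains(fileSearcher.searchText)
        }
    }

    var body: some View {
        NavigationSplitView {
            List(filteredFiles, id: \.name) { file in
                Text(file.name)
            }
            .searchable(text: $fileSearcher.searchText,
                        isPresented: $fileSearcher.searchIsActive,
                        placement: .sidebar,
                        prompt: "Filter")
        } detail: {
            Text("No selection")
        }
    }
}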
I have a webpage that needs to receive keypresses to zoom and scroll. This seems to be the only way to prevent the annoying NSBeep from occurring. Return true for too many keys, and command keys stop working, so you need to be explicit about which keys you want. There's also no way to block delete/shift+delete from going forward/back in the history using this same mechanism.
func isKeyHandled(_ event: NSEvent) -> Bool {
    // Can't block delete or shift+delete, so the WKWebView goes back/forward
    // through its one-page history. That loses all context for the user.

    // Prevent the super annoying bonk/NSBeep. If we don't check the modifier
    // flags (can't check isEmpty, since 256 is often set), then cmd+S stops working.
    if !(event.modifierFlags.contains(.command) ||
         event.modifierFlags.contains(.control) ||
         event.modifierFlags.contains(.option)) {
        // wasd
        if event.keyCode == Keycode.w || event.keyCode == Keycode.a ||
           event.keyCode == Keycode.s || event.keyCode == Keycode.d {
            return true
        }
    }
    return false
}

// Apple doesn't want this overridden by the user, but key handling just
// doesn't work atop the WKWebView without it. KeyUp/KeyDown overrides
// don't matter, since we need the WKWebView to forward them.
override func performKeyEquivalent(with event: NSEvent) -> Bool {
    if !isKeyHandled(event) {
        return super.performKeyEquivalent(with: event)
    }
    return true
}
I create @State holding a WKWebView (a heavyweight object), wrap that in an NSViewRepresentable, and then:
@State var myWebView = newWebView(request: URLRequest(url:URL(string: request)!))
This is the only way I can then reference the webView later on to run JavaScript queries on it. Otherwise, it's buried in the View/ContentView hierarchy.
So when I moved from Window to WindowGroup, only one of these WKWebViews is created, and it looks bad to have an empty detail panel in the previous window. The docs on WindowGroup state that it creates new state for each window in the group, but that's not what happens here.
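For reference, a minimal sketch of the wrapping described above (the type names and the bare WKWebView() init are illustrative):

import SwiftUI
import WebKit

// Thin wrapper that hands an existing WKWebView to SwiftUI.
struct WebViewWrapper: NSViewRepresentable {
    let webView: WKWebView
    func makeNSView(context: Context) -> WKWebView { webView }
    func updateNSView(_ nsView: WKWebView, context: Context) {}
}

struct ContentView: View {
    // The heavyweight WKWebView lives in @State so the view can still reach it
    // later, e.g. myWebView.evaluateJavaScript("document.title").
    @State var myWebView = WKWebView()

    var body: some View {
        WebViewWrapper(webView: myWebView)
    }
}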
The following generates a previous-definition warning from #include <__config>, even though that header has an #ifndef guard around it. I'm on C++17 with the latest Xcode (14.5). Is this documented anywhere?
-D_LIBCPP_ENABLE_ASSERTIONS=1
I have a game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS.
I have the Simulator set to "Send Game Controller to Device", which the Simulator does. If I disable that, then I can control the Simulator view. But once it's enabled, the Simulator doesn't tell the app about the controller.
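For reference, the registration in question is nothing more than this (a minimal Swift sketch); with "Send Game Controller to Device" enabled, the connect notification never arrives:

import GameController

func watchForControllers() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect,
        object: nil,
        queue: .main
    ) { note in
        guard let controller = note.object as? GCController else { return }
        print("controller connected:", controller.vendorName ?? "unknown")
    }

    // Also comes back empty in the Simulator with that setting enabled.
    print("current controllers:", GCController.controllers())
}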
First I get this
ar_world_tracking_provider_query_device_anchor_at_timestamp <0x302b9c0a0>: The device_anchor can only be queried when the world tracking provider is running.
This all seemed to break with the auto-update to 2.0.1. The Simulator runs the code fine.
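For what it's worth, the error reads as if the device anchor is being queried before the provider reaches the running state. A minimal Swift sketch of the ordering the message asks for (the state guard is an assumption about what the runtime is checking):

import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

func queryDevicePose() -> DeviceAnchor? {
    // Querying before the provider is running triggers the
    // ar_world_tracking_provider_query_device_anchor_at_timestamp error above.
    guard worldTracking.state == .running else { return nil }
    return worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
}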
I seem to see an infinite stall here
frameLayer.endUpdate()
// Pace frames by waiting for the optimal prediction time.
try await LayerRenderer.Clock().sleep(until: timing.optimalInputTime, tolerance: nil)
// Start submitting the updated frame.
frameLayer.startSubmission() // <- stalls here indefinitely