Post

Replies

Boosts

Views

Activity

No KDKs available for macOS 26.0 Developer Beta 2 and later
As of now, there is no Kernel Debug Kit (KDK) available for any macOS 26.0 Developer Beta after the first build. Kernel Debug Kits are crucial for understanding panics and other bugs in custom kernel extensions. Without a KDK matching the running macOS build, tools like kmutil fail to recognize a KDK and certain functionality is disabled. Additionally, as far as I am aware, a KDK built for one macOS build cannot be used with a different build. Especially during a developer beta cycle, when developers are updating their software to work with the latest versions of macOS, I'd expect a KDK to be published for more than just the first build.
2
0
256
Jul ’25
Kext loads well after launchd and early os_log entries rarely appear in unified log
Is there a way to ensure a kernel extension in the Auxiliary Kernel Collection loads (and runs its start routines) before launchd?

I'm emitting logs via an os_log_t created with os_log_create (custom subsystem/category) in both my KMOD's start function and the IOService::start() function. Those messages, which both say "I've been run", only inconsistently show up in log show --predicate 'subsystem == "com.bluefalconhd.pandora"' --last boot, which makes me think they are being emitted very early. However, I also record timestamps (using mach_absolute_time, etc.) and expose them to user space through an IOExternalMethod. The results for the most recent boot:

hayes@fortis Pandora/tests main % build/pdtest
Pandora Metadata:
  kmod_start_time:
    Time: 2025-07-22 14:11:32.233
    Mach time: 245612546
    Nanos since boot: 10233856083 (10.23 seconds)
  io_service_start_time:
    Time: 2025-07-22 14:11:32.233
    Mach time: 245613641
    Nanos since boot: 10233901708 (10.23 seconds)
  user_client_init_time:
    Time: 2025-07-22 14:21:42.561
    Mach time: 14893478355
    Nanos since boot: 620561598125 (620.56 seconds)

hayes@fortis Pandora/tests main % ps -p 1 -o lstart=
Tue Jul 22 14:11:27 2025

Everything in the kernel extension appears to be loading after launchd (PID 1) starts, and the kext isn't doing anything unusual that could cause that kind of delay. For reference, here's the Info.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleExecutable</key>
    <string>Pandora</string>
    <key>CFBundleIdentifier</key>
    <string>com.bluefalconhd.Pandora</string>
    <key>CFBundleName</key>
    <string>Pandora</string>
    <key>CFBundlePackageType</key>
    <string>KEXT</string>
    <key>CFBundleVersion</key>
    <string>1.0.7</string>
    <key>IOKitPersonalities</key>
    <dict>
        <key>Pandora</key>
        <dict>
            <key>CFBundleIdentifier</key>
            <string>com.bluefalconhd.Pandora</string>
            <key>IOClass</key>
            <string>Pandora</string>
            <key>IOMatchCategory</key>
            <string>Pandora</string>
            <key>IOProviderClass</key>
            <string>IOResources</string>
            <key>IOResourceMatch</key>
            <string>IOKit</string>
            <key>IOUserClientClass</key>
            <string>PandoraUserClient</string>
        </dict>
    </dict>
    <key>OSBundleLibraries</key>
    <dict>
        <key>com.apple.kpi.dsep</key>
        <string>24.2.0</string>
        <key>com.apple.kpi.iokit</key>
        <string>24.2.0</string>
        <key>com.apple.kpi.libkern</key>
        <string>24.2.0</string>
        <key>com.apple.kpi.mach</key>
        <string>24.2.0</string>
    </dict>
</dict>
</plist>

My questions are:

A. Why don't the early logs (from the KMOD start function and IOService::start) consistently appear in the unified log, while the later logs from the IOExternalMethod do?
B. How can I force this kext to load earlier, ideally before launchd?

Thanks in advance for any guidance!
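For reference, the "Nanos since boot" values above come from the standard mach_timebase_info scaling of the raw mach_absolute_time samples the kext records. A simplified Swift sketch of that conversion (not the exact pdtest code):

import Darwin

// Simplified sketch (not the exact pdtest code): converts a raw mach_absolute_time
// sample into nanoseconds since boot using the machine's timebase.
func nanosSinceBoot(fromMachTime machTime: UInt64) -> UInt64 {
    var timebase = mach_timebase_info_data_t()
    guard mach_timebase_info(&timebase) == KERN_SUCCESS else { return 0 }
    return machTime * UInt64(timebase.numer) / UInt64(timebase.denom)
}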
0
0
61
Jul ’25
Is it possible to pass the streaming output of Foundation Models down a function chain
I am writing a custom package wrapping Foundation Models which provides a chain-of-thought with intermittent self-evaluation, among other things. At first I was designing this package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent I wanted to try to build a SwiftUI wrapper around the package. When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening.

I have created a super simplified example of my setup so it's easier to understand. First, there is the Reasoning conversation item, which can be converted to an XML representation that is then fed back into the model (I've found XML works best for structured input):

public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}

/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String

    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String

    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}

extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //   ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}

Then there is the generator, which creates a reasoning item from a user query and previously generated items:

struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws
        -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem>
    {
        let session = LanguageModelSession(instructions: instructions)

        // build the context for the reasoning item out of the user's query and the previous reasoning items
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText

        let reasoningItemResponse = try await session.streamResponse(
            to: context, generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}

I'm not sure if returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move; I am just trying to imitate what session.streamResponse returns.

Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator and is responsible both for streaming those to SwiftUI later and for evaluating each reasoning item after it is complete to see if it needs to be regenerated (to keep the model on track).
I want the users of the orchestrator to receive partially generated reasoning items as they are being generated by the generator. Later, when an item finishes, it is kept if the evaluation passes; if the evaluation fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be emitted aggressively. I really am having trouble figuring this out, so if someone with more knowledge about asynchronous work in Swift, or, even better, someone who has worked on the Foundation Models framework, could point me in the right direction, that would be awesome!
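For concreteness, here is roughly the shape I have in mind for the orchestrator. This is a minimal sketch: ReasoningEvent and the evaluate closure are placeholders of mine, and the only assumption about Foundation Models is that the generator's ResponseStream can be iterated with for try await.

enum ReasoningEvent<Partial> {
    case partial(Partial)    // an in-flight, partially generated reasoning item
    case accepted(Partial)   // the finished item passed self-evaluation; keep it
    case retracted           // the finished item failed evaluation; drop it from the UI
}

struct ReasoningOrchestrator {
    /// Re-publishes one generation pass as an AsyncThrowingStream of events,
    /// evaluating the finished item once the underlying stream ends.
    func events<Partials: AsyncSequence>(
        from partials: Partials,
        evaluate: @escaping (Partials.Element) async -> Bool
    ) -> AsyncThrowingStream<ReasoningEvent<Partials.Element>, Error> {
        AsyncThrowingStream { continuation in
            let task = Task {
                do {
                    var latest: Partials.Element?
                    for try await partial in partials {
                        latest = partial
                        continuation.yield(.partial(partial))   // stream in-flight items aggressively
                    }
                    if let finished = latest {
                        if await evaluate(finished) {
                            continuation.yield(.accepted(finished))
                        } else {
                            continuation.yield(.retracted)      // caller removes it before regeneration
                        }
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}

A SwiftUI view could then iterate these events in a .task, appending partial/accepted items to an @State array and removing the last item on retracted, but I'm unsure whether this is how the framework intends streaming output to be passed down a chain.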
0
0
213
Jul ’25
Can I ignore safe area when resizing a window in macOS 15?
I am attempting to make a macOS app that shows a large popup with no titlebar and a transparent background, spanning the entire active display. Currently I am working on the last part, the sizing. I adapted an example from the relevant developer documentation page. This is the code I am using in my App struct:

@main
struct MakeGoodChoicesApp: App {
    ...
    var body: some Scene {
        ...
        Window("Make Good Choices", id: "popup") {
            PopupWindowContentView()
                .ignoresSafeArea(.all)
        }
        .windowIdealPlacement { _, context in
            return WindowPlacement(
                x: context.defaultDisplay.bounds.minX,
                y: context.defaultDisplay.bounds.minY,
                width: context.defaultDisplay.bounds.width,
                height: context.defaultDisplay.bounds.height
            )
        }
    }
}

When running my application and using some code to open the popup window, I get the following result: I would expect the window to expand past the safe area, but it seems macOS clamps the window's size to fit inside the safe areas. I would appreciate any help, many thanks!
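For reference, the effect I'm after is roughly what this AppKit-level snippet would produce. This is a sketch only, and looking the window up by its SwiftUI scene identifier is an assumption on my part; NSScreen.frame covers the whole display, whereas visibleFrame stops at the menu bar.

import AppKit

// Sketch of the intended end state, not code I'm shipping: find the popup window,
// strip the title bar, make the background transparent, and size it to the full
// display frame (which, unlike visibleFrame, ignores the menu bar / safe areas).
func expandPopupToFullDisplay() {
    guard
        let window = NSApp.windows.first(where: { $0.identifier?.rawValue.contains("popup") == true }),
        let screen = window.screen ?? NSScreen.main
    else { return }

    window.styleMask.remove(.titled)
    window.isOpaque = false
    window.backgroundColor = .clear
    window.setFrame(screen.frame, display: true)
}

If there is a pure SwiftUI way to get the same result in macOS 15, that is what I'd prefer.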
0
1
474
Jun ’24