
GroupActivities on iPadOS?
I forgot to ask this during my lab session, but I noticed iPadOS is not listed among the supported OSes on the GroupActivities documentation page. iPadOS supports FaceTime, so does that mean GroupActivities simply doesn't work on iPadOS? That would be a crying shame, since one of the examples specifically involves drawing collaboratively, and the iPad is the perfect device for that use case. EDIT: Coordinate media experiences with Group Activities mentions iPadOS support, in which case the first page I linked might just be missing an OS entry.
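For concreteness, this is roughly the kind of activity I'd want to start from a FaceTime call on iPad (a sketch only; SketchTogether and startSharedSketch are hypothetical names, not anything from the sample):

```swift
import GroupActivities

// Hypothetical activity for the collaborative-drawing use case mentioned above.
struct SketchTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Sketch Together"
        meta.type = .generic
        return meta
    }
}

// Offer the activity to the active FaceTime call; whether this path is
// supported on iPadOS is exactly what I'm asking.
func startSharedSketch() async {
    let activity = SketchTogether()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try? await activity.activate()
    default:
        break
    }
}
```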
1 reply · 0 boosts · 1.1k views · Jun ’21
ReplayKit Live Streaming Custom Video Feed?
Is it possible to feed ReplayKit a custom live stream, e.g. a CVPixelBuffer created from a Metal texture, and stream that to YouTube? My use case is to give the broadcaster hidden UI manipulation controls that the stream audience cannot see. (Think of a DJ: no one gets to see all the DAW controls on the DJ's laptop, and no one needs to, because that's not part of the experience.) If it's possible, could anyone help me figure out the correct way to implement this? From what I can tell, ReplayKit doesn't let you send custom frame data, in which case, what else can be done?
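For reference, this is roughly the Metal-texture-to-CVPixelBuffer step I had in mind (a sketch assuming a CPU-accessible BGRA texture; whether ReplayKit will accept frames produced this way is the open question):

```swift
import CoreVideo
import Metal

// Sketch only: copy a BGRA MTLTexture (not .private storage) into a freshly
// created CVPixelBuffer. Feeding this to a live broadcast is the part I
// can't find an API for.
func makePixelBuffer(from texture: MTLTexture) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attrs: [CFString: Any] = [
        kCVPixelBufferMetalCompatibilityKey: true,
        kCVPixelBufferIOSurfacePropertiesKey: [String: Any]()
    ]
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     texture.width, texture.height,
                                     kCVPixelFormatType_32BGRA,
                                     attrs as CFDictionary,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    texture.getBytes(base,
                     bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                     from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                     mipmapLevel: 0)
    return buffer
}
```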
0 replies · 0 boosts · 950 views · Jun ’21
Nearby Interaction data between iOS and watchOS
I’m very interested in having an iOS and watchOS device pair communicate, and I want to know whether the iOS device can get the direction to the watchOS device. (I cannot try this myself because I don’t have an Apple Watch yet.) I’m looking at the documentation here and am not sure how to interpret the wording: nearby interaction docs

Nearby Interaction on iOS provides a peer device’s distance and direction, whereas Nearby Interaction on watchOS provides only a peer device's distance.

I’m not sure what is considered the peer. Let’s assume I’m communicating over a custom server and not using an iOS companion app. Is the above saying that:

A: iOS will send watchOS the distance from the iOS device to the watchOS device, and watchOS will send out its distance and direction to the iOS device? (i.e. Nearby Interaction on iOS receives the distance and direction of any other device, regardless of whether it’s a phone or a watch, but watchOS only gets distance.)

B: The watch receives distance and direction to the phone, and the phone receives only the distance to the watch.

C: The iOS device only gets distance to the watchOS device, and the watchOS device only gets distance to the iOS device, period.

May I have clarification? A secondary question is how often and how accurately the distance and direction are calculated and sent, but first things first. I’m looking forward to a reply; it would help very much and inform my decision to develop for watchOS. I have some neat project ideas that require option A or B to be true. Thanks for your time!
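For concreteness, here is roughly the setup I have in mind (a sketch; it assumes the peer's discovery token has already been exchanged over my custom server, and the question above is about where direction comes back nil):

```swift
import NearbyInteraction

final class PeerRanging: NSObject, NISessionDelegate {
    let session = NISession()

    // peerToken would arrive via my own server; how that happens is out of scope.
    func start(with peerToken: NIDiscoveryToken) {
        session.delegate = self
        session.run(NINearbyPeerConfiguration(peerToken: peerToken))
    }

    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        for object in nearbyObjects {
            // distance is a Float? in meters; direction is a simd_float3?.
            // Which side gets a non-nil direction (option A, B, or C above)
            // is exactly what I'm trying to pin down.
            print("distance:", object.distance ?? .nan,
                  "direction:", object.direction.map { "\($0)" } ?? "unavailable")
        }
    }
}
```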
0 replies · 0 boosts · 958 views · Jul ’21
Paragraph formatting in Feedback Assistant?
Whenever I create a Feedback Assistant request, all my whitespace formatting disappears, which makes long posts unreadable. I’d like to write requests with sections such as “context,” “motivation,” etc. so the reader can better understand them. Is it acceptable to put the body of the request in an attached text file instead, or does that risk the request being ignored or discarded? What is the proper etiquette here?
0 replies · 0 boosts · 459 views · Jun ’23
OSLog Request: split long output into structured and stdio instead of dropping long printouts entirely?
OSLog’s structured logging is nice, but the output length is limited compared with stdio’s. Currently, it looks like I’m forced to revert to stdio whenever I expect long, variable-length printouts. Or is this just an Xcode 15 beta 2 bug (discussed in the release notes), and fixed versions will match what stdio gives me? If not, could there be a way to configure OSLog to fall back to stdio dynamically based on whether the printout is too long? A custom fallback buffer allocator? Alternatively, what if I could still get the structured logging with the metadata, and use stdio for the rest of the message that doesn’t fit? That would be a nice way to guarantee structured logging info without dropping the entire message.
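This is roughly the fallback I'm imagining having to write myself (a sketch; the subsystem string and the length threshold are placeholders, since I don't know the real truncation limit):

```swift
import os

// Structured logging for short messages, stdio for anything long enough to
// risk truncation. maxStructuredLength is a guessed threshold, not a
// documented limit.
struct HybridLogger {
    let logger = Logger(subsystem: "com.example.app", category: "general")
    let maxStructuredLength = 1024

    func log(_ message: String) {
        if message.count <= maxStructuredLength {
            logger.log("\(message, privacy: .public)")
        } else {
            // Keep a structured breadcrumb with the metadata, then dump the
            // full text to stdio so nothing is dropped.
            logger.log("long message (\(message.count) chars) follows on stdout")
            print(message)
        }
    }
}
```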
3 replies · 0 boosts · 1.3k views · Jul ’23
SFSpeechRecognizer specify timeout duration?
For my project, I would really benefit from continuous on-device speech recognition without the automatic timeout, or at least with a much longer one. In the WebKit Web Speech implementation, it looks like there are extra setters on SFSpeechRecognizer exposing exactly this functionality: https://github.com/WebKit/WebKit/blob/8b1a13b39bbaaf306c9d819c13b0811011be55f2/Source/WebCore/Modules/speech/cocoa/WebSpeechRecognizerTask.mm#L105 Is there a chance Apple could expose a configurable duration/timeout? If it’s available to Web Speech, why not to native applications?
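In the meantime, this is the restart-based workaround I assume I'd have to live with (a sketch only; restartRecognition is a hypothetical helper the caller supplies, and it doesn't remove the timeout, it just papers over it):

```swift
import Speech

// No public timeout setter exists, so: run on-device recognition and spin up
// a fresh request/task whenever the current one times out or finalizes.
func startRecognition(recognizer: SFSpeechRecognizer,
                      request: SFSpeechAudioBufferRecognitionRequest,
                      restartRecognition: @escaping () -> Void) -> SFSpeechRecognitionTask {
    request.shouldReportPartialResults = true
    request.requiresOnDeviceRecognition = true  // keep recognition on device

    return recognizer.recognitionTask(with: request) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
        // When the recognizer times out, errors, or finalizes, restart
        // instead of stopping entirely.
        if error != nil || result?.isFinal == true {
            restartRecognition()
        }
    }
}
```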
0 replies · 0 boosts · 916 views · Jul ’23
External Peripheral Support on Vision Pro?
Does the Vision Pro allow USB peripherals such as cameras and microphones, or video feeds from an iPhone or iPad? Can I use AVFoundation to access external camera feeds or microphones? Note that I am not asking about the internal cameras, which I am aware are off-limits. One use case is to support multiple viewing angles, comparable to what we do with slide projectors. For example: draw on an iPad lying flat on your desk while wearing the Vision Pro in full passthrough mode, and simultaneously mirror the iPad’s screen on multiple walls in real time at minimum latency (over a Thunderbolt connection), similar to how I can use QuickTime on macOS to mirror my iPad’s screen.
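If this is supported at all, I'd expect something like the following probe to surface the devices (a sketch; the .external and .microphone device types exist on iOS 17/macOS 14, and whether visionOS exposes them is exactly what I'm asking):

```swift
import AVFoundation

// List any external cameras/microphones the capture stack can see.
func listExternalCaptureDevices() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external, .microphone],
        mediaType: nil,
        position: .unspecified
    )
    for device in discovery.devices {
        print(device.localizedName, device.deviceType.rawValue)
    }
}
```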
1 reply · 0 boosts · 646 views · Feb ’24
JavaScriptCore Optimization on Mobile?
Years ago, JavaScriptCore on non-macOS platforms had JIT disabled, leading to much worse performance than could be achieved with JIT enabled. Has anything changed recently to permit greater optimization of JavaScriptCore on mobile platforms (iPadOS, visionOS)? My guess is “no,” since the docs still list only macOS under the MAP_JIT flag, but as far as I know, Apple could still choose to enable JavaScriptCore optimizations behind the scenes even if this option isn’t available to third-party developers.
4 replies · 0 boosts · 1.6k views · Feb ’24
Decompress Video Toolbox video on non-Apple hardware?
Does Video Toolbox’s compression session yield data I can decompress on a different device that doesn’t have Apple’s decompression stack, i.e. so I can send the data over the network to devices that aren’t necessarily Apple’s? Or is the format proprietary rather than regular H.264 (for example)? If it can be decompressed without Video Toolbox, could you point me to some examples of how to do this with cross-platform APIs? Maybe FFmpeg has something?
1 reply · 0 boosts · 939 views · Feb ’24
How to create screen-space meshes selectively in RealityKit AR mode using the new OrthographicCameraComponent?
I'd like to create meshes in RealityKit (AR mode on iPad) in screen space, i.e. for UI. I noticed a lot of useful new functionality in RealityKit for the next OS versions, including the OrthographicCameraComponent here: https://developer.apple.com/documentation/realitykit/orthographiccameracomponent?changes=_3 I think this would help, but I need AR world tracking as well as a regular perspective camera for the 3D elements. Firstly, can I attach a camera to just a few entities, so it renders only those entities? This could be the orthographic camera. Secondly, can I make those entities always render in front, in screen space? (They'd need to follow the camera.) If I can't have multiple cameras, what can be done instead? Is it actually better to use a completely different view/API for layering on top of RealityKit? For simplicity, however, I would much rather keep everything in RealityKit.
0 replies · 0 boosts · 751 views · Jun ’24
Unexpected behavior: Pointer Events do not permit simultaneous Pencil and multitouch input. Discussing workarounds
For many years, I've noticed that although in native code I can handle continuous and simultaneous Apple Pencil and touch inputs using UIKit, Safari and WKWebView's Pointer Events only seem to allow one input type at a time: putting the Apple Pencil down blocks touch input until it is lifted, and touch input blocks Apple Pencil input. It's as though requiresExclusiveTouchType has been set in the underlying WebKit implementation.

There's decades of research (e.g. https://dl.acm.org/doi/10.1145/1866029.1866036 ) and several existing native applications in production showing that multimodal inputs open up many unique and useful applications and interactions. Even simple "hold object with finger" + "draw with stylus" controls are the norm. I recently built a native application using multimodal simultaneous inputs, but it is impossible to port to the web because of this unexpected behavior of Pointer Events (and touch events, and mouse events; every variant exhibits the same behavior). I've researched and attempted every possible flag, change, and CSS tweak to get this working, but I think the behind-the-scenes implementation is what's blocking simultaneous touch types.

This is unexpected and undesired behavior because it's inconsistent with the native behavior. If it's unintended, it's a high-priority fix for creating better user experiences on the iPad. If it's intended, I don't believe that's reasonable (even if supporting it is more complex and mainly benefits advanced applications). Please expose a way to support simultaneous touch types on iPadOS/iOS in both Safari and WKWebView. At minimum, may we have a discussion on how to support the desired behavior? The simplest solution I can think of is a WebKit-specific boolean in Safari and WKWebView called requiresExclusiveTouchType, set to true by default to keep the current behavior, and settable to false to get the more flexible behavior I'm expecting.
2 replies · 0 boosts · 633 views · Jan ’25