Hello, and an early "Merry Christmas" to all,
I'm building a SwiftUI app, and one of my Views is a fullscreen UIViewRepresentable (SpriteView) beneath a SwiftUI interface.
Whenever the user interacts with any SwiftUI element, the UIView registers a hit in touchesBegan(). For example, my UIView has pinch logic (not implemented via UIGestureRecognizer), so whenever the user holds down a SwiftUI element while also touching the UIView, that counts as two touches on the UIView and invokes the pinch logic.
Things I've tried to block SwiftUI from passing the gesture down to the UIView:
Adding opaque elements beneath control elements
Adding gestures to the elements above
Adding gesture masks to the gestures above
Converting eligible elements to Buttons (since those seem immune)
Adding SpriteViews beneath those elements to absorb gestures
So far nothing has worked. As long as the UIView is beneath SwiftUI elements, any interactions with those elements will be registered as a hit.
The obvious solution is to track each SwiftUI element's size and coordinates with respect to the UIView's coordinate space, then use exclusion areas, but this is both a pain and expensive, and I find it hard to believe this is the best fix for such a seemingly basic problem.
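For reference, that workaround would look roughly like this: report each control's frame through a PreferenceKey and have the representable ignore touches that land inside those rects in touchesBegan(). This is only a sketch; the names and the stand-in background are illustrative, not from my project.

import SwiftUI

// Sketch of the exclusion-area workaround. The representable underneath would
// receive exclusionRects and ignore touches that fall inside any of them.
struct ControlFrameKey: PreferenceKey {
    static var defaultValue: [CGRect] = []
    static func reduce(value: inout [CGRect], nextValue: () -> [CGRect]) {
        value.append(contentsOf: nextValue())
    }
}

struct ExclusionDemo: View {
    @State private var exclusionRects: [CGRect] = []

    var body: some View {
        ZStack {
            Color.black.ignoresSafeArea() // stand-in for the fullscreen SpriteView

            Button("Control") { }
                .background(GeometryReader { proxy in
                    Color.clear.preference(key: ControlFrameKey.self,
                                           value: [proxy.frame(in: .global)])
                })
        }
        .onPreferenceChange(ControlFrameKey.self) { rects in
            exclusionRects = rects // hand these to the representable's coordinator
        }
    }
}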
I'm probably overlooking something basic, so any suggestions will be greatly appreciated
My project involves no camera passthrough and relies heavily on sprites, but Apple engineers have discouraged me (here) from using the aging (and possibly dying) SpriteKit or SceneKit as my rendering engine, so I'm exploring other options.
Is it possible to display 2D sprites fluidly using this framework in a non-AR context? Is it possible to create, say, a 2D platformer using just RealityKit?
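For example, the kind of setup I'm imagining is a non-AR ARView displaying flat, unlit planes as sprites. A rough sketch of what I mean (untested as a real renderer; the controller and entity setup are just for illustration):

import UIKit
import RealityKit

// Sketch: a non-AR RealityKit view showing a flat "sprite" as an unlit plane.
// Whether this approach scales to a full 2D platformer is exactly my question.
final class SpriteViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: view.bounds,
                            cameraMode: .nonAR,
                            automaticallyConfigureSession: false)
        view.addSubview(arView)

        // A camera looking down the -Z axis at the scene.
        let camera = PerspectiveCamera()
        camera.position = [0, 0, 2]

        // A flat plane standing in for a 2D sprite (texture loading omitted).
        let sprite = ModelEntity(mesh: .generatePlane(width: 0.5, height: 0.5),
                                 materials: [UnlitMaterial(color: .systemTeal)])

        let anchor = AnchorEntity(world: .zero)
        anchor.addChild(camera)
        anchor.addChild(sprite)
        arView.scene.addAnchor(anchor)
    }
}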
About a month ago I asked whether I could use HDR in SpriteKit. That wasn't a well-phrased question, since HDR means different things in different contexts, which probably led to it going unanswered.
What I meant to ask was whether it's possible to use assets with a wide color gamut that many modern devices are capable of displaying (XDR is fairly standard among mid- to high-end devices). In other words: is SpriteKit keeping up with the hardware?
If not, what framework options do I have that can quickly display large Rec. 2020 images? Do any of the Core frameworks offer this capability?
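To make the question concrete, here's the kind of check I have in mind: verifying that an asset still carries its wide-gamut color space after loading ("WideGamutSprite" is a made-up asset name):

import UIKit

// Sketch: inspect which color space a loaded asset actually carries.
// "WideGamutSprite" is a hypothetical wide-gamut asset.
if let cgImage = UIImage(named: "WideGamutSprite")?.cgImage,
   let spaceName = cgImage.colorSpace?.name {
    print("Color space:", spaceName)
    print("Rec. 2020:", (spaceName as String) == (CGColorSpace.itur_2020 as String))
    print("Display P3:", (spaceName as String) == (CGColorSpace.displayP3 as String))
}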
Hello,
I'm making an app where users can create a multimedia object and other users can use in-app currency (purchased via in-app purchase) to gain access to the object. They can also choose to subscribe to a creator to encourage and support their content creation. Tokens can be converted to cash and sent to creators.
It is unclear to me whether this violates App Store rules. After reading through the App Store Guidelines and searching through the forums (this thread was close to what I was looking for), I have yet to arrive at a clear answer.
The Guidelines state that "tipping" content creators is acceptable, but this isn't exactly what I'm looking for in a creator marketplace. The Business section doesn't contain anything else that seems relevant, and this makes it seem like only voluntary tipping of content creators is accepted.
The commerce engineer in the thread linked above discourages using in-app currency, but that doesn't seem to work for my use case (the thread's creator wants to use the IAP mechanism directly). Furthermore, IAP products cannot be created programmatically (i.e. by users), and that process is error-prone.
It must be stressed that I'm not trying to deprive Apple of its 15-30% share, since users must buy in-app currency using Apple's IAP (this is not a multi-platform app). Cash in the app's economy has only one entry point: Apple's IAP.
I have asked a similar question recently, but received no response. This is probably because I didn't phrase it well enough and didn't attach the correct tags.
Creating the infrastructure for a creator marketplace app is a lot of grueling work, and I would very much like to know whether my app will be rejected for it before I embark on this quest. Any help would be greatly appreciated.
tl;dr - Is letting users spend in-app currency on creator content a violation of App Store rules?
A follow-up to "Scrolling sticker browser on a Messages App sheet causes sheet to move", reformulated and posted here after distilling the issue.
The ScrollView behaves as though the sheet is already expanded: when scrolled to the top (i.e. when first displayed), it transfers the drag gesture to the sheet, so attempting to scroll up or down moves the sheet instead of the ScrollView.
If this should be filed as a bug, let me know.
Notes
The problem doesn't exist if the sheet has only one detent, but since Messages App Extensions must be adjustable in phone portrait, this does nothing for me
Adding a Rectangle with hitTesting disabled doesn't solve the issue
Adding competing high priority DragGestures doesn't fix it
One partial solution is having ScrollViewReader scroll down a tiny bit upon appearing (sketched after the code below), but the issue re-emerges after the user has scrolled back to the top.
Code to reproduce:
struct Playground: View {
    @State private var detent = PresentationDetent.fraction(1/3)
    @State private var isSheetPresented = true

    var body: some View {
        Rectangle()
            .fill(Color(.systemGray5))
            .sheet(isPresented: $isSheetPresented) {
                VStack {
                    Text("ScrollView-in-Sheet Experiment")
                        .padding()
                    ScrollView {
                        ScrollViewReader { scrollProxy in
                            VStack(spacing: 0) {
                                ForEach(0...10, id: \.self) { i in
                                    Rectangle()
                                        .fill(.white)
                                        .frame(height: 50)
                                        .id(i)
                                        .overlay { Text(i.description) }
                                }
                            }
                        }
                    }
                    .frame(height: 200)
                    .padding()
                }
                .background { Color(.systemGray6) }
                .presentationDetents([.large, .fraction(1/3)], selection: $detent)
            }
    }
}
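For completeness, the partial workaround from the notes looks roughly like this when applied to the ScrollViewReader section above (it holds only until the user scrolls back to the top):

ScrollViewReader { scrollProxy in
    VStack(spacing: 0) {
        ForEach(0...10, id: \.self) { i in
            Rectangle()
                .fill(.white)
                .frame(height: 50)
                .id(i)
                .overlay { Text(i.description) }
        }
    }
    .onAppear {
        // Nudge the scroll position down slightly so the first upward drag
        // scrolls the content instead of moving the sheet.
        scrollProxy.scrollTo(1, anchor: .top)
    }
}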
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.
try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

// Node graph:
// filePlayerNode   --> engine.mainMixerNode
// bufferPlayerNode --> engine.mainMixerNode
// engine.mainMixerNode --> engine.outputNode

// bufferPlayer.scheduleBuffer() is called on its own queue
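For concreteness, the tap-and-schedule path I'm describing is roughly this (a condensed, self-contained sketch; the names mirror the graph above but are otherwise illustrative):

import AVFoundation

// Sketch of the live-monitoring path: tap the mic, feed the recognizer,
// and forward the same buffers to a player node for playback.
let engine = AVAudioEngine()
let bufferPlayerNode = AVAudioPlayerNode()
let bufferQueue = DispatchQueue(label: "bufferPlayer.schedule")

engine.attach(bufferPlayerNode)

let inputFormat = engine.inputNode.outputFormat(forBus: 0)
engine.connect(bufferPlayerNode, to: engine.mainMixerNode, format: inputFormat)

engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
    // (The speech recognizer also consumes these buffers.)
    bufferQueue.async {
        bufferPlayerNode.scheduleBuffer(buffer, completionHandler: nil)
    }
}

try engine.start()
bufferPlayerNode.play()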
The input works fine: the buffers can be collected into a file that plays back correctly, and the recognizer works as expected. But when I try to play the live audio by sending the buffers to the bufferPlayer on this or another device, the audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse.
I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.
The ability to both play and record simultaneously is a basic feature of phones--when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with by a simple circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring).
Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated
Private Access Tokens (PATs) are headlined as something that can eliminate CAPTCHAs, but their use cases also include app-to-server communication. Because of this, they seem to perform a very similar function to DeviceCheck, since both aim to attest to the health of the device in question.
I don't really understand the difference between the two and find this confusing. Since PATs are newer and more general, I'm more inclined to adopt them, but where does this leave DeviceCheck? Is it redundant? How does App Attest fit into all of this?
If my goal is to minimize, if not eliminate, fraudulent/malicious use of my app's APIs, should I use Private Access Tokens, DeviceCheck, and App Attest simultaneously to maximize my protection? If not, what is the accepted best practice?
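To make the App Attest part of the question concrete, the piece I'd be adding is roughly this (a sketch of the key-generation and attestation calls only; the challenge and the server-side verification are placeholders):

import DeviceCheck
import CryptoKit

// Sketch: the App Attest steps I'm evaluating. The challenge would come from
// my server, and the attestation blob would be verified server-side.
let service = DCAppAttestService.shared

if service.isSupported {
    service.generateKey { keyId, error in
        guard let keyId = keyId else {
            print(error?.localizedDescription ?? "key generation failed")
            return
        }

        let challenge = Data("server-issued-challenge".utf8) // placeholder
        let clientDataHash = Data(SHA256.hash(data: challenge))

        service.attestKey(keyId, clientDataHash: clientDataHash) { attestation, error in
            print(attestation?.count ?? 0, error?.localizedDescription ?? "ok")
        }
    }
}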
I admire Apple's dedication to privacy and security, but as a new developer I feel Apple could make it easier for their app developers to find out and implement the latest best practices.
This is a follow-up to my previous question: How to attribute/credit Apple Fonts added to app?
In that previous post, I misremembered what I did and said I found the fonts via macOS's Font Book, when in fact I came across UIFont.familyNames. Since these fonts are included via UIKit, the legal implications should be different.
I looked at various license agreements that govern iOS app development but haven't found anything mentioning fonts. Since these are included as part of UIKit, it's reasonable to assume that developers are allowed to include these fonts--but in what ways?
Am I allowed to let users create, say, documents with these fonts?
Am I only allowed to display these fonts?
There are 84 font families, and judging by their Font Book entries, there is a wide range of licenses and restrictions. It seems unnecessarily harsh to have every iOS developer verify each one and figure out which they can legally keep if they want to offer their users access to all of them (for, say, a text-editing app). There must be some overarching rule that supersedes/encapsulates them, but this rule isn't clear to me after hours of research. I'm not a lawyer, and I don't think Apple expects every app developer to consult their lawyers on whether they can use system fonts.
I'm about to send an email to Apple's legal team (I will post their response here if allowed), but in the meantime I want to hear what other devs think about this.
In Xcode, entering UIFont.familyNames returns the following:
["Academy Engraved LET", "Al Nile", "American Typewriter", "Apple Color Emoji", "Apple SD Gothic Neo", "Apple Symbols", "Arial", "Arial Hebrew", "Arial Rounded MT Bold", "Avenir", "Avenir Next", "Avenir Next Condensed", "Baskerville", "Bodoni 72", "Bodoni 72 Oldstyle", "Bodoni 72 Smallcaps", "Bodoni Ornaments", "Bradley Hand", "Chalkboard SE", "Chalkduster", "Charter", "Cochin", "Copperplate", "Courier New", "Damascus", "Devanagari Sangam MN", "Didot", "DIN Alternate", "DIN Condensed", "Euphemia UCAS", "Farah", "Futura", "Galvji", "Geeza Pro", "Georgia", "Gill Sans", "Grantha Sangam MN", "Helvetica", "Helvetica Neue", "Hiragino Maru Gothic ProN", "Hiragino Mincho ProN", "Hiragino Sans", "Hoefler Text", "Impact", "Kailasa", "Kefa", "Khmer Sangam MN", "Kohinoor Bangla", "Kohinoor Devanagari", "Kohinoor Gujarati", "Kohinoor Telugu", "Lao Sangam MN", "Malayalam Sangam MN", "Marker Felt", "Menlo", "Mishafi", "Mukta Mahee", "Myanmar Sangam MN", "Noteworthy", "Noto Nastaliq Urdu", "Noto Sans Kannada", "Noto Sans Myanmar", "Noto Sans Oriya", "Optima", "Palatino", "Papyrus", "Party LET", "PingFang HK", "PingFang SC", "PingFang TC", "Rockwell", "Savoye LET", "Sinhala Sangam MN", "Snell Roundhand", "STIX Two Math", "STIX Two Text", "Symbol", "Tamil Sangam MN", "Thonburi", "Times New Roman", "Trebuchet MS", "Verdana", "Zapf Dingbats", "Zapfino"]
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones).
I've done a fair bit of reading around using AVAudioSession.Category.multiRoute but haven't found any modern examples. @theanalogkid posted a nice example in Objective-C nine years ago, but others have noted that the code isn't readily translatable to Swift.
To make matters worse, this is one of the very few examples of how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and are likely outdated, with most having no responses other than the poster's own plightful echo. They don't paint a pretty picture of .multiRoute's health: a recent poster noted that the volume buttons don't work in this mode and, after contacting DTS, found that there's no fix; another found that it simply doesn't work on certain devices; and so on.
Audio is already giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiRoute feels like the developer mode of AVAudioSession, but without documentation.
tl;dr - Without using .multiRoute, is there a way for an app to output to two different devices while simultaneously recording audio? If .multiRoute is the only way to achieve this, can someone give me a quick rundown of how this category works?
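For what it's worth, my current (and possibly wrong) mental model is that the setup starts roughly like this, and everything after route inspection is where I get lost:

import AVFoundation

// Sketch: activate .multiRoute and inspect the available output ports.
// What to do with the ports afterwards is exactly what I'm asking about.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.multiRoute, mode: .default, options: [])
try session.setActive(true)

for output in session.currentRoute.outputs {
    print("Output:", output.portType.rawValue, "-", output.portName)
}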
Why?
Why stop there? (Why not ipod.and.imacg3? applenewton.and.vision.pro?)
I get why the older ipod symbols exist but these new pairings are odd.
If anyone ever sees these restricted symbols in the wild, or even just someone using a Vision Pro and an iPod (Touch) together in a way that's not contrived, please do let me know!
As someone who learned Swift via SwiftUI, UIKit is completely alien to me, so I apologize if this is actually a very simple issue.
I have a Messages extension that includes a sticker browser within it. In this extension, the MSMessagesAppViewController hosts a SwiftUI View, which in turn hosts a UIViewRepresentable version of MSStickerBrowserView.
The whole Messages App sheet moves with an upward drag (and can switch to its expanded mode) whenever the browser is scrolled to the top (first sticker at top left), but it doesn't budge when the browser is scrolled to the other end, which is when it should allow the sheet to move upward with the drag.
It seems something is reversed in the gesture-priority logic that lets a sheet move in the appropriate direction when a contained scroll view is at the appropriate end.
Things I've tried while reaching a diagnosis include:
Limiting the presentation style to compact (the modal still moves, but never succeeds in changing)
Adding competing highPriorityGestures in the SwiftUI view, set at various locations
Inserting a rectangle with allowsHitTesting(false) beneath the browser
Changing firstResponder statuses for all relevant views
Changing gesture recognizer priorities (none of the views examined have any gesture recognizers attached)
Things I've considered but don't have the technical skills to implement:
Having the view scroll down a little programmatically (like what can be done via ScrollViewReader in SwiftUI), but I have no idea how to do this via MSStickerBrowserView or UIKit in general; a rough sketch of what I mean follows this list.
Maybe the MSStickerBrowserView thinks it's always in the expanded state (when the sheet is expanded, the end-of-scroll drags work fine). If that's the case and there's a way to either correct this (via the controller's didTransition) or do away with end-of-scroll drags entirely, the problem should go away.
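In case it helps someone point me in the right direction, here's the kind of thing I mean for the first idea: a sketch that walks the view hierarchy looking for the browser's underlying scroll view (assuming it has one, which is undocumented and may not hold):

import UIKit

// Sketch: nudge the browser's embedded scroll view down slightly after it appears.
// Relies on the undocumented assumption that MSStickerBrowserView contains a UIScrollView.
func nudgeScrollView(in view: UIView) {
    if let scrollView = firstScrollView(in: view) {
        scrollView.setContentOffset(CGPoint(x: 0, y: 1), animated: false)
    }
}

func firstScrollView(in view: UIView) -> UIScrollView? {
    if let scrollView = view as? UIScrollView { return scrollView }
    for subview in view.subviews {
        if let found = firstScrollView(in: subview) { return found }
    }
    return nil
}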
Any pointers would be greatly appreciated!
After updating my devices to iOS/iPadOS 17.4 and Xcode to 15.3, Network framework peer-to-peer connections have stopped working entirely. The system was working fine before, and the code has not changed.
On the client side (NWBrowser) the server (NWListener) can be seen, but upon attempting to establish a connection the client-side NWConnection.State gets permanently stuck at .preparing.
NWConnection.stateUpdateHandler never reports any other state. It doesn't seem as though it's taking a long time to prepare; it's just stuck. This happens across multiple connection modes (wired, same Wi-Fi network, separate Wi-Fi networks).
Additional information
I didn't participate in the 17.4 beta and RC
The code in "Creating a custom peer-to-peer protocol" works--that sample forms the basis of my code
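For reference, the client side boils down to roughly this (a condensed sketch of my setup; "_myservice._tcp" is a placeholder service type):

import Network

// Condensed sketch of the client side: browse for the listener, then connect.
let parameters = NWParameters(tls: nil, tcp: NWProtocolTCP.Options())
parameters.includePeerToPeer = true

let browser = NWBrowser(for: .bonjour(type: "_myservice._tcp", domain: nil), using: parameters)
browser.browseResultsChangedHandler = { results, _ in
    guard let endpoint = results.first?.endpoint else { return }

    let connection = NWConnection(to: endpoint, using: parameters)
    connection.stateUpdateHandler = { state in
        // After 17.4 this logs .preparing and then never changes.
        print("Connection state:", state)
    }
    connection.start(queue: .main)
}
browser.start(queue: .main)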
This morning I bought my first-ever Apple Watch for the sole purpose of development and proceeded to spend six hours failing at the first step of development: getting the device to enter developer mode and connect to Xcode.
Since I'm not seeing any watchOS 11 posts on this issue, it might just be me. This is why I'm making a new thread that's specific to watchOS 11, Xcode 16, and maybe Series 10.
Some particulars for my case:
Overall
__Followed Xcode 16.0 documentation
On a watchOS device that you use for development, go to Settings > Privacy > Developer Mode. To toggle Developer mode, use the Developer Mode switch.
To pair an Apple Watch to a Mac, connect its companion iPhone to the Mac with a cable, and ensure that the iPhone is paired for development. After this step, follow any instructions on the Apple Watch to trust the Mac. When paired through an iPhone running iOS 17 or later, Xcode connects to the Apple Watch over Wi-Fi
__Tried all the folk remedies listed in the (many) previous posts on enabling development mode and connecting to Xcode
iOS 18.0
__In developer mode
__Connected to macOS via USB, trusts computer
watchOS 11.0
__Prompt to trust computer appears and trust is established
__‘Developer Mode’ list item never appears at end of the ‘Privacy’ menu under ‘Settings’
__‘Developer’ item sometimes appears at the end of ‘Settings’
Despite never having seen or toggled ‘Developer Mode’ under ‘Privacy’
Persists across reboots
Possible that watchOS 11 eliminated the item under Settings > Privacy? If so, the documentation is not up to date
Xcode 16.0
__Watch never appears under ‘Manage Run Destinations’
After installing the sample app to the phone, then attempting to install the watchOS app via the iOS Watch app, a "Cannot install at this time" alert appears
The app icon appears on the watch, and tapping it leads to an alert saying "This app cannot be installed because its integrity could not be verified", despite Wi-Fi working
Watch apps for other apps (e.g. Apple Store) can be successfully installed via iOS Watch app
Above suggests the watch isn't truly in developer mode despite Settings > Developer appearing and persisting across reboots
__The network path from Xcode to watchOS should be clear
Reconfigured router such that devices on the same network can talk to each other
iPad and iPhone appear with network icon when not connected via cable and Xcode can run code on them
Watch on same network as iPad and iPhone
macOS 15.0
__Due to security policy, cannot use Wi-Fi (disabled both physically and via sudo /usr/sbin/networksetup -setnetworkserviceenabled 'Wi-Fi' off)
Possible that Xcode can only establish a connection to watchOS via Wi-Fi and not via Ethernet bridged to Wi-Fi. If so, a confirmation would be hugely helpful.
This is currently my prime suspect. Wi-Fi cannot be re-enabled, so I'm trying workarounds like connecting the watch to the phone's hotspot (doesn't work) and somehow using the phone to provide a network connection to the Mac.
__Due to security policy, firewall configured to block all incoming connections
Shouldn't be an issue since Xcode doesn't need incoming connections to see non-watch devices
__Due to security policy, mDNSResponder and mDNSResponderHelper disabled
Also shouldn't be an issue, but including just in case
I need a magnifying glass function for one of my SwiftUI Views, but can't find a way to implement it as needed.
I found a YouTube video in which the author renders the view twice, overlays the second copy over the first, then scales and masks it to create the illusion of magnification, but this is expensive and breaks down for more complex views (e.g. a LazyVGrid).
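For reference, that duplicate-and-mask technique boils down to something like this (a sketch with illustrative sizes); it works for simple content, but it's the approach that gets expensive and breaks down with things like LazyVGrid:

import SwiftUI

// Sketch of the duplicate-and-mask loupe: render the content twice, scale the
// copy about the loupe position, and mask it to a circle.
struct MagnifierDemo: View {
    @State private var loupeCenter = CGPoint(x: 150, y: 150)
    private let side: CGFloat = 300

    private var content: some View {
        Image(systemName: "globe")
            .resizable()
            .scaledToFit()
            .padding(40)
            .frame(width: side, height: side)
            .background(Color(.systemGray5))
    }

    var body: some View {
        content
            .overlay {
                content
                    // Scale the duplicate about the loupe position so that point stays put.
                    .scaleEffect(2, anchor: UnitPoint(x: loupeCenter.x / side,
                                                      y: loupeCenter.y / side))
                    .mask {
                        Circle()
                            .frame(width: 120, height: 120)
                            .position(loupeCenter)
                    }
            }
            .gesture(DragGesture().onChanged { loupeCenter = $0.location })
    }
}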
I've also explored continually capturing partial screenshots and scaling them up to create the illusion of magnification, but there's no straightforward way to achieve this with SwiftUI without getting into the messiness of UIViewRepresentables.
Any help would be greatly appreciated
Hello,
I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution, but it doesn't seem to work. My code is included below.
I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them.
Any hints or pointers would be appreciated.
// Writes a series of UIImages to a GIF file (adapted from ARVideoKit).
// Requires: import ImageIO and UniformTypeIdentifiers (plus UIKit).
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        // Loop count applies to the whole file; the delay applies to each frame.
        let gifSettings = [kCGImagePropertyGIFDictionary as String : [kCGImagePropertyGIFLoopCount as String : count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String : [kCGImagePropertyGIFDelayTime as String : delay]]

        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, UTType.gif.identifier as CFString, images.count, nil)
        else { finished?(false, nil); return }
        //logAR.message("\(destination)")

        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }

        // Finalize writes the file to disk.
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil); return
        } else {
            finished?(true, path)
        }
    }
}
My adaptation of the above code for APNGs (doesn't work; outputs empty file):
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String : [kCGImagePropertyAPNGLoopCount as String : count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String : [kCGImagePropertyAPNGDelayTime as String : delay]]

    // outputURL is a file URL property defined elsewhere.
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
    else { fatalError("Failed") }

    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}