
Navigating between sticker browsers, Messages extensions, and the main app
Happy new year! My app can generate stickers, so I'm trying to connect two separate Messages extensions: the sticker browser and the app extension. The former is a repository; the latter is where stickers are made. It would be easy to create these two extensions and have them stand separately, like Apple's Memoji, but I'm trying to streamline the user experience so that users can navigate from the browser to the extension and back seamlessly. As far as I can tell, there's no indication that this is possible, but also nothing to indicate that it isn't.

Similarly, are there ways to navigate from the main app to a Messages extension, or to go the other way? From what I've read, there's no known way to do the former, and there was a way to do the latter that no longer works.

tl;dr - Is it possible for users to press a button to go from an MSStickerBrowserViewController to an MSMessagesAppViewController (both belonging to the same app group) or back? Or to go from the main app to either and back?
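For context, the fallback I'm aware of is hosting both panes inside a single Messages extension rather than linking two separate ones. A minimal sketch of that direction (class names are placeholders, and this sidesteps rather than answers the two-extension question):

```swift
import Messages

// Hypothetical: the browser as a child pane of one Messages extension.
final class StickerPaneViewController: MSStickerBrowserViewController {
    var stickers: [MSSticker] = []

    override func numberOfStickers(in stickerBrowserView: MSStickerBrowserView) -> Int {
        stickers.count
    }

    override func stickerBrowserView(_ stickerBrowserView: MSStickerBrowserView,
                                     stickerAt index: Int) -> MSSticker {
        stickers[index]
    }
}

final class RootMessagesViewController: MSMessagesAppViewController {
    private let browser = StickerPaneViewController(stickerSize: .regular)

    override func viewDidLoad() {
        super.viewDidLoad()
        // Embed the browser as a child; a "make a sticker" button could
        // swap in the maker UI the same way, giving in-place navigation.
        addChild(browser)
        view.addSubview(browser.view)
        browser.view.frame = view.bounds
        browser.didMove(toParent: self)
    }
}
```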
0 replies · 0 boosts · 584 views · Jan ’24
Getting a list of words recognized by Speech
Is there a way to extract the list of words recognized by the Speech framework? I'm trying to filter out words that won't appear in the transcription output, but to do that I'll need a list of the words that can appear. SFSpeechLanguageModel.Configuration can be initialized with a vocabulary, but there doesn't seem to be a way to read that vocabulary back, and while there are ways to create custom vocabularies, I have yet to find a way to retrieve one after the fact. I added the Natural Language tag in case that framework might contribute to a solution.
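The best workaround I can think of, sketched below (this assumes iOS 17's custom language model API; identifiers and words are placeholders): keep your own copy of the vocabulary at the moment you build the custom model, and filter against that.

```swift
import Speech

// Placeholder word list -- retained app-side as the source of truth,
// since the configuration doesn't appear to be readable afterwards.
let vocabulary: Set<String> = ["alpha", "bravo", "charlie"]

func exportCustomModel(to url: URL) async throws {
    let data = SFCustomLanguageModelData(locale: Locale(identifier: "en_US"),
                                         identifier: "com.example.vocab",  // placeholder
                                         version: "1.0") {
        SFCustomLanguageModelData.PhraseCount(phrase: "alpha", count: 10)
        SFCustomLanguageModelData.PhraseCount(phrase: "bravo", count: 10)
        SFCustomLanguageModelData.PhraseCount(phrase: "charlie", count: 10)
    }
    try await data.export(to: url)
    // Filter transcription output against `vocabulary`.
}
```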
0 replies · 0 boosts · 875 views · Feb ’24
AVAudioEngine: Is there a way to play audio at full volume while having an active input tap?
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

```swift
try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

// filePlayerNode   ---> engine.mainMixerNode
// bufferPlayerNode ---> engine.mainMixerNode
// engine.mainMixerNode ---> engine.outputNode

// bufferPlayer.scheduleBuffer() is called on its own queue
```

The input works fine: the buffers can be collected into a file that plays back correctly, and the recognizer works. But when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortions. If I lower the sample rate via AVAudioConverter, the distortions get worse.

I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.

The ability to both play and record simultaneously is a basic feature of phones--when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring).

Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated
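Two things I still plan to test, sketched below (assumptions on my part, not a confirmed fix): with .playAndRecord the output route defaults to the quiet receiver rather than the speaker, and the voice-processing I/O unit is what FaceTime-style apps use to get echo cancellation while playing at full volume.

```swift
import AVFoundation

func configurePlayAndRecord(engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()

    // .defaultToSpeaker keeps playback on the loud bottom speaker instead
    // of the receiver, which .playAndRecord otherwise routes to.
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.defaultToSpeaker])
    try session.setActive(true, options: .notifyOthersOnDeactivation)

    // Enables the system echo canceller on the engine's I/O unit (iOS 13+);
    // enabling it on the input node also enables it on the output node.
    try engine.inputNode.setVoiceProcessingEnabled(true)
}
```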
1 reply · 0 boosts · 1.2k views · Mar ’24
[iOS 17.4, Xcode 15.3] Previously working NWConnection for peer-to-peer connection now permanently stuck on "Preparing"
After updating my devices to iOS/iPadOS 17.4 and Xcode to 15.3, Network framework peer-to-peer connections have stopped working entirely. The system was working fine before, and the code has not been changed. On the client side (NWBrowser), the server (NWListener) can be seen, but upon attempting to establish a connection, the client-side NWConnection.State gets permanently stuck at .preparing. NWConnection.stateUpdateHandler never enters any other state. It doesn't seem as though it's taking a long time to prepare; it's just stuck. This occurs across multiple connection modes (wired, common wifi, separate wifi).

Additional information:

- I didn't participate in the 17.4 beta and RC
- The code in "Creating a custom peer-to-peer protocol" works--this code forms the basis of my code
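For reference, a stripped-down sketch of the client side that reproduces the hang for me (service name and type are placeholders; the parameters mirror Apple's peer-to-peer sample):

```swift
import Network

let parameters = NWParameters.tcp
parameters.includePeerToPeer = true  // peer-to-peer link, as in Apple's sample

// Endpoint as discovered via NWBrowser; name/type are placeholders here.
let connection = NWConnection(
    to: .service(name: "my-peer", type: "_myapp._tcp", domain: "local", interface: nil),
    using: parameters
)

connection.stateUpdateHandler = { state in
    // Since 17.4 this prints ".preparing" once and never advances.
    print("connection state:", state)
}
connection.start(queue: .main)
```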
2 replies · 0 boosts · 1.5k views · Mar ’24
When using UIView.drawHierarchy, older CPU performs far better than newer CPU
Hi all,

My app uses SpriteKit views which are rendered into images for various uses. For some reason, the same code performs worse on a newer CPU than on an older one. My A13 Bionic flies through the task at high resolution and 60 FPS with CPU usage under 60%, while the A15 Bionic chokes and sputters at a lower resolution and 30 FPS. Because of how counterintuitive this is, it took me a while to isolate the call directly responsible--with UIView.drawHierarchy commented out, both devices returned to their baseline performance.

```swift
func snapshot() -> UIImage? {
    guard let sceneView = skScene.view else { return nil }
    let size = CGSize(width: outputResolution, height: outputResolution)
    return UIGraphicsImageRenderer(size: size).image { context in
        let rect = CGRect(origin: .zero, size: size)
        sceneView.drawHierarchy(in: rect, afterScreenUpdates: false)
    }
}
```

Does anyone know why this is the case, and how to fix it? I tried using UIView.snapshotView, which is supposedly much faster, but it only returns blank images. Am I using it wrong, or does it simply not work in this context?

```swift
sceneView.snapshotView(afterScreenUpdates: false)?.draw(rect)
```

Any hints or pointers would be greatly appreciated
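One alternative I'm considering, in case it's useful to others (a sketch only; it stays on SpriteKit's own rendering path instead of UIKit's, though I haven't verified how it performs on the A15):

```swift
import SpriteKit
import UIKit

// Render the SKScene into a texture with SpriteKit itself, then wrap the
// result. Assumes `skScene` is currently presented in an SKView.
func snapshotViaTexture(of skScene: SKScene) -> UIImage? {
    guard let sceneView = skScene.view,
          let texture = sceneView.texture(from: skScene) else { return nil }
    return UIImage(cgImage: texture.cgImage())
}
```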
0 replies · 0 boosts · 668 views · Jun ’24
Are default fonts under UIFont.familyNames licensed for use by iOS developers?
This is a follow-up to my previous question: How to attribute/credit Apple Fonts added to app? In that previous post, I misremembered what I did and said I found fonts via macOS's Font Book, when I actually came across UIFont.familyNames. Since these are included via UIKit, the legal implications should be different. I looked at various license agreements that govern iOS app development but haven't found anything mentioning fonts.

Since these fonts are included as part of UIKit, it's reasonable to assume that developers are allowed to include them--but in what ways? Am I allowed to let users create, say, documents with these fonts? Am I only allowed to display them? There are 84 fonts, and judging by their Font Book entries, there is a wide range of licenses and restrictions. It seems unnecessarily harsh to have every iOS developer verify each one and figure out which they can legally keep if they want to offer their users access to all of them (for, say, a text-editing app). There must be some overarching rule that supersedes/encapsulates them, but this rule isn't clear to me after hours of research. I'm not a lawyer, and I don't think Apple expects every app developer to consult their lawyers on whether they can use system fonts. I'm about to send an email to Apple's legal team (I will post their response here if allowed), but in the meantime I want to hear what other devs think about this.

In Xcode, entering UIFont.familyNames returns the following:

```swift
["Academy Engraved LET", "Al Nile", "American Typewriter", "Apple Color Emoji", "Apple SD Gothic Neo", "Apple Symbols", "Arial", "Arial Hebrew", "Arial Rounded MT Bold", "Avenir", "Avenir Next", "Avenir Next Condensed", "Baskerville", "Bodoni 72", "Bodoni 72 Oldstyle", "Bodoni 72 Smallcaps", "Bodoni Ornaments", "Bradley Hand", "Chalkboard SE", "Chalkduster", "Charter", "Cochin", "Copperplate", "Courier New", "Damascus", "Devanagari Sangam MN", "Didot", "DIN Alternate", "DIN Condensed", "Euphemia UCAS", "Farah", "Futura", "Galvji", "Geeza Pro", "Georgia", "Gill Sans", "Grantha Sangam MN", "Helvetica", "Helvetica Neue", "Hiragino Maru Gothic ProN", "Hiragino Mincho ProN", "Hiragino Sans", "Hoefler Text", "Impact", "Kailasa", "Kefa", "Khmer Sangam MN", "Kohinoor Bangla", "Kohinoor Devanagari", "Kohinoor Gujarati", "Kohinoor Telugu", "Lao Sangam MN", "Malayalam Sangam MN", "Marker Felt", "Menlo", "Mishafi", "Mukta Mahee", "Myanmar Sangam MN", "Noteworthy", "Noto Nastaliq Urdu", "Noto Sans Kannada", "Noto Sans Myanmar", "Noto Sans Oriya", "Optima", "Palatino", "Papyrus", "Party LET", "PingFang HK", "PingFang SC", "PingFang TC", "Rockwell", "Savoye LET", "Sinhala Sangam MN", "Snell Roundhand", "STIX Two Math", "STIX Two Text", "Symbol", "Tamil Sangam MN", "Thonburi", "Times New Roman", "Trebuchet MS", "Verdana", "Zapf Dingbats", "Zapfino"]
```
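For anyone wanting to reproduce or expand the list (each family contains one or more concrete fonts):

```swift
import UIKit

// Print every family and the concrete font names within it.
for family in UIFont.familyNames.sorted() {
    print(family, UIFont.fontNames(forFamilyName: family))
}
```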
1 reply · 0 boosts · 851 views · Jan ’25
How does homomorphic encryption usage affect privacy labels?
If I encrypt user data with Apple's newly released homomorphic encryption package and send it to servers I control for analysis, how would that affect the privacy label for that app?

- For example, if my app collected usage data plus identifiers, then sent it off for collection and analysis, would I be allowed to say that we don't collect information linked to the user?
- Does it also automatically exclude the relevant fields from the "Data used to track you" section?
- Is it possible to make even things that were once considered inextricably tied to a user identity (e.g. purchases in an in-app marketplace) something not linked, according to Apple's rules?
- How would I prove to Apple that the relevant information is indeed homomorphically encrypted?
0 replies · 0 boosts · 796 views · Jul ’24
How do I output different sounds to headphones and speakers while simultaneously recording, all without using AVAudioSession.Category.multiRoute?
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones). I've done a fair bit of reading around using AVAudioSession.Category.multiRoute but haven't found any modern examples. @theanalogkid posted a nice example using Obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift. To make matters worse, this is one of the very few examples of how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift.

The few relevant forum posts here are spread over this middle schooler's life span and likely outdated, with most having no responses other than the poster's own plaintive echo. They don't paint a pretty picture of .multiRoute's health: a recent poster noted that volume buttons don't work in this mode and, after contacting DTS, found that there's no fix; another found that it just doesn't work on certain devices; etc. Audio is giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiRoute feels like the developer mode of AVAudioSession, but without documentation.

tl;dr - Without using .multiRoute, is there a way to allow an app to output to two different devices while simultaneously recording audio? If .multiRoute is the only way to achieve this, can someone give me a quick rundown of how this category works?
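To partially answer my own "quick rundown" request, here's my current understanding sketched out (pieced together from the old sample and the docs, so treat it as an assumption): .multiRoute exposes every attached output as a separate port with its own channels, and each stream is then assigned to one port's channels.

```swift
import AVFoundation

func inspectMultiRoute() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.multiRoute)
    try session.setActive(true)

    // With headphones plugged in, currentRoute should list two output ports
    // (e.g. headphones and built-in speaker), each with its own channels.
    for output in session.currentRoute.outputs {
        print(output.portType, output.channels?.map(\.channelName) ?? [])
    }
    // Each stream is then steered to one port's channels via channel
    // mapping -- the part the nine-year-old sample demonstrates.
}
```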
1 reply · 0 boosts · 802 views · Aug ’24
Does the SpriteView of an SKScene have layers? Unable to get magnifying glass view to work with scene.
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen. I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that magnifiedView?.layer.render(in: context) in:

```swift
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
```

can be removed without altering the situation, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass.

Does anyone know why this is? I don't think it's an issue with the code, so I suspect it's something specific to SpriteKit or SKScene, likely related to how CALayers work. Any pointers would be greatly appreciated.

Full code below:

```swift
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet { layer.borderColor = borderColor.cgColor }
    }

    public var borderWidth: CGFloat = 3 {
        didSet { layer.borderWidth = borderWidth }
    }

    public var showsCrosshair = true {
        didSet { crosshair.isHidden = !showsCrosshair }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet { crosshair.strokeColor = crosshairColor.cgColor }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet { crosshair.lineWidth = crosshairWidth }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero,
                            radius: CGFloat = 50,
                            scale: CGFloat = 2,
                            borderColor: UIColor = .lightGray,
                            borderWidth: CGFloat = 3,
                            showsCrosshair: Bool = true,
                            crosshairColor: UIColor = .lightGray,
                            crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If the line above is disabled, nothing changes --
        // possibly nothing is being rendered into the context.
        // Could it be that the SKScene's view has no layer content?
        magnifiedView?.addSubview(self)
    }
}
```
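My working theory after more digging (not confirmed): SKView renders through a Metal-backed layer, and CALayer.render(in:) doesn't capture Metal content, which would explain both the unmagnified view and the black glass. A sketch of the workaround I'd try, routing the scene through an SKTexture instead (hypothetical, untested):

```swift
import SpriteKit
import UIKit

// Intended to run inside draw(_:), replacing layer.render(in:):
// render the scene to a texture (GPU-side), then draw its CGImage into
// the magnifier's already-translated-and-scaled context.
func renderMagnifiedContent(of magnifiedView: UIView?, into context: CGContext) {
    guard let skView = magnifiedView as? SKView,
          let scene = skView.scene,
          let cgImage = skView.texture(from: scene)?.cgImage() else { return }
    // Core Graphics' origin is bottom-left; flip to match UIKit.
    context.saveGState()
    context.translateBy(x: 0, y: skView.bounds.height)
    context.scaleBy(x: 1, y: -1)
    context.draw(cgImage, in: skView.bounds)
    context.restoreGState()
}
```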
0 replies · 0 boosts · 660 views · Nov ’24
How do I make a UIViewRepresentable beneath SwiftUI elements ignore touches to these elements?
Hello, and an early "Merry Christmas" to all,

I'm building a SwiftUI app, and one of my Views is a fullscreen UIViewRepresentable (SpriteView) beneath a SwiftUI interface. Whenever the user interacts with any SwiftUI element, the UIView registers a hit in touchesBegan(). For example, my UIView has logic for pinching (not implemented via UIGestureRecognizer), so whenever the user holds down a SwiftUI element while touching the UIView, that counts as two touches to the UIView, which invokes the pinching logic.

Things I've tried to block SwiftUI from passing the gesture down to the UIView:

- Adding opaque elements beneath control elements
- Adding gestures to the elements above
- Adding gesture masks to the gestures above
- Converting eligible elements to Buttons (since those seem immune)
- Adding SpriteViews beneath those elements to absorb gestures

So far nothing has worked. As long as the UIView is beneath SwiftUI elements, any interaction with those elements registers as a hit. The obvious solution is to track each SwiftUI element's size and coordinates with respect to the UIView's coordinate space, then use exclusion areas, but this is both a pain and expensive, and I find it hard to believe this is the best fix for such a seemingly basic problem. I'm probably overlooking something basic, so any suggestions will be greatly appreciated
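One diagnostic I still plan to try (an assumption on my part, since I haven't checked what `view` the stray touches report): UIKit sets UITouch.view to the hit-tested view, so touches claimed by a view layered above might be filterable inside the UIView's own handlers. The subclass name here is hypothetical:

```swift
import SpriteKit
import UIKit

final class FilteringSKView: SKView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Keep only touches that actually hit-tested to this view; touches
        // owned by views above would report a different `view` and are
        // ignored here, so they can't trigger the pinch logic.
        let ownTouches = touches.filter { $0.view === self }
        guard !ownTouches.isEmpty else { return }
        // ... pinch/drag logic driven by ownTouches ...
    }
}
```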
0 replies · 0 boosts · 434 views · Dec ’24