
How do I output different sounds to headphones and speakers while simultaneously recording, all without using AVAudioSession.Category.multiroute?
I need a way to record from the mic while outputting two different sound streams to two different devices (speaker and headphones). I've done a fair bit of reading around AVAudioSession.Category.multiroute but haven't found any modern examples. @theanalogkid posted a nice example using Obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift. To make matters worse, it's one of the very few examples of how to properly use multirouting.

The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and likely outdated, and most have no responses other than the poster's own plaintive echo. They don't paint a pretty picture of .multiroute's health: one recent poster noted that the volume buttons don't work in this mode and was told by DTS that there's no fix; another found that it simply doesn't work on certain devices; and so on. Audio is giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiroute feels like the developer mode of AVAudioSession, but without documentation.

tl;dr: Without using .multiroute, is there a way for an app to output to two different devices while simultaneously recording audio? If .multiroute is the only way to achieve this, can someone give me a quick rundown of how this category works?
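For reference, this is as far as my own reading has gotten me: a minimal, untested sketch of how I understand .multiRoute is supposed to be used, with hypothetical file URLs. If this is wrong, that only underscores the documentation problem.

```swift
import AVFoundation

// Untested sketch of my reading of the docs: .multiRoute exposes each
// output port's channels, and AVAudioPlayer.channelAssignments can pin
// a player to one port's channels. File URLs here are hypothetical.
func playSeparateStreams(speakerFileURL: URL, headphoneFileURL: URL) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.multiRoute, options: [.mixWithOthers])
    try session.setActive(true)

    // Find the channel descriptions belonging to a given output port.
    func channels(for portType: AVAudioSession.Port) -> [AVAudioSessionChannelDescription]? {
        session.currentRoute.outputs.first { $0.portType == portType }?.channels
    }

    let speakerPlayer = try AVAudioPlayer(contentsOf: speakerFileURL)
    speakerPlayer.channelAssignments = channels(for: .builtInSpeaker)

    let headphonePlayer = try AVAudioPlayer(contentsOf: headphoneFileURL)
    headphonePlayer.channelAssignments = channels(for: .headphones)

    speakerPlayer.play()
    headphonePlayer.play()
    // Recording would presumably run in parallel via AVAudioRecorder
    // or AVAudioEngine's input node, since .multiRoute allows input.
}
```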
Replies: 1 · Boosts: 0 · Views: 802 · Aug ’24
Does the SpriteView of an SKScene have layers? Unable to get magnifying glass view to work with scene.
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen. I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that `magnifiedView?.layer.render(in: context)` in:

```swift
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
```

can be removed without altering the situation, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass.

Does anyone know why this is? I don't think it's an issue with the code, so I suspect it's something specific to SpriteKit or SKScene, likely related to how CALayers work. Any pointers would be greatly appreciated.

Full code below:

```swift
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet { layer.borderColor = borderColor.cgColor }
    }

    public var borderWidth: CGFloat = 3 {
        didSet { layer.borderWidth = borderWidth }
    }

    public var showsCrosshair = true {
        didSet { crosshair.isHidden = !showsCrosshair }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet { crosshair.strokeColor = crosshairColor.cgColor }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet { crosshair.lineWidth = crosshairWidth }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)

        layer.masksToBounds = true
        layer.addSublayer(crosshair)

        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If above disabled, no change
        // Possible that nothing's being rendered into context
        // Could it be that SKScene view has no layer?
        magnifiedView?.addSubview(self)
    }
}
```
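One direction that might be worth testing, purely an assumption on my part: SKView renders through Metal, and Metal-backed layers generally don't reproduce their contents via CALayer.render(in:), whereas UIView's snapshotting path can. A sketch of draw(_:) swapped to use it:

```swift
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    // drawHierarchy(in:afterScreenUpdates:) goes through the snapshot
    // machinery, which can capture Metal content that layer.render(in:)
    // leaves blank. Untested with SKView; treat as a hypothesis.
    if let magnifiedView {
        _ = magnifiedView.drawHierarchy(in: magnifiedView.bounds, afterScreenUpdates: false)
    }
    magnifiedView?.addSubview(self)
}
```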
Replies: 0 · Boosts: 0 · Views: 660 · Nov ’24
How do I make a UIViewRepresentable beneath SwiftUI elements ignore touches to these elements?
Hello, and an early "Merry Christmas" to all,

I'm building a SwiftUI app, and one of my Views is a fullscreen UIViewRepresentable (SpriteView) beneath a SwiftUI interface. Whenever the user interacts with any SwiftUI element, the UIView registers a hit in touchesBegan(). For example, my UIView has logic for pinching (not implemented via UIGestureRecognizer), so whenever the user holds down a SwiftUI element while touching the UIView, that counts as two touches to the UIView, which invokes the pinching logic.

Things I've tried to block SwiftUI from passing the gesture down to the UIView:

- Adding opaque elements beneath control elements
- Adding gestures to the elements above
- Adding gesture masks to the gestures above
- Converting eligible elements to Buttons (since those seem immune)
- Adding SpriteViews beneath those elements to absorb gestures

So far nothing has worked. As long as the UIView is beneath SwiftUI elements, any interaction with those elements registers as a hit. The obvious solution is to track each SwiftUI element's size and coordinates with respect to the UIView's coordinate space, then use exclusion areas, but this is both a pain and expensive, and I find it hard to believe it's the best fix for such a seemingly basic problem. I'm probably overlooking something basic, so any suggestions will be greatly appreciated.
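For what it's worth, the exclusion-area bookkeeping can be automated so it's less painful than it sounds. A minimal sketch, with all names hypothetical: each control reports its own frame through a PreferenceKey, and the collected rects are handed to the scene so touchesBegan can ignore touches inside them.

```swift
import SwiftUI

// Collects the global frames of all views tagged with excludesTouches().
struct ExclusionRectsKey: PreferenceKey {
    static var defaultValue: [CGRect] = []
    static func reduce(value: inout [CGRect], nextValue: () -> [CGRect]) {
        value.append(contentsOf: nextValue())
    }
}

extension View {
    // Attach to any SwiftUI control that should block the UIView below.
    func excludesTouches() -> some View {
        background(
            GeometryReader { proxy in
                Color.clear.preference(key: ExclusionRectsKey.self,
                                       value: [proxy.frame(in: .global)])
            }
        )
    }
}

// In the container (sketch):
// ZStack { SpriteView(scene: scene); controls.excludesTouches() }
//     .onPreferenceChange(ExclusionRectsKey.self) { rects in
//         scene.excludedRects = rects // scene ignores touches in these rects
//     }
```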
Replies: 0 · Boosts: 0 · Views: 434 · Dec ’24
How do I implement low-latency streaming between two local iOS devices?
Hello,

I'd like to know whether Multipeer Connectivity (MPC) between two modern iOS devices can support cross-streaming of video content at low latency and relatively high resolution. I learned from Ghostbusters that crossing the streams is a bad idea, but I need this for my app, so I'm going to ignore their sage advice.

The application I have in mind involves one iOS device (A) live-streaming a feed to another iOS device (B) running the same app. At the same time, B is live-streaming its own feed to A. Neither feed needs to be crystal clear, but the latency has to be low. The sample code for "Streaming an AR Experience" seems to contain some answers, as it's based on MPC (and ARKit), but my project isn't quite AR and the latency seems high.

If MPC isn't suitable for this task (as my searches seem to indicate), is it possible to have one device set up a hotspot and link the two that way to achieve my cross-streaming ambitions? This seems like a more conservative method, assuming the hotspot and its client behave like Wi-Fi peers (not sure). I might start a new thread with just this question if that's more appropriate. A third idea that's not likely to work (for various reasons) is data transfer over a Lightning-to-Lightning or Lightning-to-USB-C connection.

If I've missed any other possible solutions to my cross-streaming conundrum, please let me know. I've been reading around this subject on this forum as well and would be hugely grateful if Eskimo would grace me with his wisdom.
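For concreteness, the MPC path I've been evaluating is the raw stream API, which avoids the per-message overhead of send(_:toPeers:with:) and appears to be the lowest-latency option MPC offers. A minimal sketch, assuming `session` and `peer` come from the usual MPC setup:

```swift
import MultipeerConnectivity

// Open one raw byte stream per direction; each side writes its own
// feed and reads the other's via the session delegate callback.
func openVideoStream(session: MCSession, to peer: MCPeerID) throws -> OutputStream {
    let stream = try session.startStream(withName: "video", toPeer: peer)
    stream.schedule(in: .main, forMode: .default)
    stream.open()
    return stream
}

// Frames would need to be compressed (e.g., H.264/HEVC via VideoToolbox)
// before writing; raw pixel buffers are far too large for a Wi-Fi peer link.
```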
Replies: 6 · Boosts: 0 · Views: 3.6k · Feb ’22
How to display sprites in RealityKit?
My project involves no camera passthrough and relies heavily on sprites, but I've been discouraged from using the aging (and possibly dying) SpriteKit or SceneKit as my rendering engine by Apple engineers (here), so I'm exploring other options. Is it possible to display 2D sprites fluidly using this framework in a non-AR context? Is it possible to create, say, a 2D platformer using just RealityKit?
Replies: 1 · Boosts: 0 · Views: 1.3k · Apr ’22
Using the Rec. 2020 color gamut in SpriteKit
About a month ago I asked whether I could use HDR in SpriteKit. That wasn't a well-phrased question, since HDR seems to mean different things in different contexts, which probably led to it going unanswered. What I meant to ask was whether it's possible to use assets in the wider color gamuts that many modern devices are capable of displaying (XDR is somewhat standard among mid- to high-end devices). In other words: is SpriteKit keeping up with the hardware? If not, what framework options do I have that can quickly display large Rec. 2020 images? Do any of the Core frameworks offer this capability?
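To make the question concrete, this is the kind of check I mean; a small sketch with a hypothetical asset name:

```swift
import CoreGraphics
import UIKit

// Does the screen support wide color, and is the asset actually Rec. 2020?
let screenSupportsWideColor = UIScreen.main.traitCollection.displayGamut == .P3

if let cgImage = UIImage(named: "hdrAsset")?.cgImage,   // hypothetical asset
   let space = cgImage.colorSpace {
    let isRec2020 = (space.name == CGColorSpace.itur_2020)
    print("Rec. 2020:", isRec2020,
          "wide gamut:", space.isWideGamutRGB,
          "screen wide color:", screenSupportsWideColor)
}
```

The question is what happens between that asset and the screen when the renderer in the middle is SpriteKit.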
Replies: 1 · Boosts: 0 · Views: 1.2k · May ’22
Is it possible to compile images into an APNG using Swift?
Hello,

I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution, but it doesn't seem to work; my code is included below. I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them. Any hints or pointers would be appreciated.

```swift
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]

        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString, images.count, nil)
        else { finished?(false, nil); return }
        //logAR.message("\(destination)")

        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }

        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil); return
        } else {
            finished?(true, path)
        }
    }
}
```

My adaptation of the above code for APNGs (doesn't work; outputs an empty file):

```swift
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]

    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
    else { fatalError("Failed") }

    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}
```
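One observation, hedged since I haven't verified it end to end: the APNG adaptation never calls CGImageDestinationFinalize, and Image I/O doesn't write the destination's output until finalize succeeds, which on its own could explain the empty file. A sketch with that step restored (outputURL is assumed to exist, as in the original):

```swift
import UIKit
import ImageIO
import UniformTypeIdentifiers

func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) -> Bool {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]

    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
    else { return false }

    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
    // Nothing is written to disk until this returns true.
    return CGImageDestinationFinalize(destination)
}
```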
Replies: 3 · Boosts: 0 · Views: 1.8k · Mar ’24
Scrolling sticker browser on a Messages App sheet causes sheet to move
As someone who learned Swift via SwiftUI, UIKit is completely alien to me, so I apologize if this is actually a very simple issue. I have a Messages extension that includes a sticker browser within it. In this extension, the MSMessagesAppViewController hosts a SwiftUI View, which in turn hosts a UIViewRepresentable version of MSStickerBrowserView. The whole Messages App sheet moves with an upward drag, and can switch to its expanded mode, whenever the browser is scrolled to the top (first sticker is at top left), but it doesn't budge when the browser is scrolled to the other end when it should allow the sheet to move upward with the drag. It seems something is reversed within the gesture priority management that allows a sheet to be moved in the appropriate direction when a contained scrollview is at the appropriate end. Things I've tried while reaching a diagnosis include: Limiting the presentation style to compact (the modal still moves, but never succeeds in changing) Adding competing highPriorityGestures in the SwiftUI view, set at various locations Inserting a rectangle with allowsHitTesting(false) beneath the browser Changing firstResponder statuses for all relevant views Changing GestureResponder priorities (there are no gesture responders in all views examined) Things I've considered but don't have the technical skills to implement: Have the view scroll a little downwards programmatically (like what can be done via ScrollViewReader in SwiftUI), but I have no idea how this can be done via MSStickerBrowserView or UIKit in general. Maybe the MSStickerBrowserView thinks its always in the expanded state (when the sheet is expanded, the end-drags work fine). If this is the case, if there's a way to either fix this misconception (via controller's didTransition) or do away with end drags in general, the problem should go away. Any pointers would be greatly appreciated!
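The programmatic-nudge idea from the first bullet, sketched under the assumption that MSStickerBrowserView is backed by a UIScrollView somewhere in its private subview tree (fragile and unsupported if true):

```swift
import Messages
import UIKit

// Nudge the browser off its top edge so the sheet stops claiming drags.
func nudge(_ browser: MSStickerBrowserView) {
    func firstScrollView(in view: UIView) -> UIScrollView? {
        for sub in view.subviews {
            if let scroll = sub as? UIScrollView { return scroll }
            if let found = firstScrollView(in: sub) { return found }
        }
        return nil
    }
    if let scroll = firstScrollView(in: browser) {
        scroll.setContentOffset(CGPoint(x: 0, y: 1), animated: false)
    }
}
```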
Replies: 2 · Boosts: 0 · Views: 1k · Jan ’24
ScrollView behaves as though Sheet is always expanded when multiple detents are enabled
A follow-up to "Scrolling sticker browser on a Messages App sheet causes sheet to move", re-formulated and posted here after distilling the issue:

ScrollView behaves as though the Sheet is constantly expanded, and transfers the drag gesture to the Sheet when scrolled to the top (i.e. when first displayed), causing the user to move the Sheet rather than the ScrollView when attempting to scroll up or down. If this should be filed as a bug, let me know.

Notes:

- The problem doesn't exist if the sheet has only one detent, but since Messages App Extensions must be adjustable in phone portrait, this does nothing for me
- Adding a Rectangle with hit testing disabled doesn't solve the issue
- Adding competing high-priority DragGestures doesn't fix it
- One partial solution is having ScrollViewReader scroll down a tiny bit upon appearing, but the issue re-emerges after the user has scrolled to the top

Code to reproduce:

```swift
import SwiftUI

struct Playground: View {
    @State private var detent = PresentationDetent.fraction(1/3)
    @State private var isSheetPresented = true

    var body: some View {
        Rectangle()
            .fill(Color(.systemGray5))
            .sheet(isPresented: $isSheetPresented) {
                VStack {
                    Text("ScrollView-in-Sheet Experiment")
                        .padding()
                    ScrollView {
                        ScrollViewReader { scrollProxy in
                            VStack(spacing: 0) {
                                ForEach(0...10, id: \.self) { i in
                                    Rectangle()
                                        .fill(.white)
                                        .frame(height: 50)
                                        .id(i)
                                        .overlay {
                                            Text(i.description)
                                        }
                                }
                            }
                        }
                    }
                    .frame(height: 200)
                    .padding()
                }
                .background {
                    Color(.systemGray6)
                }
                .presentationDetents([.large, .fraction(1/3)], selection: $detent)
            }
    }
}
```
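One modifier that looks aimed at exactly this conflict, though I haven't tested it in the Messages-extension context (iOS 16.4+): presentationContentInteraction(.scrolls) asks SwiftUI to resolve an ambiguous drag in favor of scrolling the content rather than resizing the sheet.

```swift
// Untested guess: appended to the sheet content in the repro above.
.presentationDetents([.large, .fraction(1/3)], selection: $detent)
.presentationContentInteraction(.scrolls) // prefer scrolling over detent changes
```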
Replies: 1 · Boosts: 0 · Views: 1.1k · Jan ’24
Wired data transfer between an app running on two iOS/iPadOS devices: possible or pipe dream?
If two iOS/iPadOS devices have your app open, is it possible to have the apps send data to each other over a wired connection? E.g., if two iPhone 15s are connected by USB-C, can I get my app on iPhone A to send data to iPhone B and vice versa? I've been looking around for quite a while now, and at this point I just want to know if it's technically feasible.
Replies: 5 · Boosts: 0 · Views: 1.5k · Feb ’24
Is the code in 'Building a custom peer-to-peer protocol' insecure?
I'm new to Networking, so forgive me if this is a silly question: In the sample code, Building a custom peer-to-peer protocol, TLS is configured as follows:

```swift
// Create TLS options using a passcode to derive a pre-shared key.
private static func tlsOptions(passcode: String) -> NWProtocolTLS.Options {
    let tlsOptions = NWProtocolTLS.Options()

    let authenticationKey = SymmetricKey(data: passcode.data(using: .utf8)!)
    var authenticationCode = HMAC<SHA256>.authenticationCode(for: "TicTacToe".data(using: .utf8)!, using: authenticationKey)

    let authenticationDispatchData = withUnsafeBytes(of: &authenticationCode) { (ptr: UnsafeRawBufferPointer) in
        DispatchData(bytes: ptr)
    }

    sec_protocol_options_add_pre_shared_key(tlsOptions.securityProtocolOptions,
                                            authenticationDispatchData as __DispatchData,
                                            stringToDispatchData("TicTacToe")! as __DispatchData)
    sec_protocol_options_append_tls_ciphersuite(tlsOptions.securityProtocolOptions,
                                                tls_ciphersuite_t(rawValue: TLS_PSK_WITH_AES_128_GCM_SHA256)!)
    return tlsOptions
}
```

The sample code touts the connection as secure ("...uses Bonjour and TLS to establish secure connections between nearby devices"), but to my untrained eye it doesn't seem so. My reasoning is as follows: if I adapt this code as-is, so connections between two instances of my app use SymmetricKeys derived from the four-digit passcode, then wouldn't my encryption be easy to break by an adversary who tries 0000 through 9999 and records the corresponding changes in the encryption, exposing my app to all sorts of attacks?

The sample uses the passcode to validate the connection (the host user shows the client user the passcode, which is entered manually), which is a feature I would like to keep in some form or another, which is why this is causing so many headaches.

Generally speaking, is there a way to secure a local peer-to-peer connection over Network.framework that doesn't involve certificates? If certificates are the only way, are there good resources you can recommend?
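One partial mitigation I've been sketching, emphatically not a security recommendation: derive the PSK with HKDF and a random per-session salt (exchanged in the clear alongside discovery). That blocks precomputed-table attacks, but a four-digit passcode can still be brute-forced offline once traffic is captured, so the real fix is presumably a PAKE or certificate-based trust.

```swift
import CryptoKit
import Foundation

// Hedged sketch: salt the passcode before deriving the PSK.
// The info string and 32-byte output size are my own choices.
func derivePSK(passcode: String, salt: Data) -> SymmetricKey {
    let input = SymmetricKey(data: Data(passcode.utf8))
    return HKDF<SHA256>.deriveKey(inputKeyMaterial: input,
                                  salt: salt,
                                  info: Data("TicTacToe-PSK".utf8),
                                  outputByteCount: 32)
}
```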
Replies: 6 · Boosts: 0 · Views: 1.2k · May ’24
How to attribute/credit Apple Fonts added to app?
My app lets users create things with text, and I've included Apple fonts so users can format their text with them. These are fonts I found in the Font Book app that comes with macOS. My assumption is that these, like the San Francisco font, can be distributed with apps.

Do I need to attribute these fonts to their creators and publish a license in my "About" page? If so, where do I find the license(s), and what is the proper way of publishing them? Is there anything else I should know?

Please let me know if this should've been posted under a different topic and tag.
Replies: 3 · Boosts: 0 · Views: 1.2k · Jan ’25
Private Access Tokens versus App Attest + DeviceCheck: which one should I use to protect my app?
Private Access Tokens (PATs) are headlined as something that can eliminate CAPTCHAs, but their use cases also include app-to-server communications. Because of this, they seem to perform a very similar function to DeviceCheck, since both aim to attest to the health of the device in question. I don't really understand the difference between the two and find this confusing. Since PATs are newer and more general, I'm more inclined to adopt them, but where does this leave DeviceCheck? Is it redundant? And how does App Attest fit into all of this?

If my goal is to minimize, if not eliminate, fraudulent/malicious use of my app's APIs, should I use Private Access Tokens, DeviceCheck, and App Attest simultaneously to maximize my protection? If not, what is accepted to be the best practice?

I admire Apple's dedication to privacy and security, but as a new developer I feel Apple could make it easier for its app developers to find and implement the latest best practices.
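For concreteness, here is my current mental model of the App Attest half, as a condensed client-side sketch. The challenge string is a placeholder (real challenges come from your server), and the server-side verification is omitted:

```swift
import CryptoKit
import DeviceCheck

func attest() async throws {
    let service = DCAppAttestService.shared
    guard service.isSupported else { return } // unsupported on some platforms

    // 1. Create a hardware-backed key pair.
    let keyID = try await service.generateKey()

    // 2. Hash a one-time challenge issued by the server (placeholder here).
    let challenge = Data("server-issued-challenge".utf8)
    let clientDataHash = Data(SHA256.hash(data: challenge))

    // 3. Ask Apple to attest the key; the server validates the result
    //    against Apple's App Attest root certificate.
    let attestation = try await service.attestKey(keyID, clientDataHash: clientDataHash)

    // 4. Later, individual API requests are signed with
    //    generateAssertion(_:clientDataHash:).
    _ = attestation
}
```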
Replies: 1 · Boosts: 0 · Views: 1.3k · Jun ’24
Is the issue of code-theft via decompilation or reverse engineering common for Swift iOS apps? And can I protect a small portion of my code?
I'm a new app developer and I've read through most relevant posts on this topic here and elsewhere. Many of the forum posts here are specific to Objective-C, or old enough to be considered outdated in the fast-moving world of computing. Many of the posts elsewhere are about protecting authentication secrets, which doesn't apply in my case, and a lot are by someone with a product to sell, which I've ignored.

My app is 99.9% Swift and I'm not going to store any authentication secrets in the IPA. What I'd like to protect is the core mechanism of my product, which has to be included in the binary and is small (< 10k lines). I want to make it harder to steal the source code than it is to recreate my functionality from scratch, which is difficult even with the app in front of them.

From what I've gathered, Swift code compiled by Xcode is protected from reverse engineering / decompilation by the following:

- Symbol stripping of the app: native builds from Xcode destroy the names of variables, functions, etc.
- Swift code is compiled in such a way that makes stealing harder than with Objective-C

This should make me feel better, but the threat level is increasing with the availability of free, commercial-grade decompilers (e.g. Ghidra) and machine learning. The fact that iOS 18 supports a checkm8 (i.e. jailbreakable) device means that decrypting the IPA from memory is still trivial.

Questions:

- People talk about stealing authentication secrets via reverse engineering, but is the same true for mechanisms (i.e. code)?
- How common is the issue of source-code stealing in iOS apps?
- Can machine learning be leveraged to make decompilation/reverse engineering easier?
- Will I get rejected by App Review for obfuscating a small portion of my code?
Replies: 11 · Boosts: 0 · Views: 2.2k · Jun ’24
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly: possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2. But I realized that they are well suited to compression by video codecs, since they're highly similar to one another.

I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.

Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce the storage size of many large PNGs are also welcome.

I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (a 5000%+ increase), on top of there being fatal errors when using async.
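In case it sparks ideas, here's a middle road I've been considering (my own sketch, not an established technique): exploit the inter-frame similarity losslessly, like a video codec's P-frames but without the lossy transform, by XOR-deltaing each decoded pixel buffer against the previous one before LZFSE. Deltas of similar frames are mostly zero bytes, which LZFSE compresses far better than raw PNG data. It assumes all frames decode to equal-sized raw buffers:

```swift
import Foundation

// Store the first frame compressed as-is, then compressed XOR deltas.
// Reversing is symmetric: decompress, then XOR against the prior frame.
func deltaCompress(frames: [Data]) throws -> [Data] {
    guard let first = frames.first else { return [] }
    var out = [try (first as NSData).compressed(using: .lzfse) as Data]
    for (prev, next) in zip(frames, frames.dropFirst()) {
        var delta = Data(count: next.count)
        for i in 0..<min(prev.count, next.count) {
            delta[i] = prev[i] ^ next[i] // identical bytes become 0x00
        }
        out.append(try (delta as NSData).compressed(using: .lzfse) as Data)
    }
    return out
}
```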
Replies: 18 · Boosts: 0 · Views: 2.1k · Jun ’24