A follow-up to "Scrolling sticker browser on a Messages App sheet causes sheet to move", re-formulated and posted here after distilling the issue.
The ScrollView behaves as though the Sheet is constantly being expanded: while scrolled to the top (i.e. when first displayed), it transfers the drag gesture to the Sheet, so attempting to scroll up or down moves the Sheet rather than the ScrollView.
If this should be filed as a bug, let me know.
Notes
The problem doesn't exist if the sheet has only one detent, but since Messages App Extensions must be adjustable in phone portrait, that's no help to me
Adding a Rectangle with hit testing disabled doesn't solve the issue
Adding competing high-priority DragGestures doesn't fix it either
One partial workaround is having ScrollViewReader scroll down a tiny bit when the view appears, but the issue re-emerges after the user scrolls back to the top (see the sketch after the reproduction code)
Code to reproduce:
struct Playground: View {
    @State private var detent = PresentationDetent.fraction(1/3)
    @State private var isSheetPresented = true

    var body: some View {
        Rectangle()
            .fill(Color(.systemGray5))
            .sheet(isPresented: $isSheetPresented) {
                VStack {
                    Text("ScrollView-in-Sheet Experiment")
                        .padding()
                    ScrollView {
                        ScrollViewReader { scrollProxy in
                            VStack(spacing: 0) {
                                ForEach(0...10, id: \.self) { i in
                                    Rectangle()
                                        .fill(.white)
                                        .frame(height: 50)
                                        .id(i)
                                        .overlay { Text(i.description) }
                                }
                            }
                        }
                    }
                    .frame(height: 200)
                    .padding()
                }
                .background { Color(.systemGray6) }
                .presentationDetents([.large, .fraction(1/3)], selection: $detent)
            }
    }
}
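For completeness, the partial workaround from the notes looks roughly like this (a sketch; nudging the scroll position down by one row on appear is arbitrary):

    ScrollViewReader { scrollProxy in
        VStack(spacing: 0) {
            // rows as above
        }
        .onAppear {
            // Scroll down a tiny bit so the sheet stops claiming the drag;
            // the problem returns once the user scrolls back to the top.
            scrollProxy.scrollTo(1, anchor: .top)
        }
    }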
Is there a way to extract the list of words recognized by the Speech framework?
I'm trying to filter out words that won't appear in the transcription output, but to do that I need a list of the words that can appear. SFSpeechLanguageModel.Configuration can be initialized with a vocabulary, but there doesn't seem to be a way to read one back, and while there are ways to create custom vocabularies, I have yet to find a way to retrieve the recognizer's built-in list.
I added the Natural Language tag in case that framework might contribute to a solution.
If two iOS/iPadOS devices have your app open, is it possible for the two instances to send data to each other over a wired connection?
E.g., if two iPhone 15s are connected by USB-C, can my app on iPhone A send data to iPhone B and vice versa?
I've been looking around for quite a while now and at this point I just want to know if it's technically feasible.
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
    try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

    // Node graph:
    engine.connect(filePlayerNode, to: engine.mainMixerNode, format: nil)
    engine.connect(bufferPlayerNode, to: engine.mainMixerNode, format: nil)
    engine.connect(engine.mainMixerNode, to: engine.outputNode, format: nil)
    // bufferPlayerNode.scheduleBuffer() is called on its own queue
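The input tap feeding both consumers looks roughly like this (a sketch; recognitionRequest and bufferQueue are my own names for the speech request and the playback queue):

    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        recognitionRequest.append(buffer)  // feeds the speech recognizer
        bufferQueue.async {
            bufferPlayerNode.scheduleBuffer(buffer, completionHandler: nil)  // live playback path
        }
    }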
The input works fine: the buffers can be collected into a file that plays back correctly, and the recognizer also works. But when I try to play the live audio by sending the buffers to the bufferPlayerNode on this or another device, the audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse.
I've tried experimenting with the AVAudioSession category options, using separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.
The ability to both play and record simultaneously is a basic feature of phones--when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple, primitive circuit. Live video chat apps like FaceTime wouldn't be possible without it, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring).
Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated
After updating my devices to iOS/iPadOS 17.4, and Xcode to 15.3, Network framework peer-to-peer connections have stopped working entirely. The system was working fine before, and the code has not changed.
On the client side (NWBrowser), the server (NWListener) can be seen, but upon attempting to establish a connection, the client-side NWConnection.State gets permanently stuck at .preparing.
NWConnection.stateUpdateHandler never enters any other state. It doesn't seem to be taking a long time to prepare; it's just stuck. This happens across multiple connection modes (wired, same Wi-Fi network, separate Wi-Fi networks).
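For reference, the client-side handler in question (a minimal sketch):

    connection.stateUpdateHandler = { state in
        print("connection state:", state)  // only ever logs .preparing since 17.4
    }
    connection.start(queue: .main)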
Additional information
I didn't participate in the 17.4 beta or RC
The code in "Building a custom peer-to-peer protocol" works--that sample forms the basis of my code
Hello,
I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution, but it doesn't seem to work. My code is included below.
I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them.
Any hints or pointers would be appreciated.
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]
        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, UTType.gif.identifier as CFString, images.count, nil)
        else { finished?(false, nil); return }
        //logAR.message("\(destination)")
        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil)
        } else {
            finished?(true, path)
        }
    }
}
My adaptation of the above code for APNGs (doesn't work; outputs empty file):
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
    else { fatalError("Failed") }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}
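Comparing the two, one difference stands out: the GIF version ends with CGImageDestinationFinalize (the call that actually writes the file), while the APNG adaptation above never finalizes its destination. The missing step, mirroring the GIF code, would be:

    // Without this call the destination is never flushed to disk.
    if !CGImageDestinationFinalize(destination) {
        print("APNG finalize failed")
    }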
I'm new to networking, so forgive me if this is a silly question:
In the sample code "Building a custom peer-to-peer protocol", TLS is configured as follows:
// Create TLS options using a passcode to derive a pre-shared key.
private static func tlsOptions(passcode: String) -> NWProtocolTLS.Options {
    let tlsOptions = NWProtocolTLS.Options()
    let authenticationKey = SymmetricKey(data: passcode.data(using: .utf8)!)
    var authenticationCode = HMAC<SHA256>.authenticationCode(for: "TicTacToe".data(using: .utf8)!, using: authenticationKey)
    let authenticationDispatchData = withUnsafeBytes(of: &authenticationCode) { (ptr: UnsafeRawBufferPointer) in
        DispatchData(bytes: ptr)
    }
    sec_protocol_options_add_pre_shared_key(tlsOptions.securityProtocolOptions,
                                            authenticationDispatchData as __DispatchData,
                                            stringToDispatchData("TicTacToe")! as __DispatchData)
    sec_protocol_options_append_tls_ciphersuite(tlsOptions.securityProtocolOptions,
                                                tls_ciphersuite_t(rawValue: TLS_PSK_WITH_AES_128_GCM_SHA256)!)
    return tlsOptions
}
The sample code touts the connection as secure ("...uses Bonjour and TLS to establish secure connections between nearby devices"), but to my untrained eye it doesn't seem so.
My reasoning is as follows: if I adopt this code as-is, so that connections between two instances of my app use SymmetricKeys derived from a four-digit passcode, wouldn't the encryption be easy to break by an adversary who simply tries every passcode from 0000 to 9999 against recorded traffic, exposing my app to all sorts of attacks?
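To make that concrete, a hypothetical attacker only needs to enumerate ten thousand candidate keys using the same derivation the sample uses:

    // Hypothetical offline search over the full passcode space.
    for candidate in 0...9999 {
        let passcode = String(format: "%04d", candidate)
        let key = SymmetricKey(data: passcode.data(using: .utf8)!)
        let psk = HMAC<SHA256>.authenticationCode(for: "TicTacToe".data(using: .utf8)!, using: key)
        // test this PSK against a recorded handshake...
    }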
The sample uses the passcode to validate the connection (the host user shows the client user the passcode, which is entered manually). That's a feature I'd like to keep in some form, which is why this is causing me so many headaches.
Generally speaking, is there a way to secure a local peer-to-peer connection over Network.framework that doesn't involve certificates? If certificates are the only way, are there good resources you can recommend?
Hello,
I had a WWDC lab with two CloudKit engineers who asked me to file a "Feedback Request" for critical information regarding CloudKit. I've filed the FB and have also decided to post here to increase my chances of having these critical questions answered. If allowed, I'll also post the responses to my FB here.
CKAssets
I would like to know how large a file attached as a CKAsset can get before being rejected by the system. If the figure differs between private and public databases, please let me know that as well.
CloudKit pricing information
There used to be pricing information available on the website, but there's essentially none now. This makes it hard to calibrate user upload limits for my app so as to avoid overage fees.
I'm not looking to game the system (something this strange opaqueness is likely meant to prevent); I'm just looking to avoid a situation where competitors and vandals abuse my content upload system and I get hit with large bills out of nowhere. A rough figure for how many GB of public-database storage each active user adds to my app's free allowance would suffice.
While we're at it: if I have two apps that share a public database (if that's possible), do the active-user counts of both contribute to that database's free threshold?
Hi all,
My app uses SpriteKit views that are rendered into images for various purposes. For some reason, the same code performs worse on a newer CPU than on an older one.
My A13 Bionic flies through the task at high resolution and 60 FPS with CPU usage under 60%, while the A15 Bionic chokes and sputters at a lower resolution and 30 FPS.
Because this is so counterintuitive, it took me a while to isolate the call directly responsible--with UIView.drawHierarchy commented out, both devices returned to their baseline performance.
    guard let sceneView = skScene.view else { return nil }
    let size = CGSize(width: outputResolution, height: outputResolution)
    return UIGraphicsImageRenderer(size: size).image { _ in
        let rect = CGRect(origin: .zero, size: size)
        // This is the call responsible for the slowdown on the A15.
        sceneView.drawHierarchy(in: rect, afterScreenUpdates: false)
    }
Does anyone know why this is the case, and how to fix it?
I tried using UIView.snapshotView, which is supposedly much faster, but it only returns blank images. Am I using it wrong or does it simply not work in this context?
sceneView.snapshotView(afterScreenUpdates: false)?.draw(rect)
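In case it's relevant, the other route I know of is SKView's texture(from:), sketched below (untested in my pipeline, so treat it as an assumption rather than a fix):

    // Render the scene into an SKTexture, then convert to UIImage.
    if let texture = sceneView.texture(from: skScene) {
        let image = UIImage(cgImage: texture.cgImage())
        // use image...
    }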
Any hints or pointers would be greatly appreciated
Private Access Tokens (PATs) are headlined as something that can eliminate CAPTCHAs, but their use cases also include app-to-server communications. Because of this, they seem to perform a function very similar to DeviceCheck's, since both aim to attest to the health of the device in question.
I don't really understand the difference between the two and find this confusing. Since PATs are newer and more general, I'm more inclined to adopt them, but where does this leave DeviceCheck? Is it redundant? How does App Attest fit into all of this?
If my goal is to minimize, if not eliminate, fraudulent/malicious use of my app's APIs, should I use Private Access Tokens, DeviceCheck, and App Attest simultaneously to maximize my protection? If not, what is the accepted best practice?
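For concreteness, the App Attest piece I'd be layering in looks roughly like this (a sketch; challengeHash stands in for a SHA256 of a server-issued challenge and is assumed here):

    import DeviceCheck

    let service = DCAppAttestService.shared
    if service.isSupported {
        service.generateKey { keyID, error in
            guard let keyID else { return }
            // challengeHash: hash of a server-provided challenge (assumed)
            service.attestKey(keyID, clientDataHash: challengeHash) { attestation, error in
                // send the attestation to my server, which verifies it with Apple
            }
        }
    }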
I admire Apple's dedication to privacy and security, but as a new developer I feel Apple could make it easier for its app developers to discover and implement the latest best practices.
I'm a new developer who is looking to make my first app easier to manage on my end by staying in the Apple ecosystem. My ideal backend is just pure and simple CloudKit. This should help me cut down on costs and increase my security, or so I thought.
The more I looked into the issue of mobile app security--more specifically, preventing fraudulent access to backend APIs--the more it seems like CloudKit is a disaster waiting to happen. While data in transit is encrypted, and there's even end-to-end encryption for private databases, securing an app's public database in the presence of modified apps is a daunting, if not impossible, task. My assumption is that a modified app cannot be trusted to make honest assertions about itself, the device, or its iCloud account, and can potentially lie its way into restricted areas of the database. If an app is compromised, that app can make malicious queries against, or even changes to, the databases.
I'm hoping App Attest, even with its potentially circular logic, can at least make life harder for fraudsters, competitors, and vandals (when combined with other security measures like jailbreak, debugging, hooking, and tampering detections), but I have not found a single mention on how App Attest might be used to protect CloudKit. There doesn't even seem to be a verified way for me to build a third party server that can handle App Attest and then tell CloudKit to allow a user through (with all the security hazards a new developer faces when configuring an authentication server). The message seems to be: App Attest is important, but you can't use it with CloudKit, so build your own server.
Questions
Is my assumption that a compromised app can make malicious queries or changes to an app's CloudKit DB correct?
Can App Attest be made to protect a CloudKit public DB, with or without the involvement of a third-party server to handle attestations?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by the official algorithms (.lzfse, .lz4, .lzbitmap... not even bz2), but I realized they're well suited to compression by video codecs, since the images are highly similar to one another.
I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.
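The writer setup from that experiment looked roughly like this (a sketch; dimensions are placeholders, movURL is assumed, and the pixel-buffer plumbing is omitted):

    import AVFoundation

    let writer = try AVAssetWriter(outputURL: movURL, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.hevcWithAlpha,
        AVVideoWidthKey: 1024,
        AVVideoHeightKey: 1024
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    writer.add(input)
    // PNG frames are appended via an AVAssetWriterInputPixelBufferAdaptor,
    // then re-extracted later with AVAssetImageGenerator.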
Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage.
Any suggestions on how to reduce the storage size of many large PNGs are also welcome. I also tried using HEIC instead of PNG via the new UIImage.heicData(), but the decompression/processing times were insane (a 5000%+ increase), on top of fatal errors when using async.
I'm trying to add JPEG XL encoding/decoding capabilities to my app and haven't been able to find a trustworthy pre-compiled version. The only one I've found is https://github.com/awxkee/jxl-coder-swift.
As a result, I've been trying to compile my own iOS version from the reference implementation (https://github.com/libjxl/libjxl), having done virtually no compiling before. When I started out, my gut said, "Compiling for a different platform should be easy, since it's not like I'm actually writing or modifying the implementation," but the more I research and try, the more doubtful I've become. So far I've figured out that it means compiling all the dependencies (brotli, highway, libpng, skcms, etc.) too, but I've gotten nowhere with them either, having tried my hand at modifying CMake toolchains and CMakeLists.txt files.
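For reference, the kind of invocation I've been attempting (a sketch of a generic CMake cross-compile for iOS; the flags are my guesses, not a verified libjxl recipe):

    cmake -B build-ios -G Xcode \
        -DCMAKE_SYSTEM_NAME=iOS \
        -DCMAKE_OSX_DEPLOYMENT_TARGET=15.0 \
        -DCMAKE_OSX_ARCHITECTURES=arm64 \
        -DBUILD_SHARED_LIBS=OFF \
        -DBUILD_TESTING=OFF
    cmake --build build-ios --config Release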
As a novice, am I biting off more than I can chew with this? Is the seemingly simple task of "compile this C++ library for iOS" actually something freelancers charge large amounts for? (If so, that makes the free compiled version mentioned above even more questionable.)
Any help or pointers would be greatly appreciated.
I'm a new app developer and I've read through most relevant posts on this topic here and elsewhere. Many of the forum posts here are specific to Objective-C, or old enough to be considered outdated in the fast-moving world of computing. Many of the posts elsewhere are about protecting authentication secrets, which doesn't apply in my case, and a lot are by someone with a product to sell, which I've ignored.
My app is 99.9% Swift, and I'm not going to store any authentication secrets in the IPA. What I'd like to protect is the core mechanism of my product, which has to be included in the binary and is small (< 10k lines). I want stealing the source code to be harder than recreating my functionality from scratch, which is difficult even with the app in hand.
From what I gathered, Swift code compiled by Xcode is protected from reverse engineering / decompilation by the following:
Symbol stripping of the app
Native builds from Xcode discard the names of variables, functions, etc.
Swift code is compiled in a way that makes it harder to steal than Objective-C code
This should make me feel better, but the threat level keeps rising with the availability of free, commercial-grade decompilers (e.g. Ghidra) and machine learning. The fact that iOS 18 still supports a checkm8-vulnerable (i.e. jailbreakable) device means that decrypting the IPA from memory remains trivial.
Questions
People talk about stealing authentication secrets via reverse-engineering, but is the same true for mechanisms (i.e. code)?
How common is the issue of source-code stealing in iOS apps?
Can machine learning be leveraged to make decompilation/reverse engineering easier?
Will I get rejected by App Review for obfuscating a small portion of my code?
If I encrypt user data with Apple's newly released homomorphic encryption package and send it to servers I control for analysis, how would that affect the privacy label for that app?
E.g., if my app collected usage data plus identifiers and then sent it off for analysis, would I be allowed to say that we don't collect information linked to the user? Would the relevant fields also be automatically excluded from the "Data used to track you" section?
Is it possible for even things once considered inextricably tied to a user's identity (e.g. purchases in an in-app marketplace) to count as "not linked" under Apple's rules?
How would I prove to Apple that the relevant information is indeed homomorphically encrypted?