Reply to iOS BLE questions
In our tests with iOS devices acting as the peripheral: in an iPhone 16 (peripheral) to NB (central) scenario, a 15 ms connection interval gives about 500 kbps of throughput, and a 60 ms connection interval gives about 200 kbps. In an iPhone 16 (peripheral) to iPhone 13 Pro (central) scenario, the throughput is roughly 200 kbps. In these cases, are we still limited to only 4 packets per connection event, and is the throughput related to that limit?
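For what it's worth, a back-of-the-envelope check is roughly consistent with the 15 ms figure, if we assume (hypothetically) 4 packets per connection event and a 244-byte ATT notification payload with DLE:

```swift
// Rough notification-throughput estimate. The 4-packets-per-event cap and
// the 244-byte payload are assumptions, not confirmed iOS behavior.
func estimatedKbps(intervalSeconds: Double,
                   packetsPerEvent: Int = 4,
                   payloadBytes: Int = 244) -> Double {
    let bytesPerSecond = Double(packetsPerEvent * payloadBytes) / intervalSeconds
    return bytesPerSecond * 8.0 / 1000.0
}

print(estimatedKbps(intervalSeconds: 0.015)) // ≈ 520 kbps, close to the measured 500 kbps
print(estimatedKbps(intervalSeconds: 0.060)) // ≈ 130 kbps
```

At 60 ms the estimate undershoots the measured 200 kbps, which would suggest more than 4 packets per event in that case.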
Topic: App & System Services SubTopic: Core OS
Feb ’25
Reply to iOS Peripheral Throughput is low due to updateValue return false
I'll move the comment here: sorry, I forgot to mention that the peripheralManagerIsReady callback latency ranges from about 0.01 ms to 250 ms. Is that kind of instability normal? If iOS limits the packet count to, say, 4 per connection event, then with a 30 ms connection interval I can send only 4 packets every 30 ms, is that right? I'm trying to track down the low-throughput issue. Also, does iOS support DLE (Data Length Extension)? If so, how do I enable it?
Topic: App & System Services SubTopic: Core OS
Feb ’25
Reply to iOS Peripheral Throughput is low due to updateValue return false
Additionally, we've observed that even when the data is only 1 byte, the first call to updateValue returns true, but all subsequent calls return false. We must wait for the peripheralManagerIsReady callback before we can call updateValue again—otherwise, the second call still returns false. Even if we wait 5 ms (using usleep(5000)) after the first updateValue call before trying again, it still fails. Could you please confirm whether this behavior is expected?
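The wait-for-ready pattern described above can be sketched with the CoreBluetooth calls abstracted away; here `send` stands in for updateValue(_:for:onSubscribedCentrals:) and `didBecomeReady()` would be called from peripheralManagerIsReady(toUpdateSubscribers:). The type and names are hypothetical:

```swift
import Foundation

// Queues chunks and retries only after the "ready" callback, instead of
// polling updateValue in a loop or sleeping.
final class NotificationQueue {
    private var pending: [Data] = []
    private let send: (Data) -> Bool   // returns false when the internal queue is full

    init(send: @escaping (Data) -> Bool) { self.send = send }

    func enqueue(_ chunk: Data) {
        pending.append(chunk)
        drain()
    }

    // Call this from peripheralManagerIsReady(toUpdateSubscribers:).
    func didBecomeReady() { drain() }

    private func drain() {
        while let chunk = pending.first {
            guard send(chunk) else { return }  // stop on false; resume on ready
            pending.removeFirst()
        }
    }
}
```

This matches the documented contract: once updateValue returns false, further calls are expected to fail until the ready callback fires, so sleeping a fixed 5 ms cannot help.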
Topic: App & System Services SubTopic: Core OS
Feb ’25
Reply to iOS Peripheral Throughput is low due to updateValue return false
Hi, Thanks for your response. I understand that connection parameters are negotiated automatically and we can't directly control them from the app. We're testing with BLE 5.2 using 2M PHY, but our throughput (~206 kbps) is far below the expected values (around 1300 kbps). We haven't identified any implementation errors on our side. Could this low throughput be due to iOS’s internal flow control when acting as a Peripheral? Or is there something else we're missing? Would implementing L2CAP channels be the recommended solution to boost throughput? Any guidance is greatly appreciated. Thanks!
Topic: App & System Services SubTopic: Core OS
Feb ’25
Reply to Apple Watch CMMotionManager acceleration direction
Sorry, I uploaded the wrong file; it contains acceleration data affected by gravity and other factors, including noise, which is likely why the acceleration retains a nonzero value. Here, the raw data was logged via CMMotionManager.startDeviceMotionUpdates using userAcceleration: move_left_to_right_motion.txt You can see that when I move the watch from left to right, the initial value of acceleration.x is negative; then, once the watch stops moving, acceleration.x becomes positive.
Topic: App & System Services SubTopic: General
Jan ’25
Reply to MultiThreaded rendering with actor
Hi, thanks for your explanation! Ideally, I would like to draw as many captured frames as possible. However, I understand that if processing speed isn’t sufficient, I may need to drop some frames to keep up with real-time rendering. That said, my goal is definitely not to draw only the latest frame, as I want to preserve as much of the original capture data as possible. Let me know if this aligns with what you’re asking!
Topic: Programming Languages SubTopic: Swift
Nov ’24
Reply to Core ML Async API Seems to Not Work Properly
I updated to a version that runs without crashing, but the prediction speed is almost the same as with the sync API. createFrameAsync is called from the ScreenCaptureKit stream:

private func createFrameAsync(for sampleBuffer: CMSampleBuffer) {
    if let surface = getIOSurface(for: sampleBuffer) {
        Task {
            do {
                try await runModelAsync(surface)
            } catch {
                os_log("error: \(error)")
            }
        }
    }
}

func runModelAsync(_ surface: IOSurface) async throws {
    try Task.checkCancellation()
    guard let model = mlmodel else { return }
    do {
        // Resize input
        var px: Unmanaged<CVPixelBuffer>?
        let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
        guard status == kCVReturnSuccess, let px2 = px?.takeRetainedValue() else { return }
        guard let data = resizeIOSurfaceIntoPixelBuffer(
            of: px2,
            from: CGRect(x: 0, y: 0, width: InputWidth, height: InputHeight)
        ) else { return }

        // Model prediction
        var results: [Float] = []
        let inferenceStartTime = Date()
        let input = model_smallInput(input: data)
        let prediction = try await model.model.prediction(from: input)

        // Convert the result into the expected format
        if let output = prediction.featureValue(for: "output")?.multiArrayValue {
            if let bufferPointer = try? UnsafeBufferPointer<Float>(output) {
                results = Array(bufferPointer)
            }
        }

        // Set render data for Metal rendering
        await ScreenRecorder.shared
            .setRenderDataNormalized(surface: surface, depthData: results)
    } catch {
        print("Error performing inference: \(error)")
    }
}

Since the async prediction API cannot speed up the prediction, is there anything else I can do? The prediction time is almost the same on a MacBook M2 Pro and a MacBook M1 Air!
Topic: Machine Learning & AI SubTopic: Core ML
Oct ’24
Reply to Apple Watch CMMotionManager acceleration direction
Thank you for your reply. So if I use the acceleration values to calculate the Apple Watch's movement, should I negate them to get the result? For example, with trapezoidal integration over one sample period Δt: V1 = V0 + (A0 + (A1 − A0) / 2) · Δt, and M = (V0 + (V1 − V0) / 2) · Δt. Should A0 and A1 become −A0 and −A1?
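The update above can be sketched as a small function; the explicit dt factor and the axisSign parameter (to try the proposed negation) are my additions:

```swift
// One trapezoidal integration step for a single axis.
// axisSign = -1 applies the proposed negation of A0 and A1;
// dt is the sample period in seconds.
func integrateStep(v0: Double, a0: Double, a1: Double,
                   dt: Double, axisSign: Double = 1.0)
    -> (v1: Double, displacement: Double) {
    let v1 = v0 + axisSign * (a0 + (a1 - a0) / 2) * dt
    let displacement = (v0 + (v1 - v0) / 2) * dt
    return (v1, displacement)
}
```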
Topic: App & System Services SubTopic: General
Jan ’25
Reply to Apple Watch CMMotionManager acceleration direction
I've added the raw data for the watch moving horizontally from left to right: raw.csv
Topic: App & System Services SubTopic: General
Jan ’25
Reply to Apple Watch CMMotionManager acceleration direction
I've noticed that when stopping the watch suddenly during movement, it seems to keep the last acceleration value. Is this expected, and can I still reliably use acceleration data for movement calculations, or should I look into alternative methods?
Topic: App & System Services SubTopic: General
Jan ’25
Reply to MultiThreaded rendering with actor
draw(in:) is a callback from MTKViewDelegate; it's invoked whenever the OS signals that the view needs an update, so the drawing cadence is driven by the OS.
Topic: Programming Languages SubTopic: Swift
Nov ’24
Reply to Core ML Model Prediction in 120 FPS faster than 60 FPS
I've noticed that the processing time varies each time Core ML prediction is called, and the difference can exceed 5 ms. Is this normal? Additionally, I've observed that when P-CPU (performance core) utilization increases, Neural Engine utilization also increases, which in turn reduces the prediction time. Is this behavior also normal?
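One way to make such comparisons less sensitive to run-to-run variance is to report the median of repeated timings rather than a single sample; this is a plain timing helper of my own, not a Core ML API:

```swift
import Foundation

// Runs `body` repeatedly and returns the median wall-clock latency in
// seconds, which is less sensitive to an occasional >5 ms outlier.
func medianLatency(runs: Int, _ body: () -> Void) -> TimeInterval {
    var samples: [TimeInterval] = []
    for _ in 0..<runs {
        let start = Date()
        body()
        samples.append(Date().timeIntervalSince(start))
    }
    return samples.sorted()[samples.count / 2]
}

// Usage sketch: wrap the prediction call.
// let t = medianLatency(runs: 20) { _ = try? model.prediction(from: input) }
```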
Topic: Machine Learning & AI SubTopic: Core ML
Nov ’24
Reply to MultiThreaded rendering with actor
Yes. I also print the timestamp after commitFrame, and it is sequential. That seems very strange to me, and I can't figure out where the problem is.
Topic: Programming Languages SubTopic: Swift
Nov ’24
Reply to Core ML Model Performance report shows prediction speed much faster than actual app runs
Hi, I think I'm doing the async part wrong. My app captures the screen with ScreenCaptureKit, uses a Core ML model to convert its style, and then draws on a Metal view. Since the screenshots must stay in order, this situation might not be able to benefit from async prediction. Is it still possible to speed up the prediction?
Topic: Machine Learning & AI SubTopic: Core ML
Oct ’24
Reply to Core ML Model Performance report shows prediction speed much faster than actual app runs
Thank you for your insight. That's a good point about the potential thermal throttling issue. I'm curious about how we can maintain efficient execution while avoiding thermal throttling. Do you have any recommendations for optimizing the prediction runs to balance performance and thermal management?
Topic: Machine Learning & AI SubTopic: Core ML
Oct ’24