Hi,
I've seen that Apple Technical Q&A QA1931 says: "Note: If Bluetooth Low Energy HID is one of the connected services of an accessory, connection interval down to 11.25 ms may be accepted by the Apple product."
But in the real world, is there any iOS device that actually accepts a connection interval as low as 11.25 ms?
I want to fully control the BLE send timing myself, instead of relying on the iOS system.
When I try to send a large data block using BLE (i.e., splitting it and sending multiple times), I follow Apple’s official guide. However, after the first successful updateValue() call in my loop, the second call always fails unless I wait for the peripheralManagerIsReady(toUpdateSubscribers:) callback. This callback timing is managed by the system, so I can’t control exactly when I can send the next packet.
If I send data manually by tapping a button, updateValue() always returns true, even if I add a long delay (like sleep(10)) between calls. But in a loop, after the first send, updateValue() returns false until the thread exits or the callback occurs. I suspect a thread or queue issue is blocking the subsequent sends.
I also tried using DispatchQueue.global().async {} in the loop, but the result is the same.
Is there any way to fully control when I call updateValue(), without waiting for peripheralManagerIsReady(toUpdateSubscribers:)? Or is this a limitation of iOS BLE?
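For reference, here is a minimal sketch of the queue-and-wait pattern I'm currently forced to use. The class and property names (ChunkedNotifier, send(_:chunkSize:), etc.) are placeholders, and it assumes this object is set as the CBPeripheralManager's delegate and that the characteristic has notifications enabled:

import CoreBluetooth

// Minimal sketch: split a payload into chunks and drain them through updateValue(),
// resuming from peripheralManagerIsReady(toUpdateSubscribers:) when the queue was full.
final class ChunkedNotifier: NSObject, CBPeripheralManagerDelegate {
    private let peripheralManager: CBPeripheralManager
    private let characteristic: CBMutableCharacteristic
    private var pendingChunks: [Data] = []

    init(peripheralManager: CBPeripheralManager, characteristic: CBMutableCharacteristic) {
        self.peripheralManager = peripheralManager
        self.characteristic = characteristic
        super.init()
    }

    // Split the payload and start sending.
    func send(_ payload: Data, chunkSize: Int = 180) {
        pendingChunks = stride(from: 0, to: payload.count, by: chunkSize).map { offset in
            payload.subdata(in: offset..<min(offset + chunkSize, payload.count))
        }
        drainQueue()
    }

    // Keep sending until updateValue() returns false (internal transmit queue is full).
    private func drainQueue() {
        while let chunk = pendingChunks.first {
            guard peripheralManager.updateValue(chunk, for: characteristic, onSubscribedCentrals: nil) else {
                return // wait for peripheralManagerIsReady(toUpdateSubscribers:)
            }
            pendingChunks.removeFirst()
        }
    }

    // System callback: the transmit queue has space again, so resume sending.
    func peripheralManagerIsReady(toUpdateSubscribers peripheral: CBPeripheralManager) {
        drainQueue()
    }

    // Required delegate method.
    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) { }
}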
Thank you!
I'm using the "Converting side-by-side 3D video to multiview HEVC and spatial video" sample code on iOS. It takes about 8 seconds to convert a 6-second video. At this rate, a 1-hour video would take 1.3 hours to convert.
How can I speed up the conversion?
BTW, are there any solutions for converting side-by-side 3D video to spatial video on Windows?
I want to try a Core ML model that accepts image input of any resolution.
So I converted the model following the Core ML Tools "Set the Range for Each Dimension" sample code, modified as below:
import torch
import coremltools as ct

# Trace the model with random input (`model` is the PyTorch model being converted).
example_input = torch.rand(1, 3, 50, 50)
traced_model = torch.jit.trace(model.eval(), example_input)

# Set the input_shape to use RangeDim for each spatial dimension.
input_shape = ct.Shape(shape=(1,
                              3,
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45),
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45)))

scale = 1 / (0.226 * 255.0)
bias = [-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225]

# Convert the model with input_shape.
mlmodel = ct.convert(traced_model,
                     inputs=[ct.ImageType(shape=input_shape, name="input", scale=scale, bias=bias)],
                     outputs=[ct.TensorType(name="output")],
                     convert_to="mlprogram")

# Save the Core ML model.
mlmodel.save("image_resize_model.mlpackage")
It converts OK, but when I run predict() on it with an image, I get the error below:
You will not be able to run predict() on this Core ML model. Underlying exception message was: {
NSLocalizedDescription = "Failed to build the model execution plan using a model architecture file '/private/var/folders/8z/vtz02xrj781dxvz1v750skz40000gp/T/model-small.mlmodelc/model.mil' with error code: -7.";
}
What did I do wrong?
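For context, once the conversion works I'd expect to run it on-device roughly like this. This is only a sketch: predict(modelURL:pixelBuffer:) and its parameters are placeholders, while "input" and "output" match the feature names in the ct.convert() call above:

import CoreML
import CoreVideo

// Sketch: run the flexible-shape model on an arbitrary-resolution image.
// "input" and "output" are the feature names used in ct.convert() above.
func predict(modelURL: URL, pixelBuffer: CVPixelBuffer) throws -> MLMultiArray? {
    // Compile and load the .mlpackage (in an app, Xcode normally compiles it at build time).
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let model = try MLModel(contentsOf: compiledURL, configuration: MLModelConfiguration())

    // Wrap the pixel buffer as the "input" image; any size within the RangeDim bounds.
    let provider = try MLDictionaryFeatureProvider(
        dictionary: ["input": MLFeatureValue(pixelBuffer: pixelBuffer)])

    // Run the prediction and return the "output" tensor.
    let result = try model.prediction(from: provider)
    return result.featureValue(for: "output")?.multiArrayValue
}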
Hi all,
I tried the "isSpatialVideoCaptureEnabled" with AVCaptureMovieFileOutput mentioned in WWDC24: Build compelling spatial photo and video experiences, and it works.
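For reference, this is roughly how I enable it, based on my understanding of the session. It's only a sketch: enableSpatialCapture(on:videoDevice:) is a placeholder, and the capture session/input setup is omitted:

import AVFoundation

// Sketch: enable spatial video capture on a movie file output.
// Assumes the capture session is already configured with a supported camera (e.g. the dual wide camera).
func enableSpatialCapture(on movieFileOutput: AVCaptureMovieFileOutput,
                          videoDevice: AVCaptureDevice) throws {
    // Select an active format that supports spatial video capture.
    if let format = videoDevice.formats.first(where: { $0.isSpatialVideoCaptureSupported }) {
        try videoDevice.lockForConfiguration()
        videoDevice.activeFormat = format
        videoDevice.unlockForConfiguration()
    }

    // Turn on spatial video capture on the output.
    if movieFileOutput.isSpatialVideoCaptureSupported {
        movieFileOutput.isSpatialVideoCaptureEnabled = true
    }
}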
But there are some issues and questions:
In the code below, change.newValue is always nil, so it doesn't seem to work:
let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons) { (device, change) in
    guard let newValue = change.newValue else { return }
    if newValue.contains(.subjectTooClose) {
        // Guide user to move back
    }
    if newValue.contains(.notEnoughLight) {
        // Guide user to find a brighter environment
    }
}
AVCaptureMovieFileOutput supports spatial video capture. May I ask whether AVCaptureVideoDataOutput will also support it?
The WWDC session "Deploy machine learning and AI models on-device with Core ML" says the performance report shows which unit each op runs on and why an op cannot run on the Neural Engine.
I tested my model and the report shows a gray checkmark at the Neural Engine, indicating it can run on the Neural Engine. However, it's not executing on the Neural Engine but on the CPU. Why is this happening?
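For reference, this is roughly how I load the model, so compute units are not restricted on my side (a sketch; loadModel(at:) and modelURL are placeholders):

import CoreML

// Sketch: load the model without restricting compute units, so the Neural Engine is allowed.
func loadModel(at modelURL: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all   // allows CPU, GPU, and Neural Engine
    return try MLModel(contentsOf: modelURL, configuration: configuration)
}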
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black, and I see these errors in the console:
<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)
Did I miss something?