I'm using an AVCaptureSession to send video and audio samples to an AVAssetWriter. When I play back the resultant video, sometimes there is a significant lag between the audio and the video, so they're just not in sync. But sometimes they are, with the same code.
If I look at the very first presentation time stamps of the buffers being sent to the delegate, via
func captureOutput(_: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
I see something like this:
Adding audio samples for pts time 227711.0855328798,
Adding video samples for pts time 227710.778785374
That is, the audio and video clocks are offset: the first audio sample I receive has a timestamp of 11.08 something, while the first video sample is earlier in time, at 10.77 something. The times are the presentation time stamps of the buffers, and the outputPresentationTimeStamp is the exact same number.
It feels like the video and audio clocks are just mismatched.
This doesn't always happen: sometimes they're synced. Sometimes they're not.
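For reference, here's roughly how I'm producing those log lines: a simplified sketch of my delegate callback, with the AVAssetWriter setup and the actual sample appending omitted.
import AVFoundation

// Simplified sketch of my delegate callback (it lives in my capture delegate class);
// the AVAssetWriter setup and the actual appending are omitted here.
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    let pts = sampleBuffer.presentationTimeStamp
    if output is AVCaptureAudioDataOutput {
        print("Adding audio samples for pts time \(pts.seconds)")
    } else if output is AVCaptureVideoDataOutput {
        print("Adding video samples for pts time \(pts.seconds)")
    }
    // ...the buffer then goes to the corresponding AVAssetWriterInput.
}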
Any ideas? The device I'm recording from is a webcam, on iPadOS, connected via the USB-C port.
This is an issue with the Insta360 Flow Pro 2.
My iOS app uses DockKit to control the gimbal; in particular, my app disables tracking and sends angular velocity commands to control the gimbal's orientation. I only try to modify the yaw (rotation around the vertical axis), never the pitch or roll. Note that I don't send the gimbal to a particular orientation directly; I modify the velocity.
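Concretely, the commands look roughly like this (a simplified sketch; I believe setAngularVelocity takes a Spatial Vector3D, and which component maps to yaw is my assumption, but in any case I only ever drive that one axis):
import DockKit
import Spatial

// Simplified sketch of my setup. Tracking is disabled once, up front, and then
// I repeatedly send yaw-only velocity commands. Which Vector3D component maps
// to yaw is my assumption here; in any case I only ever drive that one axis.
func disableTracking() async throws {
    try await DockAccessoryManager.shared.setSystemTrackingEnabled(false)
}

func sendYawVelocity(_ radiansPerSecond: Double, to accessory: DockAccessory) async throws {
    try await accessory.setAngularVelocity(Vector3D(x: 0, y: radiansPerSecond, z: 0))
}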
Everything works great for a long period of time: typically a continuous run of 4-6 hours; in the most recent case, I managed about 36 hours of continuous operation before the following problem occurred.
I came back to check on the system, and because no visual activity had occurred in the camera's field of view for a while, the phone had commanded the gimbal to rotate back to a yaw angle of 0 degrees.
So the phone in the gimbal should have been looking straight ahead (i.e. the 0 degree yaw position), but it was definitely looking off at an angle. I've seen this twice now. The first time, when it should have been looking straight ahead, it was in fact looking 60 degrees off center. This time (caught on video, see below), it was off by 22 degrees from center.
Here's the weird part: the gimbal reports this way-off-center position as zero degrees (well, close enough to zero, like 0.2, which is fine). But, mechanically, the gimbal still knows where zero degrees is: if we double-click the trigger on the Flow Pro 2, which is supposed to reset the gimbal to 0 degrees yaw and pitch, the gimbal responds correctly and reorients to the 0 degree position. However, the yaw values it then reports are not zero but, as shown in my video, about 22 degrees off axis.
Power cycling the gimbal and restarting immediately fixes the problem. Also, I switched from my app to the Insta360 app, which caused the phone to flip from landscape to portrait, then when I returned to my app and switched back to landscape, the gimbal now started reporting correct yaw angles.
Is there a possibility this is a bug in the DockKit framework? Has anyone seen this? I have a case open with Insta360, but although it's clearly a software issue, it's not clear if it's in Insta360's code or the DockKit layer. Any ideas for how I can get out of this mode? My concern is that the phone is on a tripod about 10' off the floor and not very accessible. Also, if all goes well, we may have about 50 of these systems running, and having to fix them one by one after a few hours is not good.
For a demonstration of this bug, see the following video:
https://octoparry.com/offset.MOV
Any help greatly appreciated.
I have two DockKit anomalies to report. Hoping a DTS person has seen these and/or can comment.
First, my setup: I am controlling the accessory by making repeated calls to set the angular velocity. And the first thing I do is make a call
dockManager.setSystemTrackingEnabled(false)
because I'm doing my own tracking.
I would note that I tried calling track() on my own, with a bunch of observation rectangles (or even just one), but it didn't work well, even though I was calling it at the correct rate. Instead, I measure the angular deviation between where the camera is pointed and where I wish it were pointed, and set the angular velocity proportional to the error.
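In simplified form, the control law is just a clamped proportional term; the result goes out as the yaw component of the angular velocity on every cycle. (The gain and speed limit below are made-up illustration values, not anything from DockKit.)
// Clamped proportional control: yaw velocity proportional to the angular error.
// The gain and speed limit are illustrative tuning numbers, not DockKit values.
func yawVelocityCommand(desiredYaw: Double, currentYaw: Double) -> Double {
    let gain = 1.5          // (rad/s) per radian of error
    let maxSpeed = 0.8      // rad/s, so the gimbal never whips around
    let error = desiredYaw - currentYaw
    return min(max(gain * error, -maxSpeed), maxSpeed)
}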
First issue: in normal operation, the green tracking light on the hardware (the Insta360 Flow Pro motorized dock) is on. Squeezing the trigger toggles the green light on/off; only when the light is on will the dock accept my calls to set the angular velocity. Fine.
But sometimes squeezing the trigger won't reactivate the green light. In this case, the ONLY thing that seems to work is switching to the Insta360 app and activating the camera. Immediately the green light turns on, and I'm good (and can return to my own app with the green light still on).
So what hidden API call does Insta360 have, that I don't, that can make this happen? Sure, it's their own app, but I imagine they don't have access to calls I don't, so how does their app manage to get the green light back on?
It doesn't always happen. Would love to know how to snap out of this.
Second issue: while I usually use rectangles from the vision system to guide the camera position, sometimes I let the user directly control the angular "yaw" velocity (rotation around the vertical axis) by issuing commands over the network.
Sometimes, when the user sets a non-zero velocity and then sets a zero velocity a short time later, the camera doesn't immediately respond and stop. (It's not a network issue: I can verify the API sends a call to set the angular velocity to zero, and the camera keeps rotating for a good fraction of a second.) Most times the camera stops immediately, but sometimes it doesn't.
Oddly, I never see this issue when letting the user set the angular velocity in the "pitch up/down" axis. Just the yaw axis.
Anybody else seen this? I feel like it wasn't a problem until I got to iOS 18, but I won't swear to it.
Any advice/assistance/discussion greatly appreciated.
So I have a small homebuilt device that has a simple Arduino-like chip with wifi capabilities (to be precise, the Xiao Seeed ESP32C, for anyone who cares), and I need my iOS app to talk to this device.
Using the CoreBluetooth framework, we've had no problems --- except that in "noisy" environments sometimes we have disconnects. So we want to try wifi.
We assume that there is no public wifi network available. We'd love to do peer-to-peer networking using Network, but that's only if both devices are from Apple. They're not.
Now, the Xiao device can act as an access point, and presumably I could put my iPhone on that network and use regular TCP calls to talk to it. The problem is that my app needs to both talk to this home-built device AND make HTTP calls to my server on Amazon.
So: how do I let my iOS app talk over wifi to this simple chip, while not losing the ability to have my app reach a general server (and receive push notifications, etc.)?
To be more concrete, imagine that my app needs to be able to discover the access point provided by my device and use low-level TCP socket calls to talk to this local wifi device, all without losing the ability to make general HTTP calls and remain reachable by push notifications, just as it was before connecting to this purely local (and very short range, i.e. no more than 30 meters distant) device.
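To illustrate what I mean by low-level TCP socket calls, here's the sort of thing I have in mind once the phone has joined the device's access point; the address and port are hypothetical placeholders for whatever the ESP32 ends up listening on:
import Network

// Hypothetical address/port for the ESP32's soft-AP network.
let connection = NWConnection(host: "192.168.4.1", port: 8080, using: .tcp)

connection.stateUpdateHandler = { state in
    print("local device connection state: \(state)")
}
connection.start(queue: .main)

// Fire off a test message (simplified: no retry or framing).
connection.send(content: Data("hello".utf8), completion: .contentProcessed { error in
    if let error { print("send failed: \(error)") }
})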
Does this make sense? Have I explained it well enough?
This bites me a lot. I'm looking at the documentation for, say, UNUserNotificationCenter.
And NOWHERE but NOWHERE do I see anything that says, "hey, on platform XXX you should import YYY to use this class."
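Concretely, for this particular class I believe the answer is simply:
import UserNotifications   // where UNUserNotificationCenter lives, as far as I know

let center = UNUserNotificationCenter.current()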
Am I just not looking in the right place in Apple documentation to find this?
Surely, somewhere at the top level of the documentation, it must tell you what the proper module to import is, per platform?
Let's say I have an iOS app on the App Store. Anyone can download and use it, but I would like to restrict access to certain features to a select set of people I can personally vouch for. So, for example, to get access, the app sends an email to me, you have to convince me I know you, and if you do, I send you back some kind of token string which you can enter into the app.
However, I'd like for that token to not be shareable, and to be locked to that device.
Is there any kind of persistent ID associated with a device that I can use to tie the token I grant to that persistent ID?
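For illustration, something like identifierForVendor might be the kind of ID I mean, though I understand it resets if all of my apps are removed from the device, so I'm not sure it qualifies as persistent:
import UIKit

// Stable while at least one of my apps remains installed on the device;
// resets if they are all removed. Whether that's "persistent" enough is my question.
let deviceID = UIDevice.current.identifierForVendor?.uuidString
// Idea: bind the token I issue (server-side) to this ID, and have the app
// present both when unlocking the restricted features.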
Or can someone suggest a way that, once I trust a user, I can give them a token which cannot be shared with anyone else?
Also, does anyone know if restricting access to app features in this way is any kind of issue with regards to the app review process? The app itself is free, and there are no in-app purchases. I simply don't want certain features of the app (which end up sending push notifications) to get abused.
I'm brand new to Metal. I've googled, but can't get the right answer to come up. (Thanks, unhelpful ChatGPT-generated answers polluting everything, but I digress...)
Ultimately, I'm trying to figure out how to use Metal to render 3D DICOM data on iOS specifically. If you're not familiar with DICOM, let's just say I've got a whole stack of CT image slices. Or to get really simple, I've got a cube of voxel values with differing values at each voxel coordinate.
Where do I even start in Metal to render something like this?
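To make the voxel data concrete: I assume the first step is to get the cube into something like a 3D MTLTexture, along these lines (just my guess at a starting point, with an 8-bit format and made-up dimensions; the actual rendering/ray-marching part is what I can't figure out):
import Metal

// My guess at step one: pack the CT slices into a 3D texture a shader can sample.
func makeVolumeTexture(device: MTLDevice, voxels: [UInt8],
                       width: Int, height: Int, depth: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor()
    desc.textureType = .type3D
    desc.pixelFormat = .r8Unorm       // one 8-bit value per voxel, for illustration
    desc.width = width
    desc.height = height
    desc.depth = depth
    desc.usage = .shaderRead

    guard voxels.count == width * height * depth,
          let texture = device.makeTexture(descriptor: desc) else { return nil }

    voxels.withUnsafeBytes { bytes in
        texture.replace(region: MTLRegionMake3D(0, 0, 0, width, height, depth),
                        mipmapLevel: 0,
                        slice: 0,
                        withBytes: bytes.baseAddress!,
                        bytesPerRow: width,
                        bytesPerImage: width * height)
    }
    return texture
}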
(I was trying to get the VTK toolkit compiled for iOS, which uses OpenGL, but that appears to be a dead end. And besides, Metal is supposed to be so much better.)
Thanks for any tips/leads/suggestions/general pointers.
In Xcode 7.2 (and possibly earlier versions of Xcode 7.3, though I can't swear to it) it seems like "local" project results (e.g. files in my project) were favored by being put at the top of the completion list. For example, if I had the file "DirectoryViewController.swift", then typing "dire" into the Open Quickly search box would put my local file at the top, and system stuff like "dirent.h" etc. at the bottom.
Now in Xcode 7.3 I find that Open Quickly is swamping me with results that come from stuff used by my project (e.g. names of public variables in Swift classes from Foundation or UIKit) rather than stuff that I defined in my project (my own file names/public methods). Everything is being found, but local results no longer appear at the top.
Is this intentional? Any chance the order could revert to what it was? A preference?
Since there's far more global stuff (that one didn't write) than local stuff, it's very disconcerting that Open Quickly has become much less useful to me (I have to type many more characters to get what I want near the top of the list) than it was in previous versions.
Any suggestions welcome (as are any fixes planned!)