The title says it all: Xcode 15 freezes in so many cases that it has to be force quit and reopened, whether opening Swift packages or recognising connected devices. This happens on an Intel-based 2018 MacBook Pro. Is this a problem specific to Intel-based Macs, or is Xcode 15 simply buggy on all devices? For instance, I cannot open and build this project downloaded from GitHub: opening the file named MetalViewUI.swift in the project hangs Xcode forever.
I have been using MTKView to display CVPixelBuffers from the camera. I use the many options available for configuring the color space of the MTKView/CAMetalLayer that may be needed to tone map content to the display (CAEDRMetadata, for instance). If, however, I use AVSampleBufferDisplayLayer, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise in functionality?
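For context, this is roughly how pixel buffers get tagged so a consumer can color match them; a minimal sketch, assuming a BT.2020/HLG source (tagAsHLG is just an illustrative helper name):
import CoreVideo
// Sketch: tag a camera pixel buffer with its native color space so that
// AVSampleBufferDisplayLayer (or any other consumer) can tone map it.
func tagAsHLG(_ pixelBuffer: CVPixelBuffer) {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}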
I am embedding the SwiftUI VideoPlayer in a VStack and see that the screen goes black when the device is rotated (i.e. the content disappears, even though the video player itself autorotates). The issue happens even when I use AVPlayerViewController (as a UIViewControllerRepresentable). Is this a bug, or am I doing something wrong?
import SwiftUI
import AVKit
import AVFoundation

// View name is illustrative; only the body was posted originally.
struct PlayerView: View {
    var videoURL: URL
    let player = AVPlayer()

    var body: some View {
        VStack {
            VideoPlayer(player: player)
                .frame(maxWidth: .infinity)
                .frame(height: 300)
                .padding()
                .ignoresSafeArea()
                .background {
                    Color.black
                }
                .onTapGesture {
                    // Toggle between paused and playing.
                    player.rate = player.rate == 0.0 ? 1.0 : 0.0
                }
            Spacer()
        }
        .ignoresSafeArea()
        .background {
            Color.black
        }
        .onAppear {
            let audioSession = AVAudioSession.sharedInstance()
            do {
                try audioSession.setCategory(.playback, mode: .default, options: .duckOthers)
            } catch {
                NSLog("Unable to set session category to playback")
            }
            let playerItem = AVPlayerItem(url: videoURL)
            player.replaceCurrentItem(with: playerItem)
        }
    }
}
Dear StoreKit Engineers,
I recently migrated my app from a paid model to freemium and am using AppTransaction to get the original purchase version and original purchase date, to determine whether the user already paid for the app. It seems to work for normal App Store users, but I am now flooded with complaints from VPP users who previously purchased the app. The AppTransaction history appears to be absent for these users, and they are asked to sign in with an Apple ID.
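For reference, the check is essentially the following sketch, where "2.0" is a hypothetical placeholder for the first freemium version and a verification failure is treated as not having paid:
import StoreKit
// Sketch: decide whether the user bought the app before the freemium switch.
func hasPaidBefore() async throws -> Bool {
    let shared = try await AppTransaction.shared   // may require App Store sign-in
    guard case .verified(let appTransaction) = shared else { return false }
    return appTransaction.originalAppVersion
        .compare("2.0", options: .numeric) == .orderedAscending
}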
What is the solution here?
I am trying to recreate a discrete scrubber in SwiftUI, with haptic feedback and snapping to the nearest integer step. I use a ScrollView and LazyHStack as follows:
import SwiftUI

struct DiscreteScrubber: View {
    @State var numLines: Int = 100

    var body: some View {
        ScrollView(.horizontal, showsIndicators: false) {
            LazyHStack {
                ForEach(0..<numLines, id: \.self) { _ in
                    Rectangle().frame(width: 2, height: 10, alignment: .center)
                        .foregroundStyle(Color.red)
                    Spacer().frame(width: 10)
                }
            }
        }
    }
}
Problem: I need to add a content inset of half the frame width of the ScrollView, so that the first line of the scrubber starts at the center of the view (and likewise the last line ends there), and I also need to generate haptic feedback as it scrolls. This was easy in UIKit but is not obvious in SwiftUI.
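For what it's worth, a minimal sketch of one possible direction, assuming the iOS 17 scroll APIs (contentMargins, scrollPosition, sensoryFeedback) and a known view width:
import SwiftUI

// Sketch: center-aligned scrubber with selection haptics (iOS 17+ APIs).
// viewWidth is assumed to come from a GeometryReader or a fixed layout.
struct CenteredScrubber: View {
    let viewWidth: CGFloat = 340
    @State private var centeredIndex: Int?
    var body: some View {
        ScrollView(.horizontal, showsIndicators: false) {
            LazyHStack(spacing: 10) {
                ForEach(0..<100, id: \.self) { _ in
                    Rectangle()
                        .frame(width: 2, height: 10)
                        .foregroundStyle(.red)
                }
            }
            .scrollTargetLayout()
        }
        // Half-width margins let the first and last lines reach the center.
        .contentMargins(.horizontal, viewWidth / 2, for: .scrollContent)
        .scrollPosition(id: $centeredIndex)
        // Haptic tick whenever a new line lands in the center.
        .sensoryFeedback(.selection, trigger: centeredIndex)
    }
}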
I am trying to build a video editing timeline using SwiftUI, consisting of a series of trimmers (sample code for a trimmer below). I plan to embed multiple of these trimmers in an HStack to make the editing timeline (the HStack will in turn be embedded in a ScrollView). What I want to achieve is that if I trim either end of any of these trimmers by dragging its left/right edge, the other trimmers on the timeline should move left/right to fill the gap. I understand that as a view shrinks/expands during trimming, there may be a need for spacers at the edges of the HStack whose widths are adjusted while trimming is ongoing.
Right now the code I have for the trimmer uses a fixed frameWidth for the view, so the view occupies the full space even when the trimming handles move. It's not clear how to modify the code below to achieve what I want. This was entirely possible in UIKit, by the way.
import SwiftUI

struct SimpleTrimmer: View {
    @State var frameWidth: CGFloat = 300
    let minWidth: CGFloat = 30

    @State private var leftOffset: CGFloat = 0
    @State private var rightOffset: CGFloat = 0
    @GestureState private var leftDragOffset: CGFloat = 0
    @GestureState private var rightDragOffset: CGFloat = 0

    private var leftAdjustment: CGFloat {
        // NSLog("Left offset \(leftOffset + leftDragOffset)")
        var adjustment = max(0, leftOffset + leftDragOffset)
        if frameWidth - rightOffset - adjustment - 60 < minWidth {
            adjustment = frameWidth - rightOffset - minWidth - 60.0
        }
        return adjustment
    }

    private var rightAdjustment: CGFloat {
        var adjustment = max(0, rightOffset - rightDragOffset)
        if frameWidth - adjustment - leftOffset - 60 < minWidth {
            adjustment = frameWidth - leftOffset - 60 - minWidth
        }
        return adjustment
    }

    var body: some View {
        HStack(spacing: 10) {
            Image(systemName: "chevron.compact.left")
                .frame(width: 30, height: 70)
                .background(Color.blue)
                .offset(x: leftAdjustment)
                .gesture(
                    DragGesture(minimumDistance: 0)
                        .updating($leftDragOffset) { value, state, trans in
                            state = value.translation.width
                        }
                        .onEnded { value in
                            var maxLeftOffset = max(0, leftOffset + value.translation.width)
                            if frameWidth - rightAdjustment - maxLeftOffset - 60 < minWidth {
                                maxLeftOffset = frameWidth - rightAdjustment - minWidth - 60
                            }
                            leftOffset = maxLeftOffset
                        }
                )

            Spacer()

            Image(systemName: "chevron.compact.right")
                .frame(width: 30, height: 70)
                .background(Color.blue)
                .offset(x: -rightAdjustment)
                .gesture(
                    DragGesture(minimumDistance: 0)
                        .updating($rightDragOffset) { value, state, trans in
                            state = value.translation.width
                        }
                        .onEnded { value in
                            var minRightOffset = max(0, rightOffset - value.translation.width)
                            if minRightOffset < leftAdjustment - 60 - minWidth {
                                minRightOffset = leftAdjustment - 60 - minWidth
                            }
                            rightOffset = minRightOffset
                        }
                )
        }
        .foregroundColor(.black)
        .font(.title3.weight(.semibold))
        .padding(.horizontal, 7)
        .padding(.vertical, 3)
        .background {
            RoundedRectangle(cornerRadius: 7)
                .fill(.yellow)
                .padding(.leading, leftAdjustment)
                .padding(.trailing, rightAdjustment)
        }
        .frame(width: frameWidth)
    }
}

#Preview {
    SimpleTrimmer()
}
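For comparison, here is a bare-bones sketch of an alternative layout idea (TrimmerRow and the fixed widths are hypothetical): if each trimmer's width is itself state, the HStack reflows the neighbors automatically and no offset or Spacer bookkeeping is needed:
import SwiftUI

// Sketch: width-driven trimmers. Dragging a handle would mutate widths[i];
// the HStack then slides the other trimmers over to fill the gap.
struct TrimmerRow: View {
    @State private var widths: [CGFloat] = [120, 90, 150]
    var body: some View {
        HStack(spacing: 4) {
            ForEach(widths.indices, id: \.self) { index in
                RoundedRectangle(cornerRadius: 7)
                    .fill(.yellow)
                    .frame(width: widths[index], height: 70)
            }
        }
        .animation(.default, value: widths)
    }
}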
I have implemented a sample video editing timeline using SwiftUI and am facing issues, so I am breaking the problem into chunks and posting each issue as a separate question. In the code below, I have a simple timeline using an HStack comprising a left spacer, a right spacer (each represented as a simple black color), and a trimmer UI in the middle. The trimmer resizes as its left and right handles are dragged, and the left and right spacers adjust their widths accordingly.
Problem: I want to keep the background thumbnails (currently implemented as simple Rectangles filled with different colors) in the trimmer stationary as the trimmer resizes. Currently they move along with the trimmer as it resizes, as seen in the GIF below. How do I fix it?
import SwiftUI

struct SampleTimeline: View {
    let viewWidth: CGFloat = 340 // Width of HStack container for the timeline

    @State var frameWidth: CGFloat = 280 // Width of trimmer

    var minWidth: CGFloat {
        2 * chevronWidth + 10
    } // Minimum width of trimmer

    @State private var leftViewWidth: CGFloat = 20
    @State private var rightViewWidth: CGFloat = 20

    var chevronWidth: CGFloat {
        return 24
    }

    var body: some View {
        HStack(spacing: 0) {
            Color.black
                .frame(width: leftViewWidth)
                .frame(height: 70)

            HStack(spacing: 0) {
                Image(systemName: "chevron.compact.left")
                    .frame(width: chevronWidth, height: 70)
                    .background(Color.blue)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                leftViewWidth = max(leftViewWidth + value.translation.width, 0)
                                if leftViewWidth > viewWidth - minWidth - rightViewWidth {
                                    leftViewWidth = viewWidth - minWidth - rightViewWidth
                                }
                                frameWidth = max(viewWidth - leftViewWidth - rightViewWidth, minWidth)
                            })
                            .onEnded { value in
                            }
                    )

                Spacer()

                Image(systemName: "chevron.compact.right")
                    .frame(width: chevronWidth, height: 70)
                    .background(Color.blue)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                rightViewWidth = max(rightViewWidth - value.translation.width, 0)
                                if rightViewWidth > viewWidth - minWidth - leftViewWidth {
                                    rightViewWidth = viewWidth - minWidth - leftViewWidth
                                }
                                frameWidth = max(viewWidth - leftViewWidth - rightViewWidth, minWidth)
                            })
                            .onEnded { value in
                            }
                    )
            }
            .foregroundColor(.black)
            .font(.title3.weight(.semibold))
            .background {
                HStack(spacing: 0) {
                    Rectangle().fill(Color.red)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.cyan)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.orange)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.brown)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.purple)
                        .frame(width: 70, height: 60)
                }
            }
            .frame(width: frameWidth)
            .clipped()

            Color.black
                .frame(width: rightViewWidth)
                .frame(height: 70)
        }
        .frame(width: viewWidth, alignment: .leading)
    }
}

#Preview {
    SampleTimeline()
}
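One idea worth sketching: anchor the thumbnail strip to the timeline's coordinate space rather than the trimmer's, by cancelling the trimmer's leading-edge movement. This assumes the leading-aligned layout above and is untested:
// Sketch: keep the strip stationary by offsetting it left as the trimmer's
// leading edge moves right; .clipped() still crops it to the trimmer frame.
.background(alignment: .leading) {
    HStack(spacing: 0) {
        Rectangle().fill(Color.red).frame(width: 70, height: 60)
        Rectangle().fill(Color.cyan).frame(width: 70, height: 60)
        // ... remaining thumbnails
    }
    .offset(x: -leftViewWidth)
}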
I have the following code in my ObservableObject class, and recently Xcode started flagging purple runtime issues for it (probably since iOS 18):
Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.
Here is the code:
import AVFoundation
import Photos
import CoreLocation
import Combine

// Class name and delegate conformance are illustrative; only the body was posted.
final class PermissionsModel: NSObject, ObservableObject, CLLocationManagerDelegate {
    @Published var cameraAuthorization: AVAuthorizationStatus
    @Published var micAuthorization: AVAuthorizationStatus
    @Published var photoLibAuthorization: PHAuthorizationStatus
    @Published var locationAuthorization: CLAuthorizationStatus
    var locationManager: CLLocationManager

    override init() {
        // Issue 1: Performing I/O on the main thread can cause slow launches.
        cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
        photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

        // Issue 1: Performing I/O on the main thread can cause slow launches.
        locationManager = CLLocationManager()
        locationAuthorization = locationManager.authorizationStatus

        super.init()

        // Issue 2: Interprocess communication on the main thread can cause
        // non-deterministic delays.
        locationManager.delegate = self
    }
}
And also, in the route change notification handler for AVAudioSession.routeChangeNotification:
//Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)
I wonder how simply checking authorization status can cause these issues. What is the fix here?
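If it helps frame the question, the only workaround that comes to mind is deferring the checks off the main thread, roughly as sketched below (refreshAuthorizationStatuses is a hypothetical method; the published properties would need initial default values):
// Sketch: query the statuses off the main thread, publish on the main actor.
func refreshAuthorizationStatuses() {
    Task.detached(priority: .userInitiated) { [weak self] in
        let camera = AVCaptureDevice.authorizationStatus(for: .video)
        let mic = AVCaptureDevice.authorizationStatus(for: .audio)
        let photos = PHPhotoLibrary.authorizationStatus(for: .addOnly)
        await MainActor.run {
            self?.cameraAuthorization = camera
            self?.micAuthorization = mic
            self?.photoLibAuthorization = photos
        }
    }
}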
I want to understand the utility of AsyncStream now that iOS 17 has introduced the @Observable macro, where we can directly observe changes in the value of any variable in the model (and observation tracking can happen even outside a SwiftUI view). If I am observing a continuous stream of values, such as the download progress of a file, via AsyncStream in a SwiftUI view, the same can be observed in that view using onChange(of:initial:) on the download progress (stored as a property in the model object). I am looking for the benefits, drawbacks, and limitations of both approaches.
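To make the comparison concrete, here is the observation-tracking counterpart in mind; a minimal sketch with a hypothetical DownloadModel, noting that withObservationTracking is one-shot and fires onChange with willSet semantics (the new value is not yet visible inside the handler):
import Observation

@Observable final class DownloadModel {
    var progress: Double = 0
}

// Re-register after each change, since tracking is one-shot.
func observeProgress(of model: DownloadModel) {
    withObservationTracking {
        _ = model.progress
    } onChange: {
        observeProgress(of: model)
    }
}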
Specifically, my question concerns the AVCam sample code by Apple, where a few states are observed as follows. This is done in the CameraModel class, which is attached to a SwiftUI view.
// MARK: - Internal state observations

// Set up camera's state observations.
private func observeState() {
    Task {
        // Await new thumbnails that the media library generates when saving a file.
        for await thumbnail in mediaLibrary.thumbnails.compactMap({ $0 }) {
            self.thumbnail = thumbnail
        }
    }

    Task {
        // Await new capture activity values from the capture service.
        for await activity in await captureService.$captureActivity.values {
            if activity.willCapture {
                // Flash the screen to indicate capture is starting.
                flashScreen()
            } else {
                // Forward the activity to the UI.
                captureActivity = activity
            }
        }
    }

    Task {
        // Await updates to the capabilities that the capture service advertises.
        for await capabilities in await captureService.$captureCapabilities.values {
            isHDRVideoSupported = capabilities.isHDRSupported
            cameraState.isVideoHDRSupported = capabilities.isHDRSupported
        }
    }

    Task {
        // Await updates to a person's interaction with the Camera Control HUD.
        for await isShowingFullscreenControls in await captureService.$isShowingFullscreenControls.values {
            withAnimation {
                // Prefer showing a minimized UI when capture controls enter a fullscreen appearance.
                prefersMinimizedUI = isShowingFullscreenControls
            }
        }
    }
}
Looking at the CaptureCapabilities structure, it is a small struct with two Bool members. These changes could have been observed directly by a SwiftUI view. I wonder if there is a specific advantage to, or reason for, using AsyncStream here and continuously iterating over changes in a for loop.
/// A structure that represents the capture capabilities of `CaptureService` in
/// its current configuration.
struct CaptureCapabilities {
    let isLivePhotoCaptureSupported: Bool
    let isHDRSupported: Bool

    init(isLivePhotoCaptureSupported: Bool = false,
         isHDRSupported: Bool = false) {
        self.isLivePhotoCaptureSupported = isLivePhotoCaptureSupported
        self.isHDRSupported = isHDRSupported
    }

    static let unknown = CaptureCapabilities()
}
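In other words, if the service were @Observable, the view could presumably read the capability directly; a sketch under that assumption (CaptureService here is assumed to be an @Observable class exposing captureCapabilities, which differs from the sample's actual design):
import SwiftUI

// Sketch: SwiftUI tracks the property read itself; no AsyncStream loop needed.
struct HDRBadge: View {
    let captureService: CaptureService   // assumed @Observable
    var body: some View {
        if captureService.captureCapabilities.isHDRSupported {
            Text("HDR")
        }
    }
}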
I imported a few files into my Xcode project from other projects using drag and drop. Even though the files were copied into the new project and there are no soft links pointing to the other projects' locations, whenever I do a git commit and push, Xcode keeps offering all the projects to commit to rather than just the current one. There seems to be no setting to permanently remove the git dependency on the other projects. Is there any way to remove the references to other projects?
I have this build error with Xcode 26 beta 2:
import AVFoundation

var asset: AVURLAsset?

func loadAsset() async {
    let assetURL = URL.documentsDirectory
        .appendingPathComponent("sample.mov")
    asset = AVURLAsset(url: assetURL, options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])

    /* Error: Type of expression is ambiguous without a type annotation */
    if let result = try? await asset?.load(.tracks, .isPlayable, .isComposable) {
    }
}
Is there an issue with try? in the new Swift compiler?
Error: Type of expression is ambiguous without a type annotation
Xcode 26 beta 2 is taking more than 20 seconds to start an AVCaptureSession when AVCaptureVideoDataOutput and AVCaptureAudioDataOutput are added. The problem occurs only while debugging and is clearly visible with the Cinematic Capture sample code from Apple. I am using an iPhone 14 Pro running iOS 26 beta 2 for reference.
Adding both AVCaptureMovieFileOutput and AVCaptureVideoDataOutput to an AVCaptureSession is supported, as seen in the documentation (snippet copied below), but when the AVCaptureDevice is configured with the ProRes422 codec, it fails unless one of the two outputs is removed from the capture session. This is readily reproducible on an iPhone 14 Pro running iOS 26.0.
Prior to iOS 16, you can add an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput to the same session, but only one may have its connection active. If you attempt to enable both connections, the system chooses the movie file output as the active connection and disables the video data output’s connection. For apps that link against iOS 16 or later, this restriction no longer exists.
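The configuration in question is just the standard pattern, sketched below for clarity (device input and ProRes format selection omitted):
import AVFoundation

// Sketch: both outputs attached to one session, as the docs permit on iOS 16+.
let session = AVCaptureSession()
session.beginConfiguration()
let movieFileOutput = AVCaptureMovieFileOutput()
let videoDataOutput = AVCaptureVideoDataOutput()
if session.canAddOutput(movieFileOutput) { session.addOutput(movieFileOutput) }
if session.canAddOutput(videoDataOutput) { session.addOutput(videoDataOutput) }
session.commitConfiguration()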
I want to confirm whether this is a bug or a programming error. It is very easy to reproduce by modifying the AVCam sample code. Steps to reproduce:
Add an AVCaptureVideoDataOutput to the AVCaptureSession in the AVCam sample code (CaptureService actor); there is no need to set a delegate:
private let videoDataOutput = AVCaptureVideoDataOutput()
and then, in the configureSession method, add the following lines:
try addOutput(videoDataOutput)

if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
}
Next, modify the method that sets HDR:
/// Sets whether the app captures HDR video.
func setHDRVideoEnabled(_ isEnabled: Bool) {
    // Bracket the following configuration in a begin/commit configuration pair.
    captureSession.beginConfiguration()
    defer { captureSession.commitConfiguration() }
    do {
        // If the current device provides a 10-bit HDR format, enable it for use.
        if isEnabled, let format = currentDevice.activeFormat10BitVariant {
            try currentDevice.lockForConfiguration()
            currentDevice.activeFormat = format
            currentDevice.unlockForConfiguration()
            isHDRVideoEnabled = true
            if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange) {
                videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]
            }
        } else {
            captureSession.sessionPreset = .high
            isHDRVideoEnabled = false
            if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_32BGRA) {
                print("Setting SDR pixel format \(kCVPixelFormatType_32BGRA)")
                videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            }
            try currentDevice.lockForConfiguration()
            currentDevice.activeColorSpace = .sRGB
            currentDevice.unlockForConfiguration()
        }
    } catch {
        logger.error("Unable to obtain lock on device and can't enable HDR video capture.")
    }
}
The problem now is that toggling HDR on and off no longer works in video mode. If, after turning HDR on, you turn it off again, the active format of the device does not change (setting sessionPreset has no effect). This does not happen if the video data output is not added to the session.
Is there any workaround available?
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image; the bottom-left corner of an image is supposed to be (0,0). But my experiments show something else.
I created a prototype that rotates a CIImage by pi/10 radians on each button click. Here is the code I wrote:
import UIKit
import CoreImage

// Controller context is illustrative; imageView and imagePath come from the
// storyboard and were omitted in the original post.
class ViewController: UIViewController {
    @IBOutlet var imageView: UIImageView!
    var imagePath: String!

    private var currentAngle = CGFloat(0)
    private var ciImage: CIImage!
    private var ciContext = CIContext()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        imageView.contentMode = .scaleAspectFit
        let uiImage = UIImage(contentsOfFile: imagePath)
        ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
        imageView.image = uiImage
    }

    @IBAction func rotateImage() {
        let extent = ciImage.extent
        let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
        let uiImage = UIImage(contentsOfFile: imagePath)
        currentAngle = currentAngle + CGFloat.pi / 10
        let rotate = CGAffineTransform(rotationAngle: currentAngle)
        let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
        // Net effect: move the center to the origin, rotate, then move back.
        let transform = translateBack.concatenating(rotate.concatenating(translate))
        ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
        ciImage = ciImage.transformed(by: transform)
        NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
        let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
        imageView.image = UIImage(cgImage: cgImage!)
    }
}
But in the logs, I see that the extents of the images have negative origin.x and origin.y. What does that mean? Negative relative to what, and where exactly is (0,0)? What exactly is the image extent, and how does the Core Image coordinate system work?
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
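For what it's worth, the logged numbers are consistent with the extent being the axis-aligned bounding box of the rotated image in the image's own working space, rounded outward to integral coordinates (assuming an 800x800 source image, which these numbers imply):
% Bounding box of a w-by-w image rotated by theta about its center:
\text{side} = w(\cos\theta + \sin\theta), \qquad \text{origin}_{x,y} = \frac{w - \text{side}}{2}
% theta = pi/10: 800(0.951 + 0.309) \approx 1008 \to 1010, origin \approx -105
% theta = pi/5:  800(0.809 + 0.588) \approx 1117 \to 1118, origin \approx -159
Under that reading, the origin goes negative simply because rotating about the center pushes the bounding box below and to the left of the original image's (0,0) corner, which is what the extent is measured against.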