Reply to How can a user be logged into iCloud (iCloud Available), but still Not Authenticated?
Apparently the issue is the com.apple.developer.icloud-container-environment entitlement: Development vs. Production. Any of the iCloud posts that I made under Development aren't accessible while the entitlement is set to Production. On the flip side, any of the posts that I made under Production aren't accessible while the entitlement is set to Development. I have no idea why it works this way, but in short, the Not Authenticated issue occurs when trying to view a post that was made while the value was Development but the value is currently set to Production. Can anyone explain why it works this way?
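For anyone else hitting this, a minimal sketch to separate a genuine sign-in problem from the environment mismatch described above (the record ID is a placeholder, and swap in whichever database your app actually uses):

import CloudKit

let container = CKContainer.default()
// Placeholder record ID; substitute the ID of a record your app saved.
let recordID = CKRecord.ID(recordName: "example-post-id")

// 1. Confirm the user really is signed in to iCloud.
container.accountStatus { status, error in
    guard status == .available else {
        print("iCloud account unavailable: \(status), \(error?.localizedDescription ?? "")")
        return
    }
    // 2. The account is fine, so a failure here is likely the environment
    // mismatch: records saved under Development are invisible to Production
    // and vice versa.
    container.publicCloudDatabase.fetch(withRecordID: recordID) { record, error in
        if let record = record {
            print("Fetched: \(record.recordID.recordName)")
        } else {
            print("Fetch failed: \(error?.localizedDescription ?? "unknown error")")
        }
    }
}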
Jan ’22
Reply to Swift -Audio URL to Video URL Doesn't Play in Photos Library
So it seems that although the code from my question did convert the audio file to a video file, there still wasn't a video track. I know this for a fact because after I got the exporter's videoURL from my question, I tried to add a watermark to it, and the watermark code kept crashing on:

let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]

Basically, the code from my question converts audio to video but doesn't add a video track. What I assume happened is that when the Files app reads the file, it knows that it's a .mov or .mp4 file and plays the audio track even if the video track is missing. Conversely, when the Photos app reads the file, it also knows that it's a .mov or .mp4 file, but if there isn't a video track, it won't play anything. I had to combine these 2 answers to get the audio to play as a video in the Photos app.

1st - I added my app icon as a single image in an array of images to make a video track, using the code from How do I export UIImage array as a movie? answered by scootermg. The code from scootermg's answer is in this GitHub repo by dldnh.

2nd - I combined the app-icon video that I just made with the audio URL from my question, using the code from Swift Merge audio and video files into one video answered by TungFam.

In the mixComposition from TungFam's answer, I used the audio URL's asset duration for the length of the video:

do {
    try mutableCompositionVideoTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAudioAssetTrack.timeRange.duration), of: aVideoAssetTrack, at: .zero)

    try mutableCompositionAudioTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAudioAssetTrack.timeRange.duration), of: aAudioAssetTrack, at: .zero)

    if let aAudioOfVideoAssetTrack = aAudioOfVideoAssetTrack {
        try mutableCompositionAudioOfVideoTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAudioAssetTrack.timeRange.duration), of: aAudioOfVideoAssetTrack, at: .zero)
    }
} catch {
    print(error.localizedDescription)
}
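As a sanity check, here is a minimal guard (my own helper, not from the linked answers) that avoids the index-out-of-range crash by confirming an asset actually has a video track before using it:

import AVFoundation

// Returns the first video track, or nil (with a log) if the asset has none -
// which is exactly the case the Photos app refuses to play.
func firstVideoTrack(in asset: AVAsset) -> AVAssetTrack? {
    guard let track = asset.tracks(withMediaType: .video).first else {
        print("Asset has no video track - Photos won't play it.")
        return nil
    }
    return track
}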
Feb ’22
Reply to Another photo orientation question
@HeshanY is correct, use the answer from Yodagama on SO. The only thing is he didn't show how to detect the orientation:

if let photoOutputConnection = self.photoOutput.connection(with: .video) {
    // USE the function below HERE
    photoOutputConnection.videoOrientation = videoOrientation()
}
photoOutput.capturePhoto(with: settings, delegate: self)

Function to detect the device orientation:

func videoOrientation() -> AVCaptureVideoOrientation {
    // Default to .portrait so the function can't return nil and crash
    // if the key-window lookup below fails.
    var videoOrientation: AVCaptureVideoOrientation = .portrait

    let orientation: UIDeviceOrientation = UIDevice.current.orientation
    switch orientation {
    case .faceUp, .faceDown, .unknown:
        // let interfaceOrientation = UIApplication.shared.statusBarOrientation // deprecated
        if let interfaceOrientation = UIApplication.shared.windows.first(where: { $0.isKeyWindow })?.windowScene?.interfaceOrientation {
            switch interfaceOrientation {
            case .portrait, .portraitUpsideDown, .unknown:
                videoOrientation = .portrait
            case .landscapeLeft:
                videoOrientation = .landscapeRight
            case .landscapeRight:
                videoOrientation = .landscapeLeft
            @unknown default:
                videoOrientation = .portrait
            }
        }
    case .portrait, .portraitUpsideDown:
        videoOrientation = .portrait
    case .landscapeLeft:
        videoOrientation = .landscapeRight
    case .landscapeRight:
        videoOrientation = .landscapeLeft
    @unknown default:
        videoOrientation = .portrait
    }
    return videoOrientation
}
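Side note for anyone on newer SDKs: videoOrientation is deprecated as of iOS 17 in favor of videoRotationAngle. A minimal sketch, assuming you keep an AVCaptureDevice.RotationCoordinator alive alongside your session (device and previewLayer are placeholders for your own capture setup):

import AVFoundation

// iOS 17+: let the rotation coordinator supply the angle instead of
// mapping UIDeviceOrientation by hand.
let coordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)

if let connection = photoOutput.connection(with: .video) {
    let angle = coordinator.videoRotationAngleForHorizonLevelCapture
    if connection.isVideoRotationAngleSupported(angle) {
        connection.videoRotationAngle = angle
    }
}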
Feb ’22
Reply to AVMutableVideoCompositionLayerInstruction -Misalignment when Merging Videos
I got it working for both portrait and landscape. I tested this answer with videos recorded in portrait, landscape left/right, upside down, with the front camera, and with the back camera, and I haven't had any issues. I'm far from a CGAffineTransform expert, so if anyone has a better answer please post it. Ray Wenderlich's merging code works, but it doesn't work for videos with different orientations. I used this answer to check the properties of the preferredTransform for the orientation check. One thing to point out is that the comments from DonMag told me about the benefit of using 720x1280. The code below merges all of the videos together with a renderSize of 720x1280, which keeps them the same size.

Code:

// class property
let renderSize = CGSize(width: 720, height: 1280) // for higher quality use CGSize(width: 1080, height: 1920)

func mergeVideos() {
    let mixComposition = AVMutableComposition()
    let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    let audioCompositionTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

    var count = 0
    var insertTime = CMTime.zero
    var instructions = [AVMutableVideoCompositionInstruction]()

    for videoAsset in arrOfAssets {
        guard let firstTrack = videoAsset.tracks.first,
              let _ = videoAsset.tracks(withMediaType: .video).first else { continue }

        do {
            try videoCompositionTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: insertTime)

            if let audioTrack = videoAsset.tracks(withMediaType: .audio).first {
                try audioCompositionTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: audioTrack, at: insertTime)
            }

            let layerInstruction = videoCompositionInstruction(firstTrack, asset: videoAsset, count: count)

            let compositionInstruction = AVMutableVideoCompositionInstruction()
            compositionInstruction.timeRange = CMTimeRangeMake(start: insertTime, duration: videoAsset.duration)
            compositionInstruction.layerInstructions = [layerInstruction]
            instructions.append(compositionInstruction)

            insertTime = CMTimeAdd(insertTime, videoAsset.duration)
            count += 1
        } catch {
            print(error.localizedDescription)
        }
    }

    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = instructions
    videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    videoComposition.renderSize = self.renderSize // <--- **** IMPORTANT ****

    // ...
    exporter.videoComposition = videoComposition
}

The most important part of this answer, the function that replaces the RW code:

func videoCompositionInstruction(_ firstTrack: AVAssetTrack, asset: AVAsset, count: Int) -> AVMutableVideoCompositionLayerInstruction {
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
    let assetTrack = asset.tracks(withMediaType: .video)[0]

    let t = assetTrack.fixedPreferredTransform // new transform fix
    let assetInfo = orientationFromTransform(t)

    if assetInfo.isPortrait {
        let scaleToFitRatio = self.renderSize.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        var finalTransform = assetTrack.fixedPreferredTransform.concatenating(scaleFactor)

        // From the OP that I used for the portrait part. I haven't tested this; his words:
        // "if video not taking entire screen and leaving some parts black - don't know when actually needed so test"
        if assetInfo.orientation == .rightMirrored || assetInfo.orientation == .leftMirrored {
            finalTransform = finalTransform.translatedBy(x: -t.ty, y: 0)
        }
        instruction.setTransform(finalTransform, at: CMTime.zero)
    } else {
        let renderRect = CGRect(x: 0, y: 0, width: self.renderSize.width, height: self.renderSize.height)
        let videoRect = CGRect(origin: .zero, size: assetTrack.naturalSize).applying(assetTrack.fixedPreferredTransform)

        let scale = renderRect.width / videoRect.width
        let transform = CGAffineTransform(scaleX: renderRect.width / videoRect.width, y: (videoRect.height * scale) / assetTrack.naturalSize.height)
        let translate = CGAffineTransform(translationX: .zero, y: (self.renderSize.height - (videoRect.height * scale)) / 2)

        instruction.setTransform(assetTrack.fixedPreferredTransform.concatenating(transform).concatenating(translate), at: .zero)
    }

    if count == 0 {
        instruction.setOpacity(0.0, at: asset.duration)
    }

    return instruction
}

New orientation check:

func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImage.Orientation, isPortrait: Bool) {
    var assetOrientation = UIImage.Orientation.up
    var isPortrait = false

    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .right
        isPortrait = true
    } else if transform.a == 0 && transform.b == 1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .rightMirrored
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .left
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .leftMirrored
        isPortrait = true
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .up
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .down
    }

    return (assetOrientation, isPortrait)
}

preferredTransform fix:

extension AVAssetTrack {
    var fixedPreferredTransform: CGAffineTransform {
        var t = preferredTransform
        switch (t.a, t.b, t.c, t.d) {
        case (1, 0, 0, 1):
            t.tx = 0
            t.ty = 0
        case (1, 0, 0, -1):
            t.tx = 0
            t.ty = naturalSize.height
        case (-1, 0, 0, 1):
            t.tx = naturalSize.width
            t.ty = 0
        case (-1, 0, 0, -1):
            t.tx = naturalSize.width
            t.ty = naturalSize.height
        case (0, -1, 1, 0):
            t.tx = 0
            t.ty = naturalSize.width
        case (0, 1, -1, 0):
            t.tx = naturalSize.height
            t.ty = 0
        case (0, 1, 1, 0):
            t.tx = 0
            t.ty = 0
        case (0, -1, -1, 0):
            t.tx = naturalSize.height
            t.ty = naturalSize.width
        default:
            break
        }
        return t
    }
}
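The exporter setup is elided with // ... above; here's a minimal sketch of what that part might look like (the output URL and preset are placeholders, not from the original answer):

// Hypothetical export of the composition built in mergeVideos() above.
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("merged.mov")
try? FileManager.default.removeItem(at: outputURL)

guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputURL = outputURL
exporter.outputFileType = .mov
exporter.videoComposition = videoComposition

exporter.exportAsynchronously {
    switch exporter.status {
    case .completed:
        print("Merged video written to \(outputURL)")
    default:
        print("Export failed: \(exporter.error?.localizedDescription ?? "unknown error")")
    }
}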
Mar ’22
Reply to CGAffineTransform -How to Align Video in Screen Center
If anyone has a better answer, please post it and I'll check and accept it.

Unbeknownst to me, the video was in the correct position, but the negative black-bar space was making it appear misaligned. Changing the AVMutableVideoCompositionInstruction()'s .backgroundColor shows the negative black-bar space issue in yellow:

instruction.backgroundColor = UIColor.yellow.cgColor

To fix it, I divided finalTransform.ty in half and subtracted that as a translation y-value, so now the code is:

// ...
let finalTransform = transform3.concatenating(rotateFromUpsideDown)

let ty = finalTransform.ty
var divided = ty / 2
if divided < 0 {
    divided = 0
}
let translation = CGAffineTransform(translationX: 0, y: -divided)
let new_finalTransform = finalTransform.concatenating(translation)

let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
transformer.setTransform(new_finalTransform, at: .zero)
// ...

[Screenshot: the fix]

[Screenshot: the fix, with the negative black-bar space in yellow to show how it's now centered]
May ’22
Reply to CGAffineTransform -How to Align Video in Screen Center
I posted the above answer, and it should be noted that the answer only works for videos that have an orientation of .landscapeRight. You must check the following before using the above code:

let t = track.preferredTransform

// landscapeRight
if t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0 {
    print(" *** the above answer will only work for landscapeRight *** ")
    transform1 = transform1.concatenating(CGAffineTransform(rotationAngle: CGFloat(90.0 * .pi / 180)))
}

// landscapeLeft
if t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0 {
    print(" *** the above answer will NOT work for landscapeLeft *** ")
}
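The same transform checks can also be written as one switch — a small helper sketch (my own, not from the answer above), using the same value-to-orientation mapping as the checks in these replies:

import AVFoundation

enum VideoTrackOrientation {
    case portrait, portraitUpsideDown, landscapeRight, landscapeLeft, unknown
}

// Maps the preferredTransform's (a, b, c, d) values to an orientation,
// matching the convention used above: identity = landscapeRight, etc.
func orientation(of track: AVAssetTrack) -> VideoTrackOrientation {
    let t = track.preferredTransform
    switch (t.a, t.b, t.c, t.d) {
    case (0, 1, -1, 0):
        return .portrait
    case (0, -1, 1, 0):
        return .portraitUpsideDown
    case (1, 0, 0, 1):
        return .landscapeRight
    case (-1, 0, 0, -1):
        return .landscapeLeft
    default:
        return .unknown
    }
}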
May ’22
Reply to CGAffineTransform -How to Align Video in Screen Center
I'm the same person who posted the other question and the 2 previous answers. This is what I came up with for both landscapeRight and landscapeLeft videos:

func turnHorizontalVideoToPortraitVideo(asset: AVURLAsset) -> AVVideoComposition {
    let track = asset.tracks(withMediaType: AVMediaType.video)[0]
    let renderSize = CGSize(width: 720, height: 1280)

    let t = track.preferredTransform

    if t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0 {
        print("landscapeRight")
    }

    var isLandscapeLeft = false
    if t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0 {
        print("landscapeLeft")
        isLandscapeLeft = true
    }

    var transform1 = t
    transform1 = transform1.concatenating(CGAffineTransform(rotationAngle: CGFloat(90.0 * .pi / 180)))
    transform1 = transform1.concatenating(CGAffineTransform(translationX: track.naturalSize.width, y: 0))

    let transform2 = CGAffineTransform(translationX: track.naturalSize.height, y: (track.naturalSize.width - track.naturalSize.height) / 2)

    var p = Double.pi / 2
    if isLandscapeLeft {
        p = -Double.pi / 2
    }

    let transform3 = transform2.rotated(by: CGFloat(p)).concatenating(transform1)
    let finalTransform = transform3

    let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: track)

    if isLandscapeLeft {
        let ty = finalTransform.ty
        let dividedNum = ty / 2.5
        let translation = CGAffineTransform(translationX: 0, y: dividedNum)
        let new_finalTransform = finalTransform.concatenating(translation)
        transformer.setTransform(new_finalTransform, at: .zero)
    }

    if !isLandscapeLeft {
        let translate = CGAffineTransform(translationX: renderSize.width, y: renderSize.height)
        let rotateFromUpsideDown = translate.rotated(by: CGFloat(Double.pi))
        let transformRotated = finalTransform.concatenating(rotateFromUpsideDown)

        let ty = transformRotated.ty
        var dividedNum = ty / 2
        if dividedNum < 0 {
            dividedNum = 0
        }
        let translation = CGAffineTransform(translationX: 0, y: -dividedNum)
        let new_finalTransform = transformRotated.concatenating(translation)
        transformer.setTransform(new_finalTransform, at: .zero)
    }

    let instruction = AVMutableVideoCompositionInstruction()
    //instruction.backgroundColor = UIColor.yellow.cgColor
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions = [transformer]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.renderSize = renderSize
    videoComposition.instructions = [instruction]

    return videoComposition
}
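A quick usage sketch (videoURL is a placeholder): the returned AVVideoComposition can be attached to an AVPlayerItem for playback, or to an AVAssetExportSession to write the portrait version to disk:

// Hypothetical usage of turnHorizontalVideoToPortraitVideo(asset:).
let asset = AVURLAsset(url: videoURL)
let portraitComposition = turnHorizontalVideoToPortraitVideo(asset: asset)

// Playback:
let item = AVPlayerItem(asset: asset)
item.videoComposition = portraitComposition
let player = AVPlayer(playerItem: item)

// Export:
if let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) {
    exporter.videoComposition = portraitComposition
    exporter.outputFileType = .mov
    exporter.outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("portrait.mov")
    exporter.exportAsynchronously {
        print("Export finished with status: \(exporter.status.rawValue)")
    }
}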
May ’22