Reply to How to reduce CMSampleBuffer volume
The issue is that I'm writing a video from two different sources: the first comes from an AVAssetReader; the second from AVSpeechSynthesizer.write(_:toBufferCallback:). Using AVAssetWriter, I want to concatenate content from both sources in order. setVolumeRamp(fromStartVolume:toEndVolume:timeRange:) only works when the audio comes from an AVAssetReader and is exported with AVAssetExportSession (via an AVAudioMix). I'm using AVAssetWriter, so AVAssetExportSession doesn't apply.
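For context on the approach being discussed: since an AVAudioMix volume ramp is out of reach with AVAssetWriter, the gain has to be applied to the PCM samples themselves before appending. A minimal sketch, assuming 16-bit interleaved linear PCM (the function name and the format assumption are mine, not from the thread; check the buffer's AudioStreamBasicDescription before using this, since AVSpeechSynthesizer often produces float PCM):

```swift
import CoreMedia

// Hedged sketch: scale every Int16 sample in a CMSampleBuffer in place.
func applyGain(_ gain: Float, to sampleBuffer: CMSampleBuffer) {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<CChar>?
    let status = CMBlockBufferGetDataPointer(blockBuffer,
                                             atOffset: 0,
                                             lengthAtOffsetOut: nil,
                                             totalLengthOut: &totalLength,
                                             dataPointerOut: &dataPointer)
    guard status == kCMBlockBufferNoErr, let bytes = dataPointer else { return }
    let sampleCount = totalLength / MemoryLayout<Int16>.size
    bytes.withMemoryRebound(to: Int16.self, capacity: sampleCount) { samples in
        for i in 0..<sampleCount {
            // Clamp to the Int16 range to avoid overflow after scaling.
            let scaled = Float(samples[i]) * gain
            samples[i] = Int16(max(Float(Int16.min), min(Float(Int16.max), scaled)))
        }
    }
}

// Example: applyGain(0.4, to: sampleBuffer) before appending to the writer input.
```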
Topic: Media Technologies SubTopic: General Tags:
May ’25
Reply to How to reduce CMSampleBuffer volume
Hello! In this case, are the sample values from the extension initially in decibels or linear values? My idea is: convert to a linear value, multiply by 0.4, then convert back to decibels. Could you give me an example of the formula for reducing the volume to 40%?
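For reference, PCM sample values are linear amplitudes; decibels only describe a ratio between two amplitudes, via gain_dB = 20 * log10(gain_linear). So 40% volume corresponds to 20 * log10(0.4), roughly -7.96 dB, and no dB round trip is needed to attenuate. A minimal sketch of the conversion:

```swift
import Foundation

let linearGain: Float = 0.4                      // target: 40% of original volume
let gainInDecibels = 20 * log10(linearGain)      // ≈ -7.96 dB
let backToLinear = pow(10, gainInDecibels / 20)  // ≈ 0.4 again

// To attenuate a linear sample, multiply it directly by the linear gain:
let sample: Float = 0.8
let attenuated = sample * linearGain             // 0.8 -> 0.32
```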
Topic: Media Technologies SubTopic: General Tags:
Mar ’25
Reply to AVASSETREADER and AVAssetWriter: ideal settings
Hello,

I was able to add frames using the side-by-side eye example. However, when I tried to add audio, the app crashed. I tried debugging it, but I couldn't figure it out. Below are the audio configurations I used.

Audio Configuration for AVAssetReader

```swift
// Load the audio track
guard let audioTrack = try await asset.loadTracks(withMediaType: .audio).first else {
    fatalError("Track not loaded.")
}

let audioReaderSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM
]

let audioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioReaderSettings)
if reader.canAdd(audioOutput) {
    reader.add(audioOutput)
}
```

Audio Configuration for AVAssetWriterInput

```swift
// Derive audio settings for the writer input from the source format
guard let loadFormat = try await audioTrack.load(.formatDescriptions).first,
      let format = loadFormat as? CMFormatDescription,
      let asbd = format.audioStreamBasicDescription else {
    fatalError("Audio format not loaded.")
}

let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: asbd.mSampleRate,
    AVEncoderBitRatePerChannelKey: 64000,
    AVNumberOfChannelsKey: asbd.mChannelsPerFrame
]

let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
guard assetWriter.canAdd(audioInput) else {
    fatalError("Error adding audio as input.")
}
assetWriter.add(audioInput)
```

requestMediaDataWhenReady Implementation

```swift
audioInput.requestMediaDataWhenReady(on: DispatchQueue(label: "Add audio")) {
    while audioInput.isReadyForMoreMediaData {
        autoreleasepool {
            if let sampleBuffer = audioOutput.copyNextSampleBuffer() {
                audioInput.append(sampleBuffer)
            } else {
                assetWriter.finishWriting {
                    if assetWriter.status == .completed {
                        continuation.resume()
                        print("Operation completed successfully: \(url.absoluteString)")
                        msg("Operation completed successfully: \(url.absoluteString)")
                        self.canExport = true
                    } else {
                        print(assetWriter.error?.localizedDescription ?? "Error not found.")
                        msg(assetWriter.error?.localizedDescription ?? "Error not found.")
                    }
                }
            }
        }
        if assetWriter.status != .writing {
            break
        }
    }
}
```

Issue: the app keeps crashing when I try to add audio. Please help me; there aren't good examples of audio processing with AVAssetReader. I look forward to your response.
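For comparison, here is a minimal sketch (mine, not a confirmed fix from this thread) of the drain pattern this callback usually takes, reusing the audioOutput/audioInput/assetWriter names above. Two details differ from the code in the post and are frequent crash sources: the input is marked finished before finishWriting is called, and append's Bool result is checked so nothing is appended after the writer fails:

```swift
audioInput.requestMediaDataWhenReady(on: DispatchQueue(label: "Add audio")) {
    while audioInput.isReadyForMoreMediaData {
        guard assetWriter.status == .writing else { return }
        if let sampleBuffer = audioOutput.copyNextSampleBuffer() {
            if !audioInput.append(sampleBuffer) {
                // append returns false once the writer has failed;
                // inspect assetWriter.error instead of appending further.
                return
            }
        } else {
            // Source exhausted: close this input, then finish the file.
            audioInput.markAsFinished()
            assetWriter.finishWriting {
                print("Writer finished with status \(assetWriter.status.rawValue)")
            }
            return
        }
    }
}
```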
Topic: Media Technologies SubTopic: Video Tags:
Dec ’24
Reply to AVASSETREADER and AVAssetWriter: ideal settings
Hello, What I actually want is an example of the two while loops that come after the configuration: one loop for the AVAssetReader to handle the image and audio, and then one for the output video using AVAssetWriter. I understood the configuration part; I just need the loops for the source and destination videos, handling both image and audio, with an example of each.
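For illustration, a minimal sketch of the requested loop pair, assuming reader/videoOutput/audioOutput and assetWriter/videoInput/audioInput were configured as in the earlier reply; the copyTrack helper and queue labels are hypothetical names, and this is a generic pull-copy pattern rather than a confirmed answer from the thread:

```swift
import AVFoundation

// Hypothetical helper: copy every sample from one reader output into one
// writer input on its own queue, signaling a group when the track is done.
func copyTrack(from output: AVAssetReaderTrackOutput,
               to input: AVAssetWriterInput,
               writer: AVAssetWriter,
               group: DispatchGroup,
               queueLabel: String) {
    group.enter()
    input.requestMediaDataWhenReady(on: DispatchQueue(label: queueLabel)) {
        while input.isReadyForMoreMediaData {
            guard writer.status == .writing,
                  let buffer = output.copyNextSampleBuffer() else {
                // Source exhausted (or writer failed): close this input once.
                input.markAsFinished()
                group.leave()
                return
            }
            input.append(buffer)
        }
    }
}

let group = DispatchGroup()
reader.startReading()
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)

copyTrack(from: videoOutput, to: videoInput, writer: assetWriter,
          group: group, queueLabel: "video.copy")
copyTrack(from: audioOutput, to: audioInput, writer: assetWriter,
          group: group, queueLabel: "audio.copy")

// Finish the file only after both tracks have been fully drained.
group.notify(queue: .main) {
    assetWriter.finishWriting {
        print("Finished with status \(assetWriter.status.rawValue)")
    }
}
```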
Topic: Media Technologies SubTopic: Video Tags:
Nov ’24
Reply to Managing Excessive Memory Usage with AVAssetReader and AVASSETWriter
Hello. So much inaccessibility: the tags are not accessible for selection on a Braille display, and I am a Braille display user. I tried manually adding the backticks to create a code block, but it didn't work. It's very frustrating. I'm facing a memory issue in my app. I was asked to reduce the project size, which I did; however, the issue wasn't resolved. I need someone who can truly focus on this. I've tried everything with my app, many, many workarounds. The main project is much larger than this small segment. I need special assistance, because this disability isn't just a "difficulty"; it's a real barrier. From my experience over the past several months, and after more than a year as a visually and hearing-impaired developer, I'd go as far as to say that Apple's preparedness for developers with disabilities is poor and in urgent need of significant improvement.
Topic: Media Technologies SubTopic: Video Tags:
Nov ’24
Reply to Writing video using AVAssetWriter, AVAssetReader, and AVSPEECHSYNTHESIZER
Hello, I am deaf-blind and use a wheelchair. I received guidance from Apple Developer Support that is beyond my capacity: it involved viewing a panel in Xcode to fix a bug in my app, but that panel is not accessible. I need special attention from Apple Support, and I'm worried they may not assist me fully with this. I can follow instructions related to source code, but not Xcode panels. Please inform Support that I am deaf-blind, use a Braille display, and am mainly limited to working with source code as text; panels like Instruments > Allocations are challenging in Braille. Please help me!
Topic: Media Technologies SubTopic: Video Tags:
Nov ’24
Reply to Writing video using AVAssetWriter, AVAssetReader, and AVSPEECHSYNTHESIZER
Hello! I have a few questions. I don't use GitHub, since I am deafblind and a Braille display user, and I avoid adding extra resources to my workflow, as accessibility tends to complicate things. If I provide a link, should the project be temporary, or must I leave it there permanently? As for a crash log, I don't have one, but the project is very small. I don't have an error message either. Thank you!
Topic: Media Technologies SubTopic: Video Tags:
Oct ’24
Reply to How to save video asynchronously?
Hello, I'm putting audio from AVSpeechSynthesizer.write() into a video along with some photos. I tried a very long text, to the point of producing a video of around 50 minutes. When saving the video to the gallery, the app would freeze until it was saved; in other cases, the app would crash and I would have to build and run it again. I tried using PHPhotoLibrary.shared().performChanges() instead of UISaveVideoAtPathToSavedPhotosAlbum, but the app still froze until the video was saved to the gallery, or it crashed and wouldn't come back. Here's the code:

```swift
private let synthesizer = AVSpeechSynthesizer()
private var counterImage = 0
let semaphore = DispatchSemaphore(value: 0)

init(_ texts: [String]) {
    Misc.obj.lData.removeAll()
    Misc.obj.selectedPhotos.append(createBlueImage(CGSize(width: 100, height: 100)))
    Misc.obj.selectedPhotos.append(createBlueImage(CGSize(width: 100, height: 100)))
    super.init()
    synthesizer.delegate = self
    DispatchQueue.global().async {
        do {
            try self.nextText(texts)
            msgErro("Completed.")
        } catch {
            msgErro(error.localizedDescription)
        }
    }
}

func nextText(_ texts: [String]) throws {
    var audioBuffers = [CMSampleBuffer]()
    var videoBuffers = [CVPixelBuffer]()
    var lTime = [0.0]
    for text in texts {
        var time = Double.zero
        var duration = AVAudioFrameCount.zero
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "pt-BR")
        utterance.rate = 0.2
        synthesizer.write(utterance) { buffer in
            if let buffer = buffer as? AVAudioPCMBuffer,
               let sampleBuffer = buffer.toCMSampleBuffer(presentationTime: .zero) {
                audioBuffers.append(sampleBuffer)
                duration += buffer.frameLength
                time += Double(buffer.frameLength) / buffer.format.sampleRate
            }
        }
        semaphore.wait()
        if Misc.obj.selectedPhotos.indices.contains(counterImage) {
            let image = Misc.obj.selectedPhotos[counterImage]
            //let writtenImage = image.addTexto(texts[quantTxt])
            let pixelBuffer = image.toCVPixelBuffer()
            videoBuffers.append(pixelBuffer!)
            lTime.append(time)
            // Increase counterImage
            counterImage += 1
            if counterImage == Misc.obj.selectedPhotos.count {
                counterImage = 0
            }
        }
    }
    let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("teste/output.mp4")
    try FileManager.default.createDirectory(at: url.deletingLastPathComponent(),
                                            withIntermediateDirectories: true)
    if FileManager.default.fileExists(atPath: url.path()) {
        try FileManager.default.removeItem(at: url)
    }
    let audioProvider = SampleProvider(buffers: audioBuffers)
    let videoProvider = SampleProvider(buffers: videoBuffers, lTime: lTime)
    let audioInput = createAudioInput(audioBuffers: audioBuffers)
    let videoInput = createVideoInput(videoBuffers: videoBuffers)
    let adaptor = createPixelBufferAdaptor(videoInput: videoInput)
    let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
    assetWriter.add(videoInput)
    assetWriter.add(audioInput)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: .zero)
    let writerQueue = DispatchQueue(label: "Asset Writer Queue")
    videoInput.requestMediaDataWhenReady(on: writerQueue) {
        if let buffer = videoProvider.getNextBuffer() {
            adaptor.append(buffer, withPresentationTime: videoProvider.getPresentationTime())
        } else {
            videoInput.markAsFinished()
            if audioProvider.isFinished() {
                self.semaphore.signal()
            }
        }
    }
    audioInput.requestMediaDataWhenReady(on: writerQueue) {
        if let buffer = audioProvider.getNextBuffer() {
            audioInput.append(buffer)
        } else {
            audioInput.markAsFinished()
            if audioProvider.isFinished() {
                self.semaphore.signal()
            }
        }
    }
    semaphore.wait()
    assetWriter.finishWriting {
        switch assetWriter.status {
        case .completed:
            msgRelatos("Completed.")
            UISaveVideoAtPathToSavedPhotosAlbum(url.path, nil, nil, nil)
        case .failed:
            if let error = assetWriter.error {
                msgErro("Error: \(error.localizedDescription)")
            } else {
                msgRelatos("Not recorded.")
            }
        default:
            msgRelatos("Error not found.")
        }
    }
}

extension TesteFala: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        semaphore.signal()
    }
}
```
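One pattern that avoids blocking the main thread when saving is the async/await form of PHPhotoLibrary.performChanges. A minimal sketch, assuming photo-library add permission has already been granted (the function name is mine, and this is not the thread's confirmed answer):

```swift
import Photos

// Hedged sketch: save the finished file without making the UI thread wait.
func saveVideoToLibrary(at url: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
    }
}

// Usage from the finishWriting completion handler:
// Task { try await saveVideoToLibrary(at: url) }
```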
Jan ’24