How to use the front UW and TrueDepth cameras on iPad
I want to use both the front ultra-wide (UW) and TrueDepth cameras on an iPad that has a front UW camera. First, I used only the front builtInDualCamera through AVFoundation and tried every format available for builtInDualCamera, but no format could capture UW. Second, I tried to use both the front builtInDualCamera and the builtInUltraWideCamera at the same time, but there was no supported combination of builtInUltraWideCamera and builtInDualCamera. Is there any way to do this?
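For reference, a minimal diagnostic sketch (assuming AVCaptureMultiCamSession is the mechanism used to combine devices; the function name is illustrative, not from the original question) that lists which front-camera device sets the system reports as usable together:

import AVFoundation

// Enumerate the front camera devices and print the device sets that are
// allowed to run together in a single AVCaptureMultiCamSession.
func inspectFrontCameraCombinations() {
    guard AVCaptureMultiCamSession.isMultiCamSupported else {
        print("Multi-cam capture is not supported on this device")
        return
    }

    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInUltraWideCamera, .builtInTrueDepthCamera, .builtInDualCamera],
        mediaType: .video,
        position: .front
    )
    print("Front devices found: \(discovery.devices.map { $0.localizedName })")

    // Each entry is a set of devices that can be added to one multi-cam session.
    for deviceSet in discovery.supportedMultiCamDeviceSets {
        print("Supported set: \(deviceSet.map { $0.localizedName })")
    }
}

If builtInUltraWideCamera and builtInTrueDepthCamera never appear together in any reported set, that pairing is not available in a single session on that hardware.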
Replies: 0 · Boosts: 0 · Views: 160 · Sep ’25
AVCapturePhotoCaptureDelegate callback returns wrong colors on the iOS 17 beta
I'm creating an app that uses AVCapturePhotoCaptureDelegate to save still-image color and depth. However, when running it on an iPhone with the iOS 17 beta, the still image's colors come out wrong: on iOS 17 the colors are darker and strange reflections appear, while on iOS 16 the colors are correct. If anyone has seen the same symptom or knows how to fix it, please let me know.
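For context, a minimal sketch of the kind of capture path involved (an assumed setup with illustrative names, not the app's actual code); the color difference between iOS 16 and the iOS 17 beta would show up in the data received in the delegate callback:

import AVFoundation

// Capture a still photo with depth data and receive it via AVCapturePhotoCaptureDelegate.
// Assumes photoOutput is already attached to a configured, running AVCaptureSession whose
// device supports depth, and that isDepthDataDeliveryEnabled has been set on the output.
final class PhotoCaptureHandler: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()

    func capture() {
        let settings = AVCapturePhotoSettings()
        // Per-photo depth delivery can only be requested if the output has it enabled.
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil else { return }
        let colorData = photo.fileDataRepresentation()   // still-image color
        let depthData = photo.depthData                  // AVDepthData, if delivered
        // Persist colorData and depthData here for comparison across OS versions.
        _ = (colorData, depthData)
    }
}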
Replies: 3 · Boosts: 0 · Views: 620 · Sep ’23
WKWebView doesn't play YouTube video after speech recognition
I use WKWebView to play an embedded YouTube video, and I also use SFSpeechRecognizer to recognize speech. But after speech recognition runs, the web view no longer plays the video. Please tell me how to fix this.

2023-03-04 15:27:14.100700+0900 SpeechRecognition[66390:23051497] [assertion] Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}>
2023-03-04 15:27:14.100756+0900 SpeechRecognition[66390:23051497] [ProcessSuspension] 0x113000400 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'WebKit Media Playback' for process with PID=66392, error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}

import UIKit
import AVFAudio
import Speech
import MediaPlayer
import WebKit

class ViewController: UIViewController {
    @IBOutlet weak var startButton: UIButton!
    @IBOutlet weak var cancelButton: UIButton!
    var recognitionTask: SFSpeechRecognitionTask? = nil
    var webView: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        print("viewDidLoad")
        startButton.addTarget(self, action: #selector(StartSpeechRecognition), for: .touchUpInside)
        cancelButton.addTarget(self, action: #selector(StopSpeechRecognition), for: .touchUpInside)
        //startButton.addTarget(self, action: #selector(ChangeVolume), for: .touchUpInside)

        let webConfiguration = WKWebViewConfiguration()
        webConfiguration.allowsInlineMediaPlayback = true
        webView = WKWebView(frame: CGRect(x: 0, y: 0, width: 200, height: 200), configuration: webConfiguration)
        self.view.addSubview(webView)
        let myURL = URL(string: "https://www.youtube.com/embed/B7BxrAAXl94?playsinline=1")
        let myRequest = URLRequest(url: myURL!)
        webView.load(myRequest)
    }

    @objc public func StartSpeechRecognition() {
        let audioEngine = AVAudioEngine()

        // Configure the audio session for the app.
        let audioSession = AVAudioSession.sharedInstance()
        try? audioSession.setCategory(.record, mode: .measurement, options: .duckOthers)
        try? audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        let inputNode = audioEngine.inputNode

        // Create and configure the speech recognition request.
        let recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        recognitionRequest.shouldReportPartialResults = true

        // Configure the microphone input.
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
            recognitionRequest.append(buffer)
        }
        audioEngine.prepare()
        try? audioEngine.start()

        // Create a recognition task for the speech recognition session.
        // Keep a reference to the task so that it can be canceled.
        guard let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "ja-JP")) else { return }
        print("Start Recognize")
        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
            var isFinal = false
            if let result = result {
                // Update the text view with the results.
                isFinal = result.isFinal
                print("Text \(result.bestTranscription.formattedString)")
            }
            if error != nil || isFinal {
                // Stop recognizing speech if there is a problem.
                audioEngine.stop()
                inputNode.removeTap(onBus: 0)
                print("Stop")
                print(self.recognitionTask?.isFinishing)
                self.recognitionTask = nil
            }
        }
    }

    @objc public func StopSpeechRecognition() {
        guard let recognitionTask = recognitionTask else { return }
        recognitionTask.finish()
    }
}
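One hedged guess (an assumption, not a confirmed fix): configuring the shared AVAudioSession with the .record category and .measurement mode for speech recognition can leave other media playback, including WKWebView's player, suspended. Reconfiguring the session to a playback-capable category after the recognition task finishes may let the embedded video resume. A minimal sketch, where restorePlaybackSession() is an illustrative helper to call once recognition stops:

import AVFAudio

// Deactivate the recording-oriented session and switch back to a playback-capable
// category so other media (including WKWebView's player) can resume.
func restorePlaybackSession() {
    let session = AVAudioSession.sharedInstance()
    try? session.setActive(false, options: .notifyOthersOnDeactivation)
    try? session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try? session.setActive(true)
}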
Replies: 0 · Boosts: 0 · Views: 1.1k · Mar ’23