I am testing with VLC as an RTSP audio client on macOS.
Every 5 minutes, I hear noise.
The noise continues for 3 seconds and happens exactly every 5 minutes.
During the noise period, kernel_task uses an extra 25% CPU for 3 seconds, and Console -> wifi.log prints a message starting with
SCAN request received from pid ??? (locationd) with priority=2, qos=-1 (default), frontmost=no
I checked with Wireshark: RTP/UDP packets arrive every 20 ms, but during the noise period no packets arrive for 140 ms. That gap causes the silent period and the noise.
If I disable Wi-Fi and use an Ethernet cable, the noise is gone.
If I disable Settings -> Security & Privacy -> Location Services, the noise is gone.
Is there any way to keep receiving RTP/UDP packets during locationd's scan? (Some arithmetic on the gap follows the environment list.)
My environment:
macOS Big Sur 11.4
iMac (Retina 5K, 27-inch, 2017)
VLC 3.0.16 (Intel 64-bit)
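For scale: at one packet per 20 ms, a 140 ms gap is roughly 7 consecutive packets. If those packets are merely delayed during the off-channel scan rather than dropped, a larger client-side buffer should bridge it; as an assumption worth testing rather than a confirmed fix, VLC's jitter buffer can be raised with, e.g., vlc --network-caching=1000 rtsp://... If Wireshark never sees the packets at all, they are lost on the radio side, and only avoiding the scan (Ethernet, or disabling Location Services, as above) will help.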
I have an Apple TV 4K connected to router A; its IP is 192.168.1.10.
The Apple TV sends Bluetooth Low Energy (BLE) advertisements containing that IP, which I captured with a BLE sniffer.
I try "Screen Mirroring" from a MacBook on router B, IP 192.168.2.10.
The MacBook sends "GET /info ... RTSP/1.0" to appleTV:7000.
The Apple TV replies with 1368 bytes of "RTSP/1.0 200 OK ..." that include the device name, type, and features.
But the MacBook does not show my Apple TV in the list of displays.
I would like to know why my Apple TV is not recognized as a mirroring display even though the RTSP traffic has no errors (see the sketch after the environment list).
mDNS from the Apple TV is blocked by the router.
Ping from the MacBook to the Apple TV succeeds.
If the Apple TV and the MacBook are connected to the same router, screen mirroring succeeds.
Router A and B : Netgear Nighthawk
Router netmask : 255.255.255.0 (both)
MacBook : macOS Monterey 12.4
Apple TV : tvOS 15.6 (19M65)
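Since mDNS is blocked between the routers, my working theory is that discovery, not RTSP, is what fails: the Screen Mirroring list is populated by Bonjour browsing for _airplay._tcp, and mDNS does not cross subnets without a repeater. A quick check from the MacBook, sketched with the Network framework (the same check can be done with "dns-sd -B _airplay._tcp"):

import Foundation
import Network

// Browse for AirPlay receivers via Bonjour. If the Apple TV never shows
// up here, mDNS is not crossing the router, and the mirroring list will
// stay empty regardless of RTSP reachability.
let browser = NWBrowser(for: .bonjour(type: "_airplay._tcp", domain: nil), using: .tcp)
browser.browseResultsChangedHandler = { results, _ in
    for result in results {
        print("Found AirPlay service: \(result.endpoint)")
    }
}
browser.start(queue: .main)
RunLoop.main.run() // keep the command-line tool alive while browsing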
I am writing a macOS menu bar app with a SwiftUI TextField, used in Japanese input mode. Under some conditions the TextField loses focus: no key input and no mouse clicks are accepted. The user cannot do anything.
Setup
macOS Ventura 13.3.1 (a)
Install the Japanese Romaji input source via System Preferences -> Keyboard
Set the input mode to "Romaji"
Build the test source code:
On Xcode 14.3, create a new macOS app project "FocusTest" using SwiftUI and Swift.
Replace FocusTestApp.swift with the attached code.
Build on Xcode
Steps
1. Set the input mode to "Romaji"
2. Run FocusTestApp
3. Click the T-square icon in the menu bar
4. A small window with a globe appears
5. Click the desktop background area
6. Click the T-square icon in the menu bar again
7. Click "PIN"
8. The view with the T icon and the PIN text field appears
9. The text field has lost focus, so click inside the text field
10. No key or mouse input is accepted
With the US keyboard input mode, key input becomes possible at Step 10, but the blue focus ring is missing.
Code of FocusTestApp.swift
import SwiftUI

@main
struct FocusTestApp: App {
    var body: some Scene {
        // Menu bar item with a window-style popover.
        MenuBarExtra("Test", systemImage: "t.square") {
            MainView()
        }.menuBarExtraStyle(.window)
    }
}

struct MainView: View {
    @State private var showingPIN: Bool = false

    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Button("PIN") {
                print("clicked")
                showingPIN = true
            }
        }
        .padding()
        .sheet(isPresented: $showingPIN) {
            PinView()
        }
    }
}

struct PinView: View {
    @Environment(\.presentationMode) var presentationMode
    @State private var pin: String = ""
    @FocusState private var pinIsFocused: Bool

    var body: some View {
        VStack {
            Image(systemName: "t.square")
                .resizable()
                .aspectRatio(contentMode: .fit)
                .frame(width: 64.0, height: 64.0)
                .foregroundColor(.accentColor)
            Text("Enter PIN code")
            HStack {
                TextField("", text: $pin)
                    .font(Font.system(size: 28, design: .default))
                    .frame(width: 4 * 28.0, height: 28.0)
                    .focusable()
                    .focused($pinIsFocused)
            }
            .onAppear {
                // Request focus as soon as the field appears.
                pinIsFocused = true
            }
        }
        .padding()
    }
}
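A mitigation I have been experimenting with, offered as an assumption rather than a confirmed fix: make sure the app is active before presenting the sheet, since a MenuBarExtra window does not always become key, and a text field in a non-key window cannot become first responder (add import AppKit if the compiler cannot find NSApp):

Button("PIN") {
    // Assumption: activate the app so the sheet's window can become key
    // and the text field can take first-responder status.
    NSApp.activate(ignoringOtherApps: true)
    showingPIN = true
}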
I would like to know whether NullAudio.c is an official SDK sample or not,
and the reason the enum and UID are defined in NullAudio.c rather than in SDK header files.
I tried to use kObjectID_Mute_Output_Master, but it is defined with a different value in each third-party plugin:
kObjectID_Mute_Output_Master = 10 // NullAudio.c
kObjectID_Mute_Output_Master = 9 // https://github.com/ExistentialAudio/BlackHole
kObjectID_Mute_Output_Master = 6 // https://github.com/q-p/SoundPusher
I can build BlackHole and SoundPusher, and these plugins work.
In my opinion this enum should be defined in an SDK header and keep the same value in every SDK version.
I would like to know why third parties define different values.
If you know the history of NullAudio.c, please let me know.
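One observation that may partly answer this, stated as my understanding rather than from documentation: AudioServerPlugIn object IDs are private to each plugin. The HAL only requires that they are unique within the plugin and handed back consistently, so there is nothing for an SDK header to standardize. A client can avoid the IDs entirely by querying the device property instead; a sketch, assuming deviceID is a valid AudioDeviceID:

import CoreAudio

// Query the device's output mute state without touching any
// plugin-internal object IDs.
func isMuted(deviceID: AudioDeviceID) -> Bool? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyMute,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)
    guard AudioObjectHasProperty(deviceID, &address) else { return nil }
    var mute: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &mute)
    return status == noErr ? (mute != 0) : nil
}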
I have an old ScreenCaptureKit sample downloaded in Oct 2022.
That sample worked in Oct 2022, but as of Apr 2024 it no longer works on Sonoma 14.4.1 on an M1 MacBook. It only shows a black screen.
I also downloaded the updated ScreenCaptureKit sample and tested it. It works on Sonoma 14.4.1 on the M1 MacBook. I noticed the latest sample has SCContentSharingPicker and other changes.
My own screen capture application is based on the old ScreenCaptureKit sample, and it also only shows a black screen.
Do I have to add SCContentSharingPicker and SCContentSharingPickerObserver to my application to capture the screen on Sonoma?
Is the old way of capturing the screen, without SCContentSharingPicker, no longer supported on Sonoma?
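For reference, this is the shape of the picker flow as I read it in the updated sample; a sketch assuming macOS 14, not a confirmed requirement:

import Foundation
import ScreenCaptureKit

final class PickerDemo: NSObject, SCContentSharingPickerObserver {
    func activate() {
        let picker = SCContentSharingPicker.shared
        picker.add(self)        // register this observer
        picker.isActive = true  // opt in to the system-level picker
        picker.present()        // show the picker UI
    }
    func contentSharingPicker(_ picker: SCContentSharingPicker,
                              didUpdateWith filter: SCContentFilter,
                              for stream: SCStream?) {
        // Start a new SCStream (or update the running one) with this filter.
        print("picked content, filter: \(filter)")
    }
    func contentSharingPicker(_ picker: SCContentSharingPicker,
                              didCancelFor stream: SCStream?) {
        print("picker cancelled")
    }
    func contentSharingPickerStartDidFailWithError(_ error: Error) {
        print("picker failed: \(error)")
    }
}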
I am working on the ScreenCaptureKit sample with SCContentSharingPickerObserver.
My target is SwiftUI-based and calls an Objective-C class method, so I added [MyTarget]-Bridging.h and [MyTarget]-Swift.h.
I get a compile error, unknown class name SCContentSharingPickerObserver, in [MyTarget]-Swift.h. But I do not know how to fix this error, since [MyTarget]-Swift.h is an Xcode-generated file.
The macOS deployment target is set to 14.0, and the Swift language version is 5.
Does anyone know how to fix this error, or should I wait for an Xcode update?
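One pattern that sometimes causes unknown-type errors in a generated -Swift.h, offered as an assumption rather than a verified diagnosis: the generated header can reference framework types before any declaration of them is visible. If that is what is happening here, adding @import ScreenCaptureKit; (or #import <ScreenCaptureKit/ScreenCaptureKit.h>) above the #import "[MyTarget]-Swift.h" line in each Objective-C file that includes the generated header would make the symbol visible.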
I made a CameraExtension and installed it via OSSystemExtensionRequest.
I got the success callback. I uninstalled the old version of my CameraExtension and installed the new version.
The "systemextensionsctl list" command shows "[activated enabled]" for the new version.
But no daemon process for my CameraExtension is running. I need to reboot the OS to start the daemon process. This issue is new in macOS Sonoma 14.5; I did not see it on 14.4.x.
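For context, a minimal sketch of the activation flow described above; the bundle identifier is a placeholder and real code would do more in the delegate:

import Foundation
import SystemExtensions

// Minimal delegate; real code would handle approval and errors properly.
final class InstallDelegate: NSObject, OSSystemExtensionRequestDelegate {
    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        return .replace // replace the old version with the new one
    }
    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {}
    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        print("activation finished: \(result.rawValue)") // success callback arrives here
    }
    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
        print("activation failed: \(error)")
    }
}

let delegate = InstallDelegate()
let request = OSSystemExtensionRequest.activationRequest(
    forExtensionWithIdentifier: "com.example.MyApp.CameraExtension", // placeholder
    queue: .main)
request.delegate = delegate
OSSystemExtensionManager.shared.submitRequest(request)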
I wrote a simple NSMutableData test project.
I profiled it with the Allocations instrument. It shows that total bytes for alloc1() are 55 MB.
But alloc1() is only called once, and the allocated size should be 1 MB. I cannot find the reason for the 55 MB allocation in alloc1().
To reproduce, replace ViewController.m in a fresh macOS App project on Xcode 13 with this code:
#import "ViewController.h"
@implementation ViewController {
NSTimer *mTimer;
NSMutableData *mData1;
NSMutableData *mData2;
}
- (void)viewDidLoad {
[super viewDidLoad];
mData1 = nil;
mData2 = nil;
mTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 target:self
selector:@selector(timer_cb) userInfo:nil repeats:YES];
}
- (void) timer_cb {
if (mData1 == nil) {
[self alloc1];
}
if (mData2 == nil) {
[self alloc2];
}
[self copy1];
}
- (void) alloc1 {
NSLog(@"alloc1");
mData1 = [NSMutableData dataWithCapacity:1024*1024];
}
- (void) alloc2 {
NSLog(@"alloc2");
mData2 = [NSMutableData dataWithCapacity:1024*1024];
[mData2 resetBytesInRange:NSMakeRange(0, 1024*1024)];
}
- (void) copy1 {
[mData1 replaceBytesInRange:NSMakeRange(0, 1024*1024) withBytes:mData2.bytes];
}
@end
I tested with the ScreenCaptureKit example and succeeded in getting the desktop image as an IOSurface.
I know an IOSurface holds GPU memory and is not as easy to access as DRAM.
Do you know a way to compress the IOSurface to H.264?
Or do I have to convert the IOSurface to a CVPixelBuffer first?
If you have any sample code for handling an IOSurface, it would be useful to me.
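A sketch of the CVPixelBuffer route, under the assumption that a VTCompressionSession has already been created and its output callback set up elsewhere; CVPixelBufferCreateWithIOSurface wraps the surface without copying it:

import CoreMedia
import CoreVideo
import IOSurface
import VideoToolbox

// Wrap an IOSurface in a CVPixelBuffer (no copy) and submit it to an
// existing compression session.
func encode(surface: IOSurfaceRef, session: VTCompressionSession, pts: CMTime) {
    var pixelBuffer: Unmanaged<CVPixelBuffer>?
    let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault,
                                                  surface, nil, &pixelBuffer)
    guard status == kCVReturnSuccess,
          let pb = pixelBuffer?.takeRetainedValue() else { return }
    VTCompressionSessionEncodeFrame(session,
                                    imageBuffer: pb,
                                    presentationTimeStamp: pts,
                                    duration: .invalid,
                                    frameProperties: nil,
                                    sourceFrameRefcon: nil,
                                    infoFlagsOut: nil)
}

Note that the SCStream output callback also delivers CMSampleBuffers whose image buffers are already IOSurface-backed CVPixelBuffers, so depending on where the IOSurface comes from, the wrapping step may not even be needed.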
I am trying to run the ScreenCaptureKit sample code.
The sample requires macOS 13.0 for audio capture.
When I run it, the app shows "No screen recording permission".
I granted Screen Recording permission in System Settings -> Privacy & Security.
But the same error happens, and I cannot find a way to grant the permission.
I tried restarting the app, restarting Xcode, rebooting macOS, and
rm -rf ~/Library/Developer/Xcode/DerivedData/CaptureSample-...
The sample app worked on Monterey after commenting out "streamConfig.capturesAudio" and related code; this permission issue did not happen on Monterey.
Env:
macOS Ventura 13.0
Xcode 14.1 (14B47b)
Sample code URL : https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos?language=objc
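A quick check of what the system actually thinks, as a sketch: if this prints false even after granting permission, my assumption (given the DerivedData angle above) is that the grant is attached to a stale copy of the binary, and resetting the entry with "tccutil reset ScreenCapture <bundle id>" might be worth a try.

import CoreGraphics

// Preflight the screen-recording TCC grant for this process.
if CGPreflightScreenCaptureAccess() {
    print("screen capture access: granted")
} else {
    // Triggers the system prompt (at most once per session).
    let granted = CGRequestScreenCaptureAccess()
    print("screen capture access requested, granted: \(granted)")
}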
I am trying to get an AVCaptureDevice instance for a virtual audio plugin, like BlackHole.
I need to call AVCaptureDevice.DiscoverySession, because the old method (AVCaptureDevice.devicesWithMediaType) is deprecated.
First, I cannot find an enum case for virtual audio plugins. I tried .externalUnknown and .builtInMicrophone; both results are empty.
I would like to know how to list virtual microphones and get an AVCaptureDevice instance.
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.externalUnknown],
    mediaType: .audio,
    position: .unspecified
)
let devs = deviceDiscoverySession.devices
print("devices=\(devs)") // empty list
I need to develop an emergency communication app. The app is not used frequently, but it needs to start immediately when the user has to make an emergency communication.
If the app is not used for a month, it gets uninstalled by the "Offload Unused Apps" feature. Is there a way to set a flag in the app to prevent this uninstall?
If an application's Info.plist could carry a setting like "prevent offloading when unused", that would be great.
If I perform these tasks in a random order, my CMIO CameraExtension goes into an unstable state:
Copy MyApp.app under /Applications or /Applications/MyAppGroup/
Install from MyApp by sending OSSystemExtensionRequest.activationRequest
Check the install state with the command "systemextensionsctl list"
Uninstall from MyApp by sending OSSystemExtensionRequest.deactivationRequest
Remove /Applications/MyAppGroup/ from the command line and Finder
Remove /Applications/MyApp.app from the command line and Finder
Kill MyApp.app during the activationRequest
Once my CMIO CameraExtension is in this unstable state, it is impossible to remove it in the normal way:
"systemextensionsctl list" shows my extension as activated.
Removing it by API fails with code=4.
Removing the MyApp.app file does not remove the CameraExtension.
The only way to remove the CameraExtension is to boot macOS into recovery mode, disable SIP, and run "systemextensionsctl uninstall".
An Audio HAL extension, by contrast, is file-based and atomic: I can check its existence with "ls" and remove it with "rm -rf". I have never hit this unstable state with it.
I use the VideoToolbox hardware H.264 encoder on an M1 MacBook for screen mirroring. I need to run the encoder in minimal-delay mode.
I use these values as the encoderSpecification:
kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder:true
kVTCompressionPropertyKey_ProfileLevel:kVTProfileLevel_H264_Baseline_AutoLevel
kVTCompressionPropertyKey_RealTime:true
kVTCompressionPropertyKey_AllowFrameReordering:false
I set the presentation timestamp to the current time.
In the compression callback, I get encoded frames in the wrong order:
[13.930533]encode pts=...13.930511
[13.997633]encode pts=...13.997617
[14.013678]compress callback with pts=...13.997617 dts=...13.930511
[14.023443]compress callback with pts=...13.930511 dts=...13.997617
([]: log time, pts: presentation timestamp, dts: decode timestamp)
AllowFrameReordering is not working as I expected.
If I need to set another property, please let me know.
I also do not want two video frames of buffering. If you know a setting for no frame buffering, please let me know.
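One thing worth checking, stated as an assumption rather than a confirmed answer: kVTCompressionPropertyKey_ProfileLevel, _RealTime, and _AllowFrameReordering are session properties, while the encoderSpecification dictionary is only consulted when choosing an encoder, so compression property keys placed there may simply be ignored, leaving B-frame reordering enabled. Setting them on the session after creation would look like this:

import VideoToolbox

// Apply the low-delay properties via VTSessionSetProperty,
// not via the encoderSpecification dictionary.
func configureLowDelay(_ session: VTCompressionSession) {
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_AllowFrameReordering,
                         value: kCFBooleanFalse)
    // Baseline profile has no B-frames, which also rules out reordering.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_H264_Baseline_AutoLevel)
}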
I made a "Camera Extension" target in an Xcode macOS Swift app.
I got Swift code with CMIOExtensionDeviceSource.
I added NSLog() and String.write() to a file under FileManager.default.temporaryDirectory.
My camera extension installation succeeded, and it runs with FaceTime.
But I cannot see the NSLog output or the debug temp file in Xcode or Console.
How can I see debug output from my Camera Extension?
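For what it is worth, os_log-based logging has been more visible to me than NSLog from extension processes; a sketch, where the subsystem string is a placeholder:

import os

// Unified-logging logger for the extension process.
let logger = Logger(subsystem: "com.example.MyCameraExtension", category: "extension")

func deviceStarted() {
    logger.debug("camera extension device started")
}

The output can then be followed from Terminal with: log stream --predicate 'subsystem == "com.example.MyCameraExtension"' --level debug, or by filtering Console.app by that subsystem. Note also that the extension is sandboxed, so its temporaryDirectory is inside the extension's own container rather than the user's temp directory, which may be why the debug file was not found where expected.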