I'm extending an AudioUnit to generate multi-channel output, and trying to write a unit test using AVAudioEngine. My test installs a tap on the AVAudioNode's output bus and ensures the output is not silence. This works for stereo.
I've currently got:
auto avEngine = [[AVAudioEngine alloc] init];
[avEngine attachNode:avAudioUnit];
auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:channelCount];
[avEngine connect:avAudioUnit to:avEngine.mainMixerNode format:format];
where avAudioUnit is my AU.
So it seems I need to do more than simply set the channel count on the format when connecting, because after this code runs, [avAudioUnit outputFormatForBus:0].channelCount is still 2.
Printing the graph yields:
AVAudioEngineGraph 0x600001e0a200: initialized = 1, running = 1, number of nodes = 3
******** output chain ********
node 0x600000c09a80 {'auou' 'ahal' 'appl'}, 'I'
inputs = 1
(bus0, en1) <- (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 0x600000c09e00 {'aumx' 'mcmx' 'appl'}, 'I'
inputs = 1
(bus0, en1) <- (bus0) 0x600000c14300, {'augn' 'brnz' 'brnz'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
outputs = 1
(bus0, en1) -> (bus0) 0x600000c09a80, {'auou' 'ahal' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 0x600000c14300 {'augn' 'brnz' 'brnz'}, 'I'
outputs = 1
(bus0, en1) -> (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
So AVAudioEngine just silently ignores whatever channel counts I pass to it.
If I do:
auto numHardwareOutputChannels = [avEngine.outputNode outputFormatForBus:0].channelCount;
NSLog(@"hardware output channels %u\n", numHardwareOutputChannels);
I get 30, because I have an audio interface connected. So I would think AVAudioEngine would support this. I've also tried setting the format explicitly on the connection between the mainMixerNode and the outputNode to no avail.
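One thing worth ruling out: the AU itself may be advertising only stereo. AUAudioUnit exposes a channelCapabilities array (flattened input/output pairs, where -1 acts as a wildcard) that the engine consults when negotiating formats. A simplified sketch of interpreting such an array, using a hypothetical helper name:

```swift
// Hypothetical helper: check whether an AUAudioUnit-style channelCapabilities
// array (flattened [in, out, in, out, ...] pairs; -1 meaning "any count",
// simplified here) allows a given input/output channel combination.
func supportsChannels(capabilities: [Int], inputs: Int, outputs: Int) -> Bool {
    guard capabilities.count % 2 == 0 else { return false }
    for i in stride(from: 0, to: capabilities.count, by: 2) {
        let inCap = capabilities[i], outCap = capabilities[i + 1]
        let inOK = inCap == -1 || inCap == inputs
        let outOK = outCap == -1 || outCap == outputs
        if inOK && outOK { return true }
    }
    return false
}

// A generator declaring [0, -1] accepts any output channel count;
// [0, 2] is stereo-out only, which would explain the clamp to 2.
print(supportsChannels(capabilities: [0, -1], inputs: 0, outputs: 30)) // true
print(supportsChannels(capabilities: [0, 2], inputs: 0, outputs: 30))  // false
```

If the AU reports stereo-only capabilities, the connection format gets clamped regardless of what's passed to connect:to:format:.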
I'm getting this error when using fragmentLinkedFunctions in Metal.
Compiler failed to build request
exception: Error Domain=CompilerError Code=2 "
Linking two modules of different data layouts: '' is '' whereas '1' is 'e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v16:16:16-v24:32:32-v32:32:32-v48:64:64-v64:64:64-v96:128:128-v128:128:128-v192:256:256-v256:256:256-v512:512:512-v1024:1024:1024-n8:16:32'
SC compilation failure
More boolean const than hw allows" UserInfo={NSLocalizedDescription=
Linking two modules of different data layouts: '' is '' whereas '1' is 'e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v16:16:16-v24:32:32-v32:32:32-v48:64:64-v64:64:64-v96:128:128-v128:128:128-v192:256:256-v256:256:256-v512:512:512-v1024:1024:1024-n8:16:32'
SC compilation failure
More boolean const than hw allows}
Anyone know what that all means?
If I replace the body of my intersection function with just return {false, 0.0f}, I get only the "More boolean const than hw allows" error.
In Platforms State of the Union, there was a reference to "Custom Gestures" in SwiftUI, among new features. I didn't see anything about it in What's New with SwiftUI. Did I miss it? Anyone have more info?
I'm trying to implement de-noising of AO in my app, using the MPSDynamicScene example as a guide: https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene
In that example, it computes motion vectors in UV coordinates, resulting in very small values:
// Compute motion vectors
if (uniforms.frameIndex > 0) {
    // Map current pixel location to 0..1
    float2 uv = in.position.xy / float2(uniforms.width, uniforms.height);

    // Unproject the position from the previous frame then transform it from
    // NDC space to 0..1
    float2 prevUV = in.prevPosition.xy / in.prevPosition.w * float2(0.5f, -0.5f) + 0.5f;

    // Next, remove the jittering which was applied for antialiasing from both
    // sets of coordinates
    uv -= uniforms.jitter;
    prevUV -= prevUniforms.jitter;

    // Then the motion vector is simply the difference between the two
    motionVector = uv - prevUV;
}
Yet the documentation for MPSSVGF seems to indicate the offsets should be expressed in texels:
The motion vector texture must be at least a two channel texture representing how many texels
each texel in the source image(s) have moved since the previous frame. The remaining channels
will be ignored if present. This texture may be nil, in which case the motion vector is assumed
to be zero, which is suitable for static images.
Is this a mistake in the example code?
Asking because doing something similar in my own app leaves AO trails, which would indicate the motion vector texture values are too small in magnitude. I don't really see trails in the example, even when I speed up the animation, but that could be due to the example being monochrome.
Update:
If I multiply the UV offsets by the size of the texture, I get a bad result, which seems to indicate the header comment is misleading and the values are in fact expected in UV coordinates. So perhaps the trails I'm seeing in my app have some other cause.
I also wonder who is actually using this API other than me? I would think most game engines are doing their own thing. Perhaps some of Apple's own code uses it.
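For reference, the two conventions only differ by a scale, so the conversion I tried is just this (hypothetical helper names; width/height are the motion vector texture's dimensions):

```swift
// Scale a 0..1 (UV-space) motion vector, as the MPSDynamicScene sample
// computes it, up to a texel-space offset, as the MPSSVGF header describes.
func uvToTexels(_ mv: SIMD2<Float>, width: Float, height: Float) -> SIMD2<Float> {
    SIMD2<Float>(mv.x * width, mv.y * height)
}

// Inverse: scale a texel-space offset back down to UV space.
func texelsToUV(_ mv: SIMD2<Float>, width: Float, height: Float) -> SIMD2<Float> {
    SIMD2<Float>(mv.x / width, mv.y / height)
}

// A sub-pixel UV offset becomes a multi-texel offset at 1920x1080:
print(uvToTexels(SIMD2<Float>(0.01, -0.005), width: 1920, height: 1080))
```

Since the texel-space version produced the bad result, the denoiser evidently consumes the UV-space values despite what the header says.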
I've got the following code to generate an MDLMaterial from my own material data model:
public extension MaterialModel {
    var mdlMaterial: MDLMaterial {
        let f = MDLPhysicallyPlausibleScatteringFunction()
        f.metallic.floatValue = metallic
        f.baseColor.color = CGColor(red: CGFloat(color.x), green: CGFloat(color.y), blue: CGFloat(color.z), alpha: 1.0)
        f.roughness.floatValue = roughness
        return MDLMaterial(name: name, scatteringFunction: f)
    }
}
When exporting to OBJ, I get the expected material properties:
# Apple ModelI/O MTL File: testExport.mtl
newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
However when exporting USD I just get:
#usda 1.0
(
    defaultPrim = "_0"
    endTimeCode = 0
    startTimeCode = 0
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Obj0"
{
    def Mesh "_"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
    }
    def Mesh "_0"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
        matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
        uniform token[] xformOpOrder = ["xformOp:transform"]
    }
}
There aren't any material properties.
FWIW, this specifies a set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html
(Note: there is no tag for ModelIO, so using SceneKit, etc.)
Anyone have a sense of what could cause this? Running on iOS 17.0.2. This seems to be a regression in iOS 17.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x100ad4437fff8)
* frame #0: 0x00000001ca2264ec AttributeGraph`AG::swift::existential_type_metadata::project_value(void const*) const + 40
frame #1: 0x00000001ca2349a8 AttributeGraph`AG::LayoutDescriptor::compare_existential_values(AG::swift::existential_type_metadata const*, unsigned char const*, unsigned char const*, unsigned int) + 108
frame #2: 0x00000001ca21b938 AttributeGraph`AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 560
frame #3: 0x00000001ca21b9b8 AttributeGraph`AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 688
frame #4: 0x00000001ca21b674 AttributeGraph`AG::LayoutDescriptor::compare(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 96
frame #5: 0x00000001ca21afb0 AttributeGraph`AGGraphSetOutputValue + 268
frame #6: 0x00000001a7bdd924 SwiftUI`___lldb_unnamed_symbol227590 + 72
frame #7: 0x00000001a6ce9194 SwiftUI`___lldb_unnamed_symbol111702 + 20
frame #8: 0x000000019bca3994 libswiftCore.dylib`Swift.withUnsafePointer<τ_0_0, τ_0_1>(to: inout τ_0_0, _: (Swift.UnsafePointer<τ_0_0>) throws -> τ_0_1) throws -> τ_0_1 + 28
frame #9: 0x00000001a6c6d70c SwiftUI`___lldb_unnamed_symbol110270 + 1592
frame #10: 0x00000001a7bdeb3c SwiftUI`___lldb_unnamed_symbol227617 + 408
frame #11: 0x00000001a7bde698 SwiftUI`___lldb_unnamed_symbol227614 + 876
frame #12: 0x00000001a7619cfc SwiftUI`___lldb_unnamed_symbol184045 + 32
frame #13: 0x00000001ca21e854 AttributeGraph`AG::Graph::UpdateStack::update() + 512
frame #14: 0x00000001ca215504 AttributeGraph`AG::Graph::update_attribute(AG::data::ptr<AG::Node>, unsigned int) + 424
frame #15: 0x00000001ca21ff58 AttributeGraph`AG::Subgraph::update(unsigned int) + 848
frame #16: 0x00000001a7a621d4 SwiftUI`___lldb_unnamed_symbol216794 + 384
frame #17: 0x00000001a7a63610 SwiftUI`___lldb_unnamed_symbol216852 + 24
frame #18: 0x00000001a710a638 SwiftUI`___lldb_unnamed_symbol143862 + 28
frame #19: 0x00000001a7b55a0c SwiftUI`___lldb_unnamed_symbol223201 + 108
frame #20: 0x00000001a7b481f4 SwiftUI`___lldb_unnamed_symbol223031 + 96
frame #21: 0x00000001a710187c SwiftUI`___lldb_unnamed_symbol143639 + 84
frame #22: 0x00000001a7a635d8 SwiftUI`___lldb_unnamed_symbol216851 + 200
frame #23: 0x00000001a7a634c4 SwiftUI`___lldb_unnamed_symbol216850 + 72
frame #24: 0x00000001a74514c0 SwiftUI`___lldb_unnamed_symbol170645 + 28
frame #25: 0x00000001a6d196d4 SwiftUI`___lldb_unnamed_symbol114472 + 120
frame #26: 0x00000001a6d19780 SwiftUI`___lldb_unnamed_symbol114473 + 72
frame #27: 0x00000001a490ad94 UIKitCore`_UIUpdateSequenceRun + 84
frame #28: 0x00000001a490a484 UIKitCore`schedulerStepScheduledMainSection + 144
frame #29: 0x00000001a490a540 UIKitCore`runloopSourceCallback + 92
frame #30: 0x00000001a2684acc CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28
frame #31: 0x00000001a2683d48 CoreFoundation`__CFRunLoopDoSource0 + 176
frame #32: 0x00000001a26824fc CoreFoundation`__CFRunLoopDoSources0 + 244
frame #33: 0x00000001a2681238 CoreFoundation`__CFRunLoopRun + 828
frame #34: 0x00000001a2680e18 CoreFoundation`CFRunLoopRunSpecific + 608
frame #35: 0x00000001e51415ec GraphicsServices`GSEventRunModal + 164
frame #36: 0x00000001a4a8f350 UIKitCore`-[UIApplication _run] + 888
frame #37: 0x00000001a4a8e98c UIKitCore`UIApplicationMain + 340
frame #38: 0x00000001a7457354 SwiftUI`___lldb_unnamed_symbol171027 + 176
frame #39: 0x00000001a7457198 SwiftUI`___lldb_unnamed_symbol171025 + 152
frame #40: 0x00000001a70d4434 SwiftUI`___lldb_unnamed_symbol142421 + 128
I'm getting
Thread 5: EXC_RESOURCE (RESOURCE_TYPE_MEMORY: high watermark memory limit exceeded) (limit=6 MB)
My thumbnails do render (and they require more than 6 MB to do so), so I wonder about the behavior here. Does the OS try to render thumbnails with a very low memory limit and then retry if that fails?
I'm trying to get boost to compile for the iOS simulator on my M2 Mac.
I've got this script:
set -euxo pipefail
# See https://formulae.brew.sh/formula/boost
# See https://stackoverflow.com/questions/1577838/how-to-build-boost-libraries-for-iphone
wget https://boostorg.jfrog.io/artifactory/main/release/1.83.0/source/boost_1_83_0.tar.bz2
tar xjf boost_1_83_0.tar.bz2 # .bz2 archive, so bzip2 (-j), not gzip (-z)
mv boost_1_83_0 boost
root=`pwd`
cd boost
B2_ARGS="-a -j12 --with-iostreams --with-regex"
# Build for simulator
./bootstrap.sh --prefix=$root/install-ios-sim
IOSSIM_SDK_PATH=$(xcrun --sdk iphonesimulator --show-sdk-path)
cat << EOF >> project-config.jam
# IOS Arm Simulator
using clang : iphonesimulatorarm64
: xcrun clang++ -arch arm64 -stdlib=libc++ -std=c++20 -miphoneos-version-min=16.0 -fvisibility-inlines-hidden -target arm64-apple-ios16.0-simulator -isysroot $IOSSIM_SDK_PATH
;
EOF
./b2 $B2_ARGS --prefix=$root/install-ios-sim toolset=clang-iphonesimulatorarm64 link=static install
xcodebuild -create-xcframework thinks ./install-ios-sim/libboost_iostreams.a is not for the simulator.
Specifically, if you run the following after the build script, it will show the binary is ios-arm64.
xcodebuild -create-xcframework \
-library install-ios-sim/lib/libboost_iostreams.a \
-headers install-ios-sim/include \
-output boost.xcframework
I know how to use lipo, etc to determine the architecture of a library, but I don't know how create-xcframework differentiates a simulator binary from an iOS binary.
Note: I've also tried using the boost build script by Pete Goodliffe which generates an xcframework. However, I need a normal install of boost because I'm compiling other libraries against it. I couldn't get the script to do that. I also don't understand how the script successfully generates a simulator binary.
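As far as I can tell, create-xcframework reads the platform from each object's LC_BUILD_VERSION load command (platform 2 = iOS device, 7 = iOS simulator, per mach-o/loader.h), so inspecting the library with otool should show what it's seeing. I also suspect the -miphoneos-version-min flag in the script may stamp a device-flavored version load command despite the -simulator target, which would explain the ios-arm64 classification. A sketch, using a captured listing in place of the real library (which isn't built here):

```shell
# Real usage would be:
#   otool -l install-ios-sim/lib/libboost_iostreams.a | grep -B1 -A4 LC_BUILD_VERSION
# Here a captured LC_BUILD_VERSION listing stands in for the actual library.
otool_output='      cmd LC_BUILD_VERSION
  cmdsize 32
 platform 7
    minos 16.0
      sdk 17.0'

# Platform 7 is the iOS simulator; 2 would be an iOS device binary.
platform=$(echo "$otool_output" | awk '/platform/ { print $2 }')
if [ "$platform" = "7" ]; then
    echo "simulator binary"
else
    echo "device binary"
fi
```

If the objects instead show LC_VERSION_MIN_IPHONEOS or platform 2, that's the mismatch create-xcframework is reacting to.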
I've got a full-screen animation of a bunch of circles filled with gradients, with plenty of (careless) overdraw, plus real-time audio processing driving the animation, plus the overhead of SwiftUI's dependency analysis, and that app uses less energy (on iPhone 13) than the Xcode "Metal Game" template which is a rotating textured cube (a trivial GPU workload). Why is that? How can I investigate further?
Does CoreAnimation have access to a compositor fast-path that a Metal app cannot access?
Maybe another data point: when I do the same circles animation using SwiftUI's Canvas, the energy use is "Very High" and GPU utilization is also quite high. Eventually the phone's thermal state goes "Serious" and I get a message on the device that "Charging will resume when iPhone returns to normal temperature".
I'm trying to create a MTLFXTemporalScaler as follows (this is adapted from the sample code):
func updateTemporalScaler() {
    let desc = MTLFXTemporalScalerDescriptor()
    desc.inputWidth = renderTarget.renderSize.width
    desc.inputHeight = renderTarget.renderSize.height
    desc.outputWidth = renderTarget.windowSize.width
    desc.outputHeight = renderTarget.windowSize.height
    desc.colorTextureFormat = .bgra8Unorm
    desc.depthTextureFormat = .depth32Float
    desc.motionTextureFormat = .rg16Float
    desc.outputTextureFormat = .bgra8Unorm

    guard let temporalScaler = desc.makeTemporalScaler(device: device) else {
        fatalError("The temporal scaler effect is not usable!")
    }

    temporalScaler.motionVectorScaleX = Float(renderTarget.renderSize.width)
    temporalScaler.motionVectorScaleY = Float(renderTarget.renderSize.height)
    mfxTemporalScaler = temporalScaler
}
I'm getting the following error the 3rd time the code is called:
/AppleInternal/Library/BuildRoots/91a344b1-f985-11ee-b563-fe8bc7981bff/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Runtimes/MPSRuntime/Operations/RegionOps/ANRegion.mm:855: failed assertion `ANE intermediate buffer handle not same!'
When I copy the code out to a playground, it succeeds when called with the same sequence of descriptors. Does this seem like a bug with MTLFXTemporalScaler?
I've been using an XCFramework of LuaJIT for years now in my app. After upgrading to Xcode 16, calls to the LuaJIT interpreter started segfaulting on macOS in release mode only. Works on iOS (both debug/release). Works fine with Xcode 15.4 (both debug/release). I have a very simple repro.
Wondering what could actually cause that sort of thing, given it's the same XCFramework (i.e. precompiled with optimization on). Guessing this is something having to do with the way LuaJIT is called, or the environment in which it runs, when the optimizer is turned on.
https://github.com/LuaJIT/LuaJIT/issues/1290
(FB15512926)
In my Swift app, Xcode has decided to put various important settings like SWIFT_VERSION under "User Defined."
Does anyone know why?
I'm trying to switch to UIKit's document lifecycle due to serious bugs with SwiftUI's version.
However I'm noticing the template project from Xcode isn't compatible with Swift 6 (I already migrated my app to Swift 6.). To reproduce:
1. File -> New -> Project
2. Select "Document App" under iOS
3. Set "Interface: UIKit"
4. In Build Settings, change Swift Language Version to Swift 6
5. Run app
6. Tap "Create Document"
7. Observe: crash in _dispatch_assert_queue_fail
Does anyone know of a work around other than downgrading to Swift 5?
I'm streaming mp3 audio data using URLSession/AudioFileStream/AVAudioConverter and getting occasional silent buffers and glitches (little bleeps and whoops as opposed to clicks). The issues are present in an offline test, so this isn't an issue of underruns.
Doing some buffering on the input coming from the URLSession (URLSessionDataTask) reduces the glitches/silent buffers to rather infrequent, but they do still happen occasionally.
var bufferedData = Data()

func parseBytes(data: Data) {
    bufferedData.append(data)
    // XXX: this buffering reduces glitching
    // to rather infrequent. But why?
    if bufferedData.count > 32768 {
        bufferedData.withUnsafeBytes { (bytes: UnsafeRawBufferPointer) in
            guard let baseAddress = bytes.baseAddress else { return }
            let result = AudioFileStreamParseBytes(audioStream!,
                                                   UInt32(bufferedData.count),
                                                   baseAddress,
                                                   [])
            if result != noErr {
                print("❌ error parsing stream: \(result)")
            }
        }
        bufferedData = Data()
    }
}
No errors are returned by AudioFileStream or AVAudioConverter.
func handlePackets(data: Data,
                   packetDescriptions: [AudioStreamPacketDescription]) {
    guard let audioConverter else {
        return
    }
    var maxPacketSize: UInt32 = 0
    for packetDescription in packetDescriptions {
        maxPacketSize = max(maxPacketSize, packetDescription.mDataByteSize)
        if packetDescription.mDataByteSize == 0 {
            print("EMPTY PACKET")
        }
        if Int(packetDescription.mStartOffset) + Int(packetDescription.mDataByteSize) > data.count {
            print("❌ Invalid packet: offset \(packetDescription.mStartOffset) + size \(packetDescription.mDataByteSize) > data.count \(data.count)")
        }
    }
    let bufferIn = AVAudioCompressedBuffer(format: inFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maxPacketSize))
    bufferIn.byteLength = UInt32(data.count)
    for i in 0 ..< Int(packetDescriptions.count) {
        bufferIn.packetDescriptions![i] = packetDescriptions[i]
    }
    bufferIn.packetCount = AVAudioPacketCount(packetDescriptions.count)
    _ = data.withUnsafeBytes { ptr in
        memcpy(bufferIn.data, ptr.baseAddress, data.count)
    }
    if verbose {
        print("handlePackets: \(data.count) bytes")
    }

    // Setup input provider closure
    var inputProvided = false
    let inputBlock: AVAudioConverterInputBlock = { packetCount, statusPtr in
        if !inputProvided {
            inputProvided = true
            statusPtr.pointee = .haveData
            return bufferIn
        } else {
            statusPtr.pointee = .noDataNow
            return nil
        }
    }

    // Loop until converter runs dry or is done
    while true {
        let bufferOut = AVAudioPCMBuffer(pcmFormat: outFormat, frameCapacity: 4096)!
        bufferOut.frameLength = 0
        var error: NSError?
        let status = audioConverter.convert(to: bufferOut, error: &error, withInputFrom: inputBlock)
        switch status {
        case .haveData:
            if verbose {
                print("✅ convert returned haveData: \(bufferOut.frameLength) frames")
            }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(haveData) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
        case .inputRanDry:
            if verbose {
                print("🔁 convert returned inputRanDry: \(bufferOut.frameLength) frames")
            }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(inputRanDry) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
            return // wait for next handlePackets
        case .endOfStream:
            if verbose {
                print("✅ convert returned endOfStream")
            }
            return
        case .error:
            if verbose {
                print("❌ convert returned error")
            }
            if let error = error {
                print("error converting: \(error.localizedDescription)")
            }
            return
        @unknown default:
            fatalError()
        }
    }
}
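(For clarity: the isSilent check in the code above is my own extension, not an AVFoundation API. On the float channel data it amounts to a threshold scan, roughly:)

```swift
// Sketch of the silence test used above, on a plain sample array
// (the real extension reads AVAudioPCMBuffer.floatChannelData).
func isSilent(_ samples: [Float], threshold: Float = 1e-6) -> Bool {
    samples.allSatisfy { abs($0) < threshold }
}

print(isSilent([0, 0, 0]))      // true
print(isSilent([0, 0.25, 0]))   // false
```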
The document-based SwiftUI example app (https://developer.apple.com/documentation/swiftui/building-a-document-based-app-with-swiftui) doesn't specify a launch image.
It would seem per the HIG that the "pinkJungle" background in the app would be a decent candidate for a launch image, since it will be in the background when the document browser comes up.
However, when I specify it as the UIImageName, it is not aligned the same as the background image. I'm having trouble figuring out how it should be aligned to match. The launch image seems to be scaled up a bit compared to scaledToFill.
I suppose a launch storyboard might make this more explicit, but I still should be able to do it without one.
This is the image when displayed as the launch image:
and this is how it's rendered in the background right before the document browser comes up: