I've discovered what appears to be a system-level memory leak when pressing any key in SwiftUI projects. This issue occurs even in a completely empty SwiftUI project with no custom code or event handlers.
When monitoring with Instruments' Leaks tool, I observe multiple memory leaks each time any key is pressed. These leaks consist primarily of:
NSExtraData objects (240 bytes each)
NSMenuItem objects (112 bytes each)
Other AppKit and Foundation objects
Has anyone else encountered this issue? How can I fix this behavior? While the leaks are small (about 5-6KB per keypress), they could potentially accumulate in applications where keyboard input is frequent.
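For reference, a minimal sketch of the kind of empty project described (a macOS SwiftUI app target is assumed, since the leaked objects are AppKit types):

import SwiftUI

// A bare SwiftUI app with no custom code or key handlers; the leaks described above
// appear while recording with Instruments' Leaks template and pressing keys.
@main
struct LeakReproApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Press any key while the Leaks instrument is recording.")
        }
    }
}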
I am using Xcode 16 and macOS 14.7.2. Previously, using Instruments with an iPhone on iOS 14.3 worked normally, but since I upgraded to iOS 18, Instruments often can't find the library.
I have to restart Instruments to restore normal operation, but the problem occurs again after using it for a period of time.
Context
I created a short script to CPU-profile a program from the command line. I am able to record via the Instruments app, but when I try from the command line I get the error shown below. This example just profiles the grep command.
Error:
% cpu_profile /usr/bin/grep \
--recursive "Brendan Gregg" \
"$(xcode-select --print-path)/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk"
Profiling /usr/bin/grep into /tmp/cpu_profile_grep.trace
Starting recording with the CPU Profiler template. Launching process: grep.
Ctrl-C to stop the recording
Run issues were detected (trace is still ready to be viewed):
* [Error] Failed to start the recording: Failed to force all hardware CPU counters: 13.
Recording failed with errors. Saving output file...
Script:
#!/bin/sh
set -o errexit
set -o nounset

if [ "$#" -lt 1 ]
then
    echo "Usage $0 <program> [arguments...]" 1>&2
    exit 1
fi

PROGRAM="$(realpath "$1")"
shift
OUTPUT="/tmp/cpu_profile_$(basename "$PROGRAM").trace"
echo "Profiling $PROGRAM into $OUTPUT" 1>&2

# Delete potential previous traces
rm -rf "$OUTPUT"

xcrun xctrace record \
    --template 'CPU Profiler' \
    --no-prompt \
    --output "$OUTPUT" \
    --target-stdout - \
    --launch -- "$PROGRAM" "$@"

open "$OUTPUT"
I think the error has to do with xctrace based on this post, but according to this post it should have been resolved in macOS 15.4.
System
Chip: Apple M3 Pro
macOS: Sequoia 15.4.1
xctrace version: 16.0 (16E140)
xcrun version: 70.
Xcode version: 16.3 (16E140)
Working Screenshots from Instruments App:
I want to check the disassembly of a vImage function in Instruments.
But when I double-click the stack in Time Profiler, I see nothing.
My Mac has an Intel chip and my iPhone is a 14 Pro Max.
The Xcode version is 16.3.
How can I fix this (maybe there is no .dSYM in iOS device support?)
I am trying to record the requests and responses in a WKWebView, but Instruments does not seem to record them. Is this expected?
The webView is set to inspectable, and I am using the HTTP Traffic instrument.
All the requests the app is doing are recorded, but neither the original request for the webView, nor subsequent traffic is recorded.
When I use Safari to inspect the webView, all I see is the last page (even when I start the inspector before the first request is made).
How can I see these requests?
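For context, a minimal sketch of the web view setup described (the URL is a placeholder; isInspectable requires iOS 16.4 / macOS 13.3 or later):

import WebKit

// Minimal setup matching the description above; frame and URL are placeholders.
func makeInspectableWebView() -> WKWebView {
    let webView = WKWebView(frame: .zero)
    webView.isInspectable = true
    webView.load(URLRequest(url: URL(string: "https://example.com")!))
    return webView
}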
I made a box with MDLMesh.newBox(). I added normals.
let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                             segments: SIMD3<UInt32>(2, 2, 2),
                             geometryType: MDLGeometryType.triangles,
                             inwardNormals: false,
                             allocator: allocator)
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.25)
After I convert to MTKMesh, the normals are (0, 0, 0) for a group of vertices. I can only inspect the geometry after converting to MTKMesh. Is there a way to use the Geometry Viewer on an MDLMesh?
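For reference, a minimal sketch of the conversion step mentioned above (the device and allocator setup are assumptions; the mesh is built the same way as in the snippet):

import MetalKit
import ModelIO

func makeBoxMesh() throws -> MTKMesh {
    let device = MTLCreateSystemDefaultDevice()!
    let allocator = MTKMeshBufferAllocator(device: device)
    let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                                 segments: SIMD3<UInt32>(2, 2, 2),
                                 geometryType: .triangles,
                                 inwardNormals: false,
                                 allocator: allocator)
    mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.25)
    // The zeroed normals are observed after this conversion (per the description above).
    return try MTKMesh(mesh: mdlMesh, device: device)
}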
Hello, I have a question about Core ML.
I loaded the Core ML model in my project and set the compute units to CPU+GPU.
When I used Instruments to analyze performance, I found that there is a "prepare GPU request" overhead before each inference. I also checked the freezing point graph and found that memory is frequently allocated.
Is this expected? Is there any way to avoid the frequent prepares?
I have tried some approaches, such as sharing memory for the predict interface's input parameters, but they seem to be ineffective.
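For context, a minimal sketch of the configuration described above ("MyModel" is a hypothetical Xcode-generated model class; error handling is omitted):

import CoreML

func makeModel() throws -> MyModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU  // the CPU+GPU setting mentioned above
    return try MyModel(configuration: config)
}
// Each prediction call on the returned model is where the "prepare GPU request"
// overhead shows up in Instruments.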
Hi, I need help getting Instruments running to profile my application. I tried to profile my main app (Qt 5.15 framework, C++, Intel arch only) from Xcode. My app starts, and Instruments runs Time Profiler or Leaks for about 15 seconds and then quits. No crash, no message, nothing.
This has been happening for a while on my Mac Studio M1 Max running macOS 14.7.6 and the Xcode 15.4 IDE, with a toolchain from Xcode 14.3 for the qmake (Qt) project. However, this also happens if I set up a new vanilla SwiftUI project from scratch without any Qt stuff.
In addition to the Mac Studio, I also have a MacBook Pro M4 running macOS 15.5 and Xcode 16.4. On that machine I get the same results, no matter whether I try Instruments on my Qt project or on a vanilla SwiftUI project.
Also it does not make a difference if I change the toolchain with:
sudo xcode-select -s /Applications/Xcode_143.app
or
sudo xcode-select -s /Applications/Xcode_164.app.
Same results in either case.
I also tried switching to a Debug build when editing the scheme for profiling, but got no better results.
I also tried to launch Instruments from Xcode using the Open Developer Tool menu entry, but got no better results.
When I start Instruments first, run my program in Xcode and attach to it, I get the same results.
Do you have any advice what to check for or to setup, maybe in signing or such? I am probably missing something basic.
Thanks in advance
I kept CoreLocation’s startUpdatingLocation running for a full day and used Performance trace - PowerProfiler to track the power usage during that time. The trace file was successfully generated on the iOS device, and I later transferred it to my MacBook.
However, when I tried to open the .atrc file, I received the following warning:
The document cannot be imported because of an error: File ‘/Users/jun/Downloads/PowerProfiler_25-06-16_181049_to_25-06-17_091037_001.atrc’ doesn’t contain any events.
Why is this happening? Is there a known issue with PowerProfiler in iOS 26, or am I missing something in the tracing setup?
Note: The .aar file and the extracted .atrc file are not attached here, as forum uploads do not support these formats.
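For context, a minimal sketch of the location setup described above (it assumes the Background Modes > Location updates capability is enabled; authorization handling is reduced to the essentials):

import CoreLocation

final class LocationRunner: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()
        manager.allowsBackgroundLocationUpdates = true  // requires the background location capability
        manager.startUpdatingLocation()                 // left running for the full day of the power trace
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // Intentionally empty; the goal is only to keep location updates active during profiling.
    }
}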
Instruments: GPU Service reported error: Selected counter profile is not supported on target device
I could use the Metal System Trace before the most recent update, but now whenever I try to profile using the Metal Counter instrument, I get the
[Warning] GPU Service reported error: Selected counter profile is not supported on target device. What is the issue here?
It seems as though using any initializer of SubscriptionOfferView or StoreView will create a memory leak.
This can be reproduced simply by adding either of these to a SwiftUI view:
SubscriptionOfferView(groupID: "yourgroupID", visibleRelationship: .all, useAppIcon: true)
or
StoreView(ids: ["monthly", "yearly"])
Tested on iOS 26 beta 2
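For reference, a self-contained sketch of the StoreView case (the product IDs are the hypothetical ones from above; StoreKit views require iOS 17 or later):

import SwiftUI
import StoreKit

// Embedding the snippet above in a bare view is enough to observe the reported leak.
struct StoreLeakReproView: View {
    var body: some View {
        StoreView(ids: ["monthly", "yearly"])
    }
}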
Dear Apple Developer Support team,
I would like to request an official confirmation regarding the handling of transaction status in the App Store Server API, specifically for the GET /inApps/v1/transactions/{transactionId} endpoint.
As per our current understanding from the official documentation (Get Transaction Info), the API’s behavior appears to be:
If a transaction is finalized and successfully processed by App Store, querying this API will return HTTP 200 OK along with transaction details.
If a transaction is still in a pending or deferred state (such as awaiting Ask to Buy approval or pending authorization), the API will not return 200 and will instead respond with HTTP 404 Not Found or an appropriate error.
Could you please confirm if this behavior is accurate and officially supported?
Specifically:
Does a 200 OK response guarantee that a transaction is finalized and successfully recorded on App Store servers?
In cases where a transaction is pending approval (e.g. Ask to Buy), is it correct that GET /transactions/{transactionId} would return 404 Not Found until the transaction is finalized?
We would greatly appreciate your confirmation to align our server-side logic for transaction validation accordingly.
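For reference, a minimal sketch of the call in question (it assumes a pre-signed App Store Server API JWT and the production base URL; error handling is omitted):

import Foundation

func transactionStatus(transactionId: String, jwt: String) async throws -> Int {
    let url = URL(string: "https://api.storekit.itunes.apple.com/inApps/v1/transactions/\(transactionId)")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(jwt)", forHTTPHeaderField: "Authorization")
    let (_, response) = try await URLSession.shared.data(for: request)
    // Per the question above: does 200 always mean "finalized", and does a pending
    // (e.g. Ask to Buy) transaction surface as 404 until it completes?
    return (response as! HTTPURLResponse).statusCode
}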
Thank you very much for your support!
Kind regards,
cuongnx
Hi,
My name is Hani Nemati, and I work at Microsoft, where we support several macOS applications such as Microsoft Edge and Teams. I’m also the primary contributor to Microsoft Performance Tools for Apple (https://github.com/microsoft/Microsoft-Performance-Tools-Apple), an open-source project aimed at improving trace analysis across platforms.
We are exploring ways to enhance our performance tracing capabilities on macOS and are particularly interested in the ability to attach PMU (Performance Monitoring Unit) counters to context switch events during trace collection. For reference, this capability is supported on Linux via LTTng using the add-context option (https://lttng.org/man/1/lttng-add-context/v2.13), and on Windows through Windows Performance Recorder (WPR), which allows PMU counters to be added at the start and end of context switches, enabling delta computation.
Would it be possible to introduce similar support in Instruments for macOS?
I’d appreciate any guidance or suggestions you might have on this request.
Thank you,
Hani Nemati
Email: hanemati@microsoft
Hello,
I wanted to try the new Bottleneck Analysis mode showcased in a recent Apple video, but when I select the CPU Counters template in Instruments, there's no such option, just the same old "sample by Time/Events".
I have the latest Xcode 16.4 and macOS Sequoia 15.5; the system is an M4 Max. While Instruments shows version 16.0 in its About dialog for some reason (a bug?), it definitely comes from the Xcode 16.4 package and its build ID is the same (16F6) as Xcode 16.4's. I also checked, just in case, on another M1 system (fully updated as well), and it's all the same.
Any clues why Bottleneck analysis is missing?
Regards,
Maxim
Hello
We use Datadog Mobile Vitals in our app, and I'm trying to run some tools in Instruments for comparison. I'm not sure which instrument to use for some of these metrics:
Slow Renders
Description: With slow renders data, you can monitor which views are taking longer than 16 ms (i.e., missing 60 Hz) to render.
Instruments equivalent: Hangs, including micro-hangs (?)
CPU Ticks Per Second
Description: RUM tracks CPU ticks per second for each view and the CPU utilization over the course of a session. The recommended range is <40 for good and <80 for moderate.
Instruments equivalent: CPU Profiler (?)
Frozen Frames
Description: Frames that take longer than 700 ms to render appear stuck and unresponsive in your application. These are classified as frozen frames.
Instruments equivalent: Hangs longer than 500 ms (?)
Memory Utilization
Description: The amount of physical memory used by your application in bytes for each view, over the course of a session. The recommended range is <200 MB for good and <400 MB for moderate.
Instruments equivalent: Allocations (?)
Hello,
I have recently been using the new Power Profiler tool introduced in Xcode 26 to analyze the power consumption of my app. My app primarily operates in the background. During a profiling session of 5 hours and 30 minutes, I observed that the app was active in the background for 2 hours and 30 minutes, while it remained in a suspended state for the remaining 3 hours.
While the Power Profiler allows me to identify spikes in CPU, networking, and other resource usage at specific points, it is difficult to determine whether these values are objectively considered high.
For example, in my case, the total QoS Execution Time under CPU Impact recorded during the 5 hours and 30 minutes was 12.18 seconds. I am wondering whether this is considered a good value.
Could you please advise on the following points?
1. Is there a commonly accepted or recommended ratio between app active time and CPU time that developers should aim for?
2. Are there any guidelines or reference materials on how to interpret CPU usage and other resource metrics for apps that primarily run in the background?
Any insights or advice would be greatly appreciated.
Thank you.
Is there an xctrace instrument capable of capturing the complete control flow of a process? So far the best I can find is high-frequency sampling, but what I need is a trace of all machine instructions executed.
This is easily done on Linux/Intel using the perf tool, which provides access to Intel's hardware-assisted tracing module (Intel Processor Trace). According to the Arm specification, my Mac mini M1 (armv8.4-a) and M4 (armv9.2-a) both have hardware support in the CoreSight ETM (Embedded Trace Macrocell) for full instruction tracing (i.e., no sampling, no gaps, no statistics; the complete execution path is captured). But it's not clear how I can access these features, or whether they are supported by the macOS XNU kernel at all.
After hours of searching online, it's nothing but dead ends. Any suggestions for documentation, Xcode tools, open-source tools, or built-in macOS tools would be much appreciated!
We have two iPhones (a 16 Pro on iOS 18.2 and a regular 16 on iOS 18.5) in Single App Mode, and sometimes we need to shut them down manually. After holding Power and Volume Up, the shutdown screen appears as usual, but the slider does not respond to touch, nor does the rest of the screen. After a force restart using the volume buttons, the issue disappears, but it reappears after the next phone restart.
If we disable Single App Mode, the issue is gone and the touch screen works every time on the shutdown screen. Both iPhones show the same behavior.
Is there any other way to reliably shut down the iPhone locally without using MDM, or a way to fix this issue?
According to the ARM documentation for the CPU models available in Apple Silicon, the CoreSight implementation includes an Embedded Trace Macrocell which can perform a complete "Instruction Trace" (https://developer.arm.com/documentation/102119/0200/What-is-trace-). Although other operating systems such as Linux make this easy, we have not been able to find any tools or even a system-level API for accessing this feature of the ETM.
In the "Instruments" window of Xcode 16+, there is a "Processor Trace" instrument, but this performs sampling and is totally unrelated to the Instruction Trace we need for debugging and analysis purposes. Because it produces a complete, contiguous sequence of branch instructions, the Instruction Trace is essential for identifying precise execution behaviors that are otherwise invisible to the developer. On other platforms, an alternative is debugger scripting, but we have found far too many bugs and reliability issues with the macOS implementation of lldb.
Any suggestions would be greatly appreciated!
I tried using Create ML of Xcode 26.0 beta 7 to generate a model using the "Word Tagging" template, and I received the error: Training progress unavailable - Unexpected error.
Using Create ML of Xcode 16.4 with the same documentation, I was able to build the model and use it in a test app.
I'd like to understand why Create ML of Xcode 26 no longer works.
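For comparison, a hedged sketch of training the same kind of word-tagging model in code with the CreateML framework (hypothetical file path and column names; run, for example, from a macOS playground), which can help isolate whether the failure is specific to the Create ML app template:

import CreateML
import Foundation

// Hypothetical training file and column names; adjust to match the actual data.
let trainingData = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/WordTagging.json"))
let tagger = try MLWordTagger(trainingData: trainingData,
                              tokenColumn: "tokens",
                              labelColumn: "labels")
try tagger.write(to: URL(fileURLWithPath: "/path/to/WordTagger.mlmodel"))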