Does this setup look correct for precompiled headers? When I look at per-file compile times they're way too long, as if the PCH isn't being used.
In the App.pch file I set:
#include "MyConfig.h"
Then in Build Settings:
GCC_PREFIX_HEADER = pathto/App.pch
GCC_PRECOMPILE_PREFIX_HEADER = YES
I also force-include the header, which avoids having to include it first in every file:
-include MyConfig.h
or should it be:
-include App.pch
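For reference, here's a minimal sketch of how I'd expect these to sit together in an xcconfig (paths are placeholders; the force-include goes through OTHER_CFLAGS):
```
// Sketch only: an App.xcconfig mirroring the settings above.
GCC_PREFIX_HEADER = pathto/App.pch
GCC_PRECOMPILE_PREFIX_HEADER = YES

// Force-include variant, if going that route:
OTHER_CFLAGS = $(inherited) -include MyConfig.h
```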
I've been looking at C++ build times with Xcode. Apple's STL header for basic_string instantiates basic_string for 5 character types in every file that includes it. I'm trying to use alternate STL libraries like EASTL and FASTL, which have their own string types, but I still have to mix in the following standard headers where there are holes:
1. <mutex>, <condition_variable>, <thread> include <system_error> which includes <string>
2. <random> includes <string>.
So these slow the build even when headers are precompiled, and they pull 5 versions of basic_string (char, char8_t, char16_t, char32_t, and wchar_t flavors) into every file that includes them, even indirectly through the headers above. I only use
string and basic_string<char>
with UTF-8 data anyhow. I don't need the other 4 types.
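As a minimal illustration of the transitive include (assuming Apple clang with libc++, per the include chain above):
```
// <mutex> pulls in <system_error>, which pulls in <string>, so std::string is
// already visible here without including <string> directly.
#include <mutex>

std::string leaked = "no direct #include <string>";  // compiles via the transitive include
```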
How can this be improved?
I have a shader that sets rasterizationEnabled to NO on the MTLRenderPipelineDescriptor. It's essentially using a vertex shader to do compute: the vertex shader reads vertex buffer data and writes out to another buffer from within the same shader.
The problem is I don't know how to correctly wrap this in a render pass. Creating the encoder from a MTLRenderPassDescriptor complains that no width/height/format or render target/depth attachment is set on the pass. But the pass doesn't depend on rasterization or render textures. What is the correct way to specify the enclosing render pass to Metal?
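A sketch of what I'm trying (hedged; commandBuffer, vertexOnlyPipeline, and the buffers are placeholders, and I'm not certain this is the intended pattern): MTLRenderPassDescriptor allows an attachment-less pass if it's given explicit renderTargetWidth/renderTargetHeight and a defaultRasterSampleCount.
```
// Attachment-less render pass sketch: dummy extent, nothing is rasterized.
MTLRenderPassDescriptor* pass = [MTLRenderPassDescriptor renderPassDescriptor];
pass.renderTargetWidth = 1;
pass.renderTargetHeight = 1;
pass.defaultRasterSampleCount = 1;

id<MTLRenderCommandEncoder> enc =
    [commandBuffer renderCommandEncoderWithDescriptor:pass];
[enc setRenderPipelineState:vertexOnlyPipeline];  // pipeline with rasterizationEnabled = NO
[enc setVertexBuffer:srcBuffer offset:0 atIndex:0];
[enc setVertexBuffer:dstBuffer offset:0 atIndex:1];
[enc drawPrimitives:MTLPrimitiveTypePoint vertexStart:0 vertexCount:vertexCount];
[enc endEncoding];
```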
I need to be able to drop these file types onto my app kram, but looking them up with the UTType library reports the following:
metallib - "application/octet-stream"
gltf - "model/gltf+json",
glb - "model/gltf+binary"
[UTType typeWithFilenameExtension: @"metallib"].identifier,
[UTType typeWithFilenameExtension: @"gltf"].identifier,
[UTType typeWithFilenameExtension: @"glb"].identifier
These come back as dynamic identifiers:
dyn.ah62d4rv4ge8043pyqf0g24pc, // ick - metallib
dyn.ah62d4rv4ge80s5dyq2, // ick - gltf
dyn.ah62d4rv4ge80s5dc // ick - glb
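The usual fix I'm aware of is to declare imported types in the app's Info.plist so the lookups stop returning dyn.* identifiers. A hedged sketch for gltf (the org.khronos.gltf identifier and the public.json conformance are my guesses, not official declarations):
```
<key>UTImportedTypeDeclarations</key>
<array>
  <dict>
    <key>UTTypeIdentifier</key>
    <string>org.khronos.gltf</string>  <!-- assumed identifier -->
    <key>UTTypeDescription</key>
    <string>glTF model</string>
    <key>UTTypeConformsTo</key>
    <array><string>public.json</string></array>
    <key>UTTypeTagSpecification</key>
    <dict>
      <key>public.filename-extension</key>
      <array><string>gltf</string></array>
      <key>public.mime-type</key>
      <array><string>model/gltf+json</string></array>
    </dict>
  </dict>
</array>
```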
```
char str[256] = {};
strcpy(str, "unknown");
```
This shows up in the debugger as unknown\0\0\0\0\0\0\0\0\0\0\0....
It holds a simple 7-character string, but Xcode insists on displaying all 256 characters, even past the \0. That's the "Default" setting. "Show as C string" doesn't help either; it should only display up to the terminating \0.
Can this go back to the way it was before?
I have MSL .metal files generated by a parser that go into a blue folder reference. Xcode barely highlights them: "half" and "return" in purple and nothing else. .metal files that are included in the project get blue and other highlights.
It's also quite limiting that VSCode has HLSL and Metal plugins, but Xcode doesn't syntax highlight the source HLSL files that I generate the .metal files from. Going from HLSL or SPIR-V back to MSL is quite common, since there is no path from MSL to SPIR-V.
I'm working on a parser which translates HLSL to HLSL/MSL. But valid MSL isn't compiling when passing a depth2d to a class and its ctor. Using a ctor lets the globals be referenced as member variables by the MSL, which otherwise has to pass its parameters from call to call.
This reports the following, which makes no sense. The code is fine with texture2d and references, so it seems to be a Metal compiler bug. It says the ctor input needs to be in the device address space, but it's already declared as such. This limits any use of depth-style textures in MSL.
DepthTest.metal:31:16: error: no matching constructor for initialization of 'SamplePSNS'
SamplePSNS shader(shadowMap, sampleBorder);
^ ~~~~~~~~~~~~~~~~~~~~~~~
DepthTest.metal:18:5: note: candidate constructor not viable: address space mismatch in 1st argument ('depth2d<float>'), parameter type must be 'device depth2d<float> &'
SamplePSNS(
^
DepthTest.metal:5:8: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 2 were provided
struct SamplePSNS {
^
#include <metal_stdlib>
using namespace metal;

struct SamplePSNS {
    struct InputPS {
        float4 position [[position]];
    };

    device depth2d<float>& shadowMap;
    thread sampler& sampleBorder;

    float4 SamplePS(InputPS input) {
        return shadowMap.sample_compare(sampleBorder, input.position.xy, input.position.z);
    };

    SamplePSNS(
        device depth2d<float>& shadowMap,
        thread sampler& sampleBorder)
        : shadowMap(shadowMap),
          sampleBorder(sampleBorder)
    {}
};

fragment float4 SamplePS(
    SamplePSNS::InputPS input [[stage_in]],
    depth2d<float> shadowMap [[texture(0)]],
    sampler sampleBorder [[sampler(0)]])
{
    SamplePSNS shader(shadowMap, sampleBorder);
    return shader.SamplePS(input);
}
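A hedged workaround sketch (untested against this exact compiler version, but it sidesteps the address-space mismatch): hold the texture and sampler by value instead of by device reference, since texture handles are cheap to copy in MSL.
```
// Workaround sketch: members held by value, so no address-space constraint on the ctor arguments.
struct SamplePSNS2 {
    depth2d<float> shadowMap;
    sampler sampleBorder;

    SamplePSNS2(depth2d<float> shadowMap, sampler sampleBorder)
        : shadowMap(shadowMap), sampleBorder(sampleBorder) {}

    float4 SamplePS(float4 position) {
        return float4(shadowMap.sample_compare(sampleBorder, position.xy, position.z));
    }
};
```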
I never want to run a C++ app when there is a build failure, but Xcode seems to think this is okay to do. If you hit the Run button fast enough, it happens all the time. How can this be fixed?
I want to use half data, but it's unclear how the A-series GPUs handle interpolating it across the polygon. Adreno/Nvidia don't allow half in shader inputs/outputs due to banding. Mali recommends declaring half outputs from the VS to minimize the parameter buffer, and declaring float inputs in the FS.
Can Apple provide some insight as to best practices here?
The memory layout doesn't change in this sort of cast, and this is a common construct when transforming normals and tangents.
float3 normal = input.normal * (float3x3)skinTfm;
no matching conversion for functional-style cast from 'metal::float4x4' (aka 'matrix<float, 4, 4>') to 'metal::float3x3' (aka 'matrix<float, 3, 3>')
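A workaround sketch (my own helper, not part of MSL): build the upper-left 3x3 from the matrix columns explicitly, since MSL has no float4x4-to-float3x3 conversion.
```
// Extract the upper-left 3x3 (rotation/scale) from a 4x4 transform.
inline float3x3 toFloat3x3(float4x4 m) {
    return float3x3(m[0].xyz, m[1].xyz, m[2].xyz);
}

// float3 normal = input.normal * toFloat3x3(skinTfm);
```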
There was a good 2021 WWDC presentation on using ProMotion on iOS and Adaptive Sync (ProMotion) on macOS. But while the macOS side showed how to detect ProMotion (fullscreen + a min/max refresh interval mismatch), the iOS side doesn't have the same mechanism. The talk mentions Metal sample code, but I don't see ProMotion mentioned anywhere in the Metal samples when I search.
https://developer.apple.com/videos/play/wwdc2021/10147/
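For reference, this is roughly the macOS-side check the talk describes (a sketch; I'm assuming the NSScreen min/max refresh interval mismatch is the intended signal, and window is the app's fullscreen window):
```
NSScreen* screen = window.screen;
// Adaptive sync / ProMotion panels report differing min and max refresh intervals.
BOOL adaptiveSync = (screen.minimumRefreshInterval != screen.maximumRefreshInterval);
```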
This has been broken since macOS 12.0 Monterey. I am running an x64 app under Rosetta 2 and trying to test ProMotion. Is this possibly fixed in macOS 14.0? I see mention of a universal CAMetalDisplayLink finally, so we can also try that, but it won't help testing on older macOS.
https://developer.apple.com/forums/thread/701855?answerId=708409022#708409022
I'm trying to test HDR by running an EDR-capable iOS app on macOS. Both results below are from the same M2 MBP.
On macOS, NSScreen.maximumPotentialExtendedDynamicRangeColorComponentValue returns 16. That's what I'd expect on a mini-LED display like this one.
On iOS-on-macOS, UIScreen.potentialEDRHeadroom reports 1.0. That's not correct.
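The two queries being compared, for reference (a sketch; the AppKit line is from the native macOS side, the UIKit line from the iOS app running on the Mac):
```
// Native macOS (AppKit):
CGFloat macHeadroom = NSScreen.mainScreen.maximumPotentialExtendedDynamicRangeColorComponentValue; // 16 on this panel

// iOS app running on macOS (UIKit):
CGFloat iosHeadroom = UIScreen.mainScreen.potentialEDRHeadroom; // reports 1.0
```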
Since this API requires &__dso_handle instead of the standard file/line/func, I had to modify my entire log system to pass it down from the macro call sites.
I have many callbacks that just forward data from other C++ libraries, and those never supply a dso_handle. So it's great how this logging system breaks most logger setups, and it doesn't have a warning level to match fault/error. I also have the forwarded threadName, timestamp, etc., and nowhere to store them in os_log. syslog was more powerful and clearer than os_log, but I'm sure it's now too late to head down a more reasonable path.
So I pass &__dso_handle all the way down to the log call and hand it into the macro:
#define my_os_log_with_type(dso, log, type, format, ...) __extension__({ \
    os_log_t _log_tmp = (log); \
    os_log_type_t _type_tmp = (type); \
    if (os_log_type_enabled(_log_tmp, _type_tmp)) { \
        OS_LOG_CALL_WITH_FORMAT(_os_log_impl, \
            ((void*)dso, _log_tmp, _type_tmp), format, ##__VA_ARGS__); \
    } \
})
Logger.mm
// This doesn't work: logging the dso from the call site gives no file/line.
my_os_log_with_type(entry.dso, os_log_create("com.foo", entry.tag), logLevel(entry.level), "%{public}s", text);

// This does work, but who wants to jump to the forwarding log implementation?
os_log_with_type(os_log_create("com.foo", entry.tag), logLevel(entry.level), "%{public}s", text);
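For context, this is roughly how the call sites capture the dso before forwarding it (a sketch; MY_LOG_ERROR, logMessage, and the entry struct are hypothetical stand-ins for my logger's API):
```
// Hypothetical call-site macro: capture &__dso_handle where the log call
// originates, then forward it through the logger to my_os_log_with_type.
#define MY_LOG_ERROR(fmt, ...) \
    logMessage(&__dso_handle, MyLogLevelError, __FILE__, __LINE__, fmt, ##__VA_ARGS__)
```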
How is this a valid stack trace, with mangled symbol names and no file/line information? I've already had to demangle the names myself, even though Windows does that for me.
The recommended approach to get file/line seems to be to spawn the "atos" process repeatedly on the symbols to turn them all into file/line. But shouldn't there just be a function call for this, or an option on backtrace_symbols(), since it's looking up the symbol anyway?
I don't see how this external process call would work on iOS, and it seems slow on macOS as well.
Compare this with Windows, where after CaptureStackBackTrace there is a simple function call via DbgHelp.lib to retrieve the file/line. Am I supposed to somehow use the CoreSymbolication framework on macOS/iOS?
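For context, here's roughly what the portable path gives you today (a sketch): mangled names from backtrace_symbols(), demangling by hand, and still no file/line without atos or a symbolication framework.
```
#include <execinfo.h>   // backtrace, backtrace_symbols
#include <cstdio>
#include <cstdlib>

void dumpStack() {
    void* frames[64];
    int count = backtrace(frames, 64);
    char** symbols = backtrace_symbols(frames, count);  // mangled names, no file/line
    for (int i = 0; i < count; ++i) {
        // Demangling is a separate manual step (abi::__cxa_demangle); file/line
        // still requires atos or a symbolicator.
        printf("%s\n", symbols[i]);
    }
    free(symbols);
}
```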