I opened a feedback ticket (FB16508762), but perhaps someone in the community has already found a workaround while the feedback makes its way to the maintainers.
When I put a Button inside a ScrollView, the tap animation stops working reliably: it only plays when the user taps and holds the button for a short time. The reason, I believe, is that the configuration's isPressed value does not change, and the default button styles rely on it to animate the tap.
import SwiftUI

struct DebuggingButtonStyle: ButtonStyle {
    func makeBody(configuration: Configuration) -> some View {
        configuration.label
            .onChange(of: configuration.isPressed) { oldValue, newValue in
                print("Is pressed: \(oldValue) -> \(newValue)")
            }
    }
}
struct ContentView: View {
    var body: some View {
        VStack {
            Text("Buttons inside a scroll view respond to taps as expected; however, the configuration's isPressed value does not change unless the user presses and holds. Try to tap the prominent button quickly, or use the debug button and observe the console log.")
            ScrollView {
                VStack {
                    Button("Button Inside ScrollView") {
                        print("Button tapped")
                    }
                    .buttonStyle(.borderedProminent)
                    Button("Button Inside ScrollView (printing isPressed)") {
                        print("Button tapped")
                    }
                    .buttonStyle(DebuggingButtonStyle())
                }
            }
            .border(FillShapeStyle(), width: 2)
            Spacer()
            Text("For reference, here is a button outside of a ScrollView. Tap the prominent button to observe how a button is expected to animate in response to a press.")
            VStack {
                Button("Button Outside ScrollView") {
                    print("Button tapped")
                }
                .buttonStyle(.borderedProminent)
                Button("Button Outside ScrollView (printing isPressed)") {
                    print("Button tapped")
                }
                .buttonStyle(DebuggingButtonStyle())
            }
        }
        .padding()
    }
}
We have noticed that sometimes our app spends too much time in the first call to AVAudioSession.sharedInstance and [AVAudioSession setCategory:error:], which we call during app initialization (in the app delegate's init). I am not sure whether the app is stuck in these calls or they simply take too long to complete.
This likely causes the app to be killed by the main thread watchdog.
Would it be safe to move these calls to a separate thread?
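For context, this is a sketch of what I am considering: moving the configuration onto a background queue at launch. The queue choice and the playback category are illustrative assumptions on my part, and whether doing this is actually safe is exactly my question.

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Configure the audio session off the main thread so a slow first call
    // cannot trip the watchdog. Assumption: nothing else touches the session
    // before this block finishes.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        NSError *error = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];
        if (![session setCategory:AVAudioSessionCategoryPlayback error:&error]) {
            NSLog(@"Failed to set audio session category: %@", error);
        }
    });
    return YES;
}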
How to deduce from NSMethodSignature that a struct argument is passed by pointer?
Specifically on ARM.
For example if I have:
@protocol TestProtocol <NSObject>
- (void)time:(CMTime)time;
- (void)rect:(CGRect)point;
@end
And then I do:
struct objc_method_description methodDescription1 =
protocol_getMethodDescription(@protocol(TestProtocol), @selector(time:), YES, YES);
struct objc_method_description methodDescription2 =
protocol_getMethodDescription(@protocol(TestProtocol), @selector(rect:), YES, YES);
NSMethodSignature *sig1 = [NSMethodSignature signatureWithObjCTypes:methodDescription1.types];
NSMethodSignature *sig2 = [NSMethodSignature signatureWithObjCTypes:methodDescription2.types];
const char *arg1 = [sig1 getArgumentTypeAtIndex:2];
const char *arg2 = [sig2 getArgumentTypeAtIndex:2];
NSLog(@"%s %s", methodDescription1.types, arg1);
NSLog(@"%s %s", methodDescription2.types, arg2);
The output is:
v40@0:8{?=qiIq}16 {?=qiIq}
v48@0:8{CGRect={CGPoint=dd}{CGSize=dd}}16 {CGRect={CGPoint=dd}{CGSize=dd}}
Both look similar; there is no indication that CMTime will actually be passed as a pointer.
But when I print the debug description:
NSLog(@"%@", [sig1 debugDescription]);
NSLog(@"%@", [sig2 debugDescription]);
The first prints:
...
argument 2: -------- -------- -------- --------
type encoding (^) '^{?=qiIq}'
flags {isPointer}
...
While the second prints:
...
argument 2: -------- -------- -------- --------
type encoding ({) '{CGRect={CGPoint=dd}{CGSize=dd}}'
flags {isStruct}
...
So this information is indeed stored in the method signature, but how do I retrieve it without parsing the debug description?
Are there rules I can use to deduce this myself? I tried experimenting with different structs, but it is hard to spot a pattern.
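The best rule I have been able to come up with so far comes not from NSMethodSignature but from the AAPCS64 calling convention: composite types larger than 16 bytes are passed indirectly, except homogeneous floating-point aggregates of up to four members (CGRect is four doubles; CMTime is a mix of integers). Below is a rough sketch of that heuristic for arm64. It is my own approximation, not an official API, so treat it with suspicion.

#import <Foundation/Foundation.h>

// Returns YES when the encoding describes an aggregate whose flattened members
// are 1-4 fields of the same floating-point type (an HFA in AAPCS64 terms).
static BOOL IsHomogeneousFloatAggregate(const char *encoding) {
    char scalar = 0;
    NSUInteger members = 0;
    for (const char *p = encoding; *p != '\0'; p++) {
        if (*p == '{' || *p == '(') {
            // Skip the aggregate name, e.g. "CGPoint" in "{CGPoint=dd}" or "?" in "{?=qiIq}".
            while (*p != '\0' && *p != '=' && *p != '}' && *p != ')') {
                p++;
            }
            if (*p == '\0') {
                break;
            }
        } else if (*p == '}' || *p == ')') {
            continue;
        } else if (*p == 'f' || *p == 'd') {
            if (scalar == 0) {
                scalar = *p;
            }
            if (*p != scalar) {
                return NO; // mixed float/double members
            }
            members++;
        } else {
            return NO; // any non-floating-point member disqualifies it
        }
    }
    return members >= 1 && members <= 4;
}

// My approximation of the arm64 rule: a struct argument goes by pointer when it
// is larger than 16 bytes and is not a homogeneous floating-point aggregate.
static BOOL StructLikelyPassedByPointer(const char *encoding) {
    if (encoding[0] != '{') {
        return NO; // only plain struct encodings are of interest here
    }
    NSUInteger size = 0, align = 0;
    NSGetSizeAndAlignment(encoding, &size, &align);
    return size > 16 && !IsHomogeneousFloatAggregate(encoding);
}

With that heuristic, {?=qiIq} (CMTime: 24 bytes, mixed integer members) is classified as passed by pointer, while {CGRect={CGPoint=dd}{CGSize=dd}} (four doubles) is not, which matches the debugDescription flags above. I have only checked it against a handful of structs and only on arm64, so I would still prefer a documented way to read those flags.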
There is no API to create an intersection between two CGPaths; however, Core Graphics knows how to do it behind the scenes. When calling CGContextClip (link), it intersects the current path with the clipping path and stores the result as the new clipping path.
I was thinking of utilizing this to perform intersections between paths I have. The problem is that I cannot find a way to retrieve the clipping path back from a CGContext.
Am I correct that such an API does not exist, or did I miss something?
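To make the idea concrete, here is roughly what I am attempting. pathA and pathB stand for my existing CGPaths, and CGContextCopyClipPath is a hypothetical call that, as far as I can tell, does not exist; the only related API I can find is CGContextGetClipBoundingBox, which returns just a rectangle.

#import <CoreGraphics/CoreGraphics.h>

// pathA and pathB are placeholders for the two paths I want to intersect.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, 100, 100, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);

CGContextAddPath(context, pathA);
CGContextClip(context);                 // clipping path is now pathA
CGContextAddPath(context, pathB);
CGContextClip(context);                 // clipping path is now pathA ∩ pathB

// This only gives me the bounds of the intersection, not the path itself:
CGRect clipBounds = CGContextGetClipBoundingBox(context);

// What I am missing is something like the following (hypothetical, not a real API):
// CGPathRef intersection = CGContextCopyClipPath(context);

CGColorSpaceRelease(colorSpace);
CGContextRelease(context);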
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode.
The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format.
The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports.
So the following:
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
self.playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc]
    initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];
Throws an exception:
Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""
I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer.
Since different platforms have different canonical formats, I imagine there should be a library way of doing this; otherwise each developer has to redefine it for each platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes.
I could not find any constant or function that can produce such a format, ASBD, or settings dictionary.
The smartest way I could think of, which does not work:
AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];
Even the example for iPhone provided in the documentation linked above uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated.
So what is the correct way to do this?
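For completeness, this is the direction I am currently experimenting with, reusing engine, self.playerNode, and utterance from the snippet above. It assumes that AVAudioFormat's "standard" format (deinterleaved Float32 at the session sample rate) is acceptable to the mixer; I am not certain this is the same thing as the canonical format the documentation refers to.

#import <AVFoundation/AVFoundation.h>

// Assumption: the "standard" format (deinterleaved Float32) is what
// mainMixerNode will accept; I am not sure it is officially "canonical".
double sampleRate = [AVAudioSession sharedInstance].sampleRate;
AVAudioFormat *synthesizerFormat = [[AVAudioFormat alloc]
    initWithSettings:utterance.voice.audioFileSettings];
AVAudioFormat *standardFormat = [[AVAudioFormat alloc]
    initStandardFormatWithSampleRate:sampleRate channels:2];

// Convert the synthesizer's buffer before scheduling it on the player node.
AVAudioConverter *converter = [[AVAudioConverter alloc]
    initFromFormat:synthesizerFormat toFormat:standardFormat];

[engine connect:self.playerNode to:engine.mainMixerNode format:standardFormat];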
I downloaded the Xcode 12.5 beta, and now an existing project does not compile; the errors say I can't use a Class as a key in an NSDictionary.
Cannot initialize a parameter of type 'id<NSCopying> _Nonnull const __unsafe_unretained' with an rvalue of type 'Class'
I have assumed so far that while Class does not conform to NSCopying, it is safe to use as a dictionary key because it implements copy and copyWithZone:.
Did anything change in that regard in the compiler/runtime shipped with Xcode 12.5? Is it safe (and endorsed practice) to cast to id<NSCopying> to silence these errors?
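To make it concrete, here is a minimal example of the kind of code that now triggers the diagnostic, together with the cast I am considering; whether that cast is safe and endorsed is exactly my question.

#import <Foundation/Foundation.h>

// Used to compile; with the Xcode 12.5 beta it produces the error quoted above:
// NSDictionary *handlers = @{ [NSString class] : @"string handler" };

// The cast that silences the diagnostic, relying on Class responding to
// copy/copyWithZone: even though it does not formally adopt NSCopying:
NSDictionary *handlers = @{ (id<NSCopying>)[NSString class] : @"string handler" };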