It is stated that:
From Fall 2023 you’ll receive an email from Apple if you upload an app to App Store Connect that uses required reason API without describing the reason in its privacy manifest file. From Spring 2024, apps that don’t describe their use of required reason API in their privacy manifest file won’t be accepted by App Store Connect.
There are some answers here: https://developer.apple.com/videos/play/wwdc2023/10060/ but they are far from answering all questions.
I have questions on how to implement this:
Where exactly does the privacy manifest live? How do you create it, and from which file template in Xcode? WWDC speaks of a PrivacyInfo.xcprivacy file (does it require a more recent version of Xcode than 14.2?).
WWDC describes a framework case. Is it the same for a "final" app?
Is there a specific format for describing the reason, or just plain text?
Is this text visible to the user, or only to the reviewer?
Does it apply retroactively to apps already in the App Store (do they need to be resubmitted?)? It seems not.
So I tried, in an iOS app, to declare the PrivacyInfo.xcprivacy as explained, with Xcode 14.2, using the plist template, to no avail.
It is really not clear how to proceed or even where to start… We would need a clear step-by-step tutorial with all prerequisites (Xcode or macOS versions needed, for instance).
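For reference, here is a minimal sketch of what a PrivacyInfo.xcprivacy might contain for one required reason API. The category (NSPrivacyAccessedAPICategoryUserDefaults) and reason code (CA92.1) are examples from Apple's documented list; the right values depend on which APIs the app actually uses:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- One entry per required reason API category the app (or SDK) uses -->
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <!-- CA92.1: access user defaults to read and write
                     data accessible only to the app itself -->
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```

The reasons are fixed codes chosen from Apple's list, not free-form text.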
I read in the Xcode 15.2 release notes that @IBDesignable and @IBInspectable are deprecated and will disappear in a future release.
After the abandonment of WatchKit and its storyboards, replaced by SwiftUI, does it mean storyboards (and consequently UIKit) will progressively disappear, leaving developers alone with SwiftUI to design apps? (IMHO, inadequate for designing complex UIs: for instance, I always struggle to position objects precisely for all device sizes, as we miss a constraints manager in SwiftUI.)
For sure, IBDesignable is (was) whimsical, but a very useful tool for designing sophisticated UIs. Replacing it with #Preview does not make up for it.
I also understand that Xcode is a very complex piece of code and that maintaining and evolving some tools (such as IBDesignable) requires effort. But isn't that what we expect from Apple? To provide us with the best tools and keep the promise of WYSIWYG environments, all along?
Is it only me, or do others share this view?
I've noticed there is a new 'Same Here' button showing on every post (except your own posts).
I first thought it was a link to a similar question, especially since the button just says 'Same here' without any badge value. But no, it is just similar to a like…
I guess the goal is to speed things up and avoid replies or comments that just say 'Same here'.
Unfortunately, it is not possible to undo it, and there is no way to know who clicked.
As your own posts do not show the button, does it mean you cannot know how many people share the same issue as the one you posted? If so, that's bizarre. I hope it will at least show the counter when it is non-zero.
Let me see how many 'Same here' this post gets…
Is it only me?
I recently received a notification that an answer was "recommended by Apple", with a link to the post.
But when going to the post, this recommendation (with the small Apple icon) does not show.
And the forum scores are not updated.
This is not the first time I have had problems with forum notifications.
I filed a bug report: FB14172207
This app may run on macOS or iOS.
I want to use the windowResizability modifier (especially on macOS), which is only available on macOS 13+ and iOS 17+, but the app still needs to run on macOS 12 and iOS 15…
So I need something like:
#if os(macOS)
if #available(macOS 13, *) {
    // use windowResizability
} else {
    // do not use windowResizability
}
#else // iOS
if #available(iOS 17, *) {
    // use windowResizability
} else {
    // do not use windowResizability
}
#endif
Here is the code where it would apply (in @main):
struct TheApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView() // 1.11.2023
                .frame(
                    minWidth: 1200, maxWidth: .infinity,
                    minHeight: 600, maxHeight: .infinity)
        }
        .windowResizability(.contentSize) // BTW: is that really necessary?
    }
}
How can I achieve this? Do I need to write a WindowGroup extension for the modifier? If so, how?
BTW: is windowResizability really necessary? The app seems to work the same without it.
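One approach I can imagine (a sketch, not a confirmed solution) is an extension on Scene. It relies on SE-0360 (Swift 5.7+), which lets a function returning some Scene return different underlying types from if #available branches; the helper name resizableToContentIfAvailable is my own invention:

```swift
import SwiftUI

extension Scene {
    // Hypothetical helper: applies .windowResizability(.contentSize) only
    // where the API exists (macOS 13+ / iOS 17+); otherwise returns the
    // scene unchanged. Requires Swift 5.7+ (SE-0360) to compile.
    func resizableToContentIfAvailable() -> some Scene {
        if #available(macOS 13.0, iOS 17.0, *) {
            return self.windowResizability(.contentSize)
        } else {
            return self
        }
    }
}
```

The WindowGroup would then use .resizableToContentIfAvailable() in place of .windowResizability(.contentSize), with no #if needed at the call site.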
There is at the moment a lot of spam for a bank phone number.
https://developer.apple.com/forums/thread/769506
What is really surprising is to read the App Store Connect engineer's answer, the same each time:
We appreciate your interest in participating in the forums! These forums are for questions about developing software and accessories for Apple platforms. Your question seems related to a consumer feature and is better suited for the Apple Support Communities
Is it an automatic answer? (I cannot believe anyone who read the post did not notice it was spam.) If so, couldn't the system simply detect that it is spam (Apple Intelligence could come to help) and delete the message (or the account)?
PS: it would also be spam in the Apple Support Communities.
PS2: I note the message has been deleted very rapidly.
Apparently, Settings no longer shows app settings in iOS 18.2.
I tested on simulators (Xcode 16.2), both on iOS 18.1 and iOS 18.2, and got very different results:
In the iOS 18.1 simulator, I see the settings of a lot of apps.
In the iOS 18.2 simulator, not a single app setting.
That is a really serious issue in the simulator for development (I filed a bug report, FB16175635), but it would be really critical on device, as it would make it impossible to adjust the settings of many apps.
Unless I missed something (a meta setting?) in iOS 18.2?
I have not upgraded to 18.2, notably for this reason. So I would appreciate it if someone who has upgraded could run the test and report:
Select Settings on the Home page
Scroll to the Apps category
Tap it to access the list
Does the list show anything?
Thanks for your help.
I have an @objc method used for notification handling.
kTag is an Int constant; fieldBeingEdited is an Int variable.
The following code fails to compile with the error Command CompileSwift failed with a nonzero exit code if I capture self (I edited the code down to a minimal case):
@objc func keyboardDone(_ sender: UIButton) {
    DispatchQueue.main.async { [self] () -> Void in
        switch fieldBeingEdited {
        case kTag: break
        default: break
        }
    }
}
If I explicitly use self, it compiles, even with self captured:
@objc func keyboardDone(_ sender: UIButton) {
    DispatchQueue.main.async { [self] () -> Void in
        switch fieldBeingEdited { // <<-- no need for self here
        case self.kTag: break // <<-- self here
        default: break
        }
    }
}
This compiles as well:
@objc func keyboardDone(_ sender: UIButton) {
    DispatchQueue.main.async { () -> Void in
        switch self.fieldBeingEdited { // <<-- self needed here (no capture list)
        case self.kTag: break // <<-- self here
        default: break
        }
    }
}
Is it a compiler bug, or am I missing something?
If an app is rated 4+, does it have any additional obligations due to SB2420 beyond this rating on the App Store?
Context: Xcode 26.3, iOS 18.7.6 on iPhone Xs
In this iOS app, I call UIActivityViewController to let the user AirDrop files from the app.
When trying to send a URL whose file name contains certain characters, such as accented letters or punctuation (é, ', -), the transfer fails.
Removing those characters makes it work without problem.
The same app running on a Mac (in iPad mode) works in both cases.
I also noticed that even when AirDrop fails, no error is reported by
activityVC.completionWithItemsHandler = { activity, success, items, error in }
Are these known issues?
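In case it helps others, here is a hedged workaround sketch (my own idea, untested against this AirDrop failure): copy the file to a temporary URL with an ASCII-only name before handing it to UIActivityViewController. The helper name asciiSafeCopy is hypothetical:

```swift
import Foundation

// Hypothetical workaround: strip diacritics and risky punctuation from the
// file name, copy the file to a temporary location under the safe name,
// and share that copy instead of the original URL.
func asciiSafeCopy(of url: URL) throws -> URL {
    let safeName = url.lastPathComponent
        .folding(options: .diacriticInsensitive, locale: .current) // é -> e
        .replacingOccurrences(of: "'", with: "_")
    let dest = FileManager.default.temporaryDirectory
        .appendingPathComponent(safeName)
    try? FileManager.default.removeItem(at: dest) // overwrite any stale copy
    try FileManager.default.copyItem(at: url, to: dest)
    return dest
}
```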
After several years with Swift, I still find it hard to use if case let or while case let, even worse with the optional pattern if case let x?. So, I would like to find an expression to "speak" case let more naturally.
Presently, to be sure of what I do, I have to mentally replace the if case with the full switch statement, with a single case and a default; pretty tedious.
I thought of canMatch or canMatchUnwrap, with which the following would read:
if case let x = y { // if canMatch x with y
if case let x? = someOptional { // if canMatchUnwrap x with someOptional
while case let next? = node.next { // while canMatchUnwrap next with node.next
Am I the only one with such a problem? Have you found a better way?
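To make the mental replacement concrete, here is the desugaring I go through each time: if case let x? is exactly a switch with a single optional-pattern case and an empty default.

```swift
let someOptional: Int? = 42

// Shorthand form:
if case let x? = someOptional {
    print(x)    // prints 42
}

// What I mentally expand it to:
switch someOptional {
case let x?:
    print(x)    // prints 42
default:
    break
}
```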
I submitted an iOS app with a watchOS companion app. The app has been 'Metadata Rejected'. Here is the full message:

Guideline 2.1 - Information Needed
We have started the review of your app, but we are not able to continue because we need access to a video that demonstrates the current version of your app in use on a physical watchOS device.
Please only include footage in your demo video of your app running on a physical watchOS device, and not on a simulator. It is acceptable to use a screen recorder to capture footage of your app in use.
Next Steps
To help us proceed with the review of your app, please provide us with a link to a demo video in the App Review Information section of App Store Connect and reply to this message in Resolution Center.
To provide a link to a demo video:
- Log in to App Store Connect
- Click on "My Apps"
- Select your app
- Click on the app version on the left side of the screen
- Scroll down to "App Review Information"
- Provide demo video access details in the "Notes" section
- Once you've completed all changes, click the "Save" button at the top of the Version Information page.
Please note that in the event that your app may only be reviewed by means of a demo video, you will be required to provide an updated demo video with every resubmission.
Since your App Store Connect status is Metadata Rejected, we do NOT require a new binary. To revise the metadata, visit App Store Connect to select your app and revise the desired metadata values. Once you've completed all changes, reply to this message in Resolution Center and we will continue the review.

I have 3 questions:
- Is it a systematic requirement for Watch apps? I did not see it in the guidelines; or is it for some reason specific to my app or to the reviewer?
- How can I record video on the Apple Watch? Should I film the watch while in operation and post this video? Or is there a direct way to record the video from the watch to the iPhone (using system tools, not third party)?
- I understand it is not a video for publication on the App Store, but a video for the reviewer. So should I include the video in the screen captures section, or put it on some web site and give a link to it to the reviewer?
Do you use third-party frameworks, like Alamofire?
In this app I read QR codes.
Reading works perfectly from the camera.
Now I am struggling to get the image that was processed by the built-in QR code reader.
I have found many hints on SO, but cannot make it work.
Here is the code I have now.
It is a bit long; I had to split it in 2 parts.
I looked at:
// https://stackoverflow.com/questions/56088575/how-to-get-image-of-qr-code-after-scanning-in-swift-ios
// https://stackoverflow.com/questions/37869963/how-to-use-avcapturephotooutput
import UIKit
import AVFoundation
class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
		fileprivate var captureSession: AVCaptureSession! // use for QRCode reading
		fileprivate var previewLayer: AVCaptureVideoPreviewLayer!
		
		// To get the image of the QRCode
		private var photoOutputQR: AVCapturePhotoOutput!
		private var isCapturing = false
		
		override func viewDidLoad() {
				super.viewDidLoad()
				var accessGranted = false
			 //	switch AVCaptureDevice.authorizationStatus(for: .video) {
// HERE TEST FOR ACCESS RIGHT. WORKS OK ;
// But is .video enough ?
				}
				
				if !accessGranted {	return }
				captureSession = AVCaptureSession()
				
				photoOutputQR = AVCapturePhotoOutput() // IS IT THE RIGHT PLACE AND THE RIGHT THING TO DO ?
				captureSession.addOutput(photoOutputQR)	 // Goal is to capture an image of QRCode once acquisition is done
				guard let videoCaptureDevice = AVCaptureDevice.default(for: .video) else { return }
				let videoInput: AVCaptureDeviceInput
				
				do {
						videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
				} catch {	return }
				
				if (captureSession.canAddInput(videoInput)) {
						captureSession.addInput(videoInput)
				} else {
						failed()
						return
				}
				
				let metadataOutput = AVCaptureMetadataOutput()
				
				if (captureSession.canAddOutput(metadataOutput)) {
						captureSession.addOutput(metadataOutput) // SO I have 2 output in captureSession. IS IT RIGHT ?
						
						metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
						metadataOutput.metadataObjectTypes = [.qr]	// For QRCode video acquisition
						
				} else {
						failed()
						return
				}
				
				previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
				previewLayer.frame = view.layer.bounds
				previewLayer.frame.origin.y += 40
				previewLayer.frame.size.height -= 40
				previewLayer.videoGravity = .resizeAspectFill
				view.layer.addSublayer(previewLayer)
				captureSession.startRunning()
		}
		
		override func viewWillAppear(_ animated: Bool) {
				
				super.viewWillAppear(animated)
				if (captureSession?.isRunning == false) {
						captureSession.startRunning()
				}
		}
		
		override func viewWillDisappear(_ animated: Bool) {
				
				super.viewWillDisappear(animated)
				if (captureSession?.isRunning == true) {
						captureSession.stopRunning()
				}
		}
		
		// MARK: - scan Results
		
		func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
				
				captureSession.stopRunning()
				
				if let metadataObject = metadataObjects.first {
						guard let readableObject = metadataObject as? AVMetadataMachineReadableCodeObject else { return }
						guard let stringValue = readableObject.stringValue else { return }
						AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
						found(code: stringValue)
				}
				// Get image - IS IT THE RIGHT PLACE TO DO IT ?
				// https://stackoverflow.com/questions/37869963/how-to-use-avcapturephotooutput
				print("Do I get here ?", isCapturing)
				let photoSettings = AVCapturePhotoSettings()
				let previewPixelType = photoSettings.availablePreviewPhotoPixelFormatTypes.first!
				print("previewPixelType", previewPixelType)
				let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
														 kCVPixelBufferWidthKey as String: 160,
														 kCVPixelBufferHeightKey as String: 160]
				photoSettings.previewPhotoFormat = previewFormat
				if !isCapturing {
						isCapturing = true
						photoOutputQR.capturePhoto(with: photoSettings, delegate: self)
				}
				dismiss(animated: true)
		}
		
}
extension ScannerViewController: AVCapturePhotoCaptureDelegate {
	
		func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
			
				isCapturing = false
				print("photo", photo, photo.fileDataRepresentation())
				guard let imageData = photo.fileDataRepresentation() else {
						print("Error while generating image from photo capture data.");
						return
				}
		 }
}
I get the following output on the console.
Clearly the photo is not loaded properly:
Do I get here ? false
previewPixelType 875704422
photo <AVCapturePhoto: 0x281973a20 pts:nan 1/1 settings:uid:3 photo:{0x0} time:nan-nan> nil
Error while generating image from photo capture data.
I suspect this point has been discussed at length, but I would like to find some reference to the design logic behind a key aspect of Swift: the assignment operator, by value or by reference.
We know well how = works, depending on whether it deals with a reference or a value (knowing the consequences of misuse), and the difference between the two:
class AClass {
    var val: Int = 0
}

struct AStruct {
    var val: Int = 0
}

let aClass = AClass()
let bClass = aClass
bClass.val += 10
print("aClass.val", aClass.val, "bClass.val", bClass.val) // aClass.val 10 bClass.val 10

let aStruct = AStruct()
var bStruct = aStruct
bStruct.val += 10
print("aStruct.val", aStruct.val, "bStruct.val", bStruct.val) // aStruct.val 0 bStruct.val 10
Hence my question.
Was it ever considered to have 2 operators, one used to assign a reference and the other to assign a value?
Imagine we have :
= operator when dealing with references
:= operator when dealing with content.
Then
let bClass = aClass
would remain unchanged.
But
var bStruct = aStruct
would not be valid anymore, with a compiler warning to replace it by
var bStruct := aStruct
On the other hand, we could now write
let bClass := aClass
to create a new instance and assign another instance's content, equivalent to the initialiser
class AClass {
    var val: Int = 0
    init(with aVar: AClass) {
        self.val = aVar.val
    }
    init() {
    }
}
called as
let cClass = AClass(with: aClass)
But the 2 operators would have made it clear that when using = we copy the reference, and when using := we copy the content.
I do think there is a strong rationale behind the present design choice (side effects I do not see?), but I would appreciate better understanding what it is.