After several years with Swift, I still find it hard to use if case let or while case let, and even worse the optional pattern if case let x?. So I would like to find an expression to "speak" case let more naturally.
Presently, to be sure of what I do, I have to mentally replace the if case with the full switch statement, with a single case and a default; pretty tedious.
I thought of canMatch or canMatchUnwrap, with which the following would read:
if case let x = y { // if canMatch x with y
if case let x? = someOptional { // if canMatchUnwrap x with someOptional
while case let next? = node.next { // while canMatchUnwrap next with node.next
Am I the only one with this problem? Have you found a better way?
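To make the mental substitution concrete, here is a minimal self-contained sketch (Outcome is a hypothetical enum used only for illustration) showing that each if case let is shorthand for a one-case switch, and that the optional pattern x? is sugar for matching .some(x):

```swift
enum Outcome {
    case success(Int)
    case failure
}

let r = Outcome.success(42)

// if case let binds the associated value when the pattern matches…
if case let .success(value) = r {
    print("matched:", value)
}

// …and is shorthand for this one-case switch:
switch r {
case let .success(value):
    print("matched:", value)
default:
    break
}

// The optional pattern x? is sugar for matching .some(x):
let someOptional: Int? = 7
if case let x? = someOptional {
    print("unwrapped:", x)   // behaves like: if let x = someOptional
}
```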
I propose that we collectively build a list of the most wanted Xcode features: those pain points that make our life with Xcode more difficult, or less fun.
The goal would not be a scientifically correct ranking, but simply to:
- make visible many ideas that have probably already been reported as improvement requests
- explain briefly why each would be a great evolution and what pain point it would solve (functional improvements, not bug fixes)
- if possible, discuss the feasibility of each idea.
I would agree to update this original post to include new inputs. To avoid a thread where new posts get smaller and smaller in width, it would be great to post each new idea as an answer to this original post, and to discuss an idea as a reply to that idea's post. The rule of the game would be tolerance (not arguing indefinitely over one idea) and conciseness in the feature descriptions.
The ultimate goal would be to positively influence the Xcode development team to consider the most wanted proposals. At the very least, to have them explain to us that a wanted feature is on the way (let's dream), why an idea is not so great, or why it would be too complex to implement…
_________________________________
Here is a first wanted feature, inspired by a thread in the IB forum: https://forums.developer.apple.com/thread/72495
Wanted feature: Storyboard zoom when editing a macOS app
Pain point: navigating a macOS storyboard with more than a few windows is extremely painful, which limits the use of storyboards for macOS apps. The IB post was viewed more than 7,500 times, so this seems a widely shared concern.
Feasibility: already done for iOS, so it should be possible for macOS.
_________________________________
Wanted feature: Automatic/one-button generation of icons, logos, screenshots and launch images for all project-related devices/screens as mandated by App Store Connect, kept current with each new release
Pain point: the number of individually required image-based assets for metadata submissions became a burden long ago, and continues to require significant labor to suss out and satisfy.
Feasibility: Xcode already does some of this with launch storyboards for new Swift projects; it just needs to be ubiquitous for all similar assets. It's just scripting and image resizing, which should not be a problem on macOS. Yes, I know there are third-party tools that help, but they seem not to keep up with new devices and screen sizes. If they can do it, so can Xcode.
_________________________________
Wanted feature: More explicit help messages when an action cannot be completed in IB
Pain point: when Xcode cannot complete an action (e.g., create an IBOutlet for an IB object), option-cleaning the build folder often solves the problem. Numerous issues reported on the forum were solved with this simple action, yet the IDE's messages never mention this solution.
Feasibility: just add the suggestion (when appropriate) to the message.
_________________________________
Wanted feature: When a build fails, propose an automatic correction whenever possible
Pain point: as an example, if some asset contains Finder information, the build fails with the message "resource fork, Finder information, or similar detritus not allowed. Command /usr/bin/codesign failed with exit code 1". One has to go to Terminal and laboriously clean the offending files with the xattr command.
Feasibility: since Xcode has detected the files, it could offer to do the cleaning itself. If it can be done with a Terminal command, so can Xcode.
_________________________________
Wanted feature: When an API is deprecated, explain how to replace it
Pain point: most often, when an API is deprecated and the compiler warns about it, we get no hint of the replacement. We are left to post on the forum (a lot of posts relate to this search for information) or search the web for hints, losing time and never being sure we made the best decision.
Feasibility: whoever deprecated the API surely knows how to handle the transition. It should be (relatively) easy to provide the information in the documentation and in the deprecation message.
_________________________________
Wanted feature: When a string is missing for an Info.plist key, give a warning
Pain point: if a required key is missing, the app will crash (or be rejected at App Store verification, which is the lesser harm). I had a case where I called an image picker but intended to use only photos, not video; if the user tapped Video, the app crashed.
Feasibility: when UIImagePickerControllerDelegate is used, the compiler could check that the required Info.plist keys are defined.
_________________________________
All comments welcome.
I submitted an iOS app with a watchOS companion app. The app has been 'Metadata Rejected'. Here is the full message:
Guideline 2.1 - Information Needed
We have started the review of your app, but we are not able to continue because we need access to a video that demonstrates the current version of your app in use on a physical watchOS device. Please only include footage in your demo video of your app running on a physical watchOS device, and not on a simulator. It is acceptable to use a screen recorder to capture footage of your app in use.
Next Steps
To help us proceed with the review of your app, please provide us with a link to a demo video in the App Review Information section of App Store Connect and reply to this message in Resolution Center.
To provide a link to a demo video:
- Log in to App Store Connect
- Click on "My Apps"
- Select your app
- Click on the app version on the left side of the screen
- Scroll down to "App Review Information"
- Provide demo video access details in the "Notes" section
- Once you've completed all changes, click the "Save" button at the top of the Version Information page.
Please note that in the event that your app may only be reviewed by means of a demo video, you will be required to provide an updated demo video with every resubmission.
Since your App Store Connect status is Metadata Rejected, we do NOT require a new binary. To revise the metadata, visit App Store Connect to select your app and revise the desired metadata values. Once you've completed all changes, reply to this message in Resolution Center and we will continue the review.
I have 3 questions:
- Is this a systematic requirement for Watch apps? I did not see it in the guidelines; or is it for some reason specific to my app or to this reviewer?
- How can I record a video on the Apple Watch? Should I film the watch in operation and post that video? Or is there a direct way to record the video from the watch to the iPhone (using system tools, not third party)?
- I understand it is not a video for publication on the App Store, but a video for the reviewer. So should I include the video in the screen captures section, or put it on some web site and give the reviewer a link to it?
Do you use third-party frameworks, like Alamofire?
I have an image that should fill the entire screen, even behind the notch.
So I set the top margin to the safe area to -44.
That works OK on all iPhones… except on the iPhone 12 Pro Max simulator.
There I get a few uncovered pixels at the top.
I had to change the value to -48 to get everything OK.
So the question is: has the "notch area" that defines the safe area grown by 4 points on the iPhone 12 Pro Max? If so, is it a hardware change?
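Not an authoritative answer, but rather than hardcoding -44 or -48, one sketch (assuming a UIKit view controller with the image's top constraint exposed as an outlet, a hypothetical setup) is to read the actual inset at runtime, since the top safe-area inset varies across models (reportedly 47 points on the iPhone 12 family versus 44 on earlier notched iPhones, which would explain the gap):

```swift
import UIKit

final class FullBleedViewController: UIViewController {
    // Hypothetical outlet: the image view's top constraint, pinned to the safe area.
    @IBOutlet private var imageTopConstraint: NSLayoutConstraint!

    override func viewSafeAreaInsetsDidChange() {
        super.viewSafeAreaInsetsDidChange()
        // Offset by the real inset instead of a hardcoded -44 or -48.
        imageTopConstraint.constant = -view.safeAreaInsets.top
    }
}
```

Alternatively, constraining the image view to the superview rather than to the safe area avoids the arithmetic entirely.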
In this app, I scan QR codes.
Reading works perfectly from the camera.
Now I am struggling to get the image that was processed by the built-in QR code reader.
I have found many hints on SO, but cannot make them work.
Here is the code I have now.
It is a bit long, so I had to split it into 2 parts.
I looked at:
// https://stackoverflow.com/questions/56088575/how-to-get-image-of-qr-code-after-scanning-in-swift-ios
// https://stackoverflow.com/questions/37869963/how-to-use-avcapturephotooutput
import UIKit
import AVFoundation
class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
		fileprivate var captureSession: AVCaptureSession! // use for QRCode reading
		fileprivate var previewLayer: AVCaptureVideoPreviewLayer!
		
		// To get the image of the QRCode
		private var photoOutputQR: AVCapturePhotoOutput!
		private var isCapturing = false
		
		override func viewDidLoad() {
				super.viewDidLoad()
				var accessGranted = false
			 //	switch AVCaptureDevice.authorizationStatus(for: .video) {
// HERE TEST FOR ACCESS RIGHT. WORKS OK ;
// But is .video enough ?
				}
				
				if !accessGranted {	return }
				captureSession = AVCaptureSession()
				
				photoOutputQR = AVCapturePhotoOutput() // IS IT THE RIGHT PLACE AND THE RIGHT THING TO DO ?
				captureSession.addOutput(photoOutputQR)	 // Goal is to capture an image of QRCode once acquisition is done
				guard let videoCaptureDevice = AVCaptureDevice.default(for: .video) else { return }
				let videoInput: AVCaptureDeviceInput
				
				do {
						videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
				} catch {	return }
				
				if (captureSession.canAddInput(videoInput)) {
						captureSession.addInput(videoInput)
				} else {
						failed()
						return
				}
				
				let metadataOutput = AVCaptureMetadataOutput()
				
				if (captureSession.canAddOutput(metadataOutput)) {
						captureSession.addOutput(metadataOutput) // SO I have 2 output in captureSession. IS IT RIGHT ?
						
						metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
						metadataOutput.metadataObjectTypes = [.qr]	// For QRCode video acquisition
						
				} else {
						failed()
						return
				}
				
				previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
				previewLayer.frame = view.layer.bounds
				previewLayer.frame.origin.y += 40
				previewLayer.frame.size.height -= 40
				previewLayer.videoGravity = .resizeAspectFill
				view.layer.addSublayer(previewLayer)
				captureSession.startRunning()
		}
		
		override func viewWillAppear(_ animated: Bool) {
				
				super.viewWillAppear(animated)
				if (captureSession?.isRunning == false) {
						captureSession.startRunning()
				}
		}
		
		override func viewWillDisappear(_ animated: Bool) {
				
				super.viewWillDisappear(animated)
				if (captureSession?.isRunning == true) {
						captureSession.stopRunning()
				}
		}
		
		// MARK: - scan Results
		
		func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
				
				captureSession.stopRunning()
				
				if let metadataObject = metadataObjects.first {
						guard let readableObject = metadataObject as? AVMetadataMachineReadableCodeObject else { return }
						guard let stringValue = readableObject.stringValue else { return }
						AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
						found(code: stringValue)
				}
				// Get image - IS IT THE RIGHT PLACE TO DO IT ?
				// https://stackoverflow.com/questions/37869963/how-to-use-avcapturephotooutput
				print("Do I get here ?", isCapturing)
				let photoSettings = AVCapturePhotoSettings()
				let previewPixelType = photoSettings.availablePreviewPhotoPixelFormatTypes.first!
				print("previewPixelType", previewPixelType)
				let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
														 kCVPixelBufferWidthKey as String: 160,
														 kCVPixelBufferHeightKey as String: 160]
				photoSettings.previewPhotoFormat = previewFormat
				if !isCapturing {
						isCapturing = true
						photoOutputQR.capturePhoto(with: photoSettings, delegate: self)
				}
				dismiss(animated: true)
		}
		
}
extension ScannerViewController: AVCapturePhotoCaptureDelegate {
	
		func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
			
				isCapturing = false
				print("photo", photo, photo.fileDataRepresentation())
				guard let imageData = photo.fileDataRepresentation() else {
						print("Error while generating image from photo capture data.");
						return
				}
		 }
}
I get the following output on the console. Clearly the photo is not loaded properly:
Do I get here ? false
previewPixelType 875704422
photo <AVCapturePhoto: 0x281973a20 pts:nan 1/1 settings:uid:3 photo:{0x0} time:nan-nan> nil
Error while generating image from photo capture data.
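Not an authoritative fix, but a likely cause: metadataOutput(_:didOutput:from:) calls captureSession.stopRunning() before capturePhoto(with:delegate:), so by the time the photo output tries to capture, the session no longer delivers frames (hence the pts:nan, {0x0} photo). A sketch of the reordering, keeping everything else as in the code above:

```swift
func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    // Do NOT stop the session yet: the photo output still needs frames.
    if let readableObject = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
       let stringValue = readableObject.stringValue {
        AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
        found(code: stringValue)
    }
    if !isCapturing {
        isCapturing = true
        photoOutputQR.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    isCapturing = false
    captureSession.stopRunning()           // stop only once the photo has arrived
    if let data = photo.fileDataRepresentation(),
       let image = UIImage(data: data) {
        print("captured image:", image)    // use the image here
    }
    dismiss(animated: true)                // dismiss after capture completes
}
```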
Learning Widgets, I am trying a simple pattern: a text field in the app, and the Widget updated when the text field changes.
The app is UIKit.
The Widget has no "Included Configuration Intent".
It compiles and works: the Widget is updated every 5 minutes, as asked at line 23 (final snippet).
But the message (line 48 of the final snippet) is never updated with globalToPass (as expected from line 24): it always shows "Hello".
What I tried: Create a singleton to hold the data to pass:
class Util {
    static let shared = Util()
    var globalToPass = "Hello"
}
shared the file between the 2 targets App and WidgetExtension
In VC, update the singleton when textField is changed and ask for widget to reload timeline
		@IBAction func updateMessage(_ sender: UITextField) {
				
				Util.shared.globalToPass = valueToPassLabel.text ?? "--"
				WidgetCenter.shared.reloadTimelines(ofKind: "WidgetForTest")
				WidgetCenter.shared.reloadAllTimelines()
		}
Problem: the Widget never updates its message field.
Probably I need to have the message in a @State var, but I could not get it to work.
Here is the full widget code at this time:
import WidgetKit
import SwiftUI
struct LoadStatusProvider: TimelineProvider {
		
		func placeholder(in context: Context) -> SimpleEntry {
				
				SimpleEntry(date: Date(), loadEntry: 0, message: Util.shared.globalToPass)
		}
		func getSnapshot(in context: Context, completion: @escaping (SimpleEntry) -> ()) {
				
				let entry = SimpleEntry(date: Date(), loadEntry: 0, message: Util.shared.globalToPass)
				completion(entry)
		}
		func getTimeline(in context: Context, completion: @escaping (Timeline<Entry>) -> ()) {
				var entries: [SimpleEntry] = []
				// Generate a timeline of two entries, 5 minutes apart, starting from the current date.
				let currentDate = Date()
				for minuteOffset in 0 ..< 2 {
						let entryDate = Calendar.current.date(byAdding: .minute, value: 5*minuteOffset, to: currentDate)!
						let entry = SimpleEntry(date: entryDate, loadEntry: minuteOffset, message: Util.shared.globalToPass)
						entries.append(entry)
				}
				let timeline = Timeline(entries: entries, policy: .atEnd)
				completion(timeline)
		}
}
struct SimpleEntry: TimelineEntry {
		let date: Date
		let loadEntry: Int
		let message: String
}
struct WidgetForTestNoIntentEntryView : View {
		var entry: LoadStatusProvider.Entry
		var body: some View {
				let formatter = DateFormatter()
				formatter.timeStyle = .medium
				let dateString = formatter.string(from: entry.date)
				return
						VStack {
								Text(String(entry.message))
								HStack {
										Text("Started")
										Text(entry.date, style: .time)
								}
								HStack {
										Text("Now")
										Text(dateString)
								}
								HStack {
										Text("Loaded")
										Text(String(entry.loadEntry))
								}
						}
		}
}
@main
struct WidgetForTestNoIntent: Widget {
		let kind: String = "WidgetForTestNoIntent"
		var body: some WidgetConfiguration {
				StaticConfiguration(kind: kind, provider: LoadStatusProvider()) { entry in
						WidgetForTestNoIntentEntryView(entry: entry)
				}
				.configurationDisplayName("My Widget")
				.description("This is an example widget.")
		}
}
struct WidgetForTestNoIntent_Previews: PreviewProvider {
		static var previews: some View {
				
				WidgetForTestNoIntentEntryView(entry: SimpleEntry(date: Date(), loadEntry: 0, message: "-"))
						.previewContext(WidgetPreviewContext(family: .systemSmall))
		}
}
I have not defined an extension for IntentHandler.
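For what it's worth, the singleton cannot work across the two targets at runtime: the widget extension runs in a separate process, so its Util.shared is a different instance that still holds "Hello". (Note also that reloadTimelines(ofKind: "WidgetForTest") does not match the widget's kind "WidgetForTestNoIntent", though the reloadAllTimelines() call papers over that.) The usual pattern is shared storage in an app group; a sketch, where the group identifier is a hypothetical placeholder that must be enabled in both targets' capabilities:

```swift
import Foundation

// Hypothetical app-group identifier, configured for both the app and the extension.
let appGroupID = "group.com.example.WidgetForTest"

// Storage is injected so the helpers stay process-agnostic.
func saveMessage(_ message: String, to defaults: UserDefaults) {
    defaults.set(message, forKey: "globalToPass")
}

func loadMessage(from defaults: UserDefaults) -> String {
    defaults.string(forKey: "globalToPass") ?? "Hello"
}

// App side, in the @IBAction:
//   saveMessage(sender.text ?? "--", to: UserDefaults(suiteName: appGroupID)!)
//   WidgetCenter.shared.reloadTimelines(ofKind: "WidgetForTestNoIntent")
//
// Widget side, in placeholder/getSnapshot/getTimeline:
//   let message = loadMessage(from: UserDefaults(suiteName: appGroupID)!)
```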
I suspect this point has been discussed at length, but I would like to find some reference to the design logic behind a key aspect of Swift: the assignment operator, by value or by reference.
We know well how = works, depending on whether it deals with a reference or a value (and the consequences of misuse), and the difference between the two:
class AClass {
var val: Int = 0
}
struct AStruct {
var val : Int = 0
}
let aClass = AClass()
let bClass = aClass
bClass.val += 10
print("aClass.val", aClass.val, "bClass.val", bClass.val)
let aStruct = AStruct()
var bStruct = aStruct
bStruct.val += 10
print("aStruct.val", aStruct.val, "bStruct.val", bStruct.val)
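For concreteness, the two prints show the asymmetry: with the class, both names refer to the same instance, so the mutation is visible through both; with the struct, the assignment copied the value. The same facts as assertions:

```swift
class AClass { var val: Int = 0 }
struct AStruct { var val: Int = 0 }

let aClass = AClass()
let bClass = aClass            // copies the reference
bClass.val += 10
assert(aClass.val == 10 && bClass.val == 10)    // one shared instance

let aStruct = AStruct()
var bStruct = aStruct          // copies the value
bStruct.val += 10
assert(aStruct.val == 0 && bStruct.val == 10)   // independent copies
```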
Hence my question.
Was it ever considered to have 2 operators, one used to assign reference and the other to assign value?
Imagine we have :
= operator when dealing with references
:= operator when dealing with content.
Then
let bClass = aClass
would remain unchanged.
But
var bStruct = aStruct
would not be valid anymore, with a compiler warning to replace by
var bStruct := aStruct
On the other hand, we could now write
let bClass := aClass
to create a new instance and assign it the content of another instance, equivalent to this convenience initialiser:
class AClass {
var val: Int = 0
init(with aVar: AClass) {
self.val = aVar.val
}
init() {
}
}
called as
let cClass = AClass(with: aClass)
But the 2 operators would have made it clear that when using = we copy the reference, and when using := we copy the content.
I do think there is a strong rationale behind the present design choice (side effects I do not see?), but I would appreciate to understand better what it is.
I am considering the following distribution scheme for a Mac app:
- distribute the commercial (paid) version on the App Store
- propose a free demo version (of course limited in some respects) on a web site; this version would of course be notarised.
The reason is to avoid having several versions on the App Store, which could create some confusion. The 2 versions would have similar names and differ essentially in the size of the data they can handle.
Does anyone know if this is authorised by the App Store Guidelines? Or must I publish both on the App Store? Is there a risk of my app being rejected as spam?
In a macOS app: when I create a file (in a folder), I save a security-scoped bookmark for the file.
If I ask the user to authorise the folder (before creating the file in it), I can save its bookmark too, allowing me to create other files in this folder later.
So, when I create a file and need to create a companion (e.g. a Results file), I first ask for access to the folder, then create the file and the results file in the same folder (hence having sandbox authorisation).
My understanding is that it is not possible to programmatically create and save the folder bookmark, after deriving its URL from the file URL, without asking the user to explicitly grant access (with an NSOpenPanel). That would be very logical, as anything else would defeat the purpose of security-scoped bookmarks.
So, is explicit user authorisation required (logical, but it creates more complexity when the user moves files in the Finder)?
Note: in fact I don't really need it, as I save a bookmark for every accessed file, but I would like to know.
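For reference, a sketch of the bookmark round-trip being discussed (macOS, sandboxed; the option names are the real API, but error handling is reduced to a minimum):

```swift
import Foundation

// Creating a security-scoped bookmark for a URL the user granted (e.g. via NSOpenPanel):
func makeBookmark(for url: URL) throws -> Data {
    return try url.bookmarkData(options: .withSecurityScope,
                                includingResourceValuesForKeys: nil,
                                relativeTo: nil)
}

// Resolving it later; access must be bracketed with start/stop calls:
func withBookmarkedFolder(_ bookmark: Data, body: (URL) -> Void) throws {
    var isStale = false
    let url = try URL(resolvingBookmarkData: bookmark,
                      options: .withSecurityScope,
                      relativeTo: nil,
                      bookmarkDataIsStale: &isStale)
    guard url.startAccessingSecurityScopedResource() else { return }
    defer { url.stopAccessingSecurityScopedResource() }
    body(url)   // e.g. create the companion Results file here
}
```

As suspected in the post, a bookmark derived from a file URL does not extend to the parent folder: the folder itself must have been granted (via NSOpenPanel or its own bookmark) for companion-file creation to succeed.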
We have been waiting for this for many years.
And the icon is there, at the far right.
That's a GREAT plus for the forum.
The Code Block doesn't number lines anymore.
The workaround is to ask for a Numbered list in addition to the Code block.
Just applying Code Block:
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
commonInit()
}
Code Block AND Numbered List (in either order):
1. required init?(coder aDecoder: NSCoder) {
2. super.init(coder: aDecoder)
3. commonInit()
4. }
Is there another way to get numbering directly?
In this app, I have:
a VC as entry point (root)
which segues (push) to a SplitViewController
On iPad, when I get into the split view, I can get back to the root by dragging the split view down.
But on iPhone, this does not work: even though the presentation is .automatic, the split view covers the whole screen, and there is no way to get back.
I tried to create a back button in the detail views, but could not get it to work.
What is the best way to do this?
- Is there a setup for the initial segue or the views' presentation modes that allows popping back?
- Can I add a back button? Where, and calling what action to return?
- I cannot embed the splitViewController in a nav stack…
I would like the same solution on both iPhone and iPad…
In the new version of Downloads > More, there doesn't seem to be a way to filter a search efficiently. I searched, for instance, for macOS (to see all releases): I get a very long list (200+ references), with a lot of Xcode, but nothing specific to macOS.
If I search for Xcode 11, I get all Xcode references.
I added quotes to try and force an exact search; I just get a Java extension…
That was not the case in the previous version of the More page. Or am I missing something?
Even though I have selected all the notification options in my profile, I have not received any mail for about a week, at least when an answer is marked as correct.
I did check that the mails are not in spam.
What am I missing?