Hello everybody,

Two days ago I submitted my first iOS app. I was super excited. Yesterday the app came back rejected with "Metadata Rejected". I am very inexperienced, so I kind of panicked and tried everything to solve it. I replied in the Resolution Center and then made the mistake of clicking Submit for Review. Now I am waiting.

Sorry to bother you, but I would really like to hear from somebody with experience what is going to happen next:
- Since I clicked Submit for Review, will my app go to the back of the queue again?
- What happens to my response in the Resolution Center? Will my reviewer still read it?
- In the worst-case scenario, how long will it take to get some feedback?

Thank you so much.
Best regards,
Hello everybody,

I am new to iOS development and I have run into an error that I cannot get past. I have read a lot online and on Stack Overflow, but I still don't understand why it keeps coming up.

I have a table view controller and I want to pass text to another view controller that I push with:

navigationController?.pushViewController(viewController, animated: true)

This is the destination class, where I already have an outlet for the label:

import UIKit

class PetitionDetailsViewController: UIViewController {
    @IBOutlet weak var PetitionDetailsOutlet: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}

In the other class I have this code, where I try to set the text of the label in PetitionDetailsViewController after tapping one of the rows in the table view:

override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    let viewController = PetitionDetailsViewController()
    print(petitions[indexPath.row].body)
    viewController.PetitionDetailsOutlet.text = petitions[indexPath.row].body
    navigationController?.pushViewController(viewController, animated: true)
}

I don't understand why this error keeps coming up. I have the outlet connected, and initially the label is empty. Why is it nil?
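For reference, a pattern I have seen suggested for this kind of crash (I have not confirmed it is the fix for my case) is to pass the text through a plain stored property and only assign it to the label in viewDidLoad, once the outlet exists. The petitionBody property and the "PetitionDetails" storyboard identifier below are my own placeholders:

class PetitionDetailsViewController: UIViewController {
    @IBOutlet weak var PetitionDetailsOutlet: UILabel!

    // Plain stored property that is safe to set before the view loads.
    var petitionBody: String?

    override func viewDidLoad() {
        super.viewDidLoad()
        PetitionDetailsOutlet.text = petitionBody
    }
}

// In the table view controller:
override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    // Instantiate from the storyboard so the outlet actually gets connected;
    // "PetitionDetails" is an assumed storyboard identifier.
    guard let viewController = storyboard?.instantiateViewController(
        withIdentifier: "PetitionDetails") as? PetitionDetailsViewController else { return }
    viewController.petitionBody = petitions[indexPath.row].body
    navigationController?.pushViewController(viewController, animated: true)
}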
Hello everybody,
I am trying to navigate from one view to another using NavigationView and NavigationLink.
I can't figure out what I am doing wrong, but the way I am doing it I get a warning saying:
"Result of NavigationLink<Label, Destination> is unused".
In fact, I can't navigate to the view I want to open.
Here is my code so far:
NavigationView {
    VStack {
        Button(action: {
            let observation = self.createObservation()
            self.records.addObservation(observation)
            self.isPresented.toggle()
            NavigationLink(destination: ObservationDetails(observation: observation).environmentObject(self.records)) {
                EmptyView()
            }
        }) {
            Text("Classify")
        }
    }
}
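For reference, this is the pattern I have seen suggested elsewhere (I have not verified it against my exact code): keep the NavigationLink in the view hierarchy and let the button only flip a @State flag that drives it. ClassifyView and the placeholder destination below are stand-ins.

struct ClassifyView: View {
    @State private var showDetails = false

    var body: some View {
        NavigationView {
            VStack {
                Button("Classify") {
                    // create and store the observation here, then trigger navigation
                    showDetails = true
                }

                // The link is part of the hierarchy; tapping the button only toggles the flag.
                NavigationLink(destination: Text("ObservationDetails goes here"),
                               isActive: $showDetails) {
                    EmptyView()
                }
                .hidden()
            }
        }
    }
}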
Thank you for your help!
Hello everybody,
For the past week I have been struggling to run inference on a classifier I built using Google's AutoML Vision tool.
At first I thought everything would go smoothly, because Google lets you export a Core ML version of the final model, so I assumed I would only need Apple's Core ML framework to make it work. When I export the model, Google provides a .mlmodel file and a dict.txt file with the classification labels. For the current model I have 100 labels.
This is my Swift code to run inference on the model.
private lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let classificationModel = try VNCoreMLModel(for: NewGenusModel().model)
        let request = VNCoreMLRequest(model: classificationModel, completionHandler: { [weak self] request, error in
            self?.processClassifications(for: request, error: error)
        })
        request.imageCropAndScaleOption = .scaleFit
        return request
    }
    catch {
        fatalError("Error! Can't use Model.")
    }
}()

func classifyImage(receivedImage: UIImage) {
    let orientation = CGImagePropertyOrientation(rawValue: UInt32(receivedImage.imageOrientation.rawValue))
    if let image = CIImage(image: receivedImage) {
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: image, orientation: orientation!)
            do {
                try handler.perform([self.classificationRequest])
            }
            catch {
                fatalError("Error classifying image!")
            }
        }
    }
}
The problem started when I tried to pass a UIImage to run inference on the model. The input type of the original model was MultiArray (Float32 1 x 224 x 224 x 3). Using the coremltools library in Python, I was able to convert the input type to Image (Color 224 x 224).
This worked and here is my code:
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
spec = coremltools.utils.load_spec("model.mlmodel")
input = spec.description.input[0]
input.type.imageType.colorSpace = ft.ImageFeatureType.RGB
input.type.imageType.height = 224
input.type.imageType.width = 224
coremltools.utils.save_spec(spec, "newModel.mlmodel")
My problem now is with the output type. I want to be able to access the confidence of the classification as well as the resulting label. Again using coremltools, I was able to access the output description and I got this:
name: "scores"
type {
	multiArrayType {
		dataType: FLOAT32
	}
}
I am trying to change it this way:
import coremltools as ct

f = open("dict.txt", "r")
labels = f.read()
class_labels = labels.splitlines()
print(class_labels)
class_labels = class_labels[1:]
assert len(class_labels) == 57
for i, label in enumerate(class_labels):
    if isinstance(label, bytes):
        class_labels[i] = label.decode("utf8")
classifier_config = ct.ClassifierConfig(class_labels)
output = spec.description.output[0]
output.type = ft.DictionaryFeatureType
Unfortunately this is not working, and I can't find information online that helps me, so I don't know what to do next.
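For reference, one route I found in the coremltools documentation (I have not confirmed it works with the AutoML export, so take it as an assumption) is to pass a ClassifierConfig while converting the original TensorFlow model, instead of patching the spec afterwards:

import coremltools as ct

# Sketch only: assumes the original TensorFlow SavedModel exported from AutoML
# is available in "saved_model_dir" (the name is a placeholder).
with open("dict.txt") as f:
    class_labels = f.read().splitlines()

classifier_config = ct.ClassifierConfig(class_labels)

# The unified converter then produces a classifier model whose outputs are a
# predicted label string plus a dictionary of per-label confidences.
mlmodel = ct.convert("saved_model_dir", classifier_config=classifier_config)
mlmodel.save("NewGenusModelClassifier.mlmodel")

As I understand it, a model converted this way should come back through Vision as VNClassificationObservation values on the Swift side, which expose both identifier and confidence.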
Thank you for your help!
Hello everybody,
I am relatively new to ARKit and SceneKit and I have been experimenting with it.
I have been exploring plane detection and I want to keep only one plane in the view. If other planes are found I want the old ones to be removed.
This is the solution I came up with: I keep a dictionary of all the anchors found so far, and before adding a new plane node I remove those anchors from the session.
What do you think of this solution? Do you think I should do it in any other way? Thank you!
private var planes = [UUID: Plane]()
private var anchors = [UUID: ARAnchor]()

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // we only care about planes
    guard let planeAnchor = anchor as? ARPlaneAnchor else {
        return
    }
    print("Found plane: \(planeAnchor)")
    for anchor in anchors {
        sceneView?.session.remove(anchor: anchor.value)
    }
    let plane = Plane(anchor: planeAnchor)
    planes[anchor.identifier] = plane
    anchors[anchor.identifier] = anchor
    node.addChildNode(plane)
}
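In case it makes the intent clearer, this is roughly how I imagine also cleaning up the dictionaries when ARKit itself removes an anchor (a sketch; it assumes Plane is an SCNNode subclass, as in the code above):

func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    // Drop the bookkeeping for the anchor ARKit just removed.
    planes[anchor.identifier]?.removeFromParentNode()
    planes.removeValue(forKey: anchor.identifier)
    anchors.removeValue(forKey: anchor.identifier)
}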
Hello everybody,
I am trying to run inference on a Core ML model that I created with Create ML. I am following the sample code from Apple's Core ML documentation, and every time I try to classify an image I get this error: "Could not create Espresso context".
Has this ever happened to anyone? How did you solve it?
Here is my code:
import Foundation
import CoreML
import Vision
import UIKit
import ImageIO

final class ButterflyClassification {
    var classificationResult: Result?

    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassification(for: request, error: error)
            })
        }
        catch {
            fatalError("Failed to load model.")
        }
    }()

    func processClassification(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to classify image.")
                return
            }
            let classifications = results as! [VNClassificationObservation]
            if classifications.isEmpty {
                print("No classification was provided.")
                return
            }
            else {
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier,
                                                   confidence: Double(firstClassification.confidence))
            }
        }
    }

    func classifyButterfly(image: UIImage) -> Result? {
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            }
            catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        return classificationResult
    }
}
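For what it is worth, one workaround I have seen mentioned for Espresso-related errors (I have not confirmed it applies to my case) is to restrict the model to CPU execution via MLModelConfiguration, inside the do block of classificationRequest:

let configuration = MLModelConfiguration()
// Default is .all (CPU, GPU and Neural Engine); forcing CPU-only is the workaround being tried.
configuration.computeUnits = .cpuOnly
let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: configuration).model)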
Thank you for your help!
Hello everybody,
I have very little experience developing applications for the Apple Watch. I want to use the Apple Watch to capture accelerometer and gyroscope data to create a CoreML model.
Could you give me some pointers on what I would have to do to be able to gather the data I need from the Apple Watch?
Do I need to create a simple Watch app first to gather this data and save it to a text file, for example?
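To make the question more concrete, this is roughly what I have in mind for the Watch side, assuming Core Motion is the right tool (the class name, sampling rate, CSV columns, and file name are placeholders of mine):

import CoreMotion
import Foundation

final class MotionRecorder {
    private let motionManager = CMMotionManager()
    private var samples: [String] = ["timestamp,accX,accY,accZ,rotX,rotY,rotZ"]

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 50.0  // 50 Hz, an arbitrary choice
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let motion = motion else { return }
            // One CSV row per sample: user acceleration plus rotation rate.
            let a = motion.userAcceleration
            let r = motion.rotationRate
            self?.samples.append("\(motion.timestamp),\(a.x),\(a.y),\(a.z),\(r.x),\(r.y),\(r.z)")
        }
    }

    func stopAndSave() throws -> URL {
        motionManager.stopDeviceMotionUpdates()
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("session.csv")
        try samples.joined(separator: "\n").write(to: url, atomically: true, encoding: .utf8)
        return url
    }
}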
Thank you for your help.
Best regards,
Tomás
Hello everybody,
I am new to Machine Learning but I want to get started with developing CoreML models to try them out in a few apps of my own.
What is the best way to build a dataset from Apple Watch data to build an activity model?
Do I build an iPhone app that works with the Apple Watch in order to get the data I need, or is there a more direct way to do it through Xcode, maybe?
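To show where I want to end up, this is the kind of training step I have in mind on the Mac side, based on my reading of the Create ML docs (the CSV column names are mine, and I am not certain I have the initializer exactly right, so treat it as an assumption):

import CreateML
import Foundation

// Assumes a CSV with one row per sensor sample: accX, accY, accZ, label, sessionId.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "watch_sessions.csv"))

let classifier = try MLActivityClassifier(
    trainingData: data,
    featureColumns: ["accX", "accY", "accZ"],
    labelColumn: "label",
    recordingColumn: "sessionId")

try classifier.write(to: URL(fileURLWithPath: "ActivityModel.mlmodel"))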
Thank you for your help.
Best regards,
Tomás
Topic: App & System Services
SubTopic: Hardware
Tags: SensorKit, Machine Learning, Apple Watch, Core ML
Hello everyone,
I am trying to draw a custom view inside a ForEach (list style), which is inside a ScrollView, which is inside a NavigationView, like this:
NavigationView {
    ScrollView {
        ForEach(array of objects ...) {
            CustomView()
        }
    }
}
The custom view brings up a sheet with a button that can delete elements from the collection used in the ForEach.
Unless I use this asyncAfter after dismissing the sheet, I always get an index-out-of-bounds crash when I try to remove the last element of the array used in the ForEach:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    workouts.removeAll(where: { $0.id == workoutToRemoveID })
}
I have been trying to solve this bug, but so far no luck. Could you give me a hand?
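For reference, here is a reduced sketch of the pattern I have seen suggested (Workout and CustomView are stand-ins for my real types): iterate over the elements themselves and present the sheet with sheet(item:), so a deletion never goes through a stale index.

struct WorkoutListView: View {
    @State private var workouts: [Workout] = []
    @State private var selectedWorkout: Workout?

    var body: some View {
        NavigationView {
            ScrollView {
                // Iterate over the elements, not their indices, so a deletion
                // cannot leave a stale index behind.
                ForEach(workouts) { workout in
                    CustomView(workout: workout)
                        .onTapGesture { selectedWorkout = workout }
                }
            }
            // sheet(item:) dismisses when the item becomes nil, which avoids
            // mutating the array while the sheet still references an index.
            .sheet(item: $selectedWorkout) { workout in
                Button("Delete") {
                    workouts.removeAll { $0.id == workout.id }
                    selectedWorkout = nil
                }
            }
        }
    }
}

struct Workout: Identifiable {
    let id = UUID()
}

struct CustomView: View {
    let workout: Workout
    var body: some View { Text(workout.id.uuidString) }
}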
Thank you for your help!
Hello everyone!
I am trying to implement push notifications in my watchOS app. I have created the delegate class that handles registration for remote notifications and gives me the device token. Then I take the token and send it to Firebase, like this:
func didRegisterForRemoteNotifications(withDeviceToken deviceToken: Data) {
    FirebaseApp.configure()
    var ref: DatabaseReference!
    ref = Database.database().reference().ref.child("/")
    let tokenParts = deviceToken.map { data in String(format: "%02.2hhx", data) }
    let token = tokenParts.joined()
    if let userID = UserDefaults.standard.object(forKey: "id") as? String {
        ref.child("users/\(userID)/token").setValue(token)
    }
}
Then I am using a Python script to communicate with APNs, using the httpx library to get access to HTTP/2. This is what I have got:
payload = {
    "aps": {
        "alert": {
            "title": "Hello Push",
            "message": "This is a notification!"
        },
        "category": "myCategory"
    }
}

dev_server = "https://api.sandbox.push.apple.com:443"
device_token = "9fe2814b6586bbb683b1a3efabdbe1ddd7c6918f51a3b83e90fce038dc058550"

headers = {
    'method': 'POST',
    'path': '/3/device/{}'.format(device_token),
    'autorization': 'bearer' + 'provider_token',
    'apns-push-type': 'myCategory',
    'apns-expiration': '0',
    'apns-priority': '10',
}

async def test():
    async with httpx.AsyncClient(http2=True) as client:
        client = httpx.AsyncClient(http2=True)
        r = await client.post(dev_server, headers=headers, data=payload)
        print(r.text)

asyncio.run(test())
I have also downloaded the .p8 auth key file. But I don't really understand from the Apple Documentation what I have to do with it.
What is the provider token in the headers?
Am I doing the right thing with the token I receive from didRegisterForRemoteNotifications?
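From what I have pieced together (an assumption on my part, not a verified answer), the provider token is a JWT signed with the .p8 key using ES256, sent in the authorization header. This is a sketch using PyJWT with placeholder IDs of my own:

import time
import jwt  # PyJWT, with the cryptography package installed

# Placeholders: these values come from the Apple Developer account, not real ones.
TEAM_ID = "ABCDE12345"
KEY_ID = "ABC123DEFG"
AUTH_KEY_PATH = "AuthKey_ABC123DEFG.p8"

with open(AUTH_KEY_PATH) as f:
    signing_key = f.read()

provider_token = jwt.encode(
    {"iss": TEAM_ID, "iat": int(time.time())},
    signing_key,
    algorithm="ES256",
    headers={"kid": KEY_ID},
)

headers = {
    "authorization": f"bearer {provider_token}",
    "apns-topic": "com.example.MyApp.watchkitapp",  # assumed bundle identifier
    "apns-push-type": "alert",
    "apns-priority": "10",
}

# As I understand it, the device path belongs in the URL rather than the headers:
# POST https://api.sandbox.push.apple.com/3/device/<device_token>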
Topic: Programming Languages
SubTopic: Swift
Tags: APNS, Swift, Notification Center, User Notifications