Shapes from VNDetectFaceLandmarksRequest are sideways
I'm trying to set up an app that detects the location of a face and its facial landmarks in a still image using VNDetectFaceLandmarksRequest. For some reason, though, when I draw the detected landmarks they show up near the correct location but rotated by roughly 90 degrees. What gives? I've tried setting the orientation of the VNImageRequestHandler to .left instead of .up on line 66, which fixes the orientation of the landmarks but offsets them away from the face. I've also tried rotating the calculated landmarkLayer as it's being built, but I don't understand CAShapeLayer transform manipulation well enough to position it the way I'd like.
StillImageViewController.swift
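A minimal sketch of one way this setup can look, assuming a UIKit view controller where `imageView` displays the image at its native pixel size (the names `drawLandmarks(on:in:)` and `imageView` are illustrative, not taken from the attached file). It passes the photo's own orientation to the VNImageRequestHandler and flips Vision's lower-left-origin points into UIKit's top-left coordinate space, which is the usual source of this kind of 90-degree rotation and offset:

```swift
import UIKit
import Vision

/// Run VNDetectFaceLandmarksRequest on a still image and draw the
/// landmark points as a CAShapeLayer over `imageView`.
func drawLandmarks(on image: UIImage, in imageView: UIImageView) {
    guard let cgImage = image.cgImage else { return }

    // Tell Vision how the pixels are oriented; for photos this comes
    // from the UIImage's imageOrientation rather than a hardcoded .up.
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    let imageSize = CGSize(width: cgImage.width, height: cgImage.height)

    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        DispatchQueue.main.async {
            for face in faces {
                guard let region = face.landmarks?.allPoints else { continue }
                let path = UIBezierPath()
                // pointsInImage(imageSize:) converts the normalized landmark
                // points to image pixel coordinates, already accounting for
                // the face's bounding box.
                for (i, p) in region.pointsInImage(imageSize: imageSize).enumerated() {
                    // Vision uses a lower-left origin; flip y for UIKit.
                    let flipped = CGPoint(x: p.x, y: imageSize.height - p.y)
                    i == 0 ? path.move(to: flipped) : path.addLine(to: flipped)
                }
                let landmarkLayer = CAShapeLayer()
                landmarkLayer.path = path.cgPath
                landmarkLayer.strokeColor = UIColor.red.cgColor
                landmarkLayer.fillColor = nil
                imageView.layer.addSublayer(landmarkLayer)
            }
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: orientation)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}

/// Bridge UIImage.Orientation to CGImagePropertyOrientation, since the
/// two enums use different raw values.
extension CGImagePropertyOrientation {
    init(_ o: UIImage.Orientation) {
        switch o {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}
```

If the image view scales its content (e.g. .scaleAspectFit), the pixel-space points would additionally need to be mapped into the view's coordinate space before building the layer.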
1 reply · 0 boosts · 872 views · Sep ’21
Core ML Missing Metadata "inputSchema"
I'm working with a style transfer model trained with PyTorch in Google Colaboratory and then converted to an ML package. When I bring it into Xcode and try to preview the asset, I see the following error:
There was a problem decoding this Core ML document missingMetadataField(named: "inputSchema")
I've been able to train and convert models as .mlmodel files; I'm only seeing this issue with .mlpackage files. I'm using the Xcode 13 beta, which as far as I know is the only version of Xcode that can handle ML packages/programs at the moment, and I'm using the coremltools beta to handle the conversion. Prior to the conversion, or if I convert to an ML model instead, everything seems to work fine. Is this a problem with how the model is being structured or converted? Is this a problem with how I've set up my Xcode environment/Swift project? Is there some way to update the metadata associated with ML packages to make sure the missing input schema is included?
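One way to narrow this down, sketched below: compile and load the package directly with Core ML and print the schema the runtime actually sees (the path StyleTransfer.mlpackage is a hypothetical placeholder). If this loads cleanly, the missingMetadataField error is more likely specific to the Xcode 13 beta's asset preview than to the converted model itself:

```swift
import CoreML
import Foundation

// Hypothetical location of the converted package.
let packageURL = URL(fileURLWithPath: "/path/to/StyleTransfer.mlpackage")

do {
    // Compile the .mlpackage into an .mlmodelc the runtime can load.
    let compiledURL = try MLModel.compileModel(at: packageURL)
    let model = try MLModel(contentsOf: compiledURL)

    // Dump the input/output schema and metadata Core ML decoded.
    print("Inputs:", model.modelDescription.inputDescriptionsByName)
    print("Outputs:", model.modelDescription.outputDescriptionsByName)
    print("Metadata:", model.modelDescription.metadata)
} catch {
    print("Failed to compile or load the package:", error)
}
```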
1 reply · 0 boosts · 1.7k views · Sep ’21