Hello everyone,
I am currently developing a smart camera app for iOS that recommends optimal zoom and exposure values on-device using a custom Core ML model. I am still waiting for an official response from Apple Support, but I wanted to ask the community if anyone has experience with a similar workflow regarding App Review and the DPLA.
Here is my training methodology:
- I gathered my own proprietary dataset of original landscape photos.
- I generated multiple variants of these photos with different zoom and exposure settings offline on my Mac.
- I used the CalculateImageAestheticsScoresRequest (Vision framework) via a local macOS command-line tool to evaluate and score each variant.
- Based on those scores, I labeled the "best" zoom and exposure parameters for each original photo.
- I used this labeled dataset to train my own independent neural network in PyTorch, and then converted it to a Core ML model to ship inside my app.
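For context, the scoring step in my offline macOS tool looks roughly like this. This is a minimal sketch, not my full pipeline: the helper names (`aestheticsScore`, `bestVariantIndex`) are just illustrative, and CalculateImageAestheticsScoresRequest requires the new Swift-only Vision API (macOS 15+):

```swift
import Foundation
import Vision  // CalculateImageAestheticsScoresRequest: macOS 15+ / iOS 18+

// Pure helper (name is mine): given the aesthetics scores for all variants
// of one original photo, pick the index of the best-scoring variant.
func bestVariantIndex(scores: [Float]) -> Int? {
    guard !scores.isEmpty else { return nil }
    return scores.indices.max { scores[$0] < scores[$1] }
}

// Score a single image file with Vision's aesthetics request.
@available(macOS 15.0, *)
func aestheticsScore(for url: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: url)
    return observation.overallScore  // roughly -1...1, higher is better
}
```

The winning variant's zoom and exposure settings then become the label for that original photo in the PyTorch training set.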
Since the app runs my own custom model entirely on-device and does not send any user data to a server, the privacy side seems straightforward. However, I am wondering whether using the output of Apple's Vision API strictly offline to label my own training dataset could be interpreted as "reverse engineering" or otherwise violate the Developer Program License Agreement (DPLA).
Has anyone successfully shipped an app using a similar knowledge distillation or automated dataset labeling approach with Apple's APIs? Did you face any pushback during App Review?
Any insights or shared experiences would be greatly appreciated!