Is there a detailed introduction to Vision's subject-lifting technology, i.e., how it separates objects from pictures, or a technical write-up on separating them from videos?
The demo video says it uses machine learning, but I didn't find any description of this in the help documentation. Can anyone point me to a reference?
It's a great technique for separating objects from pictures or videos.
It seems to be the same technology as the person segmentation provided by Vision, but I can't find a detailed API for it in the documentation...
https://developer.apple.com/documentation/vision/applying_matte_effects_to_people_in_images_and_video
Can anyone help? Thanks.
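For the person-segmentation side of this, the documented entry point is `VNGeneratePersonSegmentationRequest`. Here is a minimal sketch of how it can be used to get a grayscale matte for a still image; the helper name is mine, and the quality level is just one illustrative choice:

```swift
import Vision
import CoreImage

// Hypothetical helper: returns a grayscale matte (white = person) for an image.
func personMatte(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate                       // .fast / .balanced are cheaper
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // The first observation's pixelBuffer is the soft segmentation matte.
    return request.results?.first?.pixelBuffer
}
```

For video, the same request can be performed per frame on the frames of an `AVCaptureSession` or asset reader.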
Without style transfer, the mask obtained from VNGeneratePersonSegmentationRequest() lines up with the image just fine. After style transfer, though, the final output differs slightly from the mask, and the resulting picture is 512×512.
I'm trying to align the hand mask obtained from VNGeneratePersonSegmentationRequest() so I can keep the part I want and replace the background. But with both my own trained model and models from others, the mask never lines up properly; it's always a little bit off, and I don't know where I went wrong.
You can see in the pictures that the region kept by maskImage is a little different from what I expected; with a sketch style the mismatch is very obvious.
My understanding is that the resolution of the style-transferred image doesn't match the resolution of the original, so even after resizing the mask it is slightly offset. Yet in the video demo the alignment looks very good. Is my method wrong? Should this be handled through channels, or in some other way? I hope I've made my question clear; thanks to anyone who can answer.
See the link below for the pictures; the forum wouldn't let me upload them and told me to try again later.
https://kdocs.cn/l/cbYcFau6TFTe
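One common cause of the "a little bit off" symptom is compositing with a matte whose extent doesn't exactly match the 512×512 stylized output. A sketch of scaling the matte to the stylized image before blending, using Core Image (the function and variable names here are illustrative, not from the original post):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: composite the style-transferred image over a background using the
// person matte. `stylized` is the 512x512 model output, `matte` is a CIImage
// made from the VNGeneratePersonSegmentationRequest pixel buffer.
func composite(stylized: CIImage, matte: CIImage, background: CIImage) -> CIImage? {
    // Scale the matte so it exactly covers the stylized image's extent;
    // skipping this step leaves the mask slightly misaligned.
    let sx = stylized.extent.width / matte.extent.width
    let sy = stylized.extent.height / matte.extent.height
    let scaledMatte = matte.transformed(by: CGAffineTransform(scaleX: sx, y: sy))

    let blend = CIFilter.blendWithMask()
    blend.inputImage = stylized          // kept where the matte is white
    blend.backgroundImage = background   // shown where the matte is black
    blend.maskImage = scaledMatte
    return blend.outputImage
}
```

Note that the style-transfer model first resizes the input to 512×512, so the subject's outline in the output is effectively resampled; a matte computed from the full-resolution original may still differ by a pixel or two at edges even after scaling. Computing the matte from the same 512×512 input the model sees can reduce that residual offset.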
How can I obtain the alpha channel of virtual objects added in ARKit for image processing? For example, when I apply the style-transfer effect, I want to process the virtual-object region separately. Each virtual object has a collision plane, so the channels I obtained previously all included that plane, which is wrong. Is there a good way to get the alpha channel of just the virtual object? Thanks!
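One possible approach (a sketch, not necessarily the poster's setup): render only the virtual nodes off-screen with a transparent background, so the resulting image's alpha channel covers exactly the virtual geometry. Collision planes are excluded by giving them a different `categoryBitMask`; all names and mask values below are illustrative assumptions:

```swift
import ARKit
import SceneKit
import UIKit

// Illustrative category for renderable virtual objects; assign collision
// planes a different bit so the camera skips them.
let virtualObjectCategory = 1 << 1

func virtualObjectAlphaImage(sceneView: ARSCNView,
                             device: MTLDevice,
                             time: TimeInterval) -> CGImage? {
    // Off-screen renderer sharing the AR scene and camera pose.
    let renderer = SCNRenderer(device: device, options: nil)
    renderer.scene = sceneView.scene
    renderer.pointOfView = sceneView.pointOfView

    // Only nodes whose categoryBitMask intersects the camera's mask are
    // drawn, so tagged collision planes stay out of the alpha channel.
    renderer.pointOfView?.camera?.categoryBitMask = virtualObjectCategory
    renderer.scene?.background.contents = UIColor.clear  // transparent backdrop

    // Match the style-transfer model's working resolution.
    let size = CGSize(width: 512, height: 512)
    let image = renderer.snapshot(atTime: time, with: size,
                                  antialiasingMode: .none)
    return image.cgImage
}
```

The snapshot's alpha channel can then serve as the mask when blending the stylized frame, in the same way as a person-segmentation matte.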