
Reply to The occlusion relationship of virtual content and real object
Hi @radicalappdev, and thanks for this information. I have also been trying to get the best integration of my 3D model into the real world. The SceneReconstructionProvider works very well with an OcclusionMaterial(), but with that method I was losing the GroundingShadow(). So I looked into the Shadow Receiving Occlusion Surface (RealityKit). It doesn't work with the default GroundingShadow, but it seems to work with a DirectionalLightComponent. However, I wasn't able to achieve a nice rendering: I keep getting a kind of white haze over my shader, which looks bad.

1) The first difficulty is setting up a correct light, because "proxy: true" is not valid on visionOS:

let lightEntity = Entity()
let directionalLight = DirectionalLightComponent(
    color: .white,
    intensity: 1000 // no proxy option on visionOS
)
let shadows = DirectionalLightComponent.Shadow(
    maximumDistance: 10,
    depthBias: 0.01
)
lightEntity.components[DirectionalLightComponent.self] = directionalLight
lightEntity.components.set(shadows) // attach the shadow settings to the light entity
lightEntity.components.set(DynamicLightShadowComponent(castsShadow: true))
lightEntity.transform.rotation = simd_quatf(angle: -.pi / 4, axis: [1, 0, 0])
content.add(lightEntity)

I made a basic shader to test it.

2) And the other problem is that the z-depth of the OcclusionMaterial() blocks the GroundingShadow, even when plane detection is enabled in ARKit. So it's a bit frustrating: I can have my 3D models either with a GroundingShadow or with occlusion, but not both at the same time.

For reference, this is how I run the ARKit providers:

let objectWillChange = ObservableObjectPublisher()
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
let scene = SceneReconstructionProvider(modes: [.classification]) // provides the MeshAnchors
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

func start() async throws {
    if BuildEnv.isPreview || BuildEnv.isSimulator { return }
    try await session.run([planeDetection, worldTracking, scene])
    print("ARPipeline started")
}

func stop() async {
    session.stop()
}
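In case the details help, below is a minimal sketch of how I see the two halves fitting together on visionOS: occluder entities built from the SceneReconstructionProvider mesh anchors with an OcclusionMaterial(), and the grounding shadow on the model itself. The makeMeshResource(from:) helper is hypothetical (you would build a MeshResource from the anchor's geometry there); the rest uses standard ARKit / RealityKit calls.

import ARKit
import RealityKit

// Hypothetical helper: build a render mesh from the anchor's geometry
// (for example by filling a MeshDescriptor); implementation not shown here.
func makeMeshResource(from anchor: MeshAnchor) async throws -> MeshResource {
    fatalError("build a MeshResource from anchor.geometry")
}

@MainActor
func runOccluders(root: Entity, provider: SceneReconstructionProvider) async {
    var occluders: [UUID: ModelEntity] = [:]

    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        switch update.event {
        case .added, .updated:
            guard let mesh = try? await makeMeshResource(from: anchor) else { continue }
            let entity = occluders[anchor.id] ?? ModelEntity()
            // The reconstruction mesh only writes depth, so it hides virtual content behind it.
            entity.model = ModelComponent(mesh: mesh, materials: [OcclusionMaterial()])
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            if occluders[anchor.id] == nil {
                occluders[anchor.id] = entity
                root.addChild(entity)
            }
        case .removed:
            occluders[anchor.id]?.removeFromParent()
            occluders[anchor.id] = nil
        }
    }
}

// The 3D model itself only carries the grounding shadow:
// myModel.components.set(GroundingShadowComponent(castsShadow: true))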
Topic: Spatial Computing SubTopic: ARKit
1d
Reply to WWDC20 Optical Flow Sample Code
Hello, here are some steps that might help you.

The first thing is how to load the CIKernel for your visualization; the documentation on this is hard to find. Normally you should convert the "Core Image Kernel Language" function to the "Metal Shading Language", but you can still load the kernel manually from a String (be aware that this initializer is deprecated).

Sample code to load the kernel:

import Foundation
import CoreImage

class OpticalFlowVisualizerFilter: CIFilter {

    var inputImage: CIImage?

    let callback: CIKernelROICallback = { (index, rect) in
        return rect
    }

    static var kernel: CIKernel = { () -> CIKernel in
        /*
        // Preferred (non-deprecated) path: load the kernel from a compiled Metal library.
        let url = Bundle.main.url(forResource: "OpticalFlowVisualizer",
                                  withExtension: "ci.metal")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "flowView2",
                             fromMetalLibraryData: data)
        */
        // Deprecated path: build the kernel from CIKL source kept in a String.
        let source = "//\n// OpticalFlowVisualizer.cikernel\n// SampleVideoCompositionWithCIFilter\n//\n\n\nkernel vec4 flowView2(sampler image, float minLen, float maxLen, float size, float tipAngle)\n{\n\t/// Determine the color by calculating the angle from the .xy vector\n\t///\n\tvec4 s = sample(image, samplerCoord(image));\n\tvec2 vector = s.rg - 0.5;\n\tfloat len = length(vector);\n\tfloat H = atan(vector.y,vector.x);\n\t// convert hue to a RGB color\n\tH *= 3.0/3.1415926; // now range [3,3)\n\tfloat i = floor(H);\n\tfloat f = H-i;\n\tfloat a = f;\n\tfloat d = 1.0 - a;\n\tvec4 c;\n\t\t if (H<-3.0) c = vec4(0, 1, 1, 1);\n\telse if (H<-2.0) c = vec4(0, d, 1, 1);\n\telse if (H<-1.0) c = vec4(a, 0, 1, 1);\n\telse if (H<0.0) c = vec4(1, 0, d, 1);\n\telse if (H<1.0) c = vec4(1, a, 0, 1);\n\telse if (H<2.0) c = vec4(d, 1, 0, 1);\n\telse if (H<3.0) c = vec4(0, 1, a, 1);\n\telse       c = vec4(0, 1, 1, 1);\n\t// make the color darker if the .xy vector is shorter\n\tc.rgb *= clamp((len-minLen)/(maxLen-minLen), 0.0,1.0);\n\t/// Add arrow shapes based on the angle from the .xy vector\n\t///\n\tfloat tipAngleRadians = tipAngle * 3.1415/180.0;\n\tvec2 dc = destCoord(); // current coordinate\n\tvec2 dcm = floor((dc/size)+0.5)*size; // cell center coordinate\n\tvec2 delta = dcm - dc; // coordinate relative to center of cell\n\t// sample the .xy vector from the center of each cell\n\tvec4 sm = sample(image, samplerTransform(image, dcm));\n\tvector = sm.rg - 0.5;\n\tlen = length(vector);\n\tH = atan(vector.y,vector.x);\n\tfloat rotx, k, sideOffset, sideAngle;\n\t// these are the three sides of the arrow\n\trotx = delta.x*cos(H) - delta.y*sin(H);\n\tsideOffset = size*0.5*cos(tipAngleRadians);\n\tk = 1.0 - clamp(rotx-sideOffset, 0.0, 1.0);\n\tc.rgb *= k;\n\tsideAngle = (3.14159 - tipAngleRadians)/2.0;\n\tsideOffset = 0.5 * sin(tipAngleRadians / 2.0);\n\trotx = delta.x*cos(H-sideAngle) - delta.y*sin(H-sideAngle);\n\tk = clamp(rotx+size*sideOffset, 0.0, 1.0);\n\tc.rgb *= k;\n\trotx = delta.x*cos(H+sideAngle) - delta.y*sin(H+sideAngle);\n\tk = clamp(rotx+ size*sideOffset, 0.0, 1.0);\n\tc.rgb *= k;\n\t/// return the color premultiplied\n\tc *= s.a;\n\treturn c;\n}"
        return CIKernel(source: source)!
    }()

    override var outputImage: CIImage? {
        guard let input = inputImage else { return nil }
        return OpticalFlowVisualizerFilter.kernel.apply(
            extent: input.extent,
            roiCallback: callback,
            arguments: [input, 0.0, 100.0, 10.0, 30.0]
        )
    }
}

Then, the optical flow works on a pair of frames. You can use AVFoundation to extract frames from your video like this:
self.videoAssetReaderOutput = AVAssetReaderTrackOutput(
    track: self.videoTrack,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                     kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
)
guard self.videoAssetReaderOutput != nil else {
    return false
}

You can get a CVPixelBuffer for your VNRequest like this:

func nextFrame() -> CVPixelBuffer? {
    guard let sampleBuffer = self.videoAssetReaderOutput.copyNextSampleBuffer() else {
        return nil
    }
    currentFrame += 1
    return CMSampleBufferGetImageBuffer(sampleBuffer)
}

And compare two frames like this:

let requestHandler = VNSequenceRequestHandler()
let previousImage = ofRequest.previousImage
var observationImage: CIImage?

let visionRequest = VNGenerateOpticalFlowRequest(targetedCIImage: ofRequest.targetImage, options: [:])

do {
    try requestHandler.perform([visionRequest], on: previousImage)
    if let pixelBufferObservation = visionRequest.results?.first as? VNPixelBufferObservation {
        observationImage = CIImage(cvImageBuffer: pixelBufferObservation.pixelBuffer)
    }
} catch {
    print(error)
}

let ciFilter = OpticalFlowVisualizerFilter()
ciFilter.inputImage = observationImage
let output = ciFilter.outputImage
return output!

I hope this helps you a bit.
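And to tie those pieces together, here is a rough end-to-end sketch of the loop: pull consecutive frames from the asset reader, run VNGenerateOpticalFlowRequest on each pair, and push the resulting flow buffer through the visualizer filter. FrameReader is a placeholder protocol standing in for whatever class owns the nextFrame() method above (the name is made up); everything else is standard Vision / Core Image.

import AVFoundation
import CoreImage
import Vision

// Placeholder for the reader class shown above; it only needs to hand out frames.
protocol FrameReader {
    func nextFrame() -> CVPixelBuffer?
}

func visualizeOpticalFlow(reader: FrameReader) -> [CIImage] {
    let requestHandler = VNSequenceRequestHandler()
    let filter = OpticalFlowVisualizerFilter()
    var visualizedFrames: [CIImage] = []

    guard var previousBuffer = reader.nextFrame() else { return visualizedFrames }

    while let currentBuffer = reader.nextFrame() {
        let previousImage = CIImage(cvPixelBuffer: previousBuffer)
        let currentImage = CIImage(cvPixelBuffer: currentBuffer)

        // The request targets the current frame and is performed on the previous one,
        // so the observation describes the motion between the two frames.
        let request = VNGenerateOpticalFlowRequest(targetedCIImage: currentImage, options: [:])
        do {
            try requestHandler.perform([request], on: previousImage)
            if let observation = request.results?.first as? VNPixelBufferObservation {
                filter.inputImage = CIImage(cvImageBuffer: observation.pixelBuffer)
                if let output = filter.outputImage {
                    visualizedFrames.append(output)
                }
            }
        } catch {
            print("Optical flow failed: \(error)")
        }
        previousBuffer = currentBuffer
    }
    return visualizedFrames
}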
Topic: Programming Languages SubTopic: Swift
Oct ’22