
Processing an input tensor with `builder.add_transpose` makes the custom layer always run on CPU
I'm trying to implement a 5-D input tensor version of a custom grid_sample based on this work. To encode one of the input tensors, grid, into an MTLTexture as shown in the official guide, I need to transpose grid from (N×D_grid×W_grid×H_grid×3) to (N×3×D_grid×W_grid×H_grid) with builder.add_transpose, similar to what is shown here. In my implementation, however, I find that adding this transpose op makes the custom layer always run on the CPU; without it, the data does reach the GPU. Could builder.add_transpose be the cause of this?

System Information
xcode: 13.2
coremltools: 4.1 / 5.1.0
test device: iPhone 11
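For reference, this is roughly the kind of transpose I'm adding; a minimal sketch using coremltools' NeuralNetworkBuilder, where the builder instance and the layer/blob names ("grid_transpose", "grid", "grid_nchw") are placeholders for my actual graph:

```python
from coremltools.models.neural_network import NeuralNetworkBuilder

def insert_grid_transpose(builder: NeuralNetworkBuilder) -> None:
    # Move the last axis (size 3) next to the batch axis:
    # (N, D_grid, W_grid, H_grid, 3) -> (N, 3, D_grid, W_grid, H_grid)
    builder.add_transpose(
        name="grid_transpose",    # placeholder layer name
        axes=(0, 4, 1, 2, 3),
        input_name="grid",        # placeholder input blob
        output_name="grid_nchw",  # placeholder output blob
    )
    # The custom grid_sample layer then consumes "grid_nchw" instead of "grid".
```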
Replies: 0 · Boosts: 0 · Views: 568 · Jan ’22
Can `MTLTexture` be used to store a 5-D input tensor?
I'm trying to implement the PyTorch custom layer [grid_sampler](https://pytorch.org/docs/1.9.1/generated/torch.nn.functional.grid_sample.html) on GPU. Both of its inputs, input and grid, can be 5-D. My implementation of encodeToCommandBuffer, the MLCustomLayer protocol's GPU entry point, is shown below. In my attempts so far, neither the id<MTLTexture> input nor the id<MTLTexture> grid values meet expectations. So I wonder: can an MTLTexture be used to store a 5-D input tensor as an input to encodeToCommandBuffer? Or can anybody show me how to use MTLTexture correctly here? Thanks a lot!

```objective-c
- (BOOL)encodeToCommandBuffer:(id<MTLCommandBuffer>)commandBuffer
                       inputs:(NSArray<id<MTLTexture>> *)inputs
                      outputs:(NSArray<id<MTLTexture>> *)outputs
                        error:(NSError * _Nullable *)error {
    NSLog(@"Dispatching to GPU");
    NSLog(@"inputs count %lu", (unsigned long)inputs.count);
    NSLog(@"outputs count %lu", (unsigned long)outputs.count);

    id<MTLComputeCommandEncoder> encoder = [commandBuffer
        computeCommandEncoderWithDispatchType:MTLDispatchTypeSerial];
    assert(encoder != nil);

    id<MTLTexture> input = inputs[0];
    id<MTLTexture> grid = inputs[1];
    id<MTLTexture> output = outputs[0];
    // A texture only exposes width/height/depth/arrayLength, which is where
    // the 5-D shapes do not seem to fit.
    NSLog(@"inputs shape %lu, %lu, %lu, %lu", (unsigned long)input.width,
          (unsigned long)input.height, (unsigned long)input.depth,
          (unsigned long)input.arrayLength);
    NSLog(@"grid shape %lu, %lu, %lu, %lu", (unsigned long)grid.width,
          (unsigned long)grid.height, (unsigned long)grid.depth,
          (unsigned long)grid.arrayLength);

    if (encoder) {
        [encoder setTexture:input atIndex:0];
        [encoder setTexture:grid atIndex:1];
        [encoder setTexture:output atIndex:2];

        NSUInteger wd = grid_sample_Pipeline.threadExecutionWidth;
        NSUInteger ht = grid_sample_Pipeline.maxTotalThreadsPerThreadgroup / wd;
        MTLSize threadsPerThreadgroup = MTLSizeMake(wd, ht, 1);
        MTLSize threadgroupsPerGrid = MTLSizeMake((input.width + wd - 1) / wd,
                                                  (input.height + ht - 1) / ht,
                                                  input.arrayLength);
        [encoder setComputePipelineState:grid_sample_Pipeline];
        [encoder dispatchThreadgroups:threadgroupsPerGrid
                threadsPerThreadgroup:threadsPerThreadgroup];
        [encoder endEncoding];
    } else {
        return NO;
    }
    *error = nil;
    return YES;
}
```
Replies: 1 · Boosts: 0 · Views: 1.2k · Jan ’22
Core ML custom layer implemented in `Objective-C` doesn't work in a `Swift` test project
I converted a PyTorch model to an mlmodel with a custom layer and created a test app in Swift to test the model. When I implement the custom layer in Swift, it works well. However, when I implement the custom layer in Objective-C, loading the model fails with:

2022-01-14 17:58:49.964377+0800 CustomLayers[2547:968723] [coreml] Error in adding network -1.
2022-01-14 17:58:49.965023+0800 CustomLayers[2547:968723] [coreml] MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
2022-01-14 17:58:49.965085+0800 CustomLayers[2547:968723] [coreml] MLModelAsset: modelWithError: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}: file CustomLayers/model_2.swift, line 114
2022-01-14 17:58:49.966267+0800 CustomLayers[2547:968723] Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}: file CustomLayers/model_2.swift, line 114
(lldb)

It seems the model fails to load when the custom layer is implemented in Objective-C. So I wonder: can an Objective-C custom layer implementation not work with a Swift project? I have tried setting up CustomLayers-Bridging-Header.h, but it still doesn't work.

System Information
mac OS: 11.6.1 Big Sur
xcode: 12.5.1
coremltools: 5.1.0
test device: iPhone 11
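For reference, a minimal sketch of how the custom layer's declared class name can be inspected with coremltools; this assumes a neuralNetwork-type spec, and model_2.mlmodel is just my model file. Core ML looks the class up by this name at load time, so it has to match the Objective-C class exactly:

```python
import coremltools as ct

# Print the class name Core ML will try to instantiate for each custom layer.
# If it does not match the Objective-C class name, Core ML cannot create the
# layer and the model fails to load.
spec = ct.utils.load_spec("model_2.mlmodel")
for layer in spec.neuralNetwork.layers:
    if layer.WhichOneof("layer") == "custom":
        print(layer.name, "->", layer.custom.className)
```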
Replies: 2 · Boosts: 0 · Views: 1.3k · Jan ’22
Why does my Core ML custom layer still run on CPU even though I enabled the Metal API in the `encode` function?
I implemented a custom PyTorch layer on both CPU and GPU following [Hollemans' amazing blog](https://machinethink.net/blog/coreml-custom-layers). The CPU version works fine, but the GPU implementation never activates the "encode" function; the layer always runs on the CPU. I have checked the coremltools.convert() options and set compute_units=coremltools.ComputeUnit.CPU_AND_GPU, but it still doesn't work. This problem is also mentioned in https://stackoverflow.com/questions/51019600/why-i-enabled-metal-api-but-my-coreml-custom-layer-still-run-on-cpu and https://developer.apple.com/forums/thread/695640. Any help would be appreciated.

System Information
mac OS: 11.6.1 Big Sur
xcode: 12.5.1
coremltools: 5.1.0
test device: iPhone 11
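For completeness, this is essentially the conversion call I mean; a minimal sketch where the traced model, input name, and shape are placeholders, and the custom-layer conversion details are omitted:

```python
import coremltools as ct

# Placeholder model and input shape; custom-layer handling omitted.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 64, 64))],
    compute_units=ct.ComputeUnit.CPU_AND_GPU,  # allow GPU as well as CPU
)
mlmodel.save("model_with_custom_layer.mlmodel")
```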
Replies: 1 · Boosts: 0 · Views: 996 · Jan ’22