I ran some more tests, starting with a very simple model:
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(3, 3)
        self.activation1 = nn.ReLU()
        self.linear2 = nn.Linear(3, 1)

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation1(x)
        x = self.linear2(x)
        return x
And I could convert it with no problem. But if I give the layers different shapes, the outputSchema error occurs:
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(3, 4)
        self.activation1 = nn.ReLU()
        self.linear2 = nn.Linear(4, 1)

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation1(x)
        x = self.linear2(x)
        return x
It seems to work regardless of the number of hidden layers, as long as they all use the same shape.
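For reference, the traced_model used below comes from tracing the model with torch.jit.trace; roughly like this (the example input is a placeholder matching the (1, 3) input declared in the pipeline):

import torch

model = SimpleModel()
model.eval()
# Example input matching the (1, 3) shape used in the pipeline below;
# named `input` to match the conversion code (note it shadows Python's built-in input()).
input = torch.rand(1, 3)
traced_model = torch.jit.trace(model, input)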
I'm doing the conversion using this code:
import coremltools as ct
from coremltools.models import datatypes
from coremltools.models import pipeline

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=input.shape)],
)

pipeline_network = pipeline.Pipeline(
    input_features=[("input", datatypes.Array(1, 3))],
    output_features=[("linear_1", datatypes.Array(1, 1))],
)
pipeline_network.add_model(mlmodel)
pipeline_spec = pipeline_network.spec
ct.utils.convert_double_to_float_multiarray_type(pipeline_spec)
ct.utils.save_spec(pipeline_spec, "Core.mlmodel")
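In case the problem is a name mismatch, one quick check is whether the output name that ct.convert actually assigns matches the "linear_1" I hard-coded in output_features (a small diagnostic sketch):

# Print the converted model's input/output descriptions; the output name
# assigned by ct.convert may differ from the "linear_1" used above.
spec = mlmodel.get_spec()
print(spec.description.input)
print(spec.description.output)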