The workaround doesn't work inside a tf.function, which is a real problem.
I tried other alternatives, like:
import tensorflow as tf

randomgen = tf.random.Generator.from_non_deterministic_state()
#%%
for _ in range(10):
    g2 = tf.random.get_global_generator()
    x = g2.uniform((10,), 1, 2)
    y = g2.uniform((10,), 3, 4)
    tf.print(x)
    tf.print(y)
But I get:
NotFoundError: No registered 'RngReadAndSkip' OpKernel for 'GPU' devices compatible with node {{node RngReadAndSkip}}
. Registered: device='CPU'
[Op:RngReadAndSkip]
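Since RngReadAndSkip only has a CPU kernel registered, one possible workaround (a sketch, not a confirmed fix) is to create the generator inside a CPU device scope so its state variable, and the ops that read it, land on a device that has the kernel:

```python
import tensorflow as tf

# Sketch: pin the generator's state to the CPU, where RngReadAndSkip is
# registered. The generated tensors can still be consumed by GPU ops.
with tf.device("/device:CPU:0"):
    g = tf.random.Generator.from_seed(1234)

x = g.uniform((10,), 1, 2)  # values in [1, 2)
y = g.uniform((10,), 3, 4)  # values in [3, 4)
```

This trades the GPU RNG kernel for a CPU one, so there may be a host-to-device copy per call.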
And obviously, calling this inside a tf.function will always generate the same sequence, because the seed (1, 2) is a constant:
tf.random.stateless_uniform((size,), (1, 2), xmin, xmax, tf.float32)
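The stateless ops are pure functions of their seed, so a fixed seed means fixed output. One pattern that avoids both problems (a sketch under the assumption that `tf.random.experimental.stateless_split` is available, TF 2.4+): thread a seed through the function and split it on each call, so every invocation samples with a fresh seed:

```python
import tensorflow as tf

@tf.function
def sample(seed, size=10, xmin=-2.0, xmax=0.7):
    # Split the incoming seed into a seed for this draw and a seed to
    # carry forward to the next call.
    new_seed, draw_seed = tf.unstack(
        tf.random.experimental.stateless_split(seed, num=2))
    x = tf.random.stateless_uniform((size,), draw_seed, xmin, xmax, tf.float32)
    return x, new_seed

seed = tf.constant([1, 2], dtype=tf.int64)
x1, seed = sample(seed)
x2, seed = sample(seed)  # different seed, so different values than x1
```

The caller owns the seed state, so this stays deterministic and reproducible while still producing new numbers on every call.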
This doesn't work either:
randomgen = tf.random.Generator.from_non_deterministic_state()

@tf.function
def MandelbrotDataSet(size=1000, max_depth=100, xmin=-2.0, xmax=0.7, ymin=-1.3, ymax=1.3):
    global randomgen
    x = randomgen.uniform((size,), xmin, xmax, tf.float32)
    y = randomgen.uniform((size,), ymin, ymax, tf.float32)
Because it hits the missing RngReadAndSkip GPU kernel again.
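Combining the two ideas above, a sketch that may sidestep the error: create the generator under a CPU device scope (so RngReadAndSkip resolves to its CPU kernel) and pass it into the tf.function as an argument rather than through a global. The function name and parameters mirror the snippet above; this is an assumed workaround, not a confirmed fix:

```python
import tensorflow as tf

# Generator state pinned to the CPU, where the RngReadAndSkip kernel exists.
with tf.device("/device:CPU:0"):
    randomgen = tf.random.Generator.from_non_deterministic_state()

@tf.function
def mandelbrot_points(gen, size=1000, xmin=-2.0, xmax=0.7, ymin=-1.3, ymax=1.3):
    # Each call advances the generator's state, so repeated calls
    # produce new points instead of replaying the same sequence.
    x = gen.uniform((size,), xmin, xmax, tf.float32)
    y = gen.uniform((size,), ymin, ymax, tf.float32)
    return x, y

x1, y1 = mandelbrot_points(randomgen)
x2, y2 = mandelbrot_points(randomgen)  # new state, new values
```

Passing the generator as an argument also avoids the tf.function capturing a stale global at trace time.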