In the TensorFlow 2 MNIST example, why do we split validation_data into validation_inputs and validation_targets, while the others (train_data & test_data) are used as they are (tuples)? Why don't we handle them all the same way?
Thanks for reaching out to us.
You’re right: train_data and validation_data contain samples of the same shape. However, the fit() method expects the validation inputs and validation targets to be passed separately, which is why we use iter() and next() to split them apart.
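As a minimal sketch of that split (using a plain Python generator to stand in for the batched validation_data object, since the actual tf.data pipeline isn't reproduced here, and with made-up sample values):

```python
def batched_validation_data():
    # Stand-in for the batched validation_data object: it yields one
    # (inputs, targets) pair per batch. Here there is a single batch
    # of two dummy samples -- illustrative values, not the real MNIST data.
    yield ([[0.1, 0.2], [0.3, 0.4]], [0, 1])

# iter() gives us an iterator over the batches; next() loads the first
# (and, in this case, only) batch, unpacking it into inputs and targets.
validation_inputs, validation_targets = next(iter(batched_validation_data()))

# These can then be handed to fit() as a separate pair, e.g.:
# model.fit(train_data,
#           validation_data=(validation_inputs, validation_targets), ...)
```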
iter(validation_data) turns the validation_data object into an iterator, i.e., an object we can step through one element at a time, much like in a loop. Imagine it holds the values 1, 12, -4, 9 and that 1 is currently ‘loaded’.
Using next(), we tell it to load the next element. The value of the object would then be 12.
Calling next() again would load -4, and so on.
Now, instead of single numbers, our datasets hold batches of data, i.e., whole arrays. With iter() we instruct the program that we want the object to behave in the way explained above, and with next() we load the next (and, in our case, only) batch.
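The toy walkthrough above can be reproduced in plain Python, first with single numbers and then with batch-like pairs (the batch labels below are placeholder strings, not real data):

```python
values = [1, 12, -4, 9]

it = iter(values)      # create an iterator over the values
first = next(it)       # loads 1
second = next(it)      # loads 12
third = next(it)       # loads -4

# With batched datasets the items are whole arrays instead of single
# numbers, but iter() and next() behave in exactly the same way:
batches = [("inputs_batch_1", "targets_batch_1"),
           ("inputs_batch_2", "targets_batch_2")]
inputs, targets = next(iter(batches))  # loads (and unpacks) the first batch
```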
The 365 Team
Thank you very much!
A side note from my setup: I created two environments in Anaconda and stopped having incompatibility issues. Environment 1 runs TensorFlow 1 with Python 3.7, and Environment 2 runs TensorFlow 2 with Python 3.8.
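For anyone who wants to reproduce this, the two environments can be created roughly as follows (a sketch assuming the Anaconda `conda` CLI; the environment names `tf1_env` and `tf2_env` are just examples, and exact package versions may vary):

```shell
# Environment 1: TensorFlow 1.x with Python 3.7
conda create -n tf1_env python=3.7
conda activate tf1_env
pip install "tensorflow<2"

# Environment 2: TensorFlow 2.x with Python 3.8
conda create -n tf2_env python=3.8
conda activate tf2_env
pip install "tensorflow>=2"
```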