Last answered: 21 Aug 2020

Posted on: 28 Oct 2019


Running into errors in MNIST exercise

I was following the MNIST example, trying to replicate the code from the lecture. However, I am getting warnings and errors in different places (not sure if they are related) and I am unable to complete the exercise.
The first warning appears when running:
mnist_dataset, mnist_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
WARNING:tensorflow:Entity <bound method TopLevelFeature.decode_example of FeaturesDict({
'image': Image(shape=(28, 28, 1), dtype=tf.uint8),
'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=10),
})> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
Then, when I tried to fit the model using:
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)
Here's the error:
ValueError                                Traceback (most recent call last)
<ipython-input-96-0dc18ef0284d> in <module>
----> 1 model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)

~\Anaconda3\envs\py3-TF2.0\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
732         max_queue_size=max_queue_size,
733         workers=workers,
--> 734         use_multiprocessing=use_multiprocessing)
735
736   def evaluate(self,

~\Anaconda3\envs\py3-TF2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
222           validation_data=validation_data,
223           validation_steps=validation_steps,
--> 224           distribution_strategy=strategy)
225
226       total_samples = _get_total_number_of_samples(training_data_adapter)

~\Anaconda3\envs\py3-TF2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
561                                     class_weights=class_weights,
562                                     steps=validation_steps,
--> 563                                     distribution_strategy=distribution_strategy)
564     elif validation_steps:
565       raise ValueError('`validation_steps` should not be specified if '

~\Anaconda3\envs\py3-TF2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
603       max_queue_size=max_queue_size,
604       workers=workers,
--> 605       use_multiprocessing=use_multiprocessing)
606   # As a fallback for the data type that does not work with
607   # _standardize_user_data, use the _prepare_model_with_inputs.

~\Anaconda3\envs\py3-TF2.0\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, batch_size, epochs, steps, shuffle, **kwargs)
239     if not batch_size:
240       raise ValueError(
--> 241           "`batch_size` or `steps` is required for `Tensor` or `NumPy`"
242           " input data.")
243

ValueError: `batch_size` or `steps` is required for `Tensor` or `NumPy` input data.

3 answers (0 marked as helpful)
Posted on: 28 Oct 2019

I did a Google search and was able to address the first issue, in case anyone runs into the same thing. The warning appears when running:
mnist_dataset, mnist_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
Installing gast version 0.2.2 fixed it:
pip install --user gast==0.2.2
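In case it helps, a quick way to confirm the environment is actually picking up the downgraded package (this is just a general check, nothing specific to the course code):

import pkg_resources
import tensorflow as tf

# print the versions the current environment resolves to
print("gast:", pkg_resources.get_distribution("gast").version)   # expect 0.2.2 after the downgrade
print("tensorflow:", tf.__version__)

If the notebook was already running when gast was reinstalled, restart the kernel so the new version gets imported.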
     
Posted on: 29 Oct 2019


It looks like the provided solution (link: https://learn.365datascience.com/courses/deep-learning-with-tensorflow-2-0/learning) was missing an argument in this line of code:
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)
Adding validation_steps=1 will do the trick:
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), validation_steps=1, verbose=2)
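For context, here is a minimal sketch of the preprocessing that (as far as I understand the lecture) produces validation_inputs and validation_targets; the constants and variable names below are my assumptions, not the official solution:

import tensorflow as tf
import tensorflow_datasets as tfds

BUFFER_SIZE = 10000
BATCH_SIZE = 100

mnist_dataset, mnist_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train = mnist_dataset['train']

def scale(image, label):
    # cast to float and rescale pixel values to [0, 1]
    return tf.cast(image, tf.float32) / 255., label

# take roughly 10% of the training split as a validation set
num_validation_samples = int(0.1 * mnist_info.splits['train'].num_examples)

shuffled = mnist_train.map(scale).shuffle(BUFFER_SIZE)
validation_data = shuffled.take(num_validation_samples)
train_data = shuffled.skip(num_validation_samples).batch(BATCH_SIZE)

# the whole validation set is packed into ONE batch and pulled out as plain tensors
validation_data = validation_data.batch(num_validation_samples)
validation_inputs, validation_targets = next(iter(validation_data))

Because validation_inputs and validation_targets end up as plain tensors rather than a tf.data.Dataset, Keras cannot infer how many batches to evaluate and asks for a batch_size or steps; validation_steps=1 simply tells it that the single batch above is the whole validation pass. Passing the batched validation_data dataset itself to validation_data= should also work, since Keras can iterate a finite dataset to completion.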

Posted on: 21 Aug 2020

My model was:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
I just changed input_shape=(28,28,1) to input_shape=(28,28,3) and it resolved the error.
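If anyone else hits a shape mismatch here, it may be worth checking which image shape your pipeline actually feeds the model before choosing input_shape; a quick check, assuming the tfds setup from the original question:

import tensorflow_datasets as tfds

mnist_dataset, mnist_info = tfds.load(name='mnist', with_info=True, as_supervised=True)

# shape reported by the dataset metadata (height, width, channels)
print(mnist_info.features['image'].shape)

# and one concrete example, in case your own preprocessing changes the channel count
image, label = next(iter(mnist_dataset['train']))
print(image.shape, image.dtype)

Whatever these print is the shape that input_shape has to match after any preprocessing you apply.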
