OutOfRangeError #172
Comments
Check that all the images are present; besides, since you are using Windows, there might be some issues with line breaks as well. So I would recommend writing a small script that verifies that all your images can be opened and read correctly.
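A minimal sketch of such a check, assuming the usual list-file layout of one "image_path label_path" pair per line (the data_dir and list_file paths here are illustrative, not taken from this repository):
import os
from PIL import Image

data_dir = "./dataset"             # illustrative; adjust to your layout
list_file = "./dataset/train.txt"  # one "image_path label_path" pair per line

with open(list_file) as f:
    for lineno, line in enumerate(f, 1):
        line = line.strip()        # also drops stray Windows \r line endings
        if not line:
            continue
        for rel_path in line.split():
            path = os.path.join(data_dir, rel_path.lstrip("/"))
            if not os.path.isfile(path):
                print("missing:", lineno, path)
                continue
            try:
                with Image.open(path) as img:
                    img.verify()   # raises if the file is truncated or corrupt
            except Exception as err:
                print("unreadable:", lineno, path, err)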
Hi, have you solved this problem? I am running into it too now. @tadbeer
Hey, I quit using this repository. Instead, my teammate wrote the DeepLab v3 architecture completely in Keras itself, and that is giving us pretty good segmentation results. Since we are working in a private corporation I can't share that Keras script with you; we might be uploading it to GitHub in some time. Till then, maybe you can try writing the architecture in Keras yourself; v3 is pretty simple. I can guide you in writing the architecture if you wish.
All the best.
Pranjal
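If it helps as a starting point, the core building block of DeepLab v3 is the ASPP (atrous spatial pyramid pooling) module. A rough Keras sketch of that block, illustrative only (no batch norm and no image-level pooling branch; the filter count and dilation rates are just the commonly used defaults):
from tensorflow.keras import layers

def aspp_block(x, filters=256, rates=(6, 12, 18)):
    # Parallel convolutions over the same feature map at several dilation rates,
    # then concatenate the branches and project back with a 1x1 convolution.
    branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
    for rate in rates:
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=rate, activation="relu")(x))
    x = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(x)

# Example: apply ASPP on top of a backbone feature map (the shape is illustrative).
inputs = layers.Input(shape=(None, None, 2048))
outputs = aspp_block(inputs)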
I have found the solution. Use
init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())
sess.run(init_op)
to initialize the variables, because the code forgets to initialize the local variables.
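For what it's worth, a rough TF 1.x sketch of where that combined initializer usually sits, before the queue runners are started (the graph-building step is elided and the names are illustrative; whether this resolves the error depends on whether the input pipeline really relies on local variables, e.g. epoch counters):
import tensorflow as tf

# ... build the graph here: ImageReader, image_batch/label_batch, reduced_loss, train_op ...

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)  # run BEFORE starting the queue runners
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord, sess=sess)
    # ... training loop ...
    coord.request_stop()
    coord.join(threads)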
Thanks.
@tadbeer Hello, will you upload your DeepLab v3 implementation to GitHub, please?
I tried this method, but it did not solve the problem.
@KevinMarkVine How did you finally solve this problem? I ran into the same problem as you.
Sorry, I forgot.
The checkpoint has been created.
step 0 loss = 17.905, (227.301 sec/step)
step 1 loss = 9.095, (221.894 sec/step)
Traceback (most recent call last):
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1361, in _do_call
return fn(*args)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _run_fn
target_list, status, run_metadata)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_1_create_inputs/batch/fifo_queue' is closed and has insufficient elements (requested 25, current size 1)
[[Node: create_inputs/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](create_inputs/batch/fifo_queue, create_inputs/batch/n)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 259, in
main()
File "train.py", line 251, in main
loss_value, _ = sess.run([reduced_loss, train_op], feed_dict=feed_dict)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
run_metadata_ptr)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1137, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1355, in _do_run
options, run_metadata)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1374, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_1_create_inputs/batch/fifo_queue' is closed and has insufficient elements (requested 25, current size 1)
[[Node: create_inputs/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](create_inputs/batch/fifo_queue, create_inputs/batch/n)]]
Caused by op 'create_inputs/batch', defined at:
File "train.py", line 259, in
main()
File "train.py", line 147, in main
image_batch, label_batch = reader.dequeue(args.batch_size)
File "D:\examples_d\deeplabs_v2_sleep\deeplab_resnet\image_reader.py", line 189, in dequeue
num_elements)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 989, in batch
name=name)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 763, in _batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 483, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 2749, in _queue_dequeue_many_v2
component_types=component_types, timeout_ms=timeout_ms, name=name)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 3271, in create_op
op_def=op_def)
File "C:\Users\Arcturus Desktop\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1650, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
OutOfRangeError (see above for traceback): FIFOQueue '_1_create_inputs/batch/fifo_queue' is closed and has insufficient elements (requested 25, current size 1)
[[Node: create_inputs/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](create_inputs/batch/fifo_queue, create_inputs/batch/n)]]
We are trying to train this model on our own dataset of 90 images (for a proof of concept) and, after rectifying a few errors, we are now stuck at this one. We suspect a problem with the batches, since the queue reports a different current size at every training step, which is why training stops after a few steps. But when we use get_shape() to check the batch size at each iteration, it shows the correct batch size. Please let us know why this is happening.
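Two notes that may help (not a verified fix): get_shape() only returns the static shape declared in the graph, so image_batch.get_shape() will keep showing the requested batch size even while the runtime queue is starving; and the underlying cause is usually an exception inside one of the reader threads, which joining the coordinator can surface. A rough TF 1.x sketch (reduced_loss, train_op and sess follow the traceback above; num_steps is illustrative):
import tensorflow as tf

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord, sess=sess)
try:
    for step in range(num_steps):
        loss_value, _ = sess.run([reduced_loss, train_op])
except tf.errors.OutOfRangeError:
    # The batch queue closed early; the real failure happened in a reader thread
    # (unreadable image/label file, mismatched list entry, uninitialized variable).
    pass
finally:
    coord.request_stop()
    coord.join(threads)  # re-raises the first exception recorded by the reader threads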