Hello, and thank you for this very interesting and exciting package!
I have 2 issues and 1 general question:
I am working on a Windows laptop with WSL2. I tried running your Docker image but kept running into this issue:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/775f320175cbf2f849eb43680d71fb362bd58a8c7b33ea54ab99002f75bc476a/merged/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1: file exists: unknown
This issue does not seem to be related only to your package. As mentioned here, NVIDIA/nvidia-docker#1699 (comment), I rebuilt a new Docker image with the following Dockerfile, and it now works:
FROM deepair1/deepair:latest
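# Strip the NVIDIA driver libraries and CUDA compat files already baked into the
# image, so they do not collide with the copies mounted by the NVIDIA container
# runtime under WSL2 (workaround from NVIDIA/nvidia-docker#1699).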
RUN rm -rf \
/usr/lib/x86_64-linux-gnu/libcuda.so* \
/usr/lib/x86_64-linux-gnu/libnvcuvid.so* \
/usr/lib/x86_64-linux-gnu/libnvidia-*.so* \
/usr/lib/firmware \
/usr/local/cuda/compat/lib
When I try to run your examples, I always immediately run out of memory. This is the end of the error message I receive.
I am running your example on a laptop with an NVIDIA GTX 1050 Ti GPU, a 7th-gen i5 with 4 cores (2.5 GHz), and 16 GB of RAM. After trying some workarounds (tensorflow/tensorflow#51354), I think the issue comes from the batch_size, which is set to 3 (if I understand correctly), but I couldn't find any way to simply lower it. Where could I modify it?
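For reference, here is the kind of generic, TensorFlow-side workaround I mean (standard tf.config calls, nothing specific to your package; the memory_limit value is only an illustrative cap for the 1050 Ti's 4 GB). It changes how GPU memory is allocated, but it does not lower the batch size itself, which is why I would still like to know where batch_size is set:

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Alternatively, hard-cap the memory TensorFlow may use (value in MB, illustrative only).
# tf.config.set_logical_device_configuration(
#     tf.config.list_physical_devices("GPU")[0],
#     [tf.config.LogicalDeviceConfiguration(memory_limit=3072)],
# )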
I am very interested in trying your package with TCRs that were sequenced with the 10x Genomics VDJ solution. I read your preprint thoroughly, but I am very new to deep learning. If I wanted to run specific sequences from my own TCR runs against selected epitopes, I could use your model directly, right? And, as I understand it, for those runs I would just need the TCR chain sequences, but I would also need the sequences and AlphaFold2 results for my epitopes of interest?
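To make the first part of the question concrete, this is roughly what I would extract from my own runs as "TCR chain sequences". The file name and columns (chain, cdr3, productive, v_gene, j_gene) are the standard Cell Ranger VDJ output; whether this is anywhere close to the input format your model expects is exactly what I am asking:

import pandas as pd

# Standard 10x Cell Ranger VDJ output for one sample.
contigs = pd.read_csv("filtered_contig_annotations.csv")

# Keep productive alpha/beta contigs and pull out the CDR3 amino-acid sequences.
productive = contigs["productive"].astype(str).str.lower() == "true"
tcrs = contigs.loc[
    productive & contigs["chain"].isin(["TRA", "TRB"]),
    ["barcode", "chain", "cdr3", "v_gene", "j_gene"],
]

print(tcrs.head())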
Thank you very much in advance for your help
Romeo1-1 changed the title from "How can I" to "Reduce batch size & general questions" on Jan 26, 2023