Error: alevin-fry generate-permit-list failed with exit status ExitStatus(unix_wait_status(25856)) #134
Comments
Thanks for the report, @lichtobergo. Would you be able to share the output of your mapping directory via some mechanism? One thing you might also try is to see if you get the same error if you explicitly install alevin-fry 0.8.2 in your conda environment. It was only recently updated to 0.9.0 and, though that release was tested before it went out, it would be good to know whether the issue is specifically tied to the new version. Thanks!
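For reference, pinning the older release in conda can be done along these lines (a sketch only; the environment name `af` is an assumption, not taken from the thread):

```bash
# Hypothetical environment name "af"; alevin-fry is available from bioconda.
conda install -n af -c bioconda -c conda-forge alevin-fry=0.8.2
```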
I could share the af_map directory via ownCloud, if that is OK with you?
Sounds good!
That's the link. Let me know when the download is finished so I can take it down.
Hi @lichtobergo, Thank you! I have been able to download the data and reproduce the error you list above. You can remove the files now. I will note that I tried alevin-fry 0.8.2 and the problem persists, so I don't think this is an alevin-fry update issue. It seems, somehow, that the RAD file is incomplete / shorter than expected. Is the base data on which you're running publicly available? I'd like to try to reproduce the RAD file and see if it reproducibly shows up as truncated, or if there is something going on that may be specific to your system. Thanks,
Thanks again, Rob, for your effort in helping me! Unfortunately the base data is not publicly available, and I don't know the best way to share such large fastq files. I will try to use older data that I already analyzed last August on a different machine, which is unfortunately not around anymore. At least I know that that data is fine. Would that help?
Anything that we could do to reproduce the base error on our end would help, as it would let us diagnose the issue. For example, if you run into the same issue with another dataset (one that is publicly available), we can check on that. Alternatively, if you can observe the same issue on a small subset of this data (say if you just took the
--Rob
Ok, I will run a public data set on my machine and, depending on the results, I would then share subsets of our data.
Thank you!
Hi Rob,
Genome fasta
Michael
Hi @lichtobergo, Thank you! I will test this out today. Just to verify: you observe the error on this subset of your data as well? If so, I will try to reproduce it on our machine and debug locally. best,
I'm still trying to confirm that, but my machine has been building the splici index for 4 hours now. Maybe it is stuck? (I know I don't have to build it from scratch every time, but did it anyway.) I will get back to you once I know whether quantifying works with this subset. Sorry for making this debugging so hard, but I'm just a user of these tools and not used to this process.
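For context, a splici index build with simpleaf looks roughly like the sketch below; the reference paths, read length, and thread count are placeholders rather than the values actually used in this thread.

```bash
# Sketch of a simpleaf splici index build; genome.fa, genes.gtf, --rlen and
# --threads are placeholders, not the reporter's actual inputs.
simpleaf index \
  --output splici_index \
  --fasta genome.fa \
  --gtf genes.gtf \
  --rlen 91 \
  --threads 16
```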
Just as an update! Quantifying the subset was successful. Here is the output of the simpleaf quant command:
I will now quickly check if the full data set also runs successfully.
So, back again. With the full data I still get the same error as in the original post. That seems really weird to me.
It is super weird! Here's an idea: what if you extract and run on just the last 5 or 10 million reads in the file? I suspect something strange about either the reads themselves or the mapping output in the file.
—Rob
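One way to pull out just the last few million reads from gzipped FASTQ files is sketched below (file names are placeholders; 5 million reads correspond to 20 million FASTQ lines):

```bash
# Take the last 5 million reads (5,000,000 records x 4 lines) from each mate.
# sample_R1/R2.fastq.gz are placeholder names, not the reporter's files.
zcat sample_R1.fastq.gz | tail -n 20000000 | gzip > subset_R1.fastq.gz
zcat sample_R2.fastq.gz | tail -n 20000000 | gzip > subset_R2.fastq.gz
```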
Ok, new discoveries. I tried, as you suggested, with the last 5 million reads, and quantification was successful. So I thought maybe it has to do with the size of the fastq files. As it is now, I'm working on a remote machine and the data is on a different server (I think a samba share) which I have mounted on my remote machine via CIFS in fstab. Out of curiosity I copied the fastq files to the remote machine and tried again, and this time it worked with the full data set. Although I have no clue why or how that causes trouble, I think we are closing in on the problem.
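For context, a CIFS mount configured through /etc/fstab typically looks something like the line below; the server, share, mount point, and options here are all hypothetical, not the reporter's actual configuration.

```bash
# Hypothetical /etc/fstab entry for a samba/CIFS share; every field is a placeholder.
//fileserver/sequencing  /mnt/seqdata  cifs  credentials=/etc/cifs-creds,uid=1000,gid=1000,iocharset=utf8  0  0
```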
Hi @lichtobergo, Thanks for the excellent detective work. Indeed, I think you're likely closing in on something. The fact that everything works properly when the file is local, but fails when the file is remote, certainly points toward a plausible explanation. In theory, there is no reason that local and remote storage should behave or be treated differently: all of the relevant differences should be abstracted away at the level of the OS and filesystem, so that by the time we get to
The
--Rob
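A quick way to check whether the file really reads back differently over the network mount is to compare the size and checksum of the remote and local copies; the paths below are assumptions (`af_map/map.rad` is the usual name of the simpleaf mapping output).

```bash
# Compare the RAD file seen through the CIFS mount with a local copy;
# a size or checksum mismatch would implicate the network filesystem.
# Both paths are placeholders.
stat --format='%n %s bytes' /mnt/seqdata/af_map/map.rad /local/copy/af_map/map.rad
md5sum /mnt/seqdata/af_map/map.rad /local/copy/af_map/map.rad
```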
Hi Rob,
Best,
Thanks, Michael! Please do reach out again if you run into any other issues, and feel free to comment in this thread if your sysadmin is able to resolve this issue. Best,
Hi,
I encountered another issue after my first issue #133 was successfully solved.
I'm working in a conda environment generated as follows:
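Something along these lines (a sketch only; the environment name and channels are assumptions, and only the use of simpleaf is certain from the thread):

```bash
# Hypothetical environment setup; simpleaf is available from bioconda.
conda create -n af -y -c bioconda -c conda-forge simpleaf
```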
I tried to quantify 10x Genomics 3'GEX v3.1 data against a splici index and followed the guide described here:
https://www.sc-best-practices.org/introduction/raw_data_processing.html#a-real-world-example
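For context, the quantification step in that guide is invoked roughly as sketched below; the read paths, index location, and output directory are placeholders, not the exact command used here.

```bash
# Sketch of a simpleaf quant invocation for 10x 3' v3 data; all paths are placeholders.
simpleaf quant \
  --reads1 sample_R1.fastq.gz \
  --reads2 sample_R2.fastq.gz \
  --index splici_index/index \
  --chemistry 10xv3 \
  --resolution cr-like \
  --expected-ori fw \
  --unfiltered-pl \
  --t2g-map splici_index/index/t2g_3col.tsv \
  --threads 16 \
  --output alevin_quant
```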
Everything worked until the simpleaf quant command, which exits with the following output:
My hardware info:
Ubuntu 22.04 LTS (Linux kernel 6.5.0-24-generic)
Output of lscpu:
The quant command worked once and finished successfully, but since then only this error occurs.