BUG: predector exit status (137) #98

Open
bioinfoanalyst opened this issue Nov 28, 2024 · 11 comments
Labels
bug Something isn't working

Comments

@bioinfoanalyst

Dear all, I got the following error while running the predector command.

Plus 4 more processes waiting for tasks… [06/6a863f] NOTE: Process
`signalp_v6 (2)` terminated with an error exit status (137) --
Execution is retried (1)

How can I avoid this, or should I ignore it?

Also, the predector command is taking days for 145 fungal proteomes (proteome sizes range from 7-9 MB). What is the estimated completion time? Note that the server has 96 cores, 440.5 GiB of memory, and 10 TB of disk space.

@bioinfoanalyst bioinfoanalyst added the bug Something isn't working label Nov 28, 2024
@bioinfoanalyst
Author

bioinfoanalyst commented Nov 28, 2024

predector error.txt

I got the above message on the bash terminal.

@darcyabjones
Member

Hi @bioinfoanalyst,

Thanks for reaching out.

Regarding SignalP exiting 137.

Exit code 137 indicates that the process ran out of memory.
Is the server you're running shared with other people?
It might just be that someone else was using a lot of RAM that day.

The automatic retry should have worked, and if so you can safely ignore the message.
If not, try running again and check back in if it fails.

Regarding your attached error log

So this is a rare occurrence and it should be fine to just re-run the pipeline and ignore it.
For more explanation: I create a temporary directory with a name based on the ID of the process it's running in (tmpdir$$). I'm not sure of the details, but in rare cases different processes can spawn with the same ID, giving a race condition.
It's a known issue but so uncommon that you normally shouldn't need to worry about it. I'll add part of a random number to the name to further reduce the chance of this happening.
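For what it's worth, the collision risk described above can be removed entirely by letting `mktemp` pick the name instead of relying on the PID alone. A minimal sketch (assuming `mktemp` is available, as it is in most Linux containers):

```shell
# Race-prone pattern: $$ is the shell PID, which can collide
# across concurrently spawned tasks.
UNSAFE_TMPDIR="tmp$$"

# Safer: mktemp atomically creates a directory with a unique
# random suffix, so concurrent tasks can never share one.
SAFE_TMPDIR="$(mktemp -d "${PWD}/tmp.XXXXXX")"
echo "created: ${SAFE_TMPDIR}"

# Clean up when done.
rm -rf -- "${SAFE_TMPDIR}"
```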

Regarding runtime

I can't answer this without more details.
I can run predector for a single proteome in 3 hours on a regular laptop, and I've run predector in 2 hours for >200 proteomes on a compute cluster. I'd expect something with 96 cpus to get through your proteomes in under a day, but it depends heavily on how you've set it up to run.

Can you please share some details of the command and configuration that you're running it with?
Is the computer shared with other people that could be competing for resources?
Is top (or htop) showing that it's using CPUs as you've configured it to?
When you say the proteome size range is 7-9 MB, do you mean the fasta file size is about 7 megabytes? So around 15000 sequences, I'm guessing?

Are you running all of the proteomes separately or all at once?
Predector internally removes duplicate sequences and reduplicates them at the end, so if your proteomes are fairly similar it can speed things up quite a lot to run them all in one go.

@bioinfoanalyst
Author

bioinfoanalyst commented Dec 3, 2024 via email

@bioinfoanalyst
Author

bioinfoanalyst commented Dec 3, 2024 via email

@bioinfoanalyst
Author

bioinfoanalyst commented Dec 3, 2024

Thanks for responding @darcyabjones!

I am the sole user of the server these days. It is not shared with others, and no other commands are running; I run only the predector command at a time, suspecting the same thing you suggested.

RAM is free to use. I think I need to adjust settings in the config file, but I do not know how. I am using the following nextflow command:

nextflow run -resume -r 1.2.7 -profile docker_sudo ccdmb/predector --proteome "/home/fungus/Data/Proteomes/*.faa"

Note that if I use c32,r250 in the command like below:

nextflow run -resume -r 1.2.7 -profile c32,r250,docker_sudo ccdmb/predector --proteome "/home/fungus/Data/Proteomes/*.faa"

it gives the following on the console:

Unknown configuration profile: 'c32'

Did you mean one of these?
r32

I can only use the profiles given in the usage, up to a maximum of c16,r64:

nextflow run -resume -r 1.2.7 -profile c16,r64,docker_sudo ccdmb/predector --proteome "/home/fungus/Data/Proteomes/*.faa"

How can I customize it to use more CPUs and memory?

@darcyabjones darcyabjones reopened this Dec 3, 2024
@darcyabjones
Member

There's a section in the documentation on customising the configuration.
The available configs we have (e.g. c16...) can only cover the most common server configurations. Unfortunately we have to set them manually, or at least I used to; this may have changed.

In your case you'll just need to specify 96 CPUs for the large jobs.
The likely reason for your memory problem is that I wasn't able to limit the number of CPUs SignalP6 uses, so cpu_high is supposed to just take up all CPUs on the server. But with c16 it can potentially start 6 SignalP jobs, each trying to use all available CPUs and consuming lots of memory. You could try doubling the cpu_medium and ram_medium values, but I haven't really seen a huge benefit before.

Have a look at the documentation, and try something below.

Save this as config.txt

process {

  withLabel:cpu_low {
    cpus = 1
  }
  withLabel:cpu_medium {
    cpus = 4
  }
  withLabel:cpu_high {
    cpus = 96
  }

  withLabel:ram_low {
    memory = 2.GB
  }
  withLabel:ram_medium {
    memory = 8.GB
  }
  withLabel:ram_high {
    memory = 380.GB
  }
}

params {
  max_cpus = 96
  max_memory = 400.GB
}

Try running like this:

nextflow run -resume -r 1.2.7 -profile docker_sudo -config ./config.txt ccdmb/predector --proteome "/home/fungus/Data/Proteomes/*.faa"

Let me know how you go :)

Cheers,
Darcy

@bioinfoanalyst
Author

This time I got the following error on the console:

Plus 4 more processes waiting for tasks…
Execution cancelled -- Finishing pending tasks before exit
ERROR ~ Error executing process > 'mmseqs_search_phibase (1)'

Caused by:
  Process `mmseqs_search_phibase (1)` terminated with an error exit status (1)


Command executed:

  mkdir -p tmp matches
  
  mmseqs search       "query/db"       "target/db"       "matches/db"       "tmp"       --threads "96"       --max-seqs 300       -e 0.01       -s 7       --num-iterations 3       --realign       -a
  
  mmseqs convertalis       query/db       target/db       matches/db       search.tsv       --threads "96"       --format-mode 0       --format-output 'query,target,qstart,qend,qlen,tstart,tend,tlen,evalue,gapopen,pident,alnlen,raw,bits,cigar,mismatch,qcov,tcov'
  
  predutils r2js       --pipeline-version "1.2.7"       --software-version "13.45111"       --database-version 'v4-13'        "phibase" search.tsv query/db.fasta     > out.ldjson
  
  rm -rf -- tmp matches search.tsv

Command exit status:
  1

Command output:
  Query database size: 5000 type: Aminoacid
  Target database size: 7544 type: Aminoacid
  Calculation of alignments
  [=================================================================] 5.00K 6s 928ms
  Time for merging to aln_0: 0h 0m 0s 8ms
  1176288 alignments calculated
  35993 sequence pairs passed the thresholds (0.030599 of overall calculated)
  7.198600 hits per query sequence
  Time for processing: 0h 0m 7s 651ms
  result2profile query/db target/db tmp/9496839976047076923/aln_0 tmp/9496839976047076923/profile_0 --sub-mat nucl:nucleotide.out,aa:blosum62.out -e 0.01 --mask-profile 1 --e-profile 0.1 --comp-bias-corr 1 --wg 0 --allow-deletion 0 --filter-msa 1 --max-seq-id 0.9 --qid 0 --qsc -20 --cov 0 --diff 1000 --pca 0 --pcb 1.5 --db-load-mode 0 --gap-open nucl:5,aa:11 --gap-extend nucl:2,aa:1 --threads 96 --compressed 0 -v 3 
  
  Query database size: 5000 type: Aminoacid
  Target database size: 7544 type: Aminoacid
  [=================================================================] 5.00K 0s 796ms
  Time for merging to profile_0: 0h 0m 0s 95ms
  Time for processing: 0h 0m 1s 637ms
  prefilter tmp/9496839976047076923/profile_0 target/db tmp/9496839976047076923/pref_tmp_1 --sub-mat nucl:nucleotide.out,aa:blosum62.out --seed-sub-mat nucl:nucleotide.out,aa:VTML80.out -s 7 -k 0 --k-score 2147483647 --alph-size nucl:5,aa:21 --max-seq-len 65535 --max-seqs 300 --split 0 --split-mode 2 --split-memory-limit 0 -c 0 --cov-mode 0 --comp-bias-corr 1 --diag-score 1 --exact-kmer-matching 0 --mask 1 --mask-lower-case 0 --min-ungapped-score 15 --add-self-matches 0 --spaced-kmer-mode 1 --db-load-mode 0 --pca 1 --pcb 1.5 --threads 96 --compressed 0 -v 3 
  
  Query database size: 5000 type: Profile
  Estimated memory consumption: 547M
  Target database size: 7544 type: Aminoacid
  Index table k-mer threshold: 0 at k-mer size 6 
  Index table: counting k-mers
  [=================================================================] 7.54K 0s 150ms
  Index table: Masked residues: 131865
  Index table: fill
  [=================================================================] 7.54K 0s 50ms
  Index statistics
  Entries:          3702644
  DB size:          509 MB
  Avg k-mer size:   0.057854
  Top 10 k-mers
      HRLPLL	76
      TRYAIM	75
      LKCFAR	75
      TEVTYR	57
      MTYAWY	53
      YTADSV	50
      RDKELL	49
      ANLRKP	48
      ILELKP	37
      ALHYPY	36
  Time for index table init: 0h 0m 0s 901ms
  Process prefiltering step 1 of 1
  
  k-mer similarity threshold: 94
  Starting prefiltering scores calculation (step 1 of 1)
  Query db start 1 to 5000
  Target db start 1 to 7544
  [===================Error: Prefilter died

Command error:
  Target database size: 7544 type: Aminoacid
  Calculation of alignments
  [=================================================================] 5.00K 6s 928ms
  Time for merging to aln_0: 0h 0m 0s 8ms
  1176288 alignments calculated
  35993 sequence pairs passed the thresholds (0.030599 of overall calculated)
  7.198600 hits per query sequence
  Time for processing: 0h 0m 7s 651ms
  result2profile query/db target/db tmp/9496839976047076923/aln_0 tmp/9496839976047076923/profile_0 --sub-mat nucl:nucleotide.out,aa:blosum62.out -e 0.01 --mask-profile 1 --e-profile 0.1 --comp-bias-corr 1 --wg 0 --allow-deletion 0 --filter-msa 1 --max-seq-id 0.9 --qid 0 --qsc -20 --cov 0 --diff 1000 --pca 0 --pcb 1.5 --db-load-mode 0 --gap-open nucl:5,aa:11 --gap-extend nucl:2,aa:1 --threads 96 --compressed 0 -v 3 
  
  Query database size: 5000 type: Aminoacid
  Target database size: 7544 type: Aminoacid
  [=================================================================] 5.00K 0s 796ms
  Time for merging to profile_0: 0h 0m 0s 95ms
  Time for processing: 0h 0m 1s 637ms
  prefilter tmp/9496839976047076923/profile_0 target/db tmp/9496839976047076923/pref_tmp_1 --sub-mat nucl:nucleotide.out,aa:blosum62.out --seed-sub-mat nucl:nucleotide.out,aa:VTML80.out -s 7 -k 0 --k-score 2147483647 --alph-size nucl:5,aa:21 --max-seq-len 65535 --max-seqs 300 --split 0 --split-mode 2 --split-memory-limit 0 -c 0 --cov-mode 0 --comp-bias-corr 1 --diag-score 1 --exact-kmer-matching 0 --mask 1 --mask-lower-case 0 --min-ungapped-score 15 --add-self-matches 0 --spaced-kmer-mode 1 --db-load-mode 0 --pca 1 --pcb 1.5 --threads 96 --compressed 0 -v 3 
  
  Query database size: 5000 type: Profile
  Estimated memory consumption: 547M
  Target database size: 7544 type: Aminoacid
  Index table k-mer threshold: 0 at k-mer size 6 
  Index table: counting k-mers
  [=================================================================] 7.54K 0s 150ms
  Index table: Masked residues: 131865
  Index table: fill
  [=================================================================] 7.54K 0s 50ms
  Index statistics
  Entries:          3702644
  DB size:          509 MB
  Avg k-mer size:   0.057854
  Top 10 k-mers
      HRLPLL	76
      TRYAIM	75
      LKCFAR	75
      TEVTYR	57
      MTYAWY	53
      YTADSV	50
      RDKELL	49
      ANLRKP	48
      ILELKP	37
      ALHYPY	36
  Time for index table init: 0h 0m 0s 901ms
  Process prefiltering step 1 of 1
  
  k-mer similarity threshold: 94
  Starting prefiltering scores calculation (step 1 of 1)
  Query db start 1 to 5000
  Target db start 1 to 7544
  [===================Killed
  Error: Prefilter died

Work dir:
  /home/fungus/Apps/predector/work/4c/e87abf46a55fb2451b7ef8e3e82e56

Container:
  predector/predector:1.2.7

Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`

-- Check '.nextflow.log' file for details

The '.nextflow.log' file is attached below
nextflow.log

@bioinfoanalyst
Author

bioinfoanalyst commented Dec 3, 2024

After rerunning there is no error this time, but processing is stuck at:

Plus 4 more processes waiting for tasks…

I will let you know about the progress within the next 12 hours.

And yes, each fasta file contains about 15,000 sequences.
Please note that I am using the predector command as you suggested:
nextflow run -resume -r 1.2.7 -profile docker_sudo -config ./config.txt ccdmb/predector --proteome "/home/fungus/Data/Proteomes/*.faa"

I have added screenshots of top, htop and predector commands below:

Screenshot from 2024-12-03 19-32-32
Screenshot from 2024-12-03 19-33-49
Screenshot from 2024-12-03 19-34-55

@bioinfoanalyst
Author

bioinfoanalyst commented Dec 4, 2024

All my CPUs are completely consumed. How can I keep 5 CPUs free to reduce the load on the processors? The filesystem is also not responding; I had to restart the server.

Screenshot from 2024-12-03 22-04-12

After 14 hours predector status:

Screenshot from 2024-12-04 08-56-21

@darcyabjones
Member

Hi there,

Your server should work fine with all CPUs at 100% use: thread swapping will still allow normal OS processes to run as tasks are stopped or spawned. Unless you have critical processes that need dedicated CPUs (like a network filesystem?) you shouldn't need to reduce this.

However, to answer your question, just reduce the number of CPUs from 96 in the configuration file.

You could also try to set the nextflow -queue-size parameter to 16. I'm not 100% sure if this works with local execution, but if it does it would help reduce filesystem IO when lots of smaller tasks are running.
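To make that concrete, the queue-size option is passed to `nextflow run` itself, before the pipeline name. A sketch building on the command you're already using (untested with the local executor, as noted above):

```shell
# -queue-size limits how many tasks the executor runs concurrently.
nextflow run -resume -r 1.2.7 -profile docker_sudo -config ./config.txt \
    -queue-size 16 \
    ccdmb/predector --proteome "/home/fungus/Data/Proteomes/*.faa"
```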

@bioinfoanalyst
Author

bioinfoanalyst commented Dec 5, 2024

Dear @darcyabjones!

I encountered the following error after 15 hours of running the command.
Please guide me on how I can avoid these errors and get my results successfully. It's been almost a month that I have been trying to run it, without success yet. Your help will be highly appreciated.

Thanks

plus 4 more processes waiting for tasks…
Execution cancelled -- Finishing pending tasks before exit
ERROR ~ Error executing process > 'deeploc (158)'

Caused by:
  Process `deeploc (158)` terminated with an error exit status (125)


Command executed:

  run () {
      set -e
      TMPDIR="${PWD}/tmp$$"
      mkdir -p "${TMPDIR}"
      TMPFILE="tmp$$.out"
  
      # The base_compiledir is the important bit here.
      # This is where cache-ing happens. But it also creates a lock
      # for parallel operations.
      export THEANO_FLAGS="device=cpu,floatX=float32,optimizer=fast_compile,cxx=${CXX},base_compiledir=${TMPDIR}"
  
      deeploc -f "$1" -o "${TMPFILE}" 1>&2
      cat "${TMPFILE}.txt"
  
      rm -rf -- "${TMPFILE}.txt" "${TMPDIR}"
  }
  export -f run
  
  # This just always divides it up into even chunks for each cpu.
  # Since deeploc caches compilation, it's more efficient to run big chunks
  # and waste a bit of cpu time at the end if one finishes early.
  NSEQS="$(grep -c '^>' in.fasta || echo 0)"
  CHUNKSIZE="$(decide_task_chunksize.sh in.fasta "4" "${NSEQS}")"
  
  parallel         --halt now,fail=1         --joblog log.txt         -j "4"         -N "${CHUNKSIZE}"         --line-buffer          --recstart '>'         --cat          run     < in.fasta     | cat > out.txt
  
  predutils r2js         --pipeline-version "1.2.7"         --software-version "1.0"         -o out.ldjson         deeploc out.txt in.fasta

Command exit status:
  125
executor >  local (400)
[7b/d4de30] val…te_input:download_pfam_hmm | 1 of 1, cached: 1 ✔
[70/4e59a9] val…te_input:download_pfam_dat | 1 of 1, cached: 1 ✔
[9a/df6782] validate_input:download_dbcan  | 1 of 1, cached: 1 ✔
[68/8735a8] val…ate_input:download_phibase | 1 of 1, cached: 1 ✔
[bd/244f08] val…_input:download_effectordb | 1 of 1, cached: 1 ✔
[fc/48b4c4] check_env:get_signalp3_version | 1 of 1, cached: 1 ✔
[d1/44f8a6] check_env:get_signalp4_version | 1 of 1, cached: 1 ✔
[0e/6b80db] check_env:get_signalp5_version | 1 of 1, cached: 1 ✔
[5f/d03138] check_env:get_signalp6_version | 1 of 1, cached: 1 ✔
[1c/34664c] check_env:get_targetp2_version | 1 of 1, cached: 1 ✔
[24/8705b6] check_env:get_tmhmm2_version   | 1 of 1, cached: 1 ✔
[1f/9910ff] check_env:get_deeploc1_version | 1 of 1, cached: 1 ✔
[c1/30d229] check_env:get_phobius_version  | 1 of 1, cached: 1 ✔
[13/bfdd8d] che…env:get_effectorp1_version | 1 of 1, cached: 1 ✔
[67/b303ff] che…env:get_effectorp2_version | 1 of 1, cached: 1 ✔
[b8/14f298] che…env:get_effectorp3_version | 1 of 1, cached: 1 ✔
[df/a41d81] che…_env:get_localizer_version | 1 of 1, cached: 1 ✔
[97/5dc134] che…_env:get_apoplastp_version | 1 of 1, cached: 1 ✔
[07/fc6114] check_env:get_deepsig_version  | 1 of 1, cached: 1 ✔
[6a/a39d36] check_env:get_emboss_version   | 1 of 1, cached: 1 ✔
[10/78bfdb] check_env:get_mmseqs2_version  | 1 of 1, cached: 1 ✔
[a3/65110f] check_env:get_hmmer_version    | 1 of 1, cached: 1 ✔
[73/47c95e] che…env:get_deepredeff_version | 1 of 1, cached: 1 ✔
[52/f47acc] che…_env:get_predutils_version | 1 of 1, cached: 1 ✔
[5b/a8abc2] gen_target_table               | 1 of 1, cached: 1 ✔
[77/4ecb51] sanitise_phibase               | 1 of 1, cached: 1 ✔
[f7/99dac1] encode_seqs                    | 1 of 1, cached: 1 ✔
[d0/8a9e50] split_fasta (deduplicated)     | 1 of 1, cached: 1 ✔
[0e/e1ee35] signalp_v3_hmm (187)           | 188 of 188, cached: 188 ✔
[a8/2ff022] signalp_v3_nn (155)            | 188 of 188, cached: 188 ✔
[cd/afe862] signalp_v4 (150)               | 152 of 188, cached: 152
[37/5924cb] signalp_v5 (177)               | 119 of 188, cached: 119
[16/3c2389] signalp_v6 (181)               | 37 of 188, cached: 37
[9a/82b881] deepsig (180)                  | 188 of 188, cached: 188 ✔
[54/79a828] phobius (185)                  | 184 of 188, cached: 183
[89/aec9e4] tmhmm (183)                    | 183 of 188, cached: 106
[f4/8eeedc] targetp (152)                  | 49 of 188, cached: 49
[80/b7ef61] deeploc (137)                  | 134 of 188, cached: 130, failed: 2
[94/3abceb] apoplastp (121)                | 60 of 188, cached: 60
[6c/586b65] localizer (156)                | 39 of 188, cached: 39
[92/cd4677] effectorp_v1 (63)              | 63 of 188, cached: 63
[0d/b439c2] effectorp_v2 (78)              | 46 of 188, cached: 46
[83/292d2a] effectorp_v3 (66)              | 67 of 188, cached: 67
[80/cd6a47] deepredeff_fungi_v1 (133)      | 56 of 188, cached: 56
[fe/159c08] deepredeff_oomycete_v1 (49)    | 52 of 188, cached: 52
[9f/862fc0] kex2_regex (186)               | 188 of 188, cached: 188 ✔
[42/e8cdff] rxlrlike_regex (139)           | 188 of 188, cached: 188 ✔
[59/d1196c] pepstats (162)                 | 188 of 188, cached: 188 ✔
[6e/925096] press_pfam_hmmer               | 1 of 1, cached: 1 ✔
[4f/5432f4] pfamscan (123)                 | 123 of 188, cached: 68
[5c/a54a63] press_dbcan_hmmer              | 1 of 1, cached: 1 ✔
[5c/1a6bf2] hmmscan_dbcan (1)              | 3 of 188, cached: 3
[d2/2a701a] press_effectordb_hmmer         | 1 of 1, cached: 1 ✔
[cf/d47ebb] hmmscan_effectordb (56)        | 57 of 188, cached: 57
[05/442171] mmseqs_index_proteomes (172)   | 188 of 188, cached: 188 ✔
[b6/466c9f] mmseqs_index_phibase           | 1 of 1, cached: 1 ✔
[-        ] mmseqs_search_phibase          | 0 of 1
[bd/a24a0f] decode_seqs (240)              | 241 of 254
[a3/adbee8] pub…sis_software_versions.tsv) | 8 of 8, cached: 8
Plus 4 more processes waiting for tasks…


Command output:
  (empty)

Command error:
  Unable to find image 'predector/predector:1.2.7' locally
  docker: Error response from daemon: pull access denied for predector/predector, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
  See 'docker run --help'.

Work dir:
  /home/fungus/Apps/predector/work/4e/695450c8fd2b72607dfcca5d587e54

Container:
  predector/predector:1.2.7

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`

 -- Check '.nextflow.log' file for details
