I have been trying for some time now to get Supernova to work properly, and I need some help with this particular issue.
I am using Supernova 1.2.2 to assemble a human genome. I downloaded your datasets NA12878 and NA19238, but have had no luck with the assembly so far.
The command line looks like this:
supernova run --id dataset_NA12878 --fastqs ./input_fastqs
The last few lines of the log file read:
2017-12-13 02:37:21 [runtime] (chunks_complete)
2017-12-14 01:06:54 [runtime] (chunks_complete) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_TR
2017-12-14 01:06:54 [runtime] (run:local) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_TR.fork0.join
2017-12-14 01:06:57 [runtime] (join_complete) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_TR
2017-12-14 01:07:03 [runtime] (ready) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_MC
2017-12-14 01:07:03 [runtime] (run:local) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_MC.fork0.split
2017-12-14 01:07:06 [runtime] (split_complete) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_MC
2017-12-14 01:07:06 [runtime] (run:local) ID.dataset_sample_NA12878_all_indices.ASSEMBLER_CS._ASSEMBLER.ASSEMBLER_MC.fork0.chnk0.main
The run then got stuck at this point for days (I aborted the task after it had been "stuck" for three days, five days of runtime in total).
After going through the logs in the ASSEMBLER_MC directory, I found that the _stdout file reads:
Thu Dec 14 01:07:06 2017: reading 22 logged object(s)
Thu Dec 14 01:07:06 2017: using 28 threads
Thu Dec 14 01:07:06 2017: advisory memory limit set to 463,856,467,968
Thu Dec 14 01:07:06 2017: loading data
Thu Dec 14 01:08:13 2017: ** start stage Closures, mem = 31.48 GB, peak = 30.74 GB
Thu Dec 14 01:08:13 2017: start making closures
Thu Dec 14 01:08:13 2017: making closures, mem = 31.48 GB, peak = 30.74 GB
Thu Dec 14 01:08:13 2017: loading paths index
Thu Dec 14 01:13:35 2017: load complete, mem = 118.45 GB, peak = 117.00 GB
Thu Dec 14 01:13:35 2017: define pair set, mem = 118.45 GB
Thu Dec 14 01:13:35 2017: reserving space
Thu Dec 14 01:14:48 2017: finding pairs
Thu Dec 14 01:16:56 2017: sorting proc, mem = 168.82 GB, peak = 164.87 GB
Thu Dec 14 01:26:30 2017: sort complete, mem = 185.30 GB, peak = 180.96 GB
Thu Dec 14 01:30:22 2017: start main loop, mem = 162.67 GB, peakmem = 180.96 GB
Thu Dec 14 01:30:22 2017: running 200 batches
.......... .......... .......... .......... ..........
.......... .......... .......... .......... ..........
Thu Dec 14 05:21:17 2017: main loop done, 3.85 hours used, mem = 146.52 GB, peakmem = 180.96 GB
Thu Dec 14 05:21:24 2017: doubling closures, mem = 57.14 GB, peak = 180.96 GB
Thu Dec 14 05:22:17 2017: sorting closures, mem = 61.07 GB, peak = 180.96 GB
Thu Dec 14 05:26:34 2017: adding back long edges
Thu Dec 14 05:27:10 2017: sorting again
Thu Dec 14 05:31:33 2017: 129,333,760 closures having in total 4,190,968,350 edges
Thu Dec 14 05:31:33 2017: indexing closures, peak mem = 180.96 GB
Thu Dec 14 05:37:34 2017: uniquesorting
Thu Dec 14 05:37:37 2017: killing short subset closures
Thu Dec 14 05:39:41 2017: indexing closures + uniquesorting; peak mem = 180.96 GB, mem = 106.15 GB
Thu Dec 14 05:46:30 2017: computing involution for all_closures; peak mem = 180.96 GB, mem = 110.90 GB
Thu Dec 14 05:46:52 2017: finding short overlaps; peak mem = 180.96 GB, mem = 114.63 GB
Thu Dec 14 05:48:07 2017: symmetrizing short overlaps; peak mem = 180.96 GB, mem = 123.26 GB
Thu Dec 14 05:51:22 2017: adding long matches + symmetrize; peak mem = 180.96 GB, mem = 152.61 GB
Thu Dec 14 06:03:13 2017: symmetrizing wrt involution; peak mem = 180.96 GB, mem = 167.42 GB
Thu Dec 14 06:07:19 2017: iadds = 84182136, itotal = 3246145934
Thu Dec 14 06:07:19 2017: done, time used = 4.09 minutes
Thu Dec 14 06:12:25 2017: 125,159,730 closures having in total 4,033,164,390 edges, with repetitions from 181988708 distinct edges
Thu Dec 14 06:12:25 2017: forming data structures for supergraph; peak mem = 180.96 GB, mem = 140.87 GB
That is the last output Supernova produces. CPU usage stays stuck at 2% (only one core running), and nothing changes over time. The same thing happens when I try to assemble dataset NA19238.
I tried limiting the number of threads and changing the amount of RAM, but the behavior is always the same. Since I used a sufficiently large AWS instance (480 GB of RAM, 64 CPUs, and 2 TB of free disk space), memory and disk space should not be the issue.
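For reference, a sketch of how I passed those limits, using Supernova's standard resource flags (the numeric values below are illustrative, not the exact ones I tried):

```shell
# Cap Supernova at 32 threads and 256 GB of RAM via the standard
# --localcores/--localmem resource flags; values here are examples only.
supernova run --id dataset_NA12878 \
              --fastqs ./input_fastqs \
              --localcores 32 \
              --localmem 256
```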
This happens when I take all sample indices into account. When I run the task with the --indices option and provide only one sample index (in my example, sample index AACCGTAA for dataset NA12878), the task completes without any issue. When two sample indices are given, the task gets "stuck" as usual. The command line in that case looks like this:
supernova run --id dataset_NA12878_indices_AACCGTAA_CTAAACGG --fastqs ./input_fastqs --indices AACCGTAA,CTAAACGG
However, the task working with dataset HGP finished successfully (command line: supernova run --id dataset_HGP --fastqs ./input_fastqs). Since the only difference between these tasks is the dataset used, I was wondering whether anyone else has had trouble with the assembly using datasets NA12878 and NA19238. Am I missing something?
One more thing: when running Supernova on a whole dataset with --indices set to include only the sample indices starting with N (command line: supernova run --id dataset_NA12878_N_indices --fastqs ./input_fastqs --indices NTCACGCG,NTAAACGG,NGTTTACT), I get the error message:
[error] Sample index 'NTCACGCG' is not valid. Must be one of: any, SI-<number>, SI-<plate>-<well coordinate>, 220<part number>, or a nucleotide sequence.
On the other hand, when I run Supernova on only a part of the dataset that includes just the FASTQ files belonging to the sample indices starting with N (without the --indices argument), the run completes successfully.
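For anyone wanting to reproduce that subset run, here is a minimal sketch of how such a subset directory can be built. The bcl2fastq-style file naming (the sample index embedded in the FASTQ file name) is an assumption; adjust the glob to match your actual file names:

```shell
# Build a subset input directory containing only the FASTQ files whose
# names carry one of the N-prefixed sample indices.
# ASSUMPTION: bcl2fastq-style names like <prefix>_<index>_S1_L001_R1_001.fastq.gz
mkdir -p ./input_fastqs_N_subset
for idx in NTCACGCG NTAAACGG NGTTTACT; do
    cp ./input_fastqs/*"${idx}"*.fastq.gz ./input_fastqs_N_subset/ 2>/dev/null || true
done
# Then run on the subset without --indices:
#   supernova run --id dataset_NA12878_N_subset --fastqs ./input_fastqs_N_subset
```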
Did anybody else have similar (or any) experience with the assembly of the given datasets?
Thanks for being a community member. I contacted our support team, and this issue would be better handled via their ticketing system. I've opened a ticket on your behalf.
Re: Supernova stuck for some datasets
I had the same issue with version 1.2.2. For me, it was a RAM issue, which I fixed. In the output directory, in the "_log" file (which was in tiny-bcl/_log for me), I could see that Supernova was repeatedly checking whether my system had enough RAM, finding that it didn't, and then checking again. It never exited; it just kept doing this for four days until I killed the job. I tried lowering the RAM limit with the --localmem argument, but it returned an error unless I set it to at least 128 (GB), which my system didn't have. I fixed this by modifying the supernova-cs/1.2.2/bin/run program on line 59, replacing 128 with 16, which allows the program to run if at least 16 GB of RAM is available. I then used --localmem=32. The program ran in 35 minutes, and less when I turned off the preflight check with --nopreflight.
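If you'd rather script that one-line change than hand-edit the file, sed can target line 59 directly. The sketch below applies the edit to a fabricated stand-in file so it can be run safely without a Supernova install; point sed at the real supernova-cs/1.2.2/bin/run (and keep the .bak backup) to apply it for real:

```shell
# Demo of the one-line fix on a stand-in file; the real target is
# supernova-cs/1.2.2/bin/run, as described in the post above.
RUN=./run_demo
# (demo only) fabricate a 60-line file whose line 59 carries the 128 GB minimum
for i in $(seq 1 60); do echo "line $i minmem=128"; done > "$RUN"

# The fix itself: on line 59 only, replace 128 with 16, keeping a .bak backup
sed -i.bak '59s/128/16/' "$RUN"
```

Editing an installed program is fragile (a Supernova upgrade will silently undo it), so treat this as a workaround, not a fix.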