RAR > PAR > RAR first
Re: RAR > PAR > RAR first
I'd like to add an idea that might be an easier in-between step to implement before on-the-fly PAR2 verification. I'm not sure if it will work, but here we go. NZBs contain the size each file is supposed to be, right? What if SAB were to check the sizes of the downloaded files against the NZB? If the sizes of the RARs match, it could start extracting without PAR verification. It's not a guarantee that the files contain no errors, but checking the sizes provides *some* indication and takes just a few seconds at most.
If the sizes didn't match, SAB would fall back to PAR verification first, and likewise if extraction fails.
An option to turn this behaviour on and off would be nice, of course.
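The size check described above could be sketched roughly as follows. This is a hypothetical illustration, not SABnzbd code: the function names are made up, and note one caveat the post glosses over, namely that a segment's `bytes` attribute in an NZB includes yEnc/article overhead, so the sum only approximates the decoded file size and a tolerance is needed.

```python
import os
import xml.etree.ElementTree as ET

NS = {"nzb": "http://www.newzbin.com/DTD/2003/nzb"}

def nzb_segment_totals(nzb_path):
    """Sum the per-segment byte counts for each <file> in an NZB.

    Segment 'bytes' values include yEnc/article overhead, so they only
    approximate the decoded file size."""
    totals = {}
    for f in ET.parse(nzb_path).getroot().findall("nzb:file", NS):
        totals[f.get("subject", "")] = sum(
            int(s.get("bytes", "0"))
            for s in f.findall("nzb:segments/nzb:segment", NS))
    return totals

def size_looks_ok(path, expected_bytes, tolerance=0.05):
    """Heuristic pre-check: is the on-disk size within `tolerance` of the
    size expected from the NZB? A pass would let SAB try a direct unrar;
    a fail (or an unrar failure) would fall back to PAR verification."""
    actual = os.path.getsize(path)
    return abs(actual - expected_bytes) <= tolerance * expected_bytes
```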
Re: RAR > PAR > RAR first
Just build a faster server; that's what I did.
Mind you, I only have an 8 Mbit connection, which I limit to about 550k. I get a 350 MB TV episode in about 12 minutes.
On my old server (AMD XP 2000) this would on average take 70 seconds to verify and 70 seconds to extract.
On my new server (Xeon 1.8 dual) this has gone down to about 7 seconds to verify and 6 seconds to extract.
The controller on this motherboard is also significantly faster, so I don't have any throughput problems now while streaming content to a PC or multiple TVIX-style units, even if there is still disc activity going on in the background.
Re: RAR > PAR > RAR first
We're not talking about 350 MB SD TV episodes. Think more like 14 GB full-HD movies. Those take quite a bit more time.
Re: RAR > PAR > RAR first
And you've never had to repair any of these? Are you nuts?
Phasma wrote: We're not talking about 350 MB SD TV episodes. Think more like 14 GB full-HD movies. Those take quite some more time.
I have a GigaNews account and regularly download 50GB posts. I'd be crazy not to par check before unpacking.
Re: RAR > PAR > RAR first
Do you mean setting SABnzbd's CPU priority using Task Manager, or is there some way of doing it automatically? Like using SAB, instead of Folding@Home or SETI, as something to soak up spare CPU cycles and disk activity?
shypike wrote: It could, but we also need some attention for "nicing" the par2 process, because you will still have problems when a repair is needed.
CPU usage can be regulated, just set SABnzbd's CPU priority lower than that of MediaCenter.
Actually, the most worrying part is that it is (on Windows at least) not possible to prioritize disk access.
On Windows it's perfectly possible for a low-prio CPU process to saturate the disk channel completely.
(Run a large xcopy in the background and then try to start another application and you'll know what I mean.)
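The "nicing" shypike mentions could be done from the downloader itself when it launches the external par2 process. A minimal sketch (the wrapper name is made up, and the `par2` command line is only an example): lower the child's CPU priority via `os.nice` on POSIX, or via a `creationflags` priority class on Windows. As noted above, neither approach throttles disk I/O.

```python
import os
import subprocess
import sys

def run_niced(cmd, niceness=15):
    """Run an external command (e.g. a par2 verify) at reduced CPU priority.

    POSIX: raise the child's nice value just before exec.
    Windows: request a below-normal priority class instead.
    Disk I/O is NOT throttled by either mechanism.
    """
    if sys.platform == "win32":
        return subprocess.run(
            cmd, creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)
    return subprocess.run(cmd, preexec_fn=lambda: os.nice(niceness))
```

Usage would then look like `run_niced(["par2", "verify", "set.par2"])`, leaving MediaCenter (or whatever else is running) first in line for the CPU.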
Re: RAR > PAR > RAR first
I'm sure there are utilities to set a process's priority.
Maybe some day we build it into SABnzbd itself.
Still no fix for the disk usage priority...
Re: RAR > PAR > RAR first
This is a very interesting idea. How viable is it?
Phasma wrote: I'd like to add an idea that perhaps would be an easier in-between step to implement before on-the-fly par2 verification. I'm not sure if it will work, ...
You hit the nail on the head. If you have a rented server, you have slower (but not slow) hardware and much faster internet access. Upgrading to the type of hardware you are talking about would cost several thousand euro per year extra; just for SABnzbd that is not viable.
auskento wrote: Just build a faster server, thats what I did
Mind you, I only have an 8mbit connection, which i limit to about 550k. ...
Re: RAR > PAR > RAR first
PAR2 on-the-fly verification is actually pretty easy to implement (I just finished implementing it on the old SAB client).
Basically the only thing you have to do is extract the MD5 hashes from a PAR2 file and then verify the files once you assemble them.
Per-block verification (i.e. figuring out exactly how many blocks are missing) is a bit harder to do, and probably not worth implementing in SAB (if blocks are missing, repairing is going to take some time anyway).
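Extracting those MD5 hashes could look roughly like this. This is a sketch based on the PAR2 specification's File Description packet layout (64-byte packet header, then file ID, whole-file MD5, MD5 of the first 16 KB, file length, filename), not code from any actual client, and the function names are invented:

```python
import hashlib
import struct

PKT_MAGIC = b"PAR2\0PKT"          # 8-byte packet header magic
FILEDESC = b"PAR 2.0\0FileDesc"   # 16-byte File Description packet type

def extract_md5_hashes(par2_bytes):
    """Scan raw PAR2 data for File Description packets.

    Returns {filename: md5_hex}. Header layout per the PAR2 spec:
    magic(8) + packet length(8) + packet MD5(16) + set ID(16) + type(16),
    then the packet body."""
    hashes = {}
    pos = 0
    while True:
        pos = par2_bytes.find(PKT_MAGIC, pos)
        if pos < 0:
            break
        length = struct.unpack_from("<Q", par2_bytes, pos + 8)[0]
        if par2_bytes[pos + 48:pos + 64] == FILEDESC:
            body = par2_bytes[pos + 64:pos + length]
            # body: file ID(16) + file MD5(16) + 16KB MD5(16) + length(8) + name
            name = body[56:].rstrip(b"\0").decode("utf-8", "replace")
            hashes[name] = body[16:32].hex()
        pos += length
    return hashes

def md5_of_file(path):
    """Hash an assembled file in chunks for comparison against the packet."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

A downloader would run `md5_of_file` on each RAR as it finishes assembling and compare against the map from `extract_md5_hashes`; any mismatch flags the job for a full par2 repair.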
Re: RAR > PAR > RAR first
TDIAN RETURNS!
Great to see you here, and thank you so much for creating my favourite application!
Re: RAR > PAR > RAR first
I would like to restart the debate on the original idea.
Looking at my history, my last 98 par2 checks passed. Most of these checks are for data sets of 350 MB or more. So whilst I understand that many users always want to par2 check, for me the majority of the time it simply is not required; a direct unrar would succeed.
On-the-fly parity checks would be great, but that's still a lot of CPU time that, for me, is not needed.
Re: RAR > PAR > RAR first
The on-the-fly check has been implemented for 0.5.0.
Just wait and see how that behaves first.
Re: RAR > PAR > RAR first
I will indeed wait and see, but speaking theoretically it will still do calculations that I don't need it to do, regardless of how well on-the-fly works.
FYI, I have downloaded 9 more 350 MB NZBs since I posted, and all 9 did not need repair. From what I can see, 1 set of files out of the last 114 sets has needed repair. That's approximately 70 GB of parity checks that didn't need checking and 0.4 GB that did.
Re: RAR > PAR > RAR first
The way SABnzbd is designed now, it's not possible to retry a defective download afterwards.
So when you do have an incomplete download, SABnzbd will not be able to help you repair the job.
You will definitely encounter jobs with missing articles.
But, we'll discuss the idea of optional par2 in the team.
Re: RAR > PAR > RAR first
Woo! This was the exact suggestion I was going to make. If the user is willing to devote enough RAM, assemble articles into RAR files in memory, perform parity checking immediately, and then write to disk. Or, if a file is bad, write it to disk and know to fetch PAR files as necessary.
shypike wrote: I do have plans for on-the-fly par2 verification.
Ideally one should do the par2 verification when articles are assembled
into a file. In combination with an enabled (large) memory cache, this would be
the most efficient way.
But it's a lot of work and not high on the list right now.
As for the RAR before PAR: if there's a way to mark a file as "good", it could save a lot of time. Say unraring gets through the first 13 RARs and fails on the 14th. If we could tell PAR the first 13 are good and to work from there, it would be less taxing. We would then only check further files as needed; and if one RAR is damaged, it's quite plausible for others to be as well. How about even unrarring as you download, as the files become available?
As for the PAR not taking much time: in my setup I have a single dedicated 500 GB SATA-300 drive for SABnzbd downloading. My download speeds are such that if a file took me 20 minutes to download, I'd then have to spend 13 minutes PAR-checking it. After that, if everything checked out, the unrar process takes another 4 minutes. With some of these techniques, I'm sure that in this scenario I'd be looking at maybe 25 minutes instead of 37. I know the unrar would be faster if the temporary files were on a different drive, and I know I'm approaching the limits of hard-drive grunt, but there's still room to cut a lot of time. It could be roughly 33% faster.
Happy holidays!! Amazing
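The verify-as-you-assemble idea quoted above boils down to hashing while writing: update the checksum on each decoded chunk as it goes to disk, so the full-file hash is known the instant assembly finishes and no separate read-back pass is needed. A minimal sketch (the article iterator and function name are placeholders, not SABnzbd internals):

```python
import hashlib

def assemble_and_hash(decoded_articles, out_path):
    """Write decoded article chunks to disk while updating an MD5 on the
    fly. Returns the hex digest, ready to compare against the hash the
    par2 file lists for this RAR, with zero extra disk reads."""
    h = hashlib.md5()
    with open(out_path, "wb") as out:
        for chunk in decoded_articles:
            out.write(chunk)
            h.update(chunk)
    return h.hexdigest()
```

With a large memory cache holding the decoded articles, the hashing cost mostly hides inside the write, which is why this is the efficient spot to do it.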
Re: RAR > PAR > RAR first
The on-the-fly par2 check has been implemented for release 0.5.0.
It works if you use the sources from Subversion.
An official Beta is still 1-2 months away.