This is kind of an odd request, but here's what I'm trying to accomplish:
1) I have a QNAP running Docker with 3 instances of SAB that Sonarr/Radarr send jobs to round-robin (setting the same priority on each download client accomplishes this, and it works very well)
2) My QNAP still has just a baby of a processor: an AMD Ryzen Embedded V1500B @ 2.20GHz
3) It bottlenecks whenever it has to unpack anything, and I have an M1 Mac Mini sitting right here ready to help
Is there a way I can set up my M1 Mac Mini to simply unpack the files once it notices a new folder needs to be worked on? Right now each instance is set up to kick off its own post-processing job, but that won't work because I don't want the unpacking to run on the instance that did the download. Ideally, a 4th instance of SAB running on my M1 Mac Mini would go to work whenever a new job completes, coming along behind the others and unpacking everything.
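Here's roughly what I have in mind, as a Python sketch (since Python is on my list). It assumes the QNAP's completed-downloads share is mounted on the Mac and that unrar is installed (e.g. via Homebrew); the paths and the marker-file name are just placeholders, and a real version would want to confirm SAB has actually finished the job (e.g. via SAB's API) before touching the folder:

```python
#!/usr/bin/env python3
"""Sketch: poll a completed-downloads share and unpack new RAR sets."""
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("/Volumes/downloads/complete")  # placeholder: QNAP share mounted on the Mac
DONE_MARKER = ".unpacked"                        # sentinel so each job is only handled once

def unpack(job_dir: Path) -> None:
    """Extract the RAR set in job_dir in place, then drop a marker file."""
    rars = sorted(job_dir.glob("*.rar"))
    if rars:
        # Extract from the first volume; unrar walks the rest of the set itself.
        # -o+ overwrites existing files; the destination must end with a slash.
        subprocess.run(["unrar", "x", "-o+", str(rars[0]), f"{job_dir}/"], check=True)
    (job_dir / DONE_MARKER).touch()

def main() -> None:
    while True:
        for job_dir in WATCH_DIR.iterdir():
            if (job_dir.is_dir()
                    and not (job_dir / DONE_MARKER).exists()
                    and any(job_dir.glob("*.rar"))):
                try:
                    unpack(job_dir)
                except subprocess.CalledProcessError as exc:
                    print(f"unpack failed for {job_dir}: {exc}")  # keep the loop alive
        time.sleep(60)  # roughly in step with Sonarr's once-a-minute scan

if __name__ == "__main__":
    main()
```

A filesystem-events watcher could replace the polling loop, but a 60-second poll already matches how often Sonarr looks anyway.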
I believe Sonarr doesn't care how long that takes; it simply checks every minute to see if new files can be grabbed.
I can write PowerShell decently, and Bash, Batch, and Python less so.
Odd thing is, I can't get the CPUs fully utilized on my QNAP: usage sits around 30% (up from 18% before) and won't go any higher. I believe it's because Container Station on the QNAP is reporting 34% CPU and 84% memory usage, yet the QNAP overall is only using 6 of 32GB, so there's plenty of RAM to spare. My docker compose files tell each container it can use 6GB of RAM, but SAB won't actually use the full 6.
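For context, this is the kind of limit I mean in the compose file (the service name and image here are placeholders, not my actual config). As I understand it, mem_limit is a ceiling rather than a reservation, which may be why SAB never climbs to the full 6GB:

```yaml
services:
  sabnzbd-1:                          # placeholder service name
    image: lscr.io/linuxserver/sabnzbd
    mem_limit: 6g                     # hard cap; SAB only uses what it actually needs
```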
Re: Multiple Instances SABNZBD - Unpack Separately
My bottleneck turned out to be my 5x16TB drives. I moved the incomplete folder to the 2x SSDs on the QNAP and am using that as scratch space for all the heavy reads/writes; when a job unpacks, it goes directly to the TV folder on the 5x16TB array as one large write stream, which seems to have increased CPU utilization on the QNAP while it's actually verifying and repairing files.
The smaller jobs were being bogged down by all the other reads/writes hitting the 5x16TB drives, so this removes one more thing forcing the hard drives to split their resources. It's odd, because I have a decent 2TB SSD cache (two drives in RAID 1) sitting on top of the 5x16TB, but apparently it couldn't cache any of this and had to constantly hit the HDDs.
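For anyone replicating the split: it's just the two folder settings under Config -> Folders in SABnzbd, which end up in sabnzbd.ini like this (the share paths are examples matching my layout, not literal values):

```ini
[misc]
# incomplete downloads on the 2x SSD scratch volume (example path)
download_dir = /share/ssd_scratch/incomplete
# completed jobs unpack straight to the TV folder on the 5x16TB array (example path)
complete_dir = /share/Multimedia/TV
```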
I've also finished my experiment with 3 SAB servers and decided it's not giving me a 3x return, so I'm changing back to a single server instance. And I've tuned it to prioritize recent stuff, only grabbing things posted within the last 200 days of retention, to improve the chances of faster downloads.
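The retention cap is the per-server "Retention time" option in SABnzbd's server settings; in sabnzbd.ini it looks roughly like this (the server name is a placeholder):

```ini
[servers]
[[news.example.com]]
retention = 200   # ignore articles older than ~200 days
```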