sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Forum rules
Help us help you:
- Are you using the latest stable version of SABnzbd? Downloads page.
- Tell us what system you run SABnzbd on.
- Adhere to the forum rules.
- Do you experience problems during downloading?
  Check your connection in Status and Interface settings window.
  Use Test Server in Config > Servers.
  We will probably ask you to do a test using only basic settings.
- Do you experience problems during repair or unpacking?
  Enable +Debug logging in the Status and Interface settings window and share the relevant parts of the log here using [ code ] sections.
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Did this get resolved? What was the solution?
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
I'm just about to add this to Proxmox.
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
I should post an update: I did get my speed issues resolved.
For details, the config is as follows.
SABnzbd installed in an LXC container on Proxmox (5800X, 64 GB)
TrueNAS installed as a VM with SATA controller passthrough (for direct disk access)
Two disks defined: 1) an SSD for temp/scratch use, and 2) spinning rust for media storage (a Seagate 4 TB 2.5" SMR 5400 rpm POS)
Both disks shared via NFSv4 to Proxmox
Direct Unpack enabled
Disks mapped into the container using bind mounts.
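For anyone replicating this, a minimal sketch of the share/bind-mount plumbing. The TrueNAS IP, export paths, host mount points, and container ID 101 are all examples, not the poster's actual values:

[code]
# On the Proxmox host: mount the TrueNAS NFSv4 exports (example /etc/fstab entries)
# 192.168.1.50:/mnt/tank/scratch  /mnt/pve/scratch  nfs4  defaults  0 0
# 192.168.1.50:/mnt/tank/media    /mnt/pve/media    nfs4  defaults  0 0

# Bind-mount those host paths into the LXC container (container ID 101 is an example)
pct set 101 -mp0 /mnt/pve/scratch,mp=/scratch
pct set 101 -mp1 /mnt/pve/media,mp=/media

# The equivalent lines end up in /etc/pve/lxc/101.conf:
# mp0: /mnt/pve/scratch,mp=/scratch
# mp1: /mnt/pve/media,mp=/media
[/code]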
There are several places where sync behavior can be defined (more info: https://www.avidandrew.com/understandin ... ching.html):
1) the ZFS dataset
2) the client (the NFS mount)
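A quick way to see what each of those layers is currently set to (dataset name and paths are examples):

[code]
# 1) ZFS dataset: check the sync property on TrueNAS
zfs get sync tank/media

# 2) Client: check the NFS mount options on the Proxmox host
nfsstat -m
# or
mount | grep nfs
[/code]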
For #1, the initial setting was "standard", which means the client is allowed to decide whether writes are synchronous.
For #2, I tested two variations on the client mount: sync and async.
With sync, speeds were abysmal: 15-30 MB/s on a disk capable of 130 MB/s.
With async, speeds improved but bounced between 60 and 100 MB/s.
Setting the ZFS dataset's sync to "disabled" and leaving the client at async gave the best speeds: a consistent 120-130 MB/s (per iostat and zpool iostat 1). While this is not the safest configuration (in case of power loss), it doesn't really matter for this type of data (media).
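For reference, a sketch of that final combination, assuming the same example dataset name, server IP, and mount points as above:

[code]
# On TrueNAS: disable synchronous writes for the media dataset
zfs set sync=disabled tank/media

# On the Proxmox host: mount the export async (the NFS client default)
# Example /etc/fstab entry:
# 192.168.1.50:/mnt/tank/media  /mnt/pve/media  nfs4  rw,async  0 0

# Watch throughput while downloading/unpacking
zpool iostat 1      # on TrueNAS
iostat -x 1         # on the Proxmox host or inside the container
[/code]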