sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Hello!
I may not be the only person experiencing this and this may end up being a C++/Python debate - but I wanted to just confirm I am not missing something here.
Long story - I have a Proxmox host running an Ubuntu VM, which is running a bunch of Docker containers including sabnzbd and nzbget. I recently decided to move away from nzbget because, while it gets close to 130 MB/s (1040 Mbit/s) on a 187 MB/s (1500 Mbit/s) connection, it has a crazy memory leak which can cause a download to slow to a crawl of about 5-6 Mbit/s until I restart the container.
With sabnzbd, however, I can only get between 45-50 MB/s consistently. When doing the speed test within sab, it shows bandwidth of 35 MB/s and a pystone score of 80-88k... this is a Xeon D-1528 (6 cores) and the VM has 6 cores assigned to it; the bare-metal score is way higher. But what's even more interesting is that recently I noticed that even in sab, my downloads slow down to 5-6 MB/s, especially when I try to download something over 15 GB.
Storage-wise, I am running a SAS3 (12 Gbit) based ZFS mirrored and striped pool (i.e. RAID10 equivalent); it's plenty fast, with download folder speed being over 650 MB/s.
So - what am I missing here? I am kind of resigned to the fact I may have speeds in the mid-50s, but why the slowdown??
Thanks
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Python in Docker is slower than Python running directly on the OS.
> pystone score of 80-88k.
I have a Celeron with a pystone score of 95k, and SABnzbd achieves line speed (60 MB/s) on that ... with a straight SSD.
> nzbget ... to slow down to a crawl to about 5-6 mbit/s
> I noticed that even in sab, my downloads slow down to 5-6 mb/s
Coincidence? I think not.
After a download, click on the SAB wrench ... is there a "Download speed limited by", and if so, what does it say? I did a download on a Xeon VPS, and SAB said: "Download speed limited by Disk speed (37x)".
Can you try a SAB download to an internal, non-SAS disk?
And the other way around: if from the CLI you write 25 GB to your SAS pool ... what is the speed of the last GBs of writing? Is it throttling? Please do the test from the host, and from within the Docker container.
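A rough sketch of how that write test could look, run both on the host side and inside the container via docker exec; the container name ("sabnzbd") and the mount path ("/downloads") are placeholders, not taken from this thread:
Code: Select all
# on the Ubuntu VM, write ~25 GB with direct IO to a path on the ZFS-backed share (placeholder path)
dd if=/dev/zero of=/path/to/zfs-share/ddtest bs=1024k count=25000 status=progress oflag=direct
# same test from inside the running container (adjust the container name and mount path)
docker exec -it sabnzbd dd if=/dev/zero of=/downloads/ddtest bs=1024k count=25000 status=progress oflag=direct
# clean up afterwards
rm /path/to/zfs-share/ddtest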
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
With "dd", you can measure the speed of a disk. Start the command below, and press ENTER each few seconds
Internal SSD ... I'm very disappointed:
Code: Select all
sander@brixit:~$ dd if=/dev/zero of=~/blabla bs=1024k count=5000 status=progress oflag=direct
591396864 bytes (591 MB, 564 MiB) copied, 2 s, 296 MB/s
1766850560 bytes (1,8 GB, 1,6 GiB) copied, 6 s, 293 MB/s
2037383168 bytes (2,0 GB, 1,9 GiB) copied, 12 s, 170 MB/s
2156920832 bytes (2,2 GB, 2,0 GiB) copied, 19 s, 113 MB/s
2629828608 bytes (2,6 GB, 2,4 GiB) copied, 29 s, 90,7 MB/s
2861563904 bytes (2,9 GB, 2,7 GiB) copied, 34 s, 84,2 MB/s
3388997632 bytes (3,4 GB, 3,2 GiB) copied, 50 s, 67,7 MB/s
3705667584 bytes (3,7 GB, 3,5 GiB) copied, 60 s, 61,7 MB/s
4010803200 bytes (4,0 GB, 3,7 GiB) copied, 69 s, 58,0 MB/s
4203741184 bytes (4,2 GB, 3,9 GiB) copied, 84 s, 50,0 MB/s
4410310656 bytes (4,4 GB, 4,1 GiB) copied, 90 s, 49,0 MB/s
4618977280 bytes (4,6 GB, 4,3 GiB) copied, 101 s, 45,7 MB/s
5065670656 bytes (5,1 GB, 4,7 GiB) copied, 120 s, 42,2 MB/s
5205131264 bytes (5,2 GB, 4,8 GiB) copied, 125 s, 41,6 MB/s
5000+0 records in
5000+0 records out
5242880000 bytes (5,2 GB, 4,9 GiB) copied, 125,971 s, 41,6 MB/s
This is for a USB-connected external drive:
Code: Select all
sander@brixit:~$ dd if=/dev/zero of=/media/zeegat/blabla bs=1024k count=5000 status=progress oflag=direct
540016640 bytes (540 MB, 515 MiB) copied, 2 s, 270 MB/s
1383071744 bytes (1,4 GB, 1,3 GiB) copied, 7 s, 197 MB/s
1998585856 bytes (2,0 GB, 1,9 GiB) copied, 13 s, 154 MB/s
2647654400 bytes (2,6 GB, 2,5 GiB) copied, 19 s, 139 MB/s
3354394624 bytes (3,4 GB, 3,1 GiB) copied, 25 s, 134 MB/s
3855613952 bytes (3,9 GB, 3,6 GiB) copied, 29 s, 133 MB/s
4449107968 bytes (4,4 GB, 4,1 GiB) copied, 35 s, 127 MB/s
4977590272 bytes (5,0 GB, 4,6 GiB) copied, 41 s, 121 MB/s
5220859904 bytes (5,2 GB, 4,9 GiB) copied, 44 s, 119 MB/s
5000+0 records in
5000+0 records out
5242880000 bytes (5,2 GB, 4,9 GiB) copied, 44,2683 s, 118 MB/s
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Unfortunately, the box everything is hosted on does not have a non-SSD drive - the boot drive is NVMe (for Proxmox), but it is not exposed to the VMs. Everything is on the ZFS pool.
When I say the drives are fast - I mean it - I had to increase the test to 50 gigs because it was finishing too fast lol
Code: Select all
user@server:/home/user $ dd if=/dev/zero of=~/test.log bs=1024k count=50000 status=progress oflag=direct
3098542080 bytes (3.1 GB, 2.9 GiB) copied, 2 s, 1.5 GB/s
6128926720 bytes (6.1 GB, 5.7 GiB) copied, 4 s, 1.5 GB/s
9187622912 bytes (9.2 GB, 8.6 GiB) copied, 6 s, 1.5 GB/s
12233736192 bytes (12 GB, 11 GiB) copied, 8 s, 1.5 GB/s
15272509440 bytes (15 GB, 14 GiB) copied, 10 s, 1.5 GB/s
18355322880 bytes (18 GB, 17 GiB) copied, 12 s, 1.5 GB/s
19863175168 bytes (20 GB, 18 GiB) copied, 13 s, 1.5 GB/s
23037214720 bytes (23 GB, 21 GiB) copied, 15 s, 1.5 GB/s
24611127296 bytes (25 GB, 23 GiB) copied, 16 s, 1.5 GB/s
27704426496 bytes (28 GB, 26 GiB) copied, 18 s, 1.5 GB/s
29256318976 bytes (29 GB, 27 GiB) copied, 19 s, 1.5 GB/s
30762074112 bytes (31 GB, 29 GiB) copied, 20 s, 1.5 GB/s
32330743808 bytes (32 GB, 30 GiB) copied, 21 s, 1.5 GB/s
33921433600 bytes (34 GB, 32 GiB) copied, 22 s, 1.5 GB/s
35446063104 bytes (35 GB, 33 GiB) copied, 23 s, 1.5 GB/s
38507905024 bytes (39 GB, 36 GiB) copied, 25 s, 1.5 GB/s
40066088960 bytes (40 GB, 37 GiB) copied, 26 s, 1.5 GB/s
41653633024 bytes (42 GB, 39 GiB) copied, 27 s, 1.5 GB/s
43199234048 bytes (43 GB, 40 GiB) copied, 28 s, 1.5 GB/s
46262124544 bytes (46 GB, 43 GiB) copied, 30 s, 1.5 GB/s
47822405632 bytes (48 GB, 45 GiB) copied, 31 s, 1.5 GB/s
49347035136 bytes (49 GB, 46 GiB) copied, 32 s, 1.5 GB/s
50896830464 bytes (51 GB, 47 GiB) copied, 33 s, 1.5 GB/s
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 33.9484 s, 1.5 GB/s
And then within the Docker container:
Code: Select all
root@00b99939d184:/# dd if=/dev/zero of=/test bs=1024k count=10000 status=progress oflag=direct
2685403136 bytes (2.7 GB, 2.5 GiB) copied, 2 s, 1.3 GB/s
4049600512 bytes (4.0 GB, 3.8 GiB) copied, 3 s, 1.3 GB/s
5384437760 bytes (5.4 GB, 5.0 GiB) copied, 4 s, 1.3 GB/s
6708789248 bytes (6.7 GB, 6.2 GiB) copied, 5 s, 1.3 GB/s
8039432192 bytes (8.0 GB, 7.5 GiB) copied, 6 s, 1.3 GB/s
9289334784 bytes (9.3 GB, 8.7 GiB) copied, 7 s, 1.3 GB/s
And on the Proxmox host itself, on the ZFS pool (g'damn it's 2x as fast lol):
Code: Select all
root@pve:/ZFS-RAID10# dd if=/dev/zero of=./test bs=1024k count=50000 status=progress oflag=direct
2713714688 bytes (2.7 GB, 2.5 GiB) copied, 1 s, 2.7 GB/s
5492441088 bytes (5.5 GB, 5.1 GiB) copied, 2 s, 2.7 GB/s
8229224448 bytes (8.2 GB, 7.7 GiB) copied, 3 s, 2.7 GB/s
11011096576 bytes (11 GB, 10 GiB) copied, 4 s, 2.8 GB/s
13840154624 bytes (14 GB, 13 GiB) copied, 5 s, 2.8 GB/s
16658726912 bytes (17 GB, 16 GiB) copied, 6 s, 2.8 GB/s
19421724672 bytes (19 GB, 18 GiB) copied, 7 s, 2.8 GB/s
22183673856 bytes (22 GB, 21 GiB) copied, 8 s, 2.8 GB/s
24935137280 bytes (25 GB, 23 GiB) copied, 9 s, 2.8 GB/s
27698135040 bytes (28 GB, 26 GiB) copied, 10 s, 2.8 GB/s
30547116032 bytes (31 GB, 28 GiB) copied, 11 s, 2.8 GB/s
33376174080 bytes (33 GB, 31 GiB) copied, 12 s, 2.8 GB/s
36197892096 bytes (36 GB, 34 GiB) copied, 13 s, 2.8 GB/s
39044775936 bytes (39 GB, 36 GiB) copied, 14 s, 2.8 GB/s
41879076864 bytes (42 GB, 39 GiB) copied, 15 s, 2.8 GB/s
44668289024 bytes (45 GB, 42 GiB) copied, 16 s, 2.8 GB/s
47420801024 bytes (47 GB, 44 GiB) copied, 17 s, 2.8 GB/s
50217353216 bytes (50 GB, 47 GiB) copied, 18 s, 2.8 GB/s
So back to "performance" bottlenecks - outside of Docker, I get about 170k on pystone with the VM having 6 cores assigned to it; on "bare" metal (i.e. on the PVE host itself), I get about 175k - so about a 3% loss for the overhead of running in a VM; not bad. However, I'm not sure if that's a "good" or "bad" score for a Xeon D-1528. For comparison, I ran it on an AWS box with an E5-2686 with 8 provisioned cores and got 190k - so I assume 170k for a D-1528 is decent.
I did as you suggested - did the 10 gig download test via the wrench and, at about the 6.5 gig mark, it slowed down to about 4.5 MB/s - so my previous 15 gig threshold was incorrect; it's actually closer to 10 (I guess I wasn't paying attention). While the download was progressing, load was at about 0.3, and the used cache would climb to about 30-35 and drop down to 10 and sort of yo-yo there... not sure if that's expected or...? In any case, during the slowdown I checked ARC stats and whether there was any other IO going on - no red flags there at all. After it was done, I clicked on the wrench icon and did not see a "Download speed limited by" message.
So then I decided to try sab on the bare VM (not Docker) and see what happens... and it too was capped at 50 MB/s on a 10 gig download test (direct unpack is not enabled). However, unlike Docker, it did not slow down to a crawl at the 6.5 gig mark. On bare metal, the pystone test within sab was about 150k, but internet bandwidth comes up as only 21 MB/s... disk write is 570 MB/s to the incomplete folder and 700 MB/s to the complete folder. On the bare VM, article cache is set to 1G, and I've played with connection counts - 5, 10, 25, 50... speed is still the same.
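For anyone repeating that check, a minimal sketch of the kind of commands that can show ARC activity and pool IO while a download runs (run on the Proxmox host; assumes the standard OpenZFS tools are installed and uses the pool name shown above):
Code: Select all
# ZFS ARC hits/misses, refreshed every 2 seconds
arcstat 2
# per-vdev IO statistics for the pool, refreshed every 2 seconds
zpool iostat -v ZFS-RAID10 2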
Any thoughts?
Thanks.
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
fuzzybeanbag wrote:
I did as you suggested - did the 10 gig download test via the wrench and, at about the 6.5 gig mark, it slowed down to about 4.5 MB/s
After it was done, I clicked on the wrench icon and did not see a "Download speed limited by" message.
And you're 100% sure you're running SAB 3.7.1? If so, and if we trust SABnzbd, it means SAB did not experience the disk nor the CPU as a bottleneck. That would mean the Internet and/or newsserver speed is the bottleneck / limiting factor. How could that be? During such a download, when it slows down, can you run this from within the Docker container:
Code: Select all
iperf3 -c ams.speedtest.clouvider.net -p 5205 -4 -t5 # europe
iperf3 -c nyc.speedtest.clouvider.net -p 5205 -4 -t5 # usa
I wonder what speed they report.
fuzzybeanbag wrote:
So then I decided to try sab on the bare VM (not Docker) and see what happens... and it too was capped at 50 MB/s on a 10 gig download test (direct unpack is not enabled). However, unlike Docker, it did not slow down to a crawl at the 6.5 gig mark.
So ... overall speed without Docker was ... 50 MB/s? You can check in the History, at the downloaded item, in the drop-down on the right-hand side. Something like "Downloaded in 30 seconds at an average of 34.5 MB/s".
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
I'll run these tests tonight and will report
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
sander wrote: ↑February 6th, 2023, 6:39 am
And you're 100% sure you're running SAB 3.7.1? ... During such a download, when it slows down, can you run this from within the Docker container:
Code: Select all
iperf3 -c ams.speedtest.clouvider.net -p 5205 -4 -t5 # europe
iperf3 -c nyc.speedtest.clouvider.net -p 5205 -4 -t5 # usa
I wonder what speed they report.
In regards to the Sab version - I am actually on 3.7.2.
Now onto iperf3 - you are onto something... inside the sab container, during the slowdown:
Code: Select all
root@00b99939d184:/# iperf3 -c nyc.speedtest.clouvider.net -p 5205 -4 -t5
Connecting to host nyc.speedtest.clouvider.net, port 5205
[ 5] local 172.18.0.3 port 45204 connected to 94.154.159.137 port 5205
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 5.12 MBytes 43.0 Mbits/sec 0 764 KBytes
[ 5] 1.00-2.00 sec 5.00 MBytes 41.9 Mbits/sec 8 300 KBytes
[ 5] 2.00-3.00 sec 3.75 MBytes 31.5 Mbits/sec 0 332 KBytes
[ 5] 3.00-4.00 sec 5.00 MBytes 42.0 Mbits/sec 0 352 KBytes
[ 5] 4.00-5.00 sec 3.75 MBytes 31.5 Mbits/sec 0 361 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-5.00 sec 22.6 MBytes 38.0 Mbits/sec 8 sender
[ 5] 0.00-5.00 sec 19.8 MBytes 33.3 Mbits/sec receiver
And then I ran it on the PVE host... and... same flippin' results. Ran it on my Mac Studio... you guessed it, same results. However, with a regular speedtest off my Mac I get the full 1.5 Gbit; speedtest-cli on the host VM - same results; and within the sab container I got about 400 Mbit.
I tried a few other public servers and I get the same results, which is baffling. My network is exclusively Ubiquiti based, with a UDM-Pro acting as a firewall and a US-16-XG 10 gig switch into which the Proxmox host and my Mac connect (via 10 gig interfaces); outside of iperf, everything works flawlessly at full speeds.
Btw - when downloading on bare metal, this is what I got from the history: "Downloaded in 3 mins 53 seconds at an average of 45.3 MB/s"
I think I need to do a bit more digging as to why iperf sucks so much on my whole home network while regular speed tests are perfectly fine... but I still don't understand the cause of the slowdowns, because even when downloading at 50 MB/s I get the same poor iperf results.
Sorry about the rambling response... I was typing as I went along with the testing :p
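One possible follow-up (a sketch, not something tried in this thread): iperf3 defaults to a single TCP stream, while speedtest and Usenet downloads use many parallel connections, so comparing single- and multi-stream runs against the same servers would show whether the limit is per-connection:
Code: Select all
# single stream, as above
iperf3 -c nyc.speedtest.clouvider.net -p 5205 -4 -t5
# 10 parallel streams - compare the summed bitrate to the single-stream run
iperf3 -c nyc.speedtest.clouvider.net -p 5205 -4 -t5 -P 10
# reverse mode: the server sends to the client, matching the download direction
iperf3 -c nyc.speedtest.clouvider.net -p 5205 -4 -t5 -R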
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Long, long shot: difference between web-speed and other protocols ... ?
A poor man's web speed test:
Code: Select all
lynx --dump http://cdimage.ubuntu.com/daily-live/current/ | awk '/http:.*amd64.iso/ { print "wget -4 -O /dev/null " $NF }' | head -1 | /bin/sh
on my speedy VPS:
Code: Select all
$ lynx --dump http://cdimage.ubuntu.com/daily-live/current/ |
--2023-02-07 12:04:20-- http://cdimage.ubuntu.com/daily-live/current/lunar-desk
Resolving cdimage.ubuntu.com (cdimage.ubuntu.com)... 185.125.190.37, 91.189.91.1
Connecting to cdimage.ubuntu.com (cdimage.ubuntu.com)|185.125.190.37|:80... conn
HTTP request sent, awaiting response... 200 OK
Length: 4477704192 (4.2G) [application/x-iso9660-image]
Saving to: ‘/dev/null’
/dev/null 41%[=======> ] 1.73G 218MB/s eta 12s
/dev/null 68%[============> ] 2.84G 218MB/s eta 7s
/dev/null 90%[=================> ] 3.75G 216MB/s eta 2s
/dev/null 100%[===================>] 4.17G 218MB/s in 20s
2023-02-07 12:04:40 (213 MB/s) - ‘/dev/null’ saved [4477704192/4477704192]
So ... steady 218 MB/s
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
Sort of interesting... after reading this person's reddit post it seems like there are a few of us.
https://www.reddit.com/r/SABnzbd/commen ... _download/
Fuzzy, what is your live disk usage when you are downloading? Does it spike like it does for mine? I.e. I am downloading at 30 MB/s and using >200 MB/s of disk usage, but once Sab stops downloading it drops back down to ~1 MB/s or so.
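A minimal way to watch that live, assuming iotop is installed (hypothetical invocation, not taken from the thread):
Code: Select all
# show only processes currently doing IO, with per-process read/write rates, refreshed every 2 seconds
iotop -o -d 2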
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
m6747 wrote: ↑February 7th, 2023, 11:03 am
Sort of interesting... after reading this person's reddit post it seems like there are a few of us.
https://www.reddit.com/r/SABnzbd/commen ... _download/
Fuzzy, what is your live disk usage when you are downloading? Does it spike like it does for mine? I.e. I am downloading at 30 MB/s and using >200 MB/s of disk usage, but once Sab stops downloading it drops back down to ~1 MB/s or so.
That's actually my thread on reddit
I'll test later tonight and will advise
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
sander wrote: ↑February 7th, 2023, 6:05 am
Long, long shot: difference between web-speed and other protocols ... ?
A poor man's web speed test:
Code: Select all
lynx --dump http://cdimage.ubuntu.com/daily-live/current/ | awk '/http:.*amd64.iso/ { print "wget -4 -O /dev/null " $NF }' | head -1 | /bin/sh
On the bare VM - about 30 MB/s
Within the container - about 20 MB/s
On the TrueNAS Scale box - 30 MB/s
On my AWS host - 10 MB/s :|
On a lab machine - an Intel NUC with a Core i3 7200U and CentOS 7 - about 15 MB/s...
Last edited by fuzzybeanbag on February 8th, 2023, 4:14 am, edited 1 time in total.
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
During "full" speed download of 50 mb/s - disk usage using iotop is about 55-60 mb/s. When it slows down to a crawl - disk usage periodically spikes up to 20 mb/s every few seconds - I am guessing it writes to cache and dumps to disk.m6747 wrote: ↑February 7th, 2023, 11:03 am Sort of interesting... after reading this person's reddit post it seems like there are a few of us.
https://www.reddit.com/r/SABnzbd/commen ... _download/
Fuzzy what is your live disk usage when you are downloading? Does it spike like it does for mine? Ie I am downloading at 30MB/s and using >200MB/s of disk usage but once Sab stops downloading it drops back down to ~1MB/s or so.
Interesting observation today - when doing the 10 gig test file, normally it'd slow down at about the 6.5 gig mark. This time around, it slowed down at about the 8 gig mark...
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
fuzzybeanbag wrote: ↑February 8th, 2023, 4:08 am
On the bare VM - about 30 MB/s
Within the container - about 20 MB/s
On the TrueNAS Scale box - 30 MB/s
On my AWS host - 10 MB/s :|
On a lab machine - an Intel NUC with a Core i3 7200U and CentOS 7 - about 15 MB/s...
So at download speeds of 50 MB/s you're getting disk usage of 20-30?? Sounds a little strange, does it not?
-
- Newbie
- Posts: 8
- Joined: February 4th, 2023, 6:36 pm
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
m6747 wrote: ↑February 10th, 2023, 2:28 pm
So at download speeds of 50 MB/s you're getting disk usage of 20-30?? Sounds a little strange, does it not?
Lol no, not quite - re-read my previous note: at 50 MB/s it's at about 55-60; when it slows down to a crawl it spikes to about 20 every few seconds.
Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?
ah okay. Super strange.