memory usage
Forum rules
Help us help you:
- Tell us what system you run SABnzbd on.
- Adhere to the forum rules.
- Do you experience problems during downloading?
Check your connection in Status and Interface settings window.
Use Test Server in Config > Servers.
We will probably ask you to do a test using only basic settings.
- Do you experience problems during repair or unpacking?
Enable +Debug logging in the Status and Interface settings window and share the relevant parts of the log here using [ code ] sections.
memory usage
The new version seems to hog a lot of memory on my NAS box; it's really slow doing anything else while it's downloading. Is there anything I can do to reduce memory consumption, e.g. disabling the secondary template? SAB 0.3.1 does not have this problem, but of course it also does not have all these extra features . . . which I really like!
Re: memory usage
Under what conditions do you see a difference?
SABnzbd-0.4.0 is bigger because it has more features and uses more Python modules.
But I would be surprised if that added more than 100K.
Its memory usage pattern should not have changed.
I would expect slower speed if you use SSL (especially on a NAS).
Could you be more specific about the differences?
I can list a number of potential memory hogs.
- secondary interface (just don't enable it)
- Setting "Top Only" option false (memory killer with large queues)
- Using article cache (at least a large one)
- Parallel downloading and post-processing (there's a new option to prevent this).
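To put a rough number on the article-cache point above, here is a minimal back-of-envelope sketch. The ~500 KB average article size is an assumption for illustration, not a SABnzbd figure; actual article sizes vary by poster.

```python
# Back-of-envelope cost of the article cache.
# ASSUMPTION: ~500 KB per article; real article sizes vary by poster.
AVG_ARTICLE_KB = 500

def cache_cost_mb(cached_articles: int) -> float:
    """Approximate RAM held by the article cache, in MB."""
    return cached_articles * AVG_ARTICLE_KB / 1024

# Even a modest cache adds up quickly on a small NAS:
for n in (50, 200, 1000):
    print(n, "articles ->", round(cache_cost_mb(n)), "MB")
```

On a 32 MB NAS even a few dozen cached articles is a significant fraction of total RAM, which is why disabling (or shrinking) the cache is the first thing to try.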
Last edited by shypike on April 15th, 2008, 1:40 am, edited 1 time in total.
Re: memory usage
I'm running 0.4.0 on a NAS with 32 MB of RAM, and I run stunnel alongside SABnzbd+. I'm getting my best and most consistent speeds of any version so far. Great work, thank you.
Re: memory usage
I also have the impression that SAB 0.4.0 is a bit faster than older versions, and the speed is more stable than before: from 1750-1850 before to 1900-1970 with 0.4.0.
Re: memory usage
Hi
The download speed is fantastic, I've got no problems with that and agree it's consistent. I've got 32 MB RAM too, by the way.
The problem arises when I try to watch a programme through my Xbox while something is downloading: it's incredibly slow and very jumpy, something I don't experience with 0.3.1. I'm by no means an expert and I haven't tried all of the things you suggested.
As with many, I seem to be the only one with issues! If I still get the same problems after trying your suggestions, I'll just pause the downloads while I'm watching something. Thanks for all the responses, this is a great forum.
Re: memory usage
Hi
I don't get how you can all be running this successfully on a 32 MB machine. I'm finding it eats RAM like popcorn. I started off using B2 and I've now upgraded to B4. I'm using Debian sid with Python 2.5.2. With B4, as soon as I run SABnzbd.py it takes up 18 MB RAM and 164 MB of swap. With 1 item added to the queue the RAM rises to 45 MB, and with 6 items it hits 82 MB. I haven't tried giving B4 a decent-sized queue yet, but when I was using B2 I had about 40-50 queued items and it was using around 500 MB combined RAM and swap. Each NZB is only around 1-1.5 MB, so even if you're holding all of the data in RAM (which really seems unnecessary if you're only downloading one of them) that should only be taking about 60 MB total. Taking off the initial usage with 1 item, that's a jump of about 300 MB, or 60 MB per NZB!?!
Is something going wrong here, or is this level of RAM use to be expected with non-trivial queues?
I am already taking all the advice previously given in this thread to keep RAM usage down. Caching is disabled. I did initially have Top Only turned off, as the machine has 500 MB RAM, so I did not expect RAM to be any sort of issue since SABnzbd is the only (significant) program running on it. When I saw how much RAM it was using, I turned on Top Only and restarted SABnzbd. That seemed to drop it from 550 MB to 450 MB for 50 items. I'll post again when I've given B4 a heavier test. I also found with B2 that every time I added or deleted an item to/from the queue, the CPU usage would jump to 100% for 30-60 seconds, and that's on a 3 GHz CPU! I turned off Auto Sort and it still happened. How big a queue size has been tested by the developers? Are there algorithms or data structures in use that grow exponentially with queue size, or should CPU and RAM use scale linearly?
Maybe I should use the old version until a stable 0.4.0 is released. I'm never sure whether getting the most up-to-date versions of things is going to fix things or introduce more bugs/issues.
All I really want is to be able to give this thing 200 NZBs, have it download them all in age order so the oldest ones don't fall off before it gets to them, and come back in a few weeks to find they're all there OK. The sort-by-age feature is a very important one to have; that is the reason I've moved over to SAB from hellanzb, which can't even display the age, let alone sort on it. If it could work reliably without consuming the entire RAM (and swap) of the machine, this program would be really useful.
Last edited by jack42 on May 11th, 2008, 1:57 pm, edited 1 time in total.
Re: memory usage
While I agree it might be optimistic to run SAB on a 32 MB NAS, it's fine on my 128 MB LinkStation Pro.
Python RAM usage with Article Cache set to zero, -c switch run to clean before benchmarking:
Downloading 1 NZB (4.5GB) = 25MB
Downloading 2 NZB (4.5GB, 4.5GB) = 28MB
My LinkStation still has 70MB RAM free at this time.
Re: memory usage
I've now done some more testing with B4 and a cleaned-out cache, and it's _much_ better now. I find I have a roughly constant swap usage of ~170 MB and RAM usage as follows:
- No items: 18 MB
- 1 item in queue, 1 downloading: 45 MB
- 6 items: 82 MB
- 8 items: 100 MB
- 50 items: 104 MB
- 66 items: 114 MB
I was afraid that the increases in the first four lines were going to carry on, leading to large RAM usage like I was getting with B2, but I was very pleasantly surprised to find that when I did put in a load of NZBs it only went up slightly. Most of the increase in the first four must have been associated with buffers and the like rather than actually storing the data from the NZBs, as the average increase per NZB is only about 0.25 MB for large numbers of them (going from 8 to 66 in that table). So it seems it should be OK with very large numbers after all. Excellent.
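Running the table's numbers through a quick sanity check confirms the point: per-NZB growth is steep at first (startup buffers) and then flattens to roughly a quarter of a megabyte per item.

```python
# RAM measurements from the list above: queue size -> MB used.
measurements = {0: 18, 1: 45, 6: 82, 8: 100, 50: 104, 66: 114}

def per_item_mb(a: int, b: int) -> float:
    """Average RAM growth per queued NZB between queue sizes a and b."""
    return (measurements[b] - measurements[a]) / (b - a)

print(round(per_item_mb(0, 8), 2))   # early growth: buffers, not NZB data
print(round(per_item_mb(8, 66), 2))  # steady state: ~0.24 MB per NZB
```

At ~0.24 MB per queued NZB, a 200-item queue should only add around 50 MB, consistent with large queues being workable.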
I don't know what was causing my 500 MB usage before. It could be that there was some memory leak in B2 that has now been fixed in B4. It could be that because Top Only was off when I added most of the NZBs originally, data about them was stored in the cache, which kept the memory usage very high even after turning Top Only on and restarting SABnzbd. Or maybe something just became corrupted in the data, which was causing all sorts of badness. I think I saw something about making the queue more robust in the release notes.
The CPU usage is much better now as well. When I added the 42 NZBs it took somewhere around 5-6 minutes to process them, and the 16 took around 2.5 minutes. An individual NZB (with 66 already loaded) is only 7-8 seconds. Something was clearly very wrong before. Strangely, though, it takes about the same time (7-8 seconds of 100% CPU at 3 GHz) to delete an item from the queue. I would have thought this would be a relatively simple operation compared to parsing and adding the NZB. While the processing was going on I was still able to click other delete buttons, so I didn't have to wait for each one to go through, although the screen couldn't be updated until they were all done.
Re: memory usage
Deleting the cache is a pretty good fix for many things.
Always delete it or use -c when you install a new build as well.