[linux] sabnzbd 0.5.6 severe memory leak, using 2GB!??! [SOLVED]
Hi! I've been using SABnzbd for years, so I'm a huge fan. Thanks for the best server-side client out there.
I just updated my server, and of course SABnzbd was updated as well, from some 0.4.x version to 0.5.6.
I loaded a couple of NZBs from binsearch (roughly 40 GB of data), and to my surprise, 20 minutes later all of my downloads had failed due to "memory allocation".
A quick "free -m" and "top" on the server showed a huge increase in memory usage. SABnzbd is trying to use up all of my RAM, and when it fails, all of my downloads die, of course.
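For reference, this is roughly how I checked (the ps line is just an equivalent way to see which process is eating the RAM):
[code]
# overall RAM and swap usage, in MB
free -m

# processes sorted by resident memory, biggest first
ps aux --sort=-rss | head -n 15
[/code]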
I only have 2 GB of RAM, but that's never been an issue before. To my limited knowledge, SABnzbd used less than 400 MB on my system with older versions. Also, my system rarely, if ever, had to resort to swap, and certainly not 40% of my swap space.
After a quick look around this forum, it seems I'm the only one so far with this problem.
I'll be back with a debug log once my server has rebooted after a kernel upgrade.
Version: (0.5.6)
OS: (Arch Linux)
Install-type: (Linux source)
Skin (if applicable): (Default)
Firewall Software: (none)
Are you using IPV6? (no)
Is the issue reproducible? (yes - every time I start the daemon)
My System:
AMD Phenom X2 550+
2 GB RAM
Arch Linux, kernel 2.6.36
python2 2.7.1-3
SABnzbd log: http://paste.pocoo.org/show/312846/
SABnzbd Python log: http://paste.pocoo.org/show/312844/
SABnzbd error log: empty
Showing "top" screenshot:
EDIT: There's something very wrong; all of my downloads now fail with "Download failed - Out of your server's retention?" <---- I'm on Giganews, which has over 800 days of retention, and the file is less than 16 days old.
Re: [linux] sabnzbd 0.5.6 severe memory leak, using 2GB!??!
What I don't understand is: why do you have so many instances of SABnzbd+ running?
There should only be one running... maybe that is causing the problem?
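A quick way to tell whether those are separate processes or just threads of a single daemon (generic ps commands, nothing SABnzbd-specific):
[code]
# one line per process -- there should be a single sabnzbd entry
ps aux | grep -i [s]abnzbd

# one line per thread of that process (see the LWP column)
ps -eLf | grep -i [s]abnzbd
[/code]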
Re: [linux] sabnzbd 0.5.6 severe memory leak, using 2GB!??!
I believe those are threads; I have about 20 as well on a Debian box.
greenfish, I'm not sure if it's related, but check the cache_limit option in the ini file.
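For example, something like this shows the current value (the path is just an example; use whichever ini file you actually start SABnzbd with). Lowering it caps how much RAM the article cache is allowed to use.
[code]
# show the current article cache setting
grep -n "cache_limit" ~/.sabnzbd/sabnzbd.ini
[/code]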
Re: [linux] sabnzbd 0.5.6 severe memory leak, using 2GB!??!
Your log is full of these messages:
Disk error on creating file /media/DOWNLOAD2/complete/newsgroups/...
SABnzbd cannot write its downloaded files to disk!
You also start SABnzbd with weird parameters:
-f //.sabnzbd.ini -f /home/greenfish/.sabnzbd.ini/sabnzbd.ini -d
Why -f twice? Why the odd names? It should work, but still...
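Two quick things to check (the touch test is just a generic writability test, and the startup line simply reuses your existing ini path with a single -f):
[code]
# 1. Is the download target actually mounted and writable?
df -h /media/DOWNLOAD2
touch /media/DOWNLOAD2/complete/write_test && rm /media/DOWNLOAD2/complete/write_test

# 2. Start the daemon with one explicit config file
sabnzbd -d -f /home/greenfish/.sabnzbd.ini/sabnzbd.ini
[/code]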
Re: [linux] sabnzbd 0.5.6 severe memory leak, using 2GB!??!
rascalli wrote: What I don't understand is: why do you have so many instances of SABnzbd+ running? There should only be one running... maybe that is causing the problem?
Hi! No, those are threads (20 connections to Giganews).
freakyzoidberg wrote: I believe those are threads; I have about 20 as well on a Debian box. greenfish, I'm not sure if it's related, but check the cache_limit option in the ini file.
I'll try anything if it helps, but remember I've never had issues running SABnzbd before. All of my issues started after I moved to 0.5.6.
shypike wrote: Your log is full of these messages:
Disk error on creating file /media/DOWNLOAD2/complete/newsgroups/...
SABnzbd cannot write its downloaded files to disk!
But why is that? I mean, what's causing the errors? I've never had these issues before. The memory leak and those errors started when I upgraded to SABnzbd 0.5.6. Just to be sure, I ran a memtest and my RAM came back OK, and I ran smartctl and fsck on my HDD; everything came back normal.
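For the record, the disk checks were roughly as follows (device names are examples; memtest was run from the boot menu):
[code]
# SMART health report for the system drive (device name is an example)
smartctl -a /dev/sda

# filesystem check on the unmounted download partition (device name is an example)
fsck /dev/sdb1
[/code]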
shypike wrote: You also start SABnzbd with weird parameters:
-f //.sabnzbd.ini -f /home/greenfish/.sabnzbd.ini/sabnzbd.ini -d
Why -f twice? Why the odd names? It should work, but still...
I start SABnzbd through /etc/rc.local by executing "sabnzbd -f /home/greenfish/.sabnzbd.ini/sabnzbd.ini -d".
That's the way I've always started SABnzbd on my server. Is there another way to turn SABnzbd into a daemon?
I'm going to try executing "sabnzbd" in a console and see what happens.
Thanks for your help and input so far, guys.
20 minutes later... RAM usage is around 520 MB. That's not too bad compared to before, when 99% of my RAM was in use!
I'm going to try putting SABnzbd in daemon mode and see if that's the main culprit with 0.5.6 on my end.
Here's how I execute the daemon; anyone got better advice?
"I start SABnzbd through /etc/rc.local by executing "sabnzbd -f /home/greenfish/.sabnzbd.ini/sabnzbd.ini -d""
UPDATE: I deleted SABnzbd 0.5.6, cleaned out all config files, re-mounted all my mount points, and re-installed SABnzbd 0.5.6, but this time I used the daemon script provided on this site. 30 minutes later my RAM usage is only 500 MB while downloading and around 700 MB while extracting. Thanks everyone for your help.