I have a weird stalling/hanging issue with FreeBSD 10.0 (64-bit) and sabnzbd+ 0.7.18. The system is a storage server with a fairly minimal set of packages: mostly the NFS and Samba servers, a couple of support packages, and the nessusd service. It has 32 GB of RAM and a Xeon E5430 CPU; the boot drive is a ZFS file system on an SSD, and mass storage is a 24 TB RAIDZ2 pool (SAS controller with a JBOD box). The system runs headless and is administered via SSH. sabnzbd runs under its own user ID with no root privileges. Load average with sabnzbd running is about 0.7, and Python is version 2.7.
Whenever the sabnzbd service is running, it will eventually, after an indeterminate amount of time, "soft freeze": the system stalls, downloads stop, and the web interface stops responding. Even the clock on the server stops, and sometimes NFS and Samba stall as well. As soon as I log in to the server or, if already logged in, issue any command, everything resumes as if nothing had happened. There is nothing unusual in the sabnzbd log file.
The only thing I am certain about is that this problem is tied to sabnzbd - no stalls on the server when sabnzbd service is not running.
I am not sure what additional information would be required, but will gladly provide what is needed. Any help is greatly appreciated.
Soft Freeze/Stall with FreeBSD 10.0
Re: Soft Freeze/Stall with FreeBSD 10.0
It can very well be related to SABnzbd.
The problem is, I don't know anything about the quality of the Python port for FreeBSD.
I don't know much about FreeBSD for that matter.
The curious part is that issuing a command will unfreeze SABnzbd.
Sounds like the OS is actually freezing SABnzbd. Some feature of FreeBSD?
Re: Soft Freeze/Stall with FreeBSD 10.0
@shypike: Thanks for taking notice of my plight! I actually would not know about the quality of the Python port in FreeBSD, but I have not heard of anything indicating issues. Would a version other than 2.7 be preferred?
I tried recompiling sabnzbd+ and all the packages it depends on, but it made no difference.
However, I noted the following entry in dmesg that appears to be related to the issue:
Code:
sonewconn: pcb 0xfffff80042357188: Listen queue overflow: 8 already in queue awaiting acceptance
(This entry is repeated many times over; not sure if it helps.)
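For what it's worth, the condition behind that dmesg message can be illustrated with a small, self-contained Python sketch (my own illustration, not SABnzbd code): a listening socket queues completed connections until the application calls accept(), and if the process stalls and never accepts, the queue fills and FreeBSD logs the sonewconn overflow.

```python
import socket

# The kernel holds completed TCP connections in an accept queue of
# roughly `backlog` entries until the application calls accept().
BACKLOG = 8  # the dmesg line reports 8 connections already queued

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # any free port
srv.listen(BACKLOG)
port = srv.getsockname()[1]

# The TCP handshake completes without the server ever calling accept();
# the connection just waits in the kernel's accept queue:
cli = socket.create_connection(("127.0.0.1", port), timeout=2)

# Only when the (possibly long-stalled) application finally runs does
# the queued connection get handed over:
conn, addr = srv.accept()
print("accepted from", addr[0])
conn.close(); cli.close(); srv.close()
```

Which would fit the symptom here: the kernel message suggests something (presumably the web interface's listening socket) is accepting connections too slowly or not at all while the process is stalled, rather than the queue limit itself being the root cause.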
Re: Soft Freeze/Stall with FreeBSD 10.0
I don't know how to interpret that.
Some suggestions:
Do not set too many Usenet server connections. 8 per server is usually enough.
Some Unix/Linux systems have low limits on the number of open files and sockets; SABnzbd can be affected by this.
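That open-file/socket limit can be checked from Python itself; here is a minimal sketch using the standard resource module (my illustration, and the 4096 target is a made-up illustrative value, not a recommendation). Every Usenet connection is a socket that counts against RLIMIT_NOFILE, alongside the article cache, log files, and web-interface sockets.

```python
import resource

# Inspect the per-process open-file limit the sabnzbd user runs under.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-file limit: soft=%s hard=%s" % (soft, hard))

# A process may raise its own soft limit up to the hard limit without
# root privileges; 4096 is an arbitrary illustrative target.
target = 4096 if hard == resource.RLIM_INFINITY else min(hard, 4096)
if soft != resource.RLIM_INFINITY and soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

On FreeBSD, the system-wide counterparts would be (as I understand it) the kern.maxfiles sysctl and the per-login-class limits in /etc/login.conf.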
Re: Soft Freeze/Stall with FreeBSD 10.0
Thanks, reducing the number of connections is worth trying. Right now, 20 connections are used.
Looking at sockets with netstat -m (output below), the "denied" mbuf requests might indicate an issue, so I will try increasing the number of mbufs from 10k to 25k and report back the results. Edit: I missed one data point: the mbuf clusters are already at ~2M, and the limit is far beyond actual usage, so sorry for the dead end.
Code:
root@store:/ # netstat -m
1034/9316/10350 mbufs in use (current/cache/total)
1026/8592/9618/2039272 mbuf clusters in use (current/cache/total/max)
1023/6632 mbuf+clusters out of packet secondary zone in use (current/cache)
0/37/37/1019636 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/302114 9k jumbo clusters in use (current/cache/total/max)
0/0/0/169939 16k jumbo clusters in use (current/cache/total/max)
2310K/19661K/21971K bytes allocated to network (current/cache/total)
58/3643/6155 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
19/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
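For anyone retracing this thread later: the kernel knobs touched on above can be made persistent in /etc/sysctl.conf. This is a sketch only; the names are the FreeBSD 10 ones as I understand them (verify with `sysctl -d <name>`; FreeBSD 10 also accepts kern.ipc.soacceptqueue as the newer name for the listen-queue cap, if I recall correctly), and the values are illustrative, not recommendations for this box.

```shell
# Candidate /etc/sysctl.conf entries (FreeBSD 10 names assumed;
# verify with `sysctl -d`). Values are illustrative only.
kern.ipc.somaxconn=256        # cap on listen() backlogs; relates to the
                              # "sonewconn ... Listen queue overflow" message
kern.ipc.nmbclusters=2039272  # cap on mbuf clusters; already ~2M on this
                              # box, so per the Edit above, not the bottleneck
```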