I didn't like that SABnzbd doesn't parse the categories out of the bookmarks RSS feed (I totally understand why: it's better to use the APIs than to hard-code that behaviour).
I've written a Python script that makes use of the SABnzbd and NZBMatrix API functions.
The application logic is as follows:
1. Grab the NZBMatrix RSS feed and parse out each item's NZB link, category and title.
2. Based on the title, look in a "downloaded nzb" folder and check whether the file has already been downloaded.
3. If not, download the file to the nzb folder and use the SABnzbd API to queue it up, passing the on-disk location of the NZB file as well as the category.
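The steps above can be sketched roughly like this (the RSS element names and the ".nzb" naming convention are assumptions here, not taken from the actual script, so adjust them to match your feed):

```python
# Sketch of the feed -> folder-check -> download flow described above.
import os
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed(xml_text):
    """Return (title, category, link) tuples from the bookmarks RSS."""
    items = []
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        title = item.findtext("title", "")
        category = item.findtext("category", "")
        link = item.findtext("link", "")
        items.append((title, category, link))
    return items

def already_downloaded(nzb_dir, title):
    """Check the 'downloaded nzb' folder for a file matching the title."""
    return os.path.exists(os.path.join(nzb_dir, title + ".nzb"))

def fetch_and_queue(feed_url, nzb_dir, queue_func):
    """Download any new NZBs and hand each to queue_func(path, category)."""
    xml_text = urllib.request.urlopen(feed_url).read()
    for title, category, link in parse_feed(xml_text):
        if already_downloaded(nzb_dir, title):
            continue
        path = os.path.join(nzb_dir, title + ".nzb")
        urllib.request.urlretrieve(link, path)  # needs the NZBMatrix apikey in the link
        queue_func(path, category)
```

`queue_func` is deliberately left as a parameter so the SABnzbd call can be swapped in separately.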
I ran into a number of issues. Originally I wanted the script to parse the RSS feed, find the HTTP links to the NZB files, and hand those URLs straight to SABnzbd. But downloading an NZB from NZBMatrix requires the NZBMatrix API key, and adding items to the SABnzbd queue requires the SABnzbd API key, and both parameters are named "apikey". SABnzbd can't tell which key is which, so adding items to the queue via HTTP links doesn't work; instead the script downloads the file itself and queues it by its location on disk.
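Queueing by on-disk path sidesteps the collision: only SABnzbd's own key appears in the request. A minimal sketch, assuming SABnzbd's `mode=addlocalfile` API call (which takes the file path in `name` and the category in `cat`; check your SABnzbd version's API docs for the exact parameters):

```python
# Build the SABnzbd "add local file" API URL. Only SABnzbd's apikey is
# present, so it can't be confused with the NZBMatrix one.
from urllib.parse import urlencode

def build_add_url(sab_base, sab_apikey, nzb_path, category):
    params = urlencode({
        "mode": "addlocalfile",
        "name": nzb_path,      # location of the .nzb on disk
        "cat": category,       # SABnzbd category to file it under
        "apikey": sab_apikey,  # SABnzbd's key only
    })
    return "%s/api?%s" % (sab_base.rstrip("/"), params)
```

Fetching the returned URL (e.g. with `urllib.request.urlopen`) then adds the file to the queue.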
This is a really quick and dirty script. If there's enough interest I'll make it log into the NZBMatrix site and delete the downloaded NZB files from the RSS feed, which will keep your NZBMatrix account a lot tidier.
A few parameters need to be set at the bottom of the Python file; they're pretty straightforward.
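The names below are hypothetical placeholders, not the script's actual variables, but they show the kind of thing you'd fill in at the bottom of the file:

```python
# Hypothetical parameter names -- match them to whatever the script
# actually defines. Replace every placeholder value with your own.
FEED_URL   = "YOUR_NZBMATRIX_BOOKMARKS_RSS_URL"  # includes your NZBMatrix apikey
SAB_URL    = "http://localhost:8080/sabnzbd"     # your SABnzbd base URL
SAB_APIKEY = "YOUR_SABNZBD_APIKEY"
NZB_DIR    = "/home/user/nzbs"                   # the "downloaded nzb" folder
```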
If you want the script to run every hour, use crontab -e and add the entry:
0 * * * * /usr/bin/sabmatrix.py