NNTP Indexing Toolkit

I have been working on an NZB indexing toolkit written in C#. It runs on Windows under the standard .NET runtime and on Linux under Mono.
The idea is simple: it produces NZB files from Usenet headers, and the NZBs can then be used on web portals or other hosting services.
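For anyone unfamiliar with the format, an NZB is just a small XML document listing the message-IDs that make up each file. A minimal example of producing one with System.Xml.Linq; the values here are made up and this is only an illustration, not the toolkit's actual writer:

using System.Xml.Linq;

class NzbWriter
{
    static readonly XNamespace Ns = "http://www.newzbin.com/DTD/2003/nzb";

    public static void Write(string path)
    {
        var doc = new XDocument(
            new XElement(Ns + "nzb",
                new XElement(Ns + "file",
                    new XAttribute("poster", "poster@example.com"),
                    new XAttribute("date", 1234567890),          // Unix timestamp from the header
                    new XAttribute("subject", "\"example.part01.rar\" yEnc (1/3)"),
                    new XElement(Ns + "groups",
                        new XElement(Ns + "group", "alt.binaries.example")),
                    new XElement(Ns + "segments",
                        // bytes/number come from the yEnc subject; the element body is the Message-ID
                        new XElement(Ns + "segment",
                            new XAttribute("bytes", 512000),
                            new XAttribute("number", 1),
                            "part1of3.abc123@news.example.com")))));
        doc.Save(path);
    }
}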
I have started a SourceForge project for it:
http://sourceforge.net/p/nntpit/wiki/Home/
Re: NNTP Indexing Toolkit
Web Portal Created
http://thecube.bluebit.com.au
The above is a very simple web portal interface that makes the indexed releases available. The portal indexes only a few groups as an example and does not go back very far; I may add more groups and backfill it in time, but for now it is just a demonstration of the project work to date.
I wrote this toolkit to learn about the problems involved in indexing Usenet posts. I was not entirely happy with the tools currently available and wanted an open toolkit to experiment with that could download NNTP headers, parse them, and group the posts into single-file and multi-file releases.
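To give a feel for the grouping problem: most binary posts use a yEnc-style subject line, and the release/file/segment structure has to be recovered from it. A rough sketch of the kind of parsing involved; the pattern here is illustrative, real subjects are much messier:

using System;
using System.Text.RegularExpressions;

class SubjectParser
{
    // Matches the common yEnc-style subject:
    //   Release.Name [01/15] - "Release.Name.part01.rar" yEnc (1/50)
    static readonly Regex YencSubject = new Regex(
        @"^(?<release>.*?)\s*(?:\[(?<fileNum>\d+)/(?<fileTotal>\d+)\])?\s*-?\s*" +
        @"""(?<fileName>[^""]+)""\s+yEnc\s+\((?<segNum>\d+)/(?<segTotal>\d+)\)",
        RegexOptions.Compiled);

    public static void Main()
    {
        var m = YencSubject.Match(
            "Demo.Release [01/15] - \"Demo.Release.part01.rar\" yEnc (1/50)");
        if (m.Success)
        {
            // Segments group under the file name; files group under the release name.
            Console.WriteLine("release: " + m.Groups["release"].Value);
            Console.WriteLine("file:    " + m.Groups["fileName"].Value);
            Console.WriteLine("segment: " + m.Groups["segNum"].Value
                + " of " + m.Groups["segTotal"].Value);
        }
    }
}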
Re: NNTP Indexing Toolkit
I am not sure whether people are interested in alternative NNTP indexing tools, as it looks like everyone has jumped on Newznab. That system is fine; I just wanted alternatives for people who want to try different solutions with different approaches.
I have added a bunch of stuff to the Web Portal:
- RAR password checking (see the header-flag sketch after this list)
- NFO download
- NFO link extraction; currently looks for TVRage and IMDb links
- Statistics for the indexer and data download
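For the password check, only the RAR headers are needed: RAR 4.x marks encrypted archives with a flag in the archive or file header, so a few hundred bytes are enough to tell. A simplified sketch based on the published RAR 4.x block layout; this is not the toolkit's exact code and it ignores RAR5:

using System;

class RarPasswordCheck
{
    // RAR 4.x block layout: CRC(2) TYPE(1) FLAGS(2) SIZE(2) [ADD_SIZE(4) if FLAGS & 0x8000]
    // Returns true/false if the answer is found, null if the data is not RAR 4.x
    // or runs out before a file header appears.
    public static bool? IsPasswordProtected(byte[] data)
    {
        byte[] sig = { 0x52, 0x61, 0x72, 0x21, 0x1A, 0x07, 0x00 }; // "Rar!\x1a\x07\x00"
        if (data.Length < sig.Length) return null;
        for (int i = 0; i < sig.Length; i++)
            if (data[i] != sig[i]) return null;                    // not a RAR 4.x file

        int pos = sig.Length;
        while (pos + 7 <= data.Length)
        {
            byte type = data[pos + 2];
            ushort flags = BitConverter.ToUInt16(data, pos + 3);
            ushort size = BitConverter.ToUInt16(data, pos + 5);

            if (type == 0x73 && (flags & 0x0080) != 0) return true; // encrypted archive headers
            if (type == 0x74) return (flags & 0x0004) != 0;         // file header: password flag

            long next = pos + size;
            if ((flags & 0x8000) != 0 && pos + 11 <= data.Length)   // block carries extra data
                next += BitConverter.ToUInt32(data, pos + 7);
            if (next <= pos || next > data.Length) break;
            pos = (int)next;
        }
        return null;
    }
}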
One of the main things I wanted to know when I started this project was how much data it takes to build and maintain the indexes. To help answer this, I added byte counters to the NntpClientLib I am using and report the totals in the log files (a sketch of the idea follows the links below). The data is then inserted into the DB, and status graphs are built from it. These graphs can be viewed on the Status pages:
http://thecube.bluebit.com.au/Status
and
http://thecube.bluebit.com.au/Status/DataUsage
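The counters themselves are nothing fancy; the idea is just to wrap the connection's Stream and tally traffic in both directions, roughly like this (illustrative, not the exact NntpClientLib patch):

using System;
using System.IO;
using System.Threading;

class CountingStream : Stream
{
    private readonly Stream _inner;
    private long _bytesRead;
    private long _bytesWritten;

    public CountingStream(Stream inner) { _inner = inner; }

    public long BytesRead { get { return Interlocked.Read(ref _bytesRead); } }
    public long BytesWritten { get { return Interlocked.Read(ref _bytesWritten); } }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int n = _inner.Read(buffer, offset, count);
        Interlocked.Add(ref _bytesRead, n);        // counts compressed bytes off the wire
        return n;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _inner.Write(buffer, offset, count);
        Interlocked.Add(ref _bytesWritten, count);
    }

    public override void Flush() { _inner.Flush(); }
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanWrite { get { return _inner.CanWrite; } }
    public override bool CanSeek { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}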
The indexer uses XZVER compression for header download, which gives about a 90% reduction in header data, but RAR segments account for over 50% of the total, so I might look at grabbing just the first 1K of the RAR instead of the full first segment, which is usually around 700-900K. NFO traffic is very low, under 5%, so that is not a concern; in fact, for the amount of metadata it adds to the portal it is a very good investment.
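Since NNTP has no partial-fetch command, grabbing just the first 1K would mean issuing BODY, decoding yEnc lines as they arrive, and dropping the connection once enough bytes are in hand (the connection is unusable afterwards, which is the trade-off). Something along these lines; host, group and message-id are placeholders, and authentication is omitted:

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class PartialSegmentFetch
{
    public static byte[] FetchPrefix(string host, string group, string messageId, int wanted)
    {
        using (var client = new TcpClient(host, 119))
        using (var stream = client.GetStream())
        // Latin-1 round-trips all 256 byte values, unlike ASCII
        using (var reader = new StreamReader(stream, Encoding.GetEncoding(28591)))
        using (var writer = new StreamWriter(stream, Encoding.GetEncoding(28591))
                            { AutoFlush = true, NewLine = "\r\n" })
        {
            reader.ReadLine();                              // server greeting
            writer.WriteLine("GROUP " + group);
            reader.ReadLine();
            writer.WriteLine("BODY <" + messageId + ">");
            string status = reader.ReadLine();
            if (status == null || !status.StartsWith("222")) return null;

            var decoded = new MemoryStream();
            string line;
            while (decoded.Length < wanted && (line = reader.ReadLine()) != null && line != ".")
            {
                if (line.StartsWith("=y")) continue;        // =ybegin / =ypart / =yend
                if (line.StartsWith("..")) line = line.Substring(1); // NNTP dot-unstuffing
                for (int i = 0; i < line.Length; i++)       // yEnc: subtract 42, '=' escapes
                {
                    int b = line[i];
                    if (b == '=' && i + 1 < line.Length) b = line[++i] - 64;
                    decoded.WriteByte((byte)((b - 42) & 0xFF));
                }
            }
            return decoded.ToArray();                        // connection dropped on dispose
        }
    }
}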
I also added the group list and release count per group to the main page; I am mainly indexing media groups at the moment.
Re: NNTP Indexing Toolkit
faush01 wrote: I am not sure whether people are interested in alternative NNTP indexing tools, as it looks like everyone has jumped on Newznab. That system is fine; I just wanted alternatives for people who want to try different solutions with different approaches.

Can't speak for everyone, but I like alternatives. Places like binsearch and nzbindex have helped many times where Newznab sites have not. So hey, alternatives are good; keep it up.
Re: NNTP Indexing Toolkit
I had some time over the long weekend to add an API to the Web Portal. It uses the same format and parameters as the Newznab API, so it can be used in tools like Sick Beard by entering just the URL and no key, since the site does not require logins or API keys.
In Sick Beard just use the URL:
http://thecube.bluebit.com.au/
with no GUID key when setting up a Newznab-like search indexer.
For more info on the API, see:
http://thecube.bluebit.com.au/Home/ApiDocs
For an example search (the last 100 shows added to the indexer):
http://thecube.bluebit.com.au/api?t=tvsearch
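Since the responses follow the Newznab RSS layout, consuming the API from code takes only a few lines. For example, in C# (illustrative; WebClient keeps it compatible with older .NET and Mono):

using System;
using System.Linq;
using System.Net;
using System.Xml.Linq;

class ApiExample
{
    public static void Main()
    {
        string url = "http://thecube.bluebit.com.au/api?t=tvsearch";
        using (var web = new WebClient())
        {
            XDocument feed = XDocument.Parse(web.DownloadString(url));
            foreach (XElement item in feed.Descendants("item").Take(10))
            {
                // release title plus the link to fetch it
                Console.WriteLine("{0}\n  {1}",
                    (string)item.Element("title"),
                    (string)item.Element("link"));
            }
        }
    }
}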
Re: NNTP Indexing Toolkit
So I just found this and am trying to get it to work, but I keep getting errors: Unhandled exception: System.NullReferenceException: Object reference not set to an instance of an object.
I am also not sure how to get a webpage to display this info; still looking into it.