
Header sucking

This allows you to configure DNews to suck certain groups in 'header only' mode. In this mode DNews sucks only the headers of items and fetches the body of a message when the item is read; a cache is used, so if another user reads the same body it is not requested again.

NOTE: You can only header suck from 'one' upstream host, not multiple ones.

Because new bodies are delivered to the user as they arrive, a user at the end of a modem may not notice any performance drop.

To implement this for *binaries* you would do this:

  1. Add to dnews.conf: header_groups *binaries*,*pictures* (see the combined example after this list)
  2. Expire existing items in these groups: tellnews expire_groups *binaries* (this step will take as long as a normal expire, maybe an hour or so)
  3. Adjust your ME feed to allow binaries groups if it doesn't already
  4. Fetch or undelete the new groups if you didn't previously have them, e.g. tellnews undelete "*" or tellnews getgroups
  5. Make sure you have a posting feed to your upstream site; this is currently required when using header sucking (check your newsfeeds.conf file and add the word 'posting')
  6. Examine the related settings listed below to see if you need to adjust anything else
  7. Use the command "tellnews status_hcache" to see how it's working
  8. Restart DNews when you change header* settings in dnews.conf!!!
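
Steps 1, 2, 4 and 5 combined might look like this (a sketch only: news.upstream.example stands in for your real upstream host, and the exact layout of your newsfeeds.conf entry may differ):

  # dnews.conf
  header_groups *binaries*,*pictures*

  # newsfeeds.conf -- the word 'posting' makes this a posting feed
  site news.upstream.example
      groups *
      posting

  # from a shell, then restart DNews
  tellnews expire_groups *binaries*
  tellnews getgroups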

Now try reading one of the specified groups.
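
You can watch the body fetching happen by speaking NNTP to your server directly. A hypothetical session (alt.binaries.pictures.example stands in for one of your header-sucked groups, and 12345 for a real article number):

  telnet your.dnews.server 119
  GROUP alt.binaries.pictures.example
  ARTICLE 12345     (first read: DNews fetches the body from upstream)
  ARTICLE 12345     (any later read: served from the body cache)

The first ARTICLE may pause while the body is fetched (up to header_timeout); later reads of the same item should return immediately.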

Note: header sucking does not currently work on dmulti systems; it is assumed that these features will be of most use to smaller sites, where dmulti is not used.

Note: You cannot use the 'pull true' option while header sucking.

WARNING: Although this can give enormous savings in network traffic, it comes at a cost: your server is really a 'smart cache' for the upstream site. You are at the mercy of the upstream site to be reliable, online and consistent; if it is not, bodies may be unavailable when your users try to read them, which can be most frustrating. Also, the item numbering must match perfectly: if you change upstream servers, or they renumber their spool, your existing indexes become invalid.

An important point to remember is that when using header sucking, DNews is in replicate mode for those news groups, which can have side effects you might not expect. Also, when turning header caching on or off, you must use the expire_groups command to delete the contents of the header sucking groups.
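
For example, a plausible sequence for turning header sucking off again for a set of groups (adjust the pattern to match your own):

  1. Remove *binaries* from header_groups in dnews.conf
  2. Restart DNews (header* settings require a restart)
  3. tellnews expire_groups *binaries*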

Related new settings:

header_groups (example: *binaries*,*warez*)
    Groups for which DNews should suck headers only.

header_chan_n (example: 3)
    Number of simultaneous channels to use when fetching bodies; if set too small, long delays will occur when a user reads a group.

header_body_mb (example: 100)
    How much space DNews can use to cache the bodies of header-only items. (NOT DYNAMIC, restart DNews when you change it.)

header_path (example: d:/dnews/header)
    Directory to store cached bodies in; defaults to (workarea).

header_host (example: 2)
    Site to fetch bodies from; nntp_feeder is used by default, use 2 for nntp_suck3, etc. (yes, we stuffed the numbering: it counts from zero instead of one).

header_prefetch (example: 20)
    How many headers/items to fetch 'on the fly' when an uncached news group is first read; this can be used with a normal sucking feed instead of the 'downloading' message. When a group command is received, DNews will rush off and fetch this many headers/items before responding to the user, so the user is never faced with an empty news group.

header_timeout (example: 30)
    How long to try to talk to the upstream site before admitting to the user that item 'xxx' cannot be fetched.
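
Put together, a dnews.conf excerpt using these settings might look like this (the values are illustrative only; header_host 0 should be equivalent to the default, nntp_feeder, since the numbering counts from zero):

  header_groups *binaries*,*pictures*
  header_chan_n 3
  header_body_mb 100
  header_path d:/dnews/header
  header_host 0
  header_prefetch 20
  header_timeout 30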


Use the command tellnews status_hcache to determine how well the cache is working.

Example Output and explanation:

Header requests cached/remote 23/10 5000k/2340k, Size 10/4000 3MB/20MB

    23/10    23 items read from the cache, 10 items fetched from upstream

    5000k/2340k  5000k read from the cache, 2340k read from upstream

    10/4000  10 of the 4000 cache entries are currently in use

    3MB/20MB 3MB of cache files are stored, of the 20MB permitted.
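
From these figures you can work out the hit rate: 23 of 33 body reads (about 70%) were served from the cache, and 5000k of the 7340k transferred (about 68%) never touched the upstream link. If the hit rate stays low, a larger header_body_mb may help.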