Introduction to bayes-irc

General idea

The general idea for bayes-irc came up when several people on IRC started to annoy me with advertisements to join their channels so they could get some network-service bot. Paul Graham's article A Plan for Spam gave me a basic understanding of how e-mail spam filters work, and my first thought while reading it was that the same mechanism could be used to filter out annoying text on IRC.

Differences between E-mail and chat filtering

The main difference between e-mail and chat filtering lies in the amount of information you are able to score. When scoring an e-mail for spam probability you don't only score the message body (the actual e-mail text); you also use the subject line, the sender's address and all the other metadata from the message header (e.g. the chain of SMTP servers used to deliver the e-mail). When scoring a chat line, the available metadata is much smaller. In the case of IRC you just have the text line itself, the full address of the user who sent it (Nickname!ident@host.com), the server you are connected to and, in some cases, the channel the text was sent to. And that's it.

With the Bayesian approach, the filter splits each line into tokens; in most cases punctuation marks such as commas, periods and semicolons are removed. Each token is then scored, and the scores of the 20 most spam-like and the 20 most ham-like tokens are used to calculate the overall score of the whole line. The problem is that in most cases a single chat line does not yield 40 tokens. For example, imagine the user ping!myident@internet.com sends 'lol ^^' to #somechannel on irc.somenetwork.org. If you score just the text 'lol ^^' you get two tokens, resulting in a very high or very low score, depending on what the classifier was trained with before. You can increase the number of tokens (and thus perhaps end up with a more useful classifier) if you also learn and score the user's fullhost (ping!myident@internet.com), the channel (#somechannel) and the IRC server (irc.somenetwork.org). bayes-irc helps with this by breaking the fullhost and the IRC server up into separate tokens, using '!' and '@' as delimiters. So when you classify chat lines, you should use all the information you can obtain.
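To make this concrete, here is a minimal tokenization sketch in Python. bayes-irc itself is a C++ library; the function below is purely illustrative and not part of its API.

```python
import re

# Illustrative sketch, not the actual bayes-irc implementation: tokenize an
# IRC line together with its metadata so that even a short message like
# 'lol ^^' yields more than two tokens to score.
def tokenize(line, fullhost, channel, server):
    # Split the text on whitespace and strip commas, periods and semicolons.
    tokens = re.findall(r"[^\s,.;]+", line)
    # Break the fullhost (Nickname!ident@host.com) into separate tokens,
    # using '!' and '@' as delimiters, as bayes-irc does.
    tokens += re.split(r"[!@]", fullhost)
    # Add the channel and the IRC server as tokens of their own.
    tokens.append(channel)
    tokens.append(server)
    return tokens

print(tokenize("lol ^^", "ping!myident@internet.com",
               "#somechannel", "irc.somenetwork.org"))
# -> ['lol', '^^', 'ping', 'myident', 'internet.com',
#     '#somechannel', 'irc.somenetwork.org']
```

Instead of two tokens, the classifier now has seven to work with.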

Overview

The main point in Paul Graham's article is that each user defines what he considers spam, and the Bayesian approach learns what YOU think spam is. This is why it doesn't make sense to provide a pre-trained classifier for everyone to use. You can see this as both an advantage and a disadvantage, but the bottom line is that with the Bayesian approach you have to train a classifier on your own. This section tries to give you the basic knowledge to do so.

(Short) Introduction to pattern classification

Generally speaking, pattern classification takes a given pattern and returns a value that assigns it to one of a set of previously defined classes. For the spam filtering discussion, the pattern classification problem can be reduced to two classes: Ham and Spam.

In most cases, a training program first has to bootstrap a basic classifier which can then serve as the basis for further processing. The main issue is that a classifier returns better results when the underlying trained basis, the classifier's knowledge, is large. Since no one wants to sit on IRC for weeks to build up a good basic classifier, in bayes-irc this phase is done offline from the command line.

The bootstrapped classifier can then be used to classify a given pattern. The key point here is that the examples in the training phase should be representative of the examples in the classification phase in order to achieve good results. So if you are collecting Ham and Spam samples for the training phase, you should keep the later classification phase in mind and also remember the differences to e-mail filtering mentioned above.

Training program

The training program bayesirc-training can be used to bootstrap a classifier from scratch from an existing positive and negative training set. As pointed out before, you should include all additional information you can obtain when training your classifier, and you have to keep this in mind when generating your training data. To get a basic idea of what to include in the training phase, there is a sample training set in the data directory of each bayes-irc distribution.

Positive set

The positive training set consists of training samples that you consider harmless, or in other words Ham. To keep things small and simple, the training program uses plain ASCII text files as input, with one positive example per line. Have a look at the sample in the data directory (good.txt).

Negative set

The same applies to the negative training set: it should consist of training samples you consider harmful, or in other words Spam. Note additionally that you should try to keep both training sets roughly equal in size; for example, if your negative set contains 2000 lines of spam, your positive set should hold at least 1800 and at most 2200 samples (these values are just examples, not recommendations). Also take a look at the sample in the data directory (bad.txt).
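To show how the two sets are actually used, here is how a per-token spam probability can be derived from the counts in both sets, following the formula from Paul Graham's A Plan for Spam. This is a sketch; the exact formula bayes-irc implements may differ.

```python
# Per-token spam probability in the style of Graham's "A Plan for Spam";
# not necessarily the exact formula bayes-irc uses.
def token_probability(spam_count, ham_count, nspam, nham):
    if spam_count + ham_count == 0:
        return 0.4  # Graham's near-neutral default for unseen tokens
    # Ham occurrences are weighted double to bias against false positives.
    g = min(1.0, 2.0 * ham_count / nham)
    b = min(1.0, spam_count / nspam)
    # Clamp so no single token can force the score to exactly 0 or 1.
    return max(0.01, min(0.99, b / (b + g)))

# A token seen 10 times in 2000 spam lines and never in 2000 ham lines
# gets the maximum score of 0.99:
print(token_probability(10, 0, 2000, 2000))
```

This is also why the two sets should be of comparable size: nspam and nham normalize the raw counts, and a badly skewed corpus biases every token's score.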

Statistic program

The bayesirc-stats program can be used to score a single portion of text offline, without an IRC client. It helps you determine how well your trained classifier works for you, and whether you should adjust your training set before using the resulting classifier in production.
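The overall score such a tool reports is typically computed by combining the per-token probabilities of the most "interesting" tokens. The sketch below uses Graham's naive-Bayes combination over the n token probabilities furthest from the neutral 0.5, a simplification of the 20-most-spam-like / 20-most-ham-like selection described earlier; it is not bayes-irc's actual code.

```python
# Illustrative sketch of combining per-token probabilities into one score,
# in the style of Graham's "A Plan for Spam"; not bayes-irc's actual code.
def combined_score(token_probs, n=40):
    # Pick the n token probabilities that deviate most from neutral 0.5.
    interesting = sorted(token_probs, key=lambda p: abs(p - 0.5),
                         reverse=True)[:n]
    # Naive-Bayes combination of the selected probabilities.
    prod = 1.0
    inv_prod = 1.0
    for p in interesting:
        prod *= p
        inv_prod *= (1.0 - p)
    return prod / (prod + inv_prod)

# Two strongly spammy tokens outweigh one strongly hammy one:
print(round(combined_score([0.99, 0.99, 0.01]), 2))  # -> 0.99
```

The example also illustrates the problem described above for very short lines: with only one or two extreme tokens, the combined score is pushed close to 0 or 1.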

IRC-Client plugins

The basic idea behind the IRC client plugins is that once you have trained a working classifier, you will want to actually use the Bayesian filtering on IRC. You could also copy and paste each line into the statistics program, read the score and return to your chat client to kick the person who sent the message if its score was too high, but that isn't practical at all.

mIRC

The IRC client DLLs currently consist of a small set of functions which can be combined with the scripting abilities of the client to filter unwanted messages. The mIRC DLL currently provides nine functions, two of which are used while loading and unloading the DLL from mIRC; see the mIRC help file (/help /dll) on loading and unloading addon DLLs. All other functions can be called from a mIRC script using the $dll (or $dllcall) identifiers with the given procedure names.

The functions load and save load or save a classifier file; the only required parameter is the desired filename (make sure to include the correct absolute or relative path). The classify function classifies a given text string and returns the classification based on the currently loaded classifier. The functions learnAsHam and learnAsSpam can be used to improve the current classifier with new examples using online training. The functions reclassifyAsHam and reclassifyAsSpam adjust the classification data basis when a text was learned into the wrong category. This can happen when you build a script that automatically learns text scoring below a certain threshold (e.g. 0.4) and then encounter text portions below this threshold that you consider spam.
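To clarify what reclassification means, here is a hypothetical sketch of the bookkeeping behind reclassifyAsSpam: undoing an earlier learnAsHam and counting the same tokens as Spam instead. Class and method names are illustrative and do not reflect the DLL's real internals.

```python
# Hypothetical sketch of the bookkeeping behind reclassifyAsSpam;
# names and structure are illustrative, not the real bayes-irc internals.
class Classifier:
    def __init__(self):
        self.ham = {}   # token -> count in the Ham category
        self.spam = {}  # token -> count in the Spam category

    def learn_as_ham(self, tokens):
        for t in tokens:
            self.ham[t] = self.ham.get(t, 0) + 1

    def reclassify_as_spam(self, tokens):
        for t in tokens:
            # Undo the earlier (wrong) Ham learning ...
            if self.ham.get(t, 0) > 0:
                self.ham[t] -= 1
            # ... and count the token as Spam instead.
            self.spam[t] = self.spam.get(t, 0) + 1

c = Classifier()
c.learn_as_ham(["free", "money"])      # learned into the wrong category
c.reclassify_as_spam(["free", "money"])
print(c.ham["free"], c.spam["free"])   # -> 0 1
```

Plain learnAsSpam would only add to the Spam counts; reclassifyAsSpam additionally removes the stale Ham counts, so the earlier mistake no longer skews future scores.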

X-Chat

For the X-Chat plugin the functions can be called directly on X-Chat's command line. The current implementation offers no function to classify a given text manually. Instead it provides the same basic functions as the mIRC plugin: loadClassifier, saveClassifier, learnAsHam, learnAsSpam, reclassifyAsHam and reclassifyAsSpam. Besides these fundamental functions there is another command, setThreshold, that lets the user set a threshold. Whenever text arrives on a channel or in a private chat, the plugin automatically classifies and learns the text, including the channel, network, server and the user who sent it. If the classification of the text is above the currently set threshold, the text won't be shown. Note that the threshold must be between 0 and 1, so if you want to filter out any text with a spam probability above 66% you would set the threshold to 0.66.

more to come...

I have begun work on a plugin for irssi, but it isn't usable at this time. Hopefully there will be more plugins in the future if bayes-irc becomes more popular. Another good way to build modular plugins for all kinds of chat clients would be to provide an easy-to-learn scripting language interface. A popular language for this purpose would be Python, which can be integrated into the library framework using facilities from the Boost C++ Libraries.

Future work on bayes-irc

There are some tasks left on bayes-irc which are planned to be handled in future releases.

  • First of all, the list of IRC plugins is rather short. Development has so far concentrated on the library itself and some rudimentary plugins and tools. Plugins for more IRC clients are planned for future releases.
  • The mIRC and X-Chat plugins are just examples of what can be done with the library. The X-Chat plugin could be improved to support one threshold for hiding a line and another threshold above which the sender is kicked from the channel (if you have sufficient privileges). For the mIRC plugin, a sample script should be set up to test the functionality of the library on that client, too.
  • To improve performance, the library could be rewritten using enhanced C++ template metafunctions. This step may become necessary if the library turns out to be too slow for the real-time conditions of the chat domain.
  • As a challenge for future versions, the integration of the library into an IRC daemon could be tested. The main idea behind this task is to provide a new channel and/or user mode so that a user cannot send a message if it reaches a given spam probability. There are open issues here, such as who trains the classifier and whether there should be any self-learning facilities, but it is not considered impossible.

Possibly some of these tasks will never be realized, but they give ideas for future work in this research domain.