Tuesday, March 29, 2005


Learning Classifier Systems and Model Building

I'd like to point out two recent IlliGAL reports written by Martin Butz et al., namely IlliGAL Reports No. 2005010 and 2005011, which discuss the use of the Bayesian networks of BOA and the marginal product models of ECGA in Stewart W. Wilson's well-known XCS learning classifier system. This work shows that just as machine learning helped genetic algorithms to automatically identify and process important building blocks, it can help learning classifier systems to automatically identify and process more complex features (making them more broadly applicable as a result). These are just first steps, but I think that they are important ones. I hope to see more papers related to this topic in the future.

This topic also reminds me of an analogy I thought about when writing my dissertation a few years back. I think that feature extraction in machine learning and effective processing of building blocks (especially their mixing) are very closely related. While in machine learning we want to identify features that allow us to learn good models of the data, in optimization we want to identify features that make good solutions good. That's one of the reasons why incorporating machine learning techniques into genetic algorithms has led to so many powerful methods (such as COMIT, ECGA, FDA, BOA, hBOA, IDEAs, etc.).
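To make the ECGA side of this idea concrete, here is a toy sketch (my own simplification for illustration, not the code from the reports or the thesis) of greedy marginal-product-model building over a binary population. Variable groups start as singletons and are merged whenever the merge lowers an MDL-style score: model complexity plus compressed population complexity, both in bits. Groups that end up merged are exactly the "features" the model has identified as going together:

```python
import math
from collections import Counter
from itertools import combinations

def mpm_complexity(pop, groups):
    """MDL score of a marginal product model over a binary population:
    model complexity plus compressed population complexity (in bits)."""
    n = len(pop)
    # Model complexity: log2(n+1) bits per frequency, 2^|g| - 1 frequencies per group.
    model = math.log2(n + 1) * sum(2 ** len(g) - 1 for g in groups)
    # Compressed population complexity: n times the entropy of each group's
    # joint marginal distribution, summed over groups.
    compressed = 0.0
    for g in groups:
        counts = Counter(tuple(ind[i] for i in g) for ind in pop)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        compressed += n * entropy
    return model + compressed

def greedy_ecga_model(pop):
    """Greedily merge variable groups as long as the MDL score improves."""
    groups = [(i,) for i in range(len(pop[0]))]
    best = mpm_complexity(pop, groups)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(range(len(groups)), 2):
            candidate = [g for k, g in enumerate(groups) if k not in (a, b)]
            candidate.append(groups[a] + groups[b])
            score = mpm_complexity(pop, candidate)
            if score < best:
                best, groups, improved = score, candidate, True
                break
    return groups
```

On a population where two bits are perfectly correlated and a third is independent, the greedy search merges the correlated pair and leaves the independent bit alone, which is the kind of structure discovery that the real ECGA (and its use inside XCS) relies on.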

The two technical reports are based on Martin Butz's thesis, which talks about these topics in greater detail. Of course, as I am one of the coauthors of the two papers, my views may be slightly biased :-)
