Tuesday, June 28, 2005

 

Live from GECCO-VIII: Reflections on a compact classifier system

I am sitting in some of the EDA sessions and I can't help thinking about a discussion I had with Tian-Li Yu when I was preparing the papers on the compact classifier system. The discussion was about the main differences between DSMGA and eCGA or BOA. The reason for my wondering: DSMGA, unlike eCGA and BOA, provides a crisp, clear presentation of the identified building blocks at the end of a run. Once the population has converged, eCGA and BOA models just indicate that all the variables are independent. From a substructure identification point of view, the final DSM gives a clear picture of what the problem looks like. That property keeps catching my attention every time I sit in an EDA session.
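A minimal sketch of the contrast, on a hypothetical toy problem of two concatenated 3-bit deceptive traps: once the population is fully converged, the pairwise sample statistics that eCGA- or BOA-style model building works from carry no linkage information, while a perturbation-style non-linearity check against the fitness function still exposes the two blocks. The helper names below are my own illustration, not eCGA, BOA, or DSMGA code.

```python
import itertools
import math

def trap_fitness(x, k=3):
    """Sum of k-bit trap subfunctions: all-ones scores k, otherwise k-1-u."""
    total = 0
    for i in range(0, len(x), k):
        u = sum(x[i:i + k])
        total += k if u == k else k - 1 - u
    return total

def mutual_information(pop, i, j):
    """Pairwise mutual information estimated from a finite population sample,
    the kind of statistic sample-based model builders rely on."""
    n = len(pop)
    pi, pj, pij = {}, {}, {}
    for ind in pop:
        pi[ind[i]] = pi.get(ind[i], 0) + 1
        pj[ind[j]] = pj.get(ind[j], 0) + 1
        pij[(ind[i], ind[j])] = pij.get((ind[i], ind[j]), 0) + 1
    return sum((c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def interacts(f, probes, i, j):
    """Perturbation-style non-linearity check against the fitness function:
    if flipping bits i and j together differs from the sum of the individual
    effects at some probe point, mark the pair as linked."""
    def flip(bits, *idx):
        y = list(bits)
        for k in idx:
            y[k] = 1 - y[k]
        return y
    for x in probes:
        base = f(x)
        di, dj = f(flip(x, i)) - base, f(flip(x, j)) - base
        dij = f(flip(x, i, j)) - base
        if abs(dij - (di + dj)) > 1e-9:
            return True
    return False

# A fully converged population: every individual is the same string.
converged = [[1, 1, 1, 0, 0, 0] for _ in range(100)]
probes = [converged[0], [1 - b for b in converged[0]]]

# Sample statistics are flat once diversity is gone: every pair scores 0.0 ...
print([round(mutual_information(converged, i, j), 3)
       for i, j in itertools.combinations(range(6), 2)])

# ... while the fitness-based check still recovers the two 3-bit blocks.
print([(i, j) for i, j in itertools.combinations(range(6), 2)
       if interacts(trap_fitness, probes, i, j)])
# -> [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
```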

Mental note to self: I need to talk to Tian-Li again. Genetics-based machine learning systems that can provide a problem decomposition at the end of the run are one of the challenges I would like to see more solutions to.

Comments:
Hi, Xavier.
I have similar feelings about discovering the structure of a problem. However, I do not think this is a matter of DSMGA on one side and BOA, eCGA, and others on the other side. The point is (correct me if I am wrong) that DSMGA analyzes the fitness function (which does not change during the run) to reveal the dependencies, while the others use finite samples (which are different every generation). Best regards.
 