Wired magazine recently published an interesting article on the Netflix Prize:
The article is a fun read. It provides some perspective on the importance of tuning algorithms and the potential for combining many algorithms for one prediction task. It also makes it clear that the prize-seeking community is very open to sharing results and techniques. Cool.
I would have been interested in reading more about why the researchers think going from an 8% RMSE improvement to a 10% improvement will be so hard. Is it because they've (finally) bumped up against individuals' abilities to accurately represent their own movie preferences on the 1-5 scale? I ask because I had thought we were already there before this contest! How much room is there for algorithms to get better at predicting our individual rating idiosyncrasies and inconsistencies?
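For readers unfamiliar with the contest's yardstick: the percentages refer to reductions in root-mean-squared error relative to Netflix's Cinematch baseline (whose published quiz-set RMSE was roughly 0.9514, so the 10% target works out to about 0.8563). Here's a minimal sketch of the metric on made-up toy ratings (the specific numbers below are illustrative, not from the contest data):

```python
import math

def rmse(predicted, actual):
    """Root-mean-squared error between two equal-length lists of ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def improvement(baseline_rmse, new_rmse):
    """Percentage improvement over a baseline RMSE, the contest's yardstick."""
    return 100 * (baseline_rmse - new_rmse) / baseline_rmse

# Toy 1-5 star ratings (hypothetical): even half-a-star of noise in how
# people rate puts a floor under the RMSE any algorithm can reach.
actual    = [4, 3, 5, 2, 4, 1]
predicted = [3.8, 3.4, 4.6, 2.5, 4.1, 1.7]
print(round(rmse(predicted, actual), 4))

# A 10% improvement over a ~0.9514 baseline means reaching roughly:
print(round(0.9514 * 0.9, 4))  # ~0.8563
```

If people's own ratings wobble by half a star from day to day, that inconsistency alone contributes on the order of 0.5 to the achievable RMSE, which is one way to make the "are we near the floor?" question concrete.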