So briefly looking over the code reveals that:
- johnny-cache will cache the rows returned by the execution machinery in django's sql compiler (it monkey-patches the compilers). It has fancy-pants invalidation (basically bulk invalidation through a 2-tiered cache key scheme, unlike cache-machine which relies on set_many) and even support for transactions. I'm using this and it's awesome.
- django-cache-machine will cache the result of the QuerySet.iterator method. It seems to have some limitations: it only (automatically) invalidates on forward relations (FKs), so you have to perform careful invalidation in your own code (e.g. when you use qs.update(), run queries through models without the custom CachingManager, use Model.create() and whatnot ...). Also, cache-machine will be heavy on memcached traffic (one call for every invalidated object, though it does use set_many ...)
- django-cachebot will cache the rows at the same level as cache-machine (at the QuerySet.iterator call). It also has a very nice feature that prefetches objects from reverse relations (like FK reverse descriptors and many-to-many relations - e.g. Group.objects.select_reverse('user_set'), after which group.user_set_cache will equal group.user_set.all()). Unfortunately the author only tested it on django 1.1 and it needs a django patch to work (the manager patch is only for 1.1). I really like that select_reverse feature - unfortunately I can't use this on django 1.2 :(
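To make the johnny-cache point concrete, here's a rough sketch of a 2-tiered ("generational") key scheme like the one it uses. This is plain Python with a dict standing in for memcached, and all the names here are mine, not johnny-cache's actual API:

```python
import hashlib

cache = {}  # stands in for memcached


def _generation_key(table):
    return 'gen:%s' % table


def _get_generation(table):
    # Tier 1: each table has a "generation" counter.
    return cache.setdefault(_generation_key(table), 0)


def _query_key(sql, tables):
    # Tier 2: query keys embed the current generation of every
    # table they touch.
    gens = '.'.join(str(_get_generation(t)) for t in sorted(tables))
    return 'qry:' + hashlib.md5((gens + sql).encode('utf-8')).hexdigest()


def cache_result(sql, tables, rows):
    cache[_query_key(sql, tables)] = rows


def get_result(sql, tables):
    return cache.get(_query_key(sql, tables))


def invalidate_table(table):
    # Bumping the generation orphans every cached query that touched
    # this table in one write -- no per-object delete/set_many needed.
    cache[_generation_key(table)] = _get_generation(table) + 1
```

So a write to a table is one cache operation regardless of how many cached queries it invalidates, which is why it scales better than cache-machine's per-object approach.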
So I'm thinking what I need is johnny-cache for the low-level stuff, plus some cache-machine-style caching for the "select_reverse" feng-shui that I would otherwise have to do myself.
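For reference, the select_reverse-style prefetch I'd have to hand-roll looks roughly like this - a plain-Python sketch with dicts standing in for model instances, one bulk fetch of the children instead of one query per parent (the function and attribute names are mine, not any library's API):

```python
from collections import defaultdict

# Fake rows standing in for Group.objects.all() / User.objects.all().
groups = [{'id': 1, 'name': 'admins'}, {'id': 2, 'name': 'editors'}]
users = [
    {'id': 10, 'group_id': 1, 'name': 'alice'},
    {'id': 11, 'group_id': 1, 'name': 'bob'},
    {'id': 12, 'group_id': 2, 'name': 'carol'},
]


def attach_user_set(groups, users):
    # Bucket the "child" rows by FK in one pass, then attach each
    # bucket to its parent -- avoiding N+1 reverse-relation queries,
    # which is what cachebot's select_reverse automates.
    by_group = defaultdict(list)
    for user in users:
        by_group[user['group_id']].append(user)
    for group in groups:
        group['user_set_cache'] = by_group[group['id']]
    return groups
```

With real models the bulk fetch would be something like `User.objects.filter(group__in=groups)` and the attach step a `setattr` on each group; the cached variant would just stash the bucketed result in memcached keyed per group.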
Well, I'm probably missing something here, and other people probably have better comparisons of these frameworks. Any feedback?
Edit: cachebot's select_reverse is based on django-selectreverse, but enhanced to support nested reverse relations (e.g., from the docs: Article.objects.select_reverse('book_set', 'book_set__publisher_set'))