During the last year, I have not been doing that much messaging, but for the previous 7 years I did a substantial amount of it - both point-to-point and lots of pub/sub. I absolutely don't think that Atom / AtomPub by itself will replace messaging. I don't know enough about XMPP to know if it will replace traditional messaging some day. I am a big fan of AMQP, but don't know if it is viable yet or will be viable soon. Tim is dead on that PSB is an impressive corpus of work.
My point was only that Atom & AtomPub appear to be a very good format / protocol for doing business events with a polling model. Having done a lot of pub/sub, I have done a lot of clustering. The buffer that Tim describes is not so simple when you get lots of clients and lots of consumers. That is where flow control starts to appear (if you really want to guarantee message delivery) and where the dark corners of failover and clustering rear their heads. It is nothing against messaging per se - it is just how it is. My point really was that even "guaranteed" messaging is often not truly guaranteed, and that for the right usage scenario the polling model is simpler and more appropriate.
Furthermore - push models can be appealing to developers because they feel cool (they have felt very cool to me) - even simple. My point is just that this typically isn't really the case.
It really just comes down to your requirements - for example, message rate, how quickly your event sinks need to process events, etc. For the scenarios I am thinking about right now, high-level business events don't typically need to be delivered more often than every 10 minutes - perhaps every minute. I'd much rather deploy some simple HA web infrastructure & Atom/AtomPub than AMQP, TIBCO, SonicMQ, ActiveMQ, etc. to meet those requirements.
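To make that polling model concrete, here is a minimal sketch of the kind of client I have in mind - just scheduled conditional GETs against an Atom feed over plain HTTP. The feed URL, poll interval, and the lack of paging or persistence are all assumptions for illustration, not anything Tim or I have specified.

```python
# Minimal sketch of a polling Atom consumer (hypothetical feed URL and interval).
import time
import urllib.request
import urllib.error
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/business-events/feed.atom"  # hypothetical
POLL_INTERVAL_SECONDS = 600  # "every 10 minutes"
ATOM_NS = "{http://www.w3.org/2005/Atom}"

def fetch_feed(etag=None):
    """Conditional GET: an unchanged feed costs almost nothing to poll."""
    request = urllib.request.Request(FEED_URL)
    if etag:
        request.add_header("If-None-Match", etag)
    try:
        with urllib.request.urlopen(request) as response:
            return response.read(), response.headers.get("ETag")
    except urllib.error.HTTPError as error:
        if error.code == 304:           # not modified since the last poll
            return None, etag
        raise

def run():
    etag = None
    while True:
        body, etag = fetch_feed(etag)
        if body is not None:
            root = ET.fromstring(body)
            for entry in root.findall(ATOM_NS + "entry"):
                entry_id = entry.findtext(ATOM_NS + "id")
                print("event:", entry_id)   # hand off to the event sink here
        time.sleep(POLL_INTERVAL_SECONDS)
```

The point is that everything here is ordinary web plumbing - caching proxies, load balancers, and plain HTTP status codes do the work that a broker cluster would otherwise have to do.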
In my experience with pub/sub style integration architectures, when you are pushing information amongst major domains (e.g., party/customer information), there is always the nagging question of whether you missed an event - or how to deal with drift. This is because even though it is supposedly guaranteed messaging, there is always a risk. I've seen people try to capture events and persist them so that they can be replayed in the event of this type of problem. Or, more commonly, check the master system every month to ensure that the systems are synchronized. An Atom feed seems like a good way to avoid this failure altogether - in that rather than pushing a copy of the event to each interested sink, the sink just reads the feed. Sure, the client has to be a little more intelligent, but it just seems more deterministic to me. But I have more to learn.
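By "a little more intelligent" I mean something like the sketch below: the sink remembers the last entry it processed, and after downtime it walks back through the feed until it finds that entry, then replays everything newer. I am assuming an archived feed with rel="prev-archive" links in the style of RFC 5005, entries newest-first within each document, and hypothetical URLs and helper names - a sketch of the idea, not a recipe.

```python
# Sketch of deterministic catch-up by walking an archived Atom feed backwards.
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def parse_feed(url):
    with urllib.request.urlopen(url) as response:
        return ET.fromstring(response.read())

def prev_archive_link(feed):
    """Return the rel='prev-archive' href, if the publisher provides one."""
    for link in feed.findall(ATOM_NS + "link"):
        if link.get("rel") == "prev-archive":
            return link.get("href")
    return None

def catch_up(feed_url, last_seen_id):
    """Collect every entry newer than last_seen_id, oldest first."""
    missed = []
    url = feed_url
    while url:
        feed = parse_feed(url)
        for entry in feed.findall(ATOM_NS + "entry"):
            if entry.findtext(ATOM_NS + "id") == last_seen_id:
                return list(reversed(missed))   # found our place; done
            missed.append(entry)
        url = prev_archive_link(feed)           # keep walking back in time
    return list(reversed(missed))               # nothing seen before: replay all

# for entry in catch_up("https://example.com/business-events/feed.atom", saved_id):
#     process(entry)   # hypothetical event-sink hook
```

Nothing gets lost because the feed is the system of record; the consumer decides how far back to read, rather than hoping the broker redelivered everything.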