An RNN can be trained to
a) Simulate the stochastic output behavior of the sequence (i.e., generate new sequences from the learned distribution)
b) Predict the categorical hidden states at any given sequence location from supervised training data.
An HMM is able to carry out both a and b. It also has the capacity to do:
c) Predict the categorical hidden states at any sequence location without reliance on supervised training data.
Effectively, an HMM can identify/reveal hidden categorical states directly from unsupervised learning. To my knowledge, unsupervised labeling of hidden states is not a trivial task for RNNs.
HMMs are a good model if you know what the categorical states are before you start modeling. If you let the HMM learn the hidden states (e.g. with expectation maximization), there is no good reason to believe they will have obvious semantic value to you.
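For concreteness, here is a minimal sketch using the hmmlearn library that shows both (a) simulation and (c) unsupervised state recovery via EM; the toy data, the two-state count, and the Gaussian emission model are all assumptions made purely for the example:

```python
import numpy as np
from hmmlearn import hmm

# Toy observations: 1-D Gaussian emissions, shape (n_samples, n_features).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)]).reshape(-1, 1)

# Fit a 2-state Gaussian HMM with EM (Baum-Welch) -- no state labels needed.
model = hmm.GaussianHMM(n_components=2, n_iter=100, random_state=0)
model.fit(X)

# (a) Simulate: sample new observations from the learned generative model.
sampled_obs, sampled_states = model.sample(10)

# (c) Unsupervised state prediction: Viterbi decoding of the hidden states.
states = model.predict(X)
print(states)  # e.g. [0 0 ... 1 1]
```

Note that the recovered state indices are arbitrary labels; whether they map onto anything you care about is exactly the caveat above.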
For RNNs, you can get very similar information. If you do have meaningful hidden states, you can treat a few labeled examples as training data for a supervised learning procedure, and the RNN winds up predicting the state variable along with the future of the sequence. If you want something exploratory, you can use the RNN's hidden-state activations over the course of a sequence as general real-valued vectors that are amenable to cluster analysis, as sketched below. If the RNN state vectors segment well, these segmentations will likely have at least as much meaning as learned HMM states.
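A minimal sketch of the exploratory route, in PyTorch with scikit-learn clustering; the LSTM size, the random input sequence, and the cluster count are illustrative assumptions, and in real use you would start from a network already trained on your task:

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Hypothetical setup: one sequence of 200 steps with 8 features, and an LSTM
# that would normally be trained elsewhere (e.g. on next-step prediction);
# untrained weights are used here only to keep the sketch self-contained.
seq = torch.randn(1, 200, 8)               # (batch, time, features)
rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

# Collect the hidden-state activation at every time step.
with torch.no_grad():
    hidden_states, _ = rnn(seq)            # (1, 200, 16)
vectors = hidden_states.squeeze(0).numpy() # 200 real-valued 16-d vectors

# Exploratory step: cluster the per-step state vectors; the cluster IDs play
# the role that learned HMM states would, with no labels required.
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
print(cluster_ids[:20])
```

Because the cluster IDs come from a continuous state vector rather than an explicit discrete latent variable, whether they line up with anything semantically meaningful is an empirical question, just as with EM-learned HMM states.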