There are a number of approaches; since it's still relatively new, there's a lot of "playing around" with techniques.
1) The first idea is to just see which topological features persist as the filtration/threshold parameter varies. I personally find the barcode illustration, rather than the birth-death diagram, to be more intuitive in this respect. For instance, say you had LIDAR data about a vehicle. Looking at the persistent homology of this point cloud could allow you to ascertain the size and number of windows (smaller windows would have shorter persistence than larger windows, and the number of persistent bars would correspond to the number of windows or "openings" on the vehicle). This might allow you to figure out if the vehicle is a van or an SUV.
2) For applications like those in neuroscience (I like, for instance, the talk on YouTube titled "Kathryn Hess (6/27/17) Bedlewo: Topology meets neuroscience"), the focus is instead on how these ranks behave over time. A rough sketch is this: look at the topology of the pattern of activation in neurons as a mouse (or an AI) learns something. As the learning process happens, what happens to the Betti numbers?
3) Sometimes one might want explicit generators. A thought-provoking small case of this can be found in "Mind the Gap: A Study in Global Development through Persistent Homology" (https://arxiv.org/abs/1702.08593). This paper looks at statistics like GDP and infant mortality of countries around the world and finds explicit oddities in the data. It's a proof of concept, to me; I'm interested to see where it'll go.
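To make the barcode idea in point 1 concrete: in dimension 0 (connected components), persistence under the Vietoris-Rips filtration reduces to single-linkage clustering, which you can compute with a union-find over sorted pairwise distances. This is a minimal pure-Python sketch of that special case (the function name is mine; real pipelines would use a TDA library like Ripser or GUDHI, which also handle higher dimensions):

```python
from itertools import combinations

def h0_barcode(points):
    """0-dimensional persistence barcode of a point cloud under the
    Vietoris-Rips filtration: every point is born at scale 0, and a bar
    dies each time two components merge as the threshold grows.
    Equivalent to single-linkage clustering via union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All pairwise distances, sorted: the order in which edges enter the filtration.
    edges = sorted(
        (sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5, i, j)
        for i, j in combinations(range(n), 2)
    )

    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj        # two components merge at scale d
            bars.append((0.0, d))  # one component's bar dies here
    bars.append((0.0, float("inf")))  # the last surviving component persists forever
    return bars
```

On two well-separated clusters, e.g. `[(0, 0), (0.1, 0), (5, 0), (5.1, 0)]`, this gives two short bars dying at 0.1 (the within-cluster merges), one long bar dying at 4.9 (the two clusters joining), and one infinite bar: exactly the "short bars = small features, long bars = big features" reading from point 1.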
There is a lot more, of course, but that gives you three interesting yet different directions.
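The Betti numbers mentioned in point 2 can also be computed directly for a small simplicial complex, using the standard identity betti_k = (#k-simplices) − rank(∂_k) − rank(∂_{k+1}) with boundary matrices over GF(2). A minimal sketch (function names are mine; a real pipeline would hand this to a TDA library):

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank of a binary matrix over GF(2); each row is a bitmask int."""
    pivots = []
    for row in rows:
        for p in pivots:
            row = min(row, row ^ p)  # clear the leading bit if it matches a pivot
        if row:
            pivots.append(row)
    return len(pivots)

def betti_numbers(simplices, top_dim):
    """Betti numbers of a simplicial complex (given with ALL of its faces),
    via betti_k = (# k-simplices) - rank(boundary_k) - rank(boundary_{k+1})."""
    by_dim = {}
    for s in simplices:
        by_dim.setdefault(len(s) - 1, []).append(tuple(sorted(s)))
    index = {k: {s: i for i, s in enumerate(v)} for k, v in by_dim.items()}
    ranks = {}
    for k in range(1, top_dim + 1):
        rows = []
        for s in by_dim.get(k, []):
            mask = 0
            for face in combinations(s, k):  # the (k-1)-faces of s
                mask |= 1 << index[k - 1][face]
            rows.append(mask)
        ranks[k] = rank_gf2(rows)
    return [
        len(by_dim.get(k, [])) - ranks.get(k, 0) - ranks.get(k + 1, 0)
        for k in range(top_dim + 1)
    ]
```

A hollow triangle (three vertices and three edges) gives Betti numbers [1, 1] -- one component, one loop -- and filling it in with the 2-simplex kills the loop, giving [1, 0, 0]. Tracking how numbers like these change over time is the kind of thing the neuroscience work in point 2 is after.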