A long time ago (so forgive me if I can't provide the reference), I read a paper about exploring a parameter space with a focus on "novelty".
The basic idea was that two different machine learning algorithms were competing against each other. One was searching for configurations in the parameter space (I forget what kind of learning algorithm that first one was), and the other was an unsupervised learning algorithm (a Kohonen self-organizing map (SOM), IIRC) that tried to model/predict the output the model produced from the parameters chosen by the first one.
So there was a parameter space being explored by some search/learning algorithm. These parameters, when fed into a model (I forget what kind, I think it was some sort of procedural graphics formula), produce an output image. This image (not the parameters) is then fed into the SOM (which is doing online learning, so it's training and classifying at the same time), which slowly adapts to the sorts of images it sees, creating a kind of internal model.
The parameter configurations found by the first algorithm were then scored by how well the SOM could fit/represent the output image of those parameters (IIRC, the SOM algorithm lets you calculate a score for how well it's doing).
So if the first algorithm found something novel, something unexpected, the SOM would have more trouble fitting that output with its internal model.
From what I remember, it worked relatively well. The output images definitely had more "variety" in them than the ones you'd get from purely randomly chosen parameter values.
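To make the loop a bit more concrete, here's a toy sketch from memory, not the paper's actual method: a home-made mini SOM in numpy, a made-up render_image standing in for whatever procedural model they used, and plain random sampling standing in for their search algorithm. The SOM's quantization error (distance from an image to its best-matching unit) plays the role of the novelty score.

    import numpy as np

    class TinySOM:
        # Minimal online self-organizing map over flattened images.
        def __init__(self, grid=(8, 8), dim=256, lr=0.1, sigma=2.0, seed=0):
            self.grid, self.lr, self.sigma = grid, lr, sigma
            self.weights = np.random.default_rng(seed).random((*grid, dim))
            self.coords = np.stack(np.meshgrid(np.arange(grid[0]),
                                               np.arange(grid[1]),
                                               indexing="ij"), axis=-1)

        def update(self, x):
            # Best-matching unit: the node whose weight vector is closest to x.
            dists = np.linalg.norm(self.weights - x, axis=-1)
            bmu = np.array(np.unravel_index(np.argmin(dists), self.grid))
            # Pull the BMU and its neighbours a little toward x (online learning).
            d2 = ((self.coords - bmu) ** 2).sum(axis=-1)
            h = np.exp(-d2 / (2 * self.sigma ** 2))[..., None]
            self.weights += self.lr * h * (x - self.weights)
            return dists.min()   # quantization error, used as the novelty score

    def render_image(params, size=16):
        # Stand-in for the procedural model: three parameters -> a small image.
        x = np.linspace(0.0, 1.0, size)
        xx, yy = np.meshgrid(x, x)
        a, b, c = params
        img = np.sin(a * 10 * xx) * np.cos(b * 10 * yy) + c * xx * yy
        return img.ravel() / (np.abs(img).max() + 1e-9)

    rng = np.random.default_rng(1)
    som = TinySOM(dim=16 * 16)
    scored = []
    for step in range(2000):
        params = rng.uniform(-1.0, 1.0, size=3)     # stand-in for a real search algorithm
        novelty = som.update(render_image(params))  # high error = the SOM hasn't seen this yet
        if step > 500:                              # skip the warm-up phase
            scored.append((novelty, params))

    scored.sort(key=lambda s: s[0], reverse=True)
    print("most 'novel' parameter sets:", [p.round(2) for _, p in scored[:5]])

In the real setup the search algorithm would use that novelty score as its fitness, so it keeps being pushed toward regions the SOM hasn't learned to represent yet.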
I'm not at all sure how to apply these ideas to planet/ecosystem-generation, though.
Apologies for the super-vague description, but it must have been at least 10 years since I read that paper :)
--
BTW, another, completely different approach to this problem might be found in constraint solvers. Check out this site: http://www.gamesbyangelina.org/2013/06/the-saturday-paper-go... (if you're interested in procedural content generation, there are a lot of cool articles there bringing together scientific research and game development).
I'm not entirely sure why, but in most examples I've seen, adding constraints to a random generator tends to produce more perceived variety. Variety is really all about restricting the randomness, anyway. White noise all looks the same, even though it's the most random from an entropy point of view.
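A trivial illustration of what I mean, with rejection sampling as a crude stand-in for a real constraint solver and completely made-up tile constraints:

    import random

    def random_map(width=12, height=6, tiles="~.#"):
        # Unconstrained: every cell is an independent uniform pick.
        return ["".join(random.choice(tiles) for _ in range(width))
                for _ in range(height)]

    def constrained_map(width=12, height=6, tiles="~.#", max_tries=10_000):
        # Same generator plus two toy constraints, enforced by rejection:
        # every row needs at least one walkable '.' and at most half water '~'.
        for _ in range(max_tries):
            rows = random_map(width, height, tiles)
            if all("." in row and row.count("~") <= width // 2 for row in rows):
                return rows
        raise RuntimeError("no map satisfied the constraints")

    print("\n".join(random_map()), "\n")
    print("\n".join(constrained_map()))

The unconstrained maps tend to blur into the same uniform mush, while the constrained ones at least start to read as "places", which is the effect I'm describing.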