
I don't actually think you want to mirror the natural data distribution, but rather to provide a distribution which has a sufficiently high variance in the right directions so that the resulting NN polytope has a chance of being approximately 'correct'.

Because you have this piecewise-linear sort of warping of the feature space going on, the NN is basically a whole bunch of lever arms. The broader the support that you can give those lever arms, the less they will be influenced by noise and randomness ... hence my obsession with putting enough variance into the dataset along relevant dimensions.

To put this another way, I think that the synthetic data manifold has to be 'fat' in all the right places.
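A rough sketch of that "broad support" point, using numpy and scikit-learn (the target function, noise level, and ranges are made up purely for illustration). The same small ReLU network is fit to noisy samples drawn from a narrow slice of the input range and from a wide slice, and both fits are then scored over the full range:

    # Illustrative only: hypothetical target, noise level, and ranges.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def target(x):
        return np.sin(3 * x)  # stand-in ground truth

    def test_mse(train_lo, train_hi, n=500):
        x = rng.uniform(train_lo, train_hi, size=(n, 1))
        y = target(x).ravel() + 0.1 * rng.standard_normal(n)  # noisy labels
        net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                           max_iter=5000, random_state=0).fit(x, y)
        x_eval = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)   # full operating range
        return float(np.mean((net.predict(x_eval) - target(x_eval).ravel()) ** 2))

    print("narrow training support:", test_mse(-0.5, 0.5))  # lever arms poorly braced
    print("broad training support: ", test_mse(-2.0, 2.0))  # support is 'fat' where it matters

The piecewise-linear fit trained on the narrow slice is at the mercy of the noise once you leave that slice; the one trained with broad support along the relevant dimension is not.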




You have a good point, and I probably should have been clearer. When I said the same distribution and the same parameters, the parameters I was thinking of were the mean and variance. Though to be fair, mean and variance aren't formal parameters of every distribution.

Can you give an example of successful synthetic data generation which doesn't need to map to the same distribution? I'm surprised at that idea.


Well, in a sensing-for-autonomous-vehicles type of problem, it's actually more important to have simple, easy-to-specify data distributions than ones which map to reality, which in any case may be so poorly or incompletely understood that it's impossible to write a requirement for it.

So, as a simple example, the illumination in a real dataset might be strongly bimodal, with comparatively few samples at dawn and dusk, but in a synthetic dataset we might want to sample light levels uniformly across a range specified in the requirements document.

Similarly, on the road, the majority of other vehicles are seen either head-on or tail-on, but we might want to sample uniformly over different target orientations to ensure that our performance is uniform, easily understood, and does not contain any gaps in coverage.
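To make those two examples concrete, here is a sketch of requirement-driven uniform sampling (the ranges and variable names below are made up, not taken from any real requirements document):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Real illumination is roughly bimodal (night vs. day, few dawn/dusk
    # samples); the synthetic set instead samples uniformly in log-lux
    # over whatever range the requirements document specifies.
    spec_lux_lo, spec_lux_hi = 0.1, 100_000.0            # hypothetical spec range
    lux = 10 ** rng.uniform(np.log10(spec_lux_lo), np.log10(spec_lux_hi), size=n)

    # Real target orientations cluster around head-on and tail-on; the
    # synthetic set samples heading uniformly so coverage has no gaps.
    heading_deg = rng.uniform(0.0, 360.0, size=n)

    scene_params = np.stack([lux, heading_deg], axis=1)  # handed to the scene generator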

Similarly, operational experience might highlight certain scenarios as being particularly high risk. We might want to over-sample in those areas as part of a safety strategy in which we use logging to identify near-miss or elevated-risk scenarios and then bolster our dataset in those areas.
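A sketch of that over-sampling step (the scenario tags and weights are invented for illustration, not from a real fleet):

    import numpy as np

    rng = np.random.default_rng(0)

    scenarios = ["clear_highway", "urban_intersection", "occluded_pedestrian",
                 "low_sun_glare", "merging_truck"]           # hypothetical tags
    # Near-miss / elevated-risk analysis from logging bumps up the weights
    # of the risky scenarios before scenes are drawn for rendering.
    risk_weight = np.array([1.0, 2.0, 5.0, 4.0, 3.0])
    p = risk_weight / risk_weight.sum()

    sampled = rng.choice(scenarios, size=100_000, p=p)
    print(dict(zip(*np.unique(sampled, return_counts=True))))  # risky cases over-represented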

In general, the synthetic dataset should cover the real distribution, but you may want it to be broader than the real distribution and to focus more on edge cases which may not occur all that often but which either simplify your requirements specification or provide extra safety assurance.

Also, given that it's impossible to make synthetic data that's exactly photo-realistic, you want enough variation in enough different directions to ensure that the model can generalize across the synthetic-to-real gap.

Also, I'm not sure how much sense the concepts of mean and variance make in these very very high dimensional spaces.


In the physical sciences there are plenty of domains where accurate measurements are sparse. In a case close to home for me, it's measurements of water depth off coasts (accurate to centimeters on a grid of meter-scale cells). The places where you have these measurements in the real world can be counted on one hand. But now you want to train an ML algorithm to guess water depth in environments all over the world, so in this case you need your data to be representative of a bunch of possible cases that lie outside the real data. This differs slightly from the GP, who I think is talking about creating data that isn't even represented in the real world at all but would help an algorithm predict real-world data anyway. But they are fairly related topics.
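For a flavour of what that can look like, here is a minimal sketch of generating synthetic depth grids that cover seafloor shapes outside the handful of well-surveyed sites (every number below is invented for illustration; this is not my actual pipeline). Each grid is a random shelf slope plus a few Gaussian shoals or channels, with centimeter-scale noise matching the stated measurement accuracy:

    import numpy as np

    rng = np.random.default_rng(0)

    def synthetic_depth_grid(n_cells=128, cell_m=10.0):
        """Return an (n_cells, n_cells) grid of water depths in meters (illustrative)."""
        x, y = np.meshgrid(np.arange(n_cells), np.arange(n_cells), indexing="ij")
        # Random shelf: depth increases offshore at a random slope and direction.
        slope = rng.uniform(0.001, 0.05)                  # meters of depth per meter
        theta = rng.uniform(0, 2 * np.pi)
        depth = slope * cell_m * (x * np.cos(theta) + y * np.sin(theta))
        # A few Gaussian features standing in for shoals and channels.
        for _ in range(rng.integers(1, 5)):
            cx, cy = rng.uniform(0, n_cells, size=2)
            amp = rng.uniform(-5.0, 5.0)                  # meters
            width = rng.uniform(5.0, 30.0)                # cells
            depth += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * width ** 2))
        depth += rng.normal(0, 0.02, size=depth.shape)    # centimeter-scale noise
        return np.clip(depth, 0.0, None)

    training_batch = np.stack([synthetic_depth_grid() for _ in range(16)])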



