Does generating synthetic data with this make sense for RL training of models to be really good at debugging code? Currently, LLMs have inhaled essentially all the code in the world, but that data is only the text of the code (maybe plus the changes that fixed bugs, etc.). The amount of insight that could be generated by actually running the code and capturing the runtime values, step by step, is almost unbounded.
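To make concrete what kind of data I mean: here's a minimal Python sketch (the names `trace_execution` and `buggy_average` are just made up for illustration) that uses `sys.settrace` to record a per-line snapshot of local variables while a buggy function runs. Each trace is the sort of step-by-step execution record you could generate at scale:

```python
import sys
import json

def trace_execution(func, *args):
    """Run func under a line-level tracer, recording (line number, locals)
    at every executed line -- one way to turn a single piece of code into
    a step-by-step runtime record."""
    steps = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # Snapshot only simple, JSON-serializable locals to keep the record clean.
            snapshot = {k: v for k, v in frame.f_locals.items()
                        if isinstance(v, (int, float, str, bool, list, dict, type(None)))}
            steps.append({"line": frame.f_lineno, "locals": snapshot})
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, steps

# A deliberately buggy function: integer division truncates the mean.
def buggy_average(xs):
    total = 0
    for x in xs:
        total += x
    return total // len(xs)  # bug: should be / for a true mean

result, steps = trace_execution(buggy_average, [1, 2, 2])
print("result:", result)            # 1, not 1.666...
print(json.dumps(steps, indent=2))  # per-line snapshots of total, x, xs
```

Pair each trace with the known fix (e.g., `//` vs `/` above) and you get (execution trace, bug, patch) triples essentially for free, from code that already exists.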
Is this sort of data useful for training LLMs?