Other comments point out that Bayesian inference is good for modelling an uncertain outcome, while deep learning is good for prediction.
However, Bayesian inference is also a good choice for prediction when you have few data points (deep learning is sample-size hungry). And it is especially good when you have high uncertainty in your labelled training data (i.e., large variance in the response variable for a given input). A Bayesian regression (or even classification) model wouldn't magically remove the uncertainty, but it would let you account for the predictive variance, instead of being none the wiser with good ole deep learning alone. You can then decide how to treat the predictions, given their predictive variance as well.
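To make the predictive-variance point concrete, here is a minimal sketch of a conjugate Bayesian linear regression. The toy data, prior precision `alpha`, and noise precision `beta` are all illustrative assumptions; the closed-form posterior is the standard conjugate result for a Gaussian prior with known noise:

```python
import numpy as np

# Tiny, noisy training set: the small-data regime discussed above
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # bias column + one feature
y = np.array([0.1, 1.2, 1.9])

alpha, beta = 1.0, 4.0  # prior precision, noise precision (assumed known here)

# Conjugate Gaussian posterior over the weights: N(m_N, S_N)
S_N = np.linalg.inv(alpha * np.eye(2) + beta * X.T @ X)
m_N = beta * S_N @ X.T @ y

def predict(x_star):
    """Predictive mean and variance at input x_star."""
    mean = x_star @ m_N
    # noise variance + parameter uncertainty projected onto x_star
    var = 1.0 / beta + x_star @ S_N @ x_star
    return mean, var

m_in, v_in = predict(np.array([1.0, 1.0]))    # near the training inputs
m_out, v_out = predict(np.array([1.0, 10.0])) # far from the training inputs
print(v_in, v_out)
```

The predictive variance grows as the query point moves away from the training data, which is exactly the information a point-prediction model would not give you.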
The choice is not between Bayesian methods and deep learning, but between statistical models and machine learning models (say, from random forests to GBM to xgboost and then maybe deep learning). There is overlap between statistical models and machine learning models—sometimes it is a matter of focus—and Bayesian methods can also be applied to what are typically considered ML approaches (see, for example, Bayesian hierarchical random forests).
But are machine learning models not statistical models? They are trained on sample data, which is statistics, and the objective function is also statistical, e.g. mean squared error, negative log-likelihood, or the ELBO. And if you're using stochastic gradient descent or some variant of it, that has statistical properties too.
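One concrete link between the "ML" and "statistics" framings: minimizing mean squared error is the same as maximizing a Gaussian likelihood (i.e., minimizing the negative log-likelihood with fixed noise variance). A quick numerical check on made-up data, using a grid search purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)

# Closed-form least-squares slope (minimizes MSE)
w_mse = (x @ y) / (x @ x)

# Gaussian negative log-likelihood with fixed noise variance is, up to
# constants, 0.5 * sum of squared residuals -- same objective, so the
# grid minimizer should land on (essentially) the same slope
ws = np.linspace(0.0, 4.0, 4001)
nll = [0.5 * np.sum((y - w * x) ** 2) for w in ws]
w_mle = ws[int(np.argmin(nll))]

print(w_mse, w_mle)  # agree up to the grid resolution
```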
I don't see any clear distinction between machine learning and statistics. Machine learning is a type of statistical modelling that relies on iterative optimization.
Bayesian inference, on the other hand, is a specific type of statistical modelling where the aim is to model full distributions, not just the output variable (which is what ML has traditionally focused on).
And yes there is overlap, you can take a Bayesian approach to machine learning and that can make total sense sometimes.
"Bayesian inference on the other hand is a specific type of statistical model where the aim is to model distributions, not just the output variable (which is what ML is traditionally focused on)." - What distribution are you referring to? One of the advantages of the Bayesian approach (in the context of models and not of probability, it is not a model, it is a way of estimating the values of the parameters of a model) is that it provides a proper statistical distribution—and not a distribution based on theoretical formulas that require certain assumption to be true to have certain properties—of parameters and model predictions.
You can read more at https://www.fharrell.com/post/stat-ml/ (Frank Harrell is a top statistician who was once a frequentist and is now a Bayesian. He also writes on the differences between ML and SM and how to choose between the two.)