Model bias is not the huge issue; at most it comes down to things like how you handle class imbalance or how much you regularize. The huge issue is deployment: what is the model used for? How is it affecting people in reality? What metric is it optimizing?
Next to all of that, the degree of L1 regularization or the choice of class weights are minor details. Most models will perform similarly given the same data; it's mostly the data that makes the difference.
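To make that concrete, here's a rough sketch using scikit-learn on a synthetic imbalanced dataset (the specific models, hyperparameter values, and dataset are just illustrative assumptions, not anyone's real setup): reasonable variations in regularization strength, class weighting, or even model family tend to land in the same ballpark when the data is held fixed.

    # Rough sketch: on the same data, reasonable hyperparameter and model
    # choices tend to produce similar aggregate scores. Synthetic data only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.9, 0.1], random_state=0)

    candidates = {
        "logreg_l1_weak":   LogisticRegression(penalty="l1", C=10.0, solver="liblinear"),
        "logreg_l1_strong": LogisticRegression(penalty="l1", C=0.1, solver="liblinear"),
        "logreg_balanced":  LogisticRegression(class_weight="balanced", max_iter=1000),
        "random_forest":    RandomForestClassifier(n_estimators=200, random_state=0),
    }

    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name:18s} AUC = {scores.mean():.3f}")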
There's an interplay between the two insofar as a model built to handle a specific dataset will involve design decisions informed by the data. E.g. you might pick a certain level of L1 regularization because it maximizes performance on the data you have, which can lead to bias against data you don't have.
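A toy illustration of that failure mode (everything here is made up for the example: the "groups" are two synthetic distributions, and the shift between them stands in for data you didn't collect): the L1 strength is chosen purely by cross-validation on the data you have, and then the same model is scored on a group it never saw.

    # Sketch of the interplay: pick the L1 strength that maximizes CV score on
    # the data you have, then check how that choice fares on data you don't
    # have (a synthetic group whose feature distribution is shifted).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split

    # "Data you have": group A. "Data you don't have": group B, shifted.
    X_a, y_a = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_b, y_b = make_classification(n_samples=1000, n_features=20, shift=1.5,
                                   random_state=1)

    X_train, X_test_a, y_train, y_test_a = train_test_split(X_a, y_a,
                                                            random_state=0)

    # Hyperparameter chosen only by performance on the available data.
    search = GridSearchCV(
        LogisticRegression(penalty="l1", solver="liblinear"),
        param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
        cv=5,
    )
    search.fit(X_train, y_train)

    print("chosen C:", search.best_params_["C"])
    print("accuracy on group A (seen):  ", search.score(X_test_a, y_test_a))
    print("accuracy on group B (unseen):", search.score(X_b, y_b))

The gap between the two scores is the point: nothing in the model is "against" group B, but every tuning decision was validated only against group A.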
But if you take "model" to mean the pure mathematical description without parameters or hyperparameters that need to be determined by experimentation, then I agree that optimizing the model on a dataset will not lead to bias against specific groups of humans unless the data used contains such a bias.