Lossy and lossless compression are more interchangeable than people give them credit for, so I wouldn't dwell on that distinction too much: arithmetic coding lets you convert one into the other near-optimally. In fact, the best-in-class algorithms that have won the Hutter Prize are all lossy behind the scenes. A model (often AI-based) predicts the next symbol, which is a lossy process, and arithmetic coding then losslessly encodes the symbol that actually occurred, spending roughly -log2(p) bits where p is the probability the model assigned to it, so the better the prediction, the fewer the bits. The Hutter Prize targets lossless compression precisely because converting lossy predictions into a lossless code via arithmetic coding is a way to score how correct those predictions are.
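A minimal sketch of that lossy-to-lossless conversion, using exact rational arithmetic to sidestep the renormalization tricks real coders use. The names (`adaptive_model`, `encode`, `decode`) and the toy two-symbol predictor are illustrative, not any prize entry's actual code:

```python
from fractions import Fraction
import math

def adaptive_model(history):
    # Laplace-smoothed counts over a two-symbol alphabet: this is the
    # "lossy" predictor. Any model works if encoder and decoder share it.
    counts = {"a": 1, "b": 1}
    for s in history:
        counts[s] += 1
    total = sum(counts.values())
    return {s: Fraction(c, total) for s, c in counts.items()}

def encode(symbols, model):
    # Shrink [low, high) to the subinterval the model assigns to each
    # observed symbol; correct predictions leave the interval wide.
    low, high = Fraction(0), Fraction(1)
    for i, s in enumerate(symbols):
        probs = model(symbols[:i])
        width, cum = high - low, Fraction(0)
        for sym in sorted(probs):
            if sym == s:
                low, high = low + cum * width, low + (cum + probs[sym]) * width
                break
            cum += probs[sym]
    # Emit the shortest binary fraction guaranteed to land in [low, high):
    # about -log2(high - low) bits, i.e. fewer bits when the model was right.
    width = high - low
    n = width.denominator.bit_length() - width.numerator.bit_length() + 2
    t = math.floor((low + high) / 2 * 2 ** n)
    return format(t, f"0{n}b")

def decode(bits, n_symbols, model):
    # Replay the same model and pick whichever subinterval the value
    # falls into; this recovers the message exactly (lossless).
    v = Fraction(int(bits, 2), 2 ** len(bits))
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(n_symbols):
        probs = model(out)
        width, cum = high - low, Fraction(0)
        for sym in sorted(probs):
            if v < low + (cum + probs[sym]) * width:
                low, high = low + cum * width, low + (cum + probs[sym]) * width
                out.append(sym)
                break
            cum += probs[sym]
    return out

msg = list("aababaabaa")
bits = encode(msg, adaptive_model)
assert decode(bits, len(msg), adaptive_model) == msg
```

Note how the scoring works: an all-"a" message, which the adaptive model predicts well, encodes to far fewer bits than an alternating "abab..." one the model keeps getting wrong, even though both decode back exactly.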