The actual harms being done today are still more pressing than the hypothetical harms of the future, and should be prioritized in terms of resources spent.
If it's a valid dichotomy (I don't think it is), then the answer is to stop research on LLMs and task the researchers with fighting human slavery instead.
I do not think those researchers are fungible. We could, however, allocate a few hundred million less to AI research and more to fighting human exploitation. We could pass stronger worker protections and have the big corporations pay for them - leaving them less money to spend on investments (in AI). Heck, we could tax AI investment or usage directly and spend the proceeds on worker rights or other efforts against human abuse.
It isn't the primary motivation of capitalists, unfortunately, but improving automation could be part of the fight against human slavery and exploitation.