Well, docs have seen this coming from miles away. I don't think anyone with substantial experience in clinical medicine is surprised by these developments, unfortunately. But it doesn't stop there. Insurance companies will be (are) building models to overcome legal barriers. Imagine: you're 20 and healthy, but you live somewhere that suggests a higher risk of developing some chronic disease in the future? Then no insurance covering that particular condition, for you specifically. A real-world application of the 'fuck you in particular' meme. This of course extends to all sorts of sensitive matters, such as your ethnicity, sexual preferences, etc.
Now this is a really scary application of AI, but you won't hear those calling for AI regulation, such as Musk, complain about it, right?
That's one reason (among many) the preexisting condition part of ACA is so important.
Without it, health insurance companies would have every incentive to do what car insurance companies do -- buy profiles and records from third parties and use those to adjust rates and willingness to insure.
E.g. the obvious step of buying genetic information from 23andme, because it isn't covered by HIPAA.
I'd feel a lot better with customer-centric privacy protections around the collector and storer, a la HIPAA.
Instead of regulating only some of the uses.
HHS already had to administratively extend it to cover gaps (we'll see how that goes, post-Chevron), and Congress attempted to repeal it for workplace purposes in 2017.
And there's still the gray market question: 23andme -> an Equifax-like packaging it into a blended proprietary risk score -> insurance companies using that (of course 'without knowing that genetic information was included').