The Wall Street Journal asked that question last week. And its subheadline:
We asked readers: Is it worth giving up some potential privacy if the public benefit could be great?
A good many of the published answers centered on Yes, with oversight by, among others, medical professionals.
This reader (unpublished in the WSJ) says, resoundingly, No. Not now, and not for the foreseeable future, say I. Personal data aggregators, whether government or private enterprise, have shown no ability to protect our personal data, whether from hackers or from organizational carelessness, incompetence, or ignorance. With our medical data especially, very good protection, even six-sigma-level protection, isn’t good enough. This is one of the few areas where perfection must be the standard. Since that’s an unachievable standard, AIs must not be permitted any access to our personal data, including our personal medical data.
There are additional reasons for saying no. One is the inherent bias programmers build into AIs. Alphabet’s overtly bigoted Gemini is an extreme example, but programmers build their biases into AIs generally through the data sets they select and have their AIs train on.
There’s also the equally overt bigotry too many medical training institutions apply through their emphasis on diversity, equity, and inclusion claptrap at the expense of teaching actual medicine. Those institutions are producing the doctors who would be the second generation of “medical” professionals doing the oversight.
In the current state of affairs, and for that foreseeable future, it’s not feasible to let AIs into any aspect of our personal lives. The blithely assumed public benefit is vastly overwhelmed by the threat to our individual privacy—the “public,” after all, is all of us individuals aggregated.