
Is Artificial Intelligence Trustworthy in Healthcare?

Updated: Jul 20, 2023

by Adrian Pujayana

Unless you’ve been living in a cave for the past year, you’ve heard of AI: a kind of machine intelligence built from the vast collection of data available on the internet. It has the ability to calculate, problem-solve, and generate information by interpreting, condensing, and processing that data, then outputting the result as text, images, or even music. But can you trust AI to give you healthcare advice?


This month we will feature four articles generated by the AI engine ChatGPT in response to prompts I gave it.

Alongside each article are OUR comments on its merit and the doctor’s discernment of the content’s relevance to the people we care for. Here are some perspectives on the benefits and risks of using AI, especially in the healthcare system.


AI can save time!

As the name implies, AI is a kind of synthesized intelligence, with many pros and cons. AI engines can collect vast amounts of information, then process, summarize, and discover patterns within the data, returning a response far faster than a single person or even a group of people could manage.


AI can diminish human error

By harnessing the computing power of hundreds or thousands of CPUs across the internet, AI engines can multi-task information and generate content that is concise, precise, and calculated, thereby reducing human error in computational tasks. For instance, a doctor could input a patient's age, diagnosis, weight, bloodwork values, previous history, and so on, and the AI could calculate a precise dose of a particular prescription medication while also checking for cross-reactions with other medications and estimating the likelihood that the medication will succeed for that patient.
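To make that idea concrete, here is a minimal sketch of how such a system might combine patient data into a dose suggestion and an interaction check. Everything in it is invented for illustration: the drug names, the dosing rule, and the interaction table are hypothetical, not real clinical logic.

```python
# Illustrative only: the dosing rule and interaction table below are
# invented for this example and are not real clinical guidance.

# A toy interaction table: pairs of drugs that should be flagged.
INTERACTIONS = {
    ("drug_a", "drug_b"),  # hypothetical cross-reaction
}

def recommend_dose(weight_kg, mg_per_kg, max_mg):
    """Weight-based dose, capped at a maximum (a common dosing pattern)."""
    return min(weight_kg * mg_per_kg, max_mg)

def check_interactions(new_drug, current_meds):
    """Return any current medications that interact with the new drug."""
    return [med for med in current_meds
            if (new_drug, med) in INTERACTIONS or (med, new_drug) in INTERACTIONS]

# Example patient record (entirely fictional values).
patient = {"weight_kg": 70, "current_meds": ["drug_b"]}

dose = recommend_dose(patient["weight_kg"], mg_per_kg=5, max_mg=400)
conflicts = check_interactions("drug_a", patient["current_meds"])

print(f"Suggested dose: {dose} mg")          # Suggested dose: 350 mg
print(f"Interaction warnings: {conflicts}")  # Interaction warnings: ['drug_b']
```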


AI can facilitate decision-making

Consolidating personal data and interpreting it as a risk/benefit index can make a person's human experience become objectified. For better or worse, AI uses an algorithm, a prescribed methodology shaped by the programmer's sensibilities and priorities: it converts information into data values, most likely quantifiable numbers, translates them into pros and cons, or yes's and no's, and then decides based on the relative strength of those yes's and no's, aiming for the outcome with the highest probability of success or accuracy for whatever you asked it to do. For instance, if you ask an image-generating AI like DALL-E to produce an image of an apple, it will likely come up with a spectacular image of a ripe, perhaps red, voluptuous apple. But not all apples look this way, and you don't get a sense of its flavor, texture, weight, or the ways you can eat it, cook it, or prepare it. The AI simply decides for you what this apple looks like and the environment in which it appears. This can bias the generated result and, in some ways, produce a narrow impression of what the outcome might be, for better or worse.
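As a rough sketch of that "yes/no weighting" idea, a decision algorithm might look something like the toy example below. The factors and weights are made up for illustration; a real system would derive them from its training data and the programmer's priorities.

```python
# Illustrative sketch of weighted yes/no decision-making.
# The factors and weights are invented for this example.

def decide(factors):
    """Each factor is (vote, weight): vote is +1 (yes) or -1 (no).
    The decision follows the relative strength of the weighted votes."""
    score = sum(vote * weight for vote, weight in factors)
    return ("yes" if score > 0 else "no"), score

# Hypothetical risk/benefit inputs, each reduced to a numeric value.
factors = [
    (+1, 0.6),  # benefit: likely symptom relief
    (+1, 0.3),  # benefit: low cost
    (-1, 0.4),  # risk: possible side effect
]

decision, score = decide(factors)
print(decision, round(score, 2))  # yes 0.5
```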


AI can automate repetitive tasks

By identifying repetition and patterns, an AI engine can predict outcomes based on pattern recognition and the likelihood that a pattern will appear under a given set of circumstances. For example, an AI taught to read an X-ray is only as good as its programmer: it looks for the features of cancers, arthritis, and fractures that the programmer's algorithm defines, and uses that algorithm to identify those findings on a patient's X-ray. Similarly, an AI taught to read bloodwork patterns may recognize the likelihood of particular diseases such as diabetes or hypertension based on a patient's bloodwork history over time. These are tasks that would take an individual doctor far more time to perform, but AI can prompt doctors and red-flag these findings so they can address the issues more thoroughly.
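A toy version of that red-flagging idea might scan a patient's bloodwork history for a sustained upward trend. The threshold and readings below are invented for illustration, not clinical cutoffs.

```python
# Illustrative pattern recognition on bloodwork history.
# The threshold and readings are invented for this example.

def flag_rising_glucose(readings, threshold=100, min_consecutive=3):
    """Red-flag if fasting glucose exceeds the threshold on
    min_consecutive consecutive visits (a simple trend pattern)."""
    streak = 0
    for value in readings:
        streak = streak + 1 if value > threshold else 0
        if streak >= min_consecutive:
            return True
    return False

# Fictional fasting glucose values (mg/dL) across five visits.
history = [92, 104, 109, 113, 118]

if flag_rising_glucose(history):
    print("Red flag: sustained elevated fasting glucose; review with doctor.")
```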


Last but not least, AI is impersonal and has its own biases

Just like humans, AI has its own biases based on the data it values over other data. Information carries different value in different contexts. For example, today's weather in Los Angeles may or may not affect a person in New York; that information has to be contextualized and judged important based on why you need it. AI-generated responses can help facilitate a decision process or produce an answer to your question, but that isn't the same as interacting with a human, who has the choice to change their mind, or to value or de-value data based on its order of importance and context. AI is only as good as its data set and the filters its consumers use to discern the value of its answers. In the end, the human has to filter, react, and assign value to the AI-generated content, weighing its worth and application to life.


Artificial Intelligence is very promising for processing information, articulating a thought process, and opening up alternatives by offering a perspective you may not have had before. But it is a tool, as a paintbrush is to an artist and a scalpel is to a surgeon. So use AI-driven content with discretion, and remember that a healthcare advocate who gets to know you and your circumstances is a true advocate. An AI bot doesn't care what you do with the information, or how you end up; it just gives you the information as a matter of fact, not a matter of hope, which is what your human companions and advocates are for.
