HEALTH TRAIN EXPRESS Mission: To promulgate health education across the internet. Follow or subscribe to Health Train Express, as well as Digital Health Space, for updates on health policy, reform, and public health issues. Health Train Express is published several times a week; subscribe to receive an email alert each time it is published. Health Train Express has been published since 2006.
Listen Up
Wednesday, November 20, 2024
Statutory health insurance in Germany: a health system shaped by 135 years of solidarity, self-governance, and competition - The Lancet
Tuesday, November 19, 2024
Remote Care Today Turnkey Solutions
In-home Virtual Care has become a major focus for Medicare, and with good reason: the results for patients, physicians, hospitals, and home health care agencies have been remarkable.
REMOTE CARE & YOU, THE PATIENT
Outside of their offices, physicians don't know what is happening with their patients. That's why Remote Care Management is becoming the "go-to" program for seniors across the country. Remote care allows your physician to monitor your health continuously, so your medical care is always tailored to your needs. And it's covered by Medicare. (Copay may apply.)
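For a concrete sense of what continuous monitoring can mean in practice, here is a minimal, hypothetical sketch in Python of the kind of threshold check a remote-care platform might run on home-collected vitals. The readings, limits, and alert wording are illustrative assumptions, not any specific vendor's rules.

```python
# Illustrative only: a simplified check of home-collected vitals against
# example limits. Readings, thresholds, and alert text are hypothetical.
from dataclasses import dataclass

@dataclass
class VitalsReading:
    systolic_bp: int   # mmHg, from a connected blood pressure cuff
    heart_rate: int    # beats per minute, from a pulse oximeter or wearable

def flag_out_of_range(reading: VitalsReading) -> list[str]:
    """Return alerts for values outside the example limits."""
    alerts = []
    if reading.systolic_bp >= 180:
        alerts.append(f"Systolic BP {reading.systolic_bp} mmHg is 180 or above")
    if reading.heart_rate < 50 or reading.heart_rate > 120:
        alerts.append(f"Heart rate {reading.heart_rate} bpm is outside 50-120")
    return alerts

if __name__ == "__main__":
    today = VitalsReading(systolic_bp=185, heart_rate=88)
    for alert in flag_out_of_range(today):
        print("Notify care team:", alert)
```

In a real program, flagged readings would be routed to the physician's care team rather than printed to a console.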
The Fountain of Youth: Radical extension of the human lifespan, science fiction or reality?
AI in healthcare: Latest updates on generative AI, ChatGPT, more | Modern Healthcare
Tracking the latest in AI, ChatGPT
Patients want small talk from AI doctors
Patients don’t mind an artificial intelligence doctor as long as it is willing to engage in small talk, according to a study from researchers at Penn State. Researchers asked 382 online participants to interact with a medical chatbot over two visits spaced about two weeks apart. They found that the more social information an AI doctor recalled about patients, the higher the patients’ satisfaction, but only if the patients were offered privacy controls. The AI doctor used a pre-compiled script to chat with patients about topics related to diet, fitness, lifestyle, sleep, and mental health.
Where health systems are heading with AI
Health system executives are cautious about the hype of AI. They are trying to understand the risks, opportunities, and processes needed to adopt the technology. Here’s what executives at seven healthcare organizations said about where they stand with AI today.
Microsoft partners with Medline for AI tool
Technology giant Microsoft announced Wednesday it planned to build an AI tool with medical supply chain company Medline. The companies said the tool, dubbed Mpower, will aim to ease inventory management workflows and give users recommendations they can choose to implement. The tool will be built on Microsoft’s 365 suite of applications. Last Thursday, Microsoft said it was adding new AI tools for healthcare customers in partnership with electronic health record vendor Epic Systems.
GE Healthcare to lead generative health AI consortium
Community Health Systems to bring in AI chatbots for call centers
Community Health Systems said Monday it has signed a deal to bring chatbots from artificial intelligence startup Denim Health to work in the health system’s call centers. The Franklin, Tennessee-based hospital chain will use Denim’s AI chatbots in its call center to serve around 1,000 CHS-affiliated primary care providers and handle more than 25,000 inbound calls daily. The health system said it has been working with Denim Health since late 2023 to develop the technology and incorporate conversational AI into its call centers. A CHS spokesperson said staffing would not be affected by this move.
Abridge launches AI research effort with Epic, CMS
AI vendor Abridge is launching a clinical research collaborative dedicated to studying the impact of ambient AI across five key focus areas: clinician experience, patient experience, healthcare costs, outcomes, and health equity. Dr. Jackie Gerhart, chief medical officer at EHR vendor Epic, will be part of Abridge’s research collaborative, along with leaders from Yale New Haven Health System, Stanford School of Medicine, University of California San Francisco, The University of Chicago Pritzker School of Medicine, and the Centers for Medicare and Medicaid Services. Ambient AI documentation technology takes a recording of a doctor-patient conversation and turns it into usable clinical notes in the electronic health record. Abridge, which is partnering with Epic for the EHR company’s Workshop program, is one of the leading vendors in the space.
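As a rough illustration of how ambient documentation works in general (not Abridge's or Epic's actual implementation), the sketch below strings together two placeholder steps: transcribe the visit recording, then ask a language model to draft a structured note for clinician review.

```python
# A hypothetical sketch of an ambient-documentation pipeline; the two helper
# functions are placeholders, not any vendor's real API.
def transcribe_audio(audio_path: str) -> str:
    # Placeholder: a real system would call a speech-to-text service here.
    return "Doctor: What brings you in today? Patient: Chest tightness when I walk."

def draft_note(transcript: str) -> str:
    # Placeholder: a real system would send this prompt to a language model
    # and return its draft for the clinician to review and edit.
    prompt = ("Summarize the conversation below as a clinical note with "
              "Subjective, Objective, Assessment, and Plan sections:\n\n" + transcript)
    return f"[draft note generated from a {len(prompt)}-character prompt]"

def ambient_documentation(audio_path: str) -> str:
    """Recording in, draft note out; nothing enters the EHR without clinician sign-off."""
    return draft_note(transcribe_audio(audio_path))

print(ambient_documentation("visit_recording.wav"))
```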
California governor signs AI bills targeting providers, insurers
California Gov. Gavin Newsom (D) has signed several artificial intelligence-related bills into law, including two specifically focused on healthcare. Read more.
AI in healthcare: Latest updates on generative AI, ChatGPT, more | Modern Healthcare
Monday, November 18, 2024
A.I. Chatbots Defeated Doctors at Diagnosing Illness
By Gina Kolata
Nov. 17, 2024
Dr. Adam Rodman, an expert in internal medicine at Beth Israel Deaconess Medical Center in Boston, confidently expected that chatbots built to use artificial intelligence would help doctors diagnose illnesses.
Instead, in a study Dr. Rodman helped design, doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers’ surprise, ChatGPT alone outperformed the doctors.
“I was shocked,” Dr. Rodman said.
The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.
The study showed more than just the chatbot’s superior performance.
It revealed doctors’ sometimes unwavering belief in a diagnosis they had made, even when the chatbot suggested a potentially better one.
The study illustrated that while doctors are being exposed to the tools of artificial intelligence for their work, few know how to exploit the abilities of chatbots. As a result, they failed to take advantage of A.I. systems’ ability to solve complex diagnostic problems and offer explanations for their diagnoses.
A.I. systems should be “doctor extenders,” Dr. Rodman said, offering valuable second opinions on diagnoses.
But it looks as if there is a way to go before that potential is realized.
Case History, Case Future
The experiment involved 50 doctors, a mix of residents and attending physicians recruited through a few large American hospital systems, and was published last month in the journal JAMA Network Open.
The test subjects were given six case histories and were graded on their ability to suggest diagnoses and explain why they favored or ruled them out. Their grades also included getting the final diagnosis right.
The graders were medical experts who saw only the participants’ answers, without knowing whether they were from a doctor with ChatGPT, a doctor without it, or from ChatGPT by itself.
The case histories used in the study were based on real patients and are part of a set of 105 cases that have been used by researchers since the 1990s. The cases intentionally have never been published so that medical students and others could be tested on them without any foreknowledge. That also meant that ChatGPT could not have been trained on them.
But, to illustrate what the study involved, the investigators published one of the six cases the doctors were tested on, along with answers to the test questions on that case from a doctor who scored high and from one whose score was low.
That test case involved a 76-year-old patient with severe pain in his low back, buttocks and calves when he walked. The pain started a few days after he had been treated with balloon angioplasty to widen a coronary artery. He had been treated with the blood thinner heparin for 48 hours after the procedure.
The man complained that he felt feverish and tired. His cardiologist had done lab studies that indicated a new onset of anemia and a buildup of nitrogen and other kidney waste products in his blood. The man had had bypass surgery for heart disease a decade earlier.
The case vignette continued to include details of the man’s physical exam and then provided his lab test results.
The correct diagnosis was cholesterol embolism — a condition in which shards of cholesterol break off from plaque in arteries and block blood vessels.
Participants were asked for three possible diagnoses, with supporting evidence for each. They also were asked to provide, for each possible diagnosis, findings that do not support it or that were expected but not present.
The participants also were asked to provide a final diagnosis. Then they were to name up to three additional steps they would take in their diagnostic process.
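To make the task format concrete, here is one way the requested answer structure could be represented in Python. The field names are mine, not the study's grading instrument, and the example values are drawn from the published case.

```python
# Illustrative data structure for the answer format described above: three
# differentials with supporting and opposing findings, a final diagnosis,
# and up to three next steps. Field names are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Differential:
    diagnosis: str
    supporting_findings: list[str]
    opposing_or_absent_findings: list[str]

@dataclass
class CaseResponse:
    differentials: list[Differential]        # three were requested
    final_diagnosis: str
    next_steps: list[str] = field(default_factory=list)   # up to three

example = CaseResponse(
    differentials=[
        Differential(
            diagnosis="Cholesterol embolism",
            supporting_findings=["recent balloon angioplasty", "new-onset anemia",
                                 "rising kidney waste products"],
            opposing_or_absent_findings=[],
        ),
    ],
    final_diagnosis="Cholesterol embolism",
    next_steps=["repeat kidney function tests"],
)
```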
Like the diagnosis for the published case, the diagnoses for the other five cases in the study were not easy to figure out. But neither were they so rare as to be almost unheard-of. Yet the doctors on average did worse than the chatbot.
What, the researchers asked, was going on?
The answer seems to hinge on questions of how doctors settle on a diagnosis, and how they use a tool like artificial intelligence.
The Physician in the Machine
How, then, do doctors diagnose patients?
The problem, said Dr. Andrew Lea, a historian of medicine at Brigham and Women’s Hospital who was not involved with the study, is that “we really don’t know how doctors think.”
In describing how they came up with a diagnosis, doctors would say, “intuition,” or, “based on my experience,” Dr. Lea said.
That sort of vagueness has challenged researchers for decades as they tried to make computer programs that can think like a doctor.
The quest began almost 70 years ago.
“Ever since there were computers, there were people trying to use them to make diagnoses,” Dr. Lea said.
One of the most ambitious attempts began in the 1970s at the University of Pittsburgh. Computer scientists there recruited Dr. Jack Myers, chairman of the medical school’s department of internal medicine, who was known as a master diagnostician. He had a photographic memory and spent 20 hours a week in the medical library, trying to learn everything that was known in medicine.
Dr. Myers was given medical details of cases and explained his reasoning as he pondered diagnoses. Computer scientists converted his logic chains into code. The resulting program, called INTERNIST-1, included over 500 diseases and about 3,500 symptoms of disease.
To test it, researchers gave it cases from the New England Journal of Medicine. “The computer did really well,” Dr. Rodman said. Its performance “was probably better than a human could do,” he added.
But INTERNIST-1 never took off. It was difficult to use, requiring more than an hour to give it the information needed to make a diagnosis. And, its creators noted, “the present form of the program is not sufficiently reliable for clinical applications.”
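INTERNIST-1's actual algorithm was considerably more elaborate, but the general idea behind such rule-based systems can be sketched in a few lines: link diseases to weighted findings and rank candidate diagnoses by how well a patient's findings match. The diseases, findings, and weights below are invented for illustration.

```python
# Toy sketch of a rule-based diagnostic ranker; not INTERNIST-1's real method.
DISEASE_FINDINGS = {
    "disease_A": {"fever": 2, "fatigue": 1, "anemia": 3},
    "disease_B": {"fever": 1, "joint_pain": 3},
}

def rank_diagnoses(patient_findings: set[str]) -> list[tuple[str, int]]:
    """Score each disease by the weights of the findings the patient has."""
    scores = {
        disease: sum(weight for finding, weight in findings.items()
                     if finding in patient_findings)
        for disease, findings in DISEASE_FINDINGS.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_diagnoses({"fever", "anemia"}))   # disease_A ranks first with score 5
```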
Research continued. By the mid-1990s there were about a half dozen computer programs that tried to make medical diagnoses. None came into widespread use.
“It’s not just that it has to be user-friendly, but doctors had to trust it,” Dr. Rodman said.
And with the uncertainty about how doctors think, experts began to ask whether they should care. How important is it to try to design computer programs to make diagnoses the same way humans do?
“There were arguments over how much a computer program should mimic human reasoning,” Dr. Lea said. “Why don’t we play to the strength of the computer?”
The computer may not be able to give a clear explanation of its decision pathway, but does that matter if it gets the diagnosis right?
The conversation changed with the advent of large language models like ChatGPT. They make no explicit attempt to replicate a doctor’s thinking; their diagnostic abilities come from their ability to predict language.
“The chat interface is the killer app,” said Dr. Jonathan H. Chen, a physician and computer scientist at Stanford who was an author of the new study.
“We can pop a whole case into the computer,” he said. “Before a couple of years ago, computers did not understand language.”
However, many doctors may not be exploiting its potential.
Operator Error
After his initial shock at the results of the new study, Dr. Rodman decided to probe a little deeper into the data and look at the actual logs of messages between the doctors and ChatGPT. The doctors must have seen the chatbot’s diagnoses and reasoning, so why didn’t those using the chatbot do better?
It turns out that the doctors often were not persuaded by the chatbot when it pointed out something that was at odds with their diagnoses. Instead, they tended to be wedded to their own idea of the correct diagnosis.
“They didn’t listen to A.I. when A.I. told them things they didn’t agree with,” Dr. Rodman said.
That makes sense, said Laura Zwaan, who studies clinical reasoning and diagnostic error at Erasmus Medical Center in Rotterdam and was not involved in the study.
“People generally are overconfident when they think they are right,” she said.
But there was another issue: Many of the doctors did not know how to use a chatbot to its fullest extent.
Dr. Chen said he noticed that when he peered into the doctors’ chat logs, “they were treating it like a search engine for directed questions: ‘Is cirrhosis a risk factor for cancer? What are possible diagnoses for eye pain?’”
“It was only a fraction of the doctors who realized they could literally copy-paste the entire case history into the chatbot and just ask it to give a comprehensive answer to the entire question,” Dr. Chen added.
“Only a fraction of doctors actually saw the surprisingly smart and comprehensive answers the chatbot was capable of producing.”
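The contrast Dr. Chen describes is easy to picture in code. Below is a hedged sketch, with `ask_llm` standing in for whatever chat interface or API is actually used and an abbreviated vignette in place of the full case history.

```python
# ask_llm is a placeholder, not a real API; it only illustrates the two prompting styles.
def ask_llm(prompt: str) -> str:
    return f"[model response to a {len(prompt)}-character prompt]"

# Search-engine style: a narrow question gets a narrow answer.
print(ask_llm("What are possible diagnoses for eye pain?"))

# Whole-case style: paste the entire vignette and ask for a comprehensive answer.
case_vignette = (
    "76-year-old man with low back, buttock, and calf pain after balloon "
    "angioplasty; heparin for 48 hours; feverish and tired; new-onset anemia "
    "and rising kidney waste products..."   # abbreviated for the example
)
prompt = (
    "Read the entire case history below. Give three possible diagnoses with "
    "supporting and opposing findings, your single most likely diagnosis, and "
    "the next diagnostic steps.\n\n" + case_vignette
)
print(ask_llm(prompt))
```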
Wednesday, November 13, 2024
8 Top Pharma Trends In The Digital Health and AI Era - The Medical Futurist
1. Artificial Intelligence for drug research and development
The process of drug research and development has traditionally been time-consuming and labor-intensive, involving considerable trial and error before a drug can proceed to later developmental stages. Artificial intelligence (AI) can make this process more time- and cost-efficient.
AI models, such as those developed by Benevolent AI, can analyze large datasets from scientific literature, clinical records, and chemical databases far faster than humans can. From this information, they can identify promising drug targets and predict how potential drugs will interact with them.
Companies like Schrödinger and Google DeepMind have used AI for drug formulation. Their software predicts the behavior of drug candidates and assesses their safety and effectiveness.
2. New reimbursement models
In the digital health era, pharma companies can tap into a new healthcare experience and offer patients more than just medication. By combining medication with technology packages, they can offer reimbursement models that are more attractive to both payers and providers.
There have been several examples of such innovative models that combine pharmaceuticals with technology. GSK has worked with Propeller Health on smart inhalers. Partners Healthcare Center and Japanese drug maker Daiichi Sankyo teamed up to bring a connected wearable to patients with atrial fibrillation.
Digital tools have been shown to improve health outcomes while minimizing financial costs. With such offerings, pharma companies can make their products stand out while being beneficial for both patients and insurance providers.
3. Large language models for improved workflow and customer service
Large language models (LLMs) have been popularized by tools such as ChatGPT and Google Gemini. Beyond the hype, the technology is a practical trend in the pharma industry: LLMs can boost a company’s efficiency by optimizing internal operations and customer service.
Roche’s internal LLM tool, Roche GPT, assists the pharma company’s team in optimizing repetitive tasks and sharing knowledge. The tool further supports their business by automating structured data extraction about therapies and patients from scientific articles and clinical test results. Pfizer has also deployed a similar tool to help with its marketing efforts.
LLMs could also be used to improve customer service. With an LLM-powered chatbot, patients can get answers to queries, such as questions about medication side effects, in their native language.
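Here is a minimal sketch of how such a chatbot might be wired up, assuming a placeholder `call_llm` function and a made-up product-information lookup; the key design choice is to ground the model's answer in approved product information rather than letting it answer freely.

```python
# Hypothetical sketch of an LLM-backed patient query flow; the lookup table
# and call_llm are placeholders, not any pharma company's actual system.
APPROVED_INFO = {
    "exampledrug": "Common side effects: nausea, headache. Seek care for rash or swelling.",
}

def call_llm(prompt: str) -> str:
    # Placeholder for a chat-model API call.
    return f"[answer drafted from a {len(prompt)}-character prompt]"

def answer_patient_query(drug: str, question: str, language: str) -> str:
    facts = APPROVED_INFO.get(drug.lower(), "No approved product information found.")
    prompt = (
        f"Using only the approved product information below, answer the patient's "
        f"question in {language}. If the information does not cover it, say so and "
        f"advise contacting a pharmacist.\n\n"
        f"Product information: {facts}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_patient_query("ExampleDrug", "Can it cause headaches?", "German"))
```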
4. Automation in the supply chain
The pharma industry’s supply chain stands to gain a great deal from automation. For example, integrating AI can help avert drug shortages: by analyzing data from various sources, AI software can forecast potential disruptions and suggest measures to ensure a steady supply of essential medication.
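The core of such forecasting can be sketched simply: project demand from recent usage and flag any product whose remaining stock will not cover its supplier lead time. The inventory figures below are invented for illustration; a production system would of course draw on far richer data.

```python
# Illustrative shortage-risk check with invented numbers and field names.
from statistics import mean

def days_of_stock(on_hand: int, daily_usage_history: list[int]) -> float:
    """Estimate how many days the current stock will last at recent usage rates."""
    return on_hand / mean(daily_usage_history)

def flag_shortage_risk(inventory: dict[str, dict]) -> list[str]:
    at_risk = []
    for product, data in inventory.items():
        cover = days_of_stock(data["on_hand"], data["daily_usage"])
        if cover < data["lead_time_days"]:
            at_risk.append(f"{product}: {cover:.1f} days of stock, "
                           f"lead time {data['lead_time_days']} days")
    return at_risk

inventory = {
    "saline_500ml": {"on_hand": 400, "daily_usage": [60, 55, 70, 65], "lead_time_days": 10},
    "drug_x_10mg":  {"on_hand": 900, "daily_usage": [30, 28, 35, 33], "lead_time_days": 14},
}
for warning in flag_shortage_risk(inventory):
    print("Reorder now:", warning)
```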
5. Digital therapeutics
Using software as treatment might have sounded like a science fiction concept a decade or so ago, but this prospect is very real and promising with the advent of digital therapeutics (DTx). DTx can be described as evidence-based software applications designed to prevent, manage, or treat medical conditions.
The accessibility, privacy, and minimal side effects that DTx provides have enticed pharma companies to invest in this trend. Pfizer has teamed with Sidekick Health to launch a DTx solution for atopic dermatitis. Eli Lilly also partnered with Sidekick Health to develop apps to support breast cancer treatment.
Other offerings, such as RelieVRx and HelloBetter, integrate cognitive behavioral therapy principles into their apps to ease chronic pain. We share more promising DTx examples in a dedicated article.
6. In silico clinical trials
In silico clinical trials promise to enable experiments to be conducted wholly via computer simulation, without the need for animal or human testing. By running drug trials on computer simulations of organs, this approach can be both time- and cost-efficient while sparing live participants any side effects.
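As a toy illustration of the simulation idea (not any real trial methodology), the sketch below runs a one-compartment pharmacokinetic model, C(t) = (Dose/V) * exp(-(CL/V) * t), over a virtual population with varying drug clearance and counts how many subjects stay above a target concentration. All parameters are invented.

```python
# Toy "virtual population" pharmacokinetic simulation with invented parameters.
import math
import random

def plasma_concentration(dose_mg: float, volume_l: float,
                         clearance_l_per_h: float, t_hours: float) -> float:
    """One-compartment IV bolus model: C = (D/V) * exp(-(CL/V) * t)."""
    k_elim = clearance_l_per_h / volume_l
    return (dose_mg / volume_l) * math.exp(-k_elim * t_hours)

random.seed(0)
virtual_clearances = [max(random.gauss(5.0, 1.0), 0.5) for _ in range(100)]  # L/h
conc_at_12h = [plasma_concentration(200, 40, cl, 12) for cl in virtual_clearances]
above_target = sum(c >= 1.0 for c in conc_at_12h)
print(f"{above_target}/100 virtual subjects stay above 1.0 mg/L at 12 hours")
```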
For further reading please click on this link
8 Top Pharma Trends In The Digital Health and AI Era - The Medical Futurist