HEALTH TRAIN EXPRESS Mission: To promulgate health education across the internet. Follow or subscribe to Health Train Express, as well as Digital Health Space, for updates on health policy, reform, and public health issues. Health Train Express is published several times a week. Subscribe and receive an email alert each time it is published. Health Train Express has been published since 2006.
Listen Up
Saturday, November 23, 2024
Your Psychotropic Guidelines
Thursday, November 21, 2024
Anti-Cancer Nutrition
The power of fruits to prevent cancer.
Oranges
Living a long and fulfilling life involves a combination of healthy habits, a positive mindset, and strong social connections. Here are some key factors to consider:
### 1. **Balanced Diet**
- **Eat Whole Foods:** Incorporate plenty of fruits, vegetables, whole grains, and lean proteins into your diet.
- **Stay Hydrated:** Drink plenty of water and limit sugary drinks.
### 2. **Regular Exercise**
- **Stay Active:** Aim for at least 150 minutes of moderate aerobic activity each week. Activities can include walking, cycling, or swimming.
- **Strength Training:** Include muscle-strengthening activities at least twice a week.
### 3. **Mental Well-being**
- **Manage Stress:** Practice mindfulness, meditation, or yoga to help reduce stress levels.
- **Stay Curious:** Engage in lifelong learning and hobbies to keep your mind sharp.
### 4. **Social Connections**
- **Build Relationships:** Maintain strong ties with family and friends. Social engagement is linked to longer life.
- **Volunteer:** Helping others can enhance your sense of purpose and community.
### 5. **Regular Health Check-ups**
- **Preventive Care:** Regular check-ups and screenings can help catch potential health issues early.
### 6. **Avoid Harmful Behaviors**
- **Limit Alcohol and Quit Smoking:** Reducing or eliminating these can significantly improve health outcomes.
### 7. **Positive Outlook**
- **Practice Gratitude:** Focusing on positive aspects of life can improve mental resilience.
- **Find Purpose:** Engage in activities that give you a sense of purpose and fulfillment.
By integrating these practices into your daily routine, you can enhance not only the length of your life but also its quality.
Wednesday, November 20, 2024
Statutory health insurance in Germany: a health system shaped by 135 years of solidarity, self-governance, and competition - The Lancet
Tuesday, November 19, 2024
Remote Care Today Turnkey Solutions
In-home Virtual Care has become a major focus for Medicare and with good reason. The results for patients, physicians, hospitals, and home health care agencies have been more than remarkable.
They have been astounding.
REMOTE CARE & YOU,
THE PATIENT
Outside of their offices, physicians don't know what is happening with their patients. That's why Remote Care Management is becoming the "go-to" program for seniors across the country. Remote care allows your physician to monitor your health continuously, so your medical care is always tailored to your needs. And it's covered by Medicare. (Copay may apply.)
The Fountain of Youth: Radical extension of the human lifespan, science fiction or reality?
AI in healthcare: Latest updates on generative AI, ChatGPT, more | Modern Healthcare
Tracking the latest in AI, ChatGPT
Patients want small talk from AI doctors
Patients don’t mind an artificial intelligence doctor as long as it is willing to engage in small talk, according to a study from researchers at Penn State. The researchers asked 382 online participants to interact with a medical chatbot over two visits spaced about two weeks apart. They found that the more social information an AI doctor recalled about patients, the higher the patients’ satisfaction, but only if the patients were offered privacy control. The AI doctor used a pre-compiled script to chat with patients about topics related to diet, fitness, lifestyle, sleep, and mental health.
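The design the study describes can be sketched in a few lines. This is a toy illustration, not the Penn State team's actual system: a scripted chatbot that stores small talk from a first visit and recalls it on the second visit only when the patient has been given privacy control.

```python
class ScriptedHealthChatbot:
    """Toy model of an AI doctor that remembers small talk between visits."""

    def __init__(self, offers_privacy_control: bool):
        self.offers_privacy_control = offers_privacy_control
        self.social_memory = {}  # e.g. {"p1": {"hobby": "gardening"}}

    def first_visit(self, patient_id: str, small_talk: dict) -> None:
        # Store social information shared during chit-chat.
        self.social_memory[patient_id] = small_talk

    def second_visit_greeting(self, patient_id: str) -> str:
        # Recall social details only if the patient controls that data;
        # per the study, recall without privacy control hurt satisfaction.
        details = self.social_memory.get(patient_id)
        if details and self.offers_privacy_control:
            hobby = details.get("hobby", "your interests")
            return f"Welcome back! How has {hobby} been going?"
        return "Welcome back! How are you feeling today?"
```

The class name and fields here are hypothetical; the point is the gate on recall, which mirrors the study's finding that remembered small talk only helped when paired with privacy control.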
Where health systems are heading with AI
Health system executives are cautious about the hype of AI. They are trying to understand the risks, opportunities, and processes needed to adopt the technology. Here’s what executives at seven healthcare organizations said about where they stand with AI today.
Microsoft partners with Medline for AI tool
Technology giant Microsoft announced Wednesday it planned to build an AI tool with medical supply chain company Medline. The companies said the tool, dubbed Mpower, will aim to ease inventory management workflows and give users recommendations they can choose to implement. The tool will be built on Microsoft’s 365 suite of applications. Last Thursday, Microsoft said it was adding new AI tools for healthcare customers in partnership with electronic health record vendor Epic Systems.
GE Healthcare to lead generative health AI consortium
Community Health Systems to bring in AI chatbots for call centers
Community Health Systems said Monday it has signed a deal to bring chatbots from artificial intelligence startup Denim Health to work in the health system’s call centers. The Franklin, Tennessee-based hospital chain will use Denim’s AI chatbots in its call center to serve around 1,000 CHS-affiliated primary care providers and handle more than 25,000 inbound calls daily. The health system said it has been working with Denim Health since late 2023 to develop the technology and incorporate conversational AI into its call centers. A CHS spokesperson said staffing would not be affected by this move.
Abridge launches AI research effort with Epic, CMS
AI vendor Abridge is launching a clinical research collaborative dedicated to studying the impact of ambient AI across five key focus areas: clinician experience, patient experience, healthcare costs, outcomes, and health equity. Dr. Jackie Gerhart, chief medical officer at EHR vendor Epic, will be a part of Abridge’s research collaborative along with leaders from Yale New Haven Health System, Stanford School of Medicine, University of California San Francisco, The University of Chicago Pritzker School of Medicine and the Centers for Medicare and Medicaid Services. Ambient AI documentation technology takes a recording of a doctor-patient conversation and turns it into usable clinical notes in the electronic health record. Abridge, which is partnering with Epic for the EHR company’s Workshop program, is one of the leading vendors in the space.
California governor signs AI bills targeting providers, insurers
California Gov. Gavin Newsom (D) has signed several artificial intelligence-related bills into law, including two specifically focused on healthcare. Read more.
Monday, November 18, 2024
A.I. Chatbots Defeated Doctors at Diagnosing Illness
By Gina Kolata
Nov. 17, 2024
Dr. Adam Rodman, an expert in internal medicine at Beth Israel Deaconess Medical Center in Boston, confidently expected that chatbots built to use artificial intelligence would help doctors diagnose illnesses.
Instead, in a study Dr. Rodman helped design, doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers’ surprise, ChatGPT alone outperformed the doctors.
“I was shocked,” Dr. Rodman said.
The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.
The study showed more than just the chatbot’s superior performance.
It unveiled doctors’ sometimes unwavering belief in a diagnosis they had made, even when a chatbot suggested a potentially better one.
The study illustrated that while doctors are being exposed to the tools of artificial intelligence for their work, few know how to exploit the abilities of chatbots. As a result, they failed to take advantage of A.I. systems’ ability to solve complex diagnostic problems and offer explanations for their diagnoses.
A.I. systems should be “doctor extenders,” Dr. Rodman said, offering valuable second opinions on diagnoses.
But it looks as if there is a way to go before that potential is realized.
Case History, Case Future
The experiment involved 50 doctors, a mix of residents and attending physicians recruited through a few large American hospital systems, and was published last month in the journal JAMA Network Open.
The test subjects were given six case histories and were graded on their ability to suggest diagnoses and explain why they favored or ruled them out. Their grades also included getting the final diagnosis right.
The graders were medical experts who saw only the participants’ answers, without knowing whether they were from a doctor with ChatGPT, a doctor without it, or from ChatGPT by itself.
The case histories used in the study were based on real patients and are part of a set of 105 cases that have been used by researchers since the 1990s. The cases intentionally have never been published so that medical students and others could be tested on them without any foreknowledge. That also meant that ChatGPT could not have been trained on them.
But, to illustrate what the study involved, the investigators published one of the six cases the doctors were tested on, along with answers to the test questions on that case from a doctor who scored high and from one whose score was low.
That test case involved a 76-year-old patient with severe pain in his low back, buttocks and calves when he walked. The pain started a few days after he had been treated with balloon angioplasty to widen a coronary artery. He had been treated with the blood thinner heparin for 48 hours after the procedure.
The man complained that he felt feverish and tired. His cardiologist had done lab studies that indicated a new onset of anemia and a buildup of nitrogen and other kidney waste products in his blood. The man had had bypass surgery for heart disease a decade earlier.
The case vignette continued to include details of the man’s physical exam and then provided his lab test results.
The correct diagnosis was cholesterol embolism — a condition in which shards of cholesterol break off from plaque in arteries and block blood vessels.
Participants were asked for three possible diagnoses, with supporting evidence for each. They also were asked to provide, for each possible diagnosis, findings that do not support it or that were expected but not present.
The participants also were asked to provide a final diagnosis. Then they were to name up to three additional steps they would take in their diagnostic process.
Like the diagnosis for the published case, the diagnoses for the other five cases in the study were not easy to figure out. But neither were they so rare as to be almost unheard-of. Yet the doctors on average did worse than the chatbot.
What, the researchers asked, was going on?
The answer seems to hinge on questions of how doctors settle on a diagnosis, and how they use a tool like artificial intelligence.
The Physician in the Machine
How, then, do doctors diagnose patients?
The problem, said Dr. Andrew Lea, a historian of medicine at Brigham and Women’s Hospital who was not involved with the study, is that “we really don’t know how doctors think.”
In describing how they came up with a diagnosis, doctors would say, “intuition,” or, “based on my experience,” Dr. Lea said.
That sort of vagueness has challenged researchers for decades as they tried to make computer programs that can think like a doctor.
The quest began almost 70 years ago.
“Ever since there were computers, there were people trying to use them to make diagnoses,” Dr. Lea said.
One of the most ambitious attempts began in the 1970s at the University of Pittsburgh. Computer scientists there recruited Dr. Jack Myers, chairman of the medical school’s department of internal medicine who was known as a master diagnostician. He had a photographic memory and spent 20 hours a week in the medical library, trying to learn everything that was known in medicine.
Dr. Myers was given medical details of cases and explained his reasoning as he pondered diagnoses. Computer scientists converted his logic chains into code. The resulting program, called INTERNIST-1, included over 500 diseases and about 3,500 symptoms of disease.
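The expert-system idea behind INTERNIST-1 can be illustrated in miniature. The sketch below is far simpler than INTERNIST-1's actual scoring model (which weighted findings by evoking strength and frequency); it only shows the basic shape: disease–symptom associations encoded as data, with candidate diseases ranked by how many of the patient's findings they explain. The disease lists are illustrative, not clinical.

```python
# Toy disease–symptom knowledge base (illustrative only).
DISEASE_FINDINGS = {
    "influenza": {"fever", "cough", "myalgia", "fatigue"},
    "anemia": {"fatigue", "pallor", "shortness of breath"},
    "pneumonia": {"fever", "cough", "shortness of breath", "chest pain"},
}

def rank_diagnoses(findings: set) -> list:
    """Return (disease, score) pairs ordered by number of matching findings."""
    scores = {
        disease: len(findings & expected)
        for disease, expected in DISEASE_FINDINGS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A real system of this era also had to handle findings that argued *against* a diagnosis and combinations of diseases, which is part of why entering a case took over an hour.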
To test it, researchers gave it cases from the New England Journal of Medicine. “The computer did really well,” Dr. Rodman said. Its performance “was probably better than a human could do,” he added.
But INTERNIST-1 never took off. It was difficult to use, requiring more than an hour to give it the information needed to make a diagnosis. And, its creators noted, “the present form of the program is not sufficiently reliable for clinical applications.”
Research continued. By the mid-1990s there were about a half dozen computer programs that tried to make medical diagnoses. None came into widespread use.
“It’s not just that it has to be user-friendly, but doctors had to trust it,” Dr. Rodman said.
And with the uncertainty about how doctors think, experts began to ask whether they should care. How important is it to try to design computer programs to make diagnoses the same way humans do?
“There were arguments over how much a computer program should mimic human reasoning,” Dr. Lea said. “Why don’t we play to the strength of the computer?”
The computer may not be able to give a clear explanation of its decision pathway, but does that matter if it gets the diagnosis right?
The conversation changed with the advent of large language models like ChatGPT. They make no explicit attempt to replicate a doctor’s thinking; their diagnostic abilities come from their ability to predict language.
“The chat interface is the killer app,” said Dr. Jonathan H. Chen, a physician and computer scientist at Stanford who was an author of the new study.
“We can pop a whole case into the computer,” he said. “Before a couple of years ago, computers did not understand language.”
However, many doctors may not be exploiting its potential.
Operator Error
After his initial shock at the results of the new study, Dr. Rodman decided to probe a little deeper into the data and look at the actual logs of messages between the doctors and ChatGPT. The doctors must have seen the chatbot’s diagnoses and reasoning, so why didn’t those using the chatbot do better?
It turns out that the doctors often were not persuaded by the chatbot when it pointed out something that was at odds with their diagnoses. Instead, they tended to be wedded to their own idea of the correct diagnosis.
“They didn’t listen to A.I. when A.I. told them things they didn’t agree with,” Dr. Rodman said.
That makes sense, said Laura Zwaan, who studies clinical reasoning and diagnostic error at Erasmus Medical Center in Rotterdam and was not involved in the study.
“People generally are overconfident when they think they are right,” she said.
But there was another issue: Many of the doctors did not know how to use a chatbot to its fullest extent.
Dr. Chen said he noticed that when he peered into the doctors’ chat logs, “they were treating it like a search engine for directed questions: ‘Is cirrhosis a risk factor for cancer? What are possible diagnoses for eye pain?’”
“It was only a fraction of the doctors who realized they could literally copy-paste the entire case history into the chatbot and just ask it to give a comprehensive answer to the entire question,” Dr. Chen added.
“Only a fraction of doctors actually saw the surprisingly smart and comprehensive answers the chatbot was capable of producing.”
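The contrast Dr. Chen describes comes down to what gets sent to the chatbot. This sketch (the chatbot call itself is left out; function names are hypothetical) shows the two prompting styles: a narrow, search-engine-style question versus pasting the whole case and asking for a comprehensive differential.

```python
def directed_question() -> str:
    # "Search engine" style: a narrow question that discards the case context.
    return "Is cirrhosis a risk factor for cancer?"

def full_case_prompt(case_history: str) -> str:
    # The style only a fraction of doctors used: paste the entire case
    # history and ask for a full differential with supporting reasoning.
    return (
        "Here is a complete case history:\n\n"
        f"{case_history}\n\n"
        "Please give the three most likely diagnoses, the findings that "
        "support or argue against each, and your final diagnosis."
    )
```

The second style is what let ChatGPT alone score 90 percent in the study: given the full case, the model could weigh all the findings at once rather than answer isolated questions.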