Listen Up

Tuesday, June 11, 2024

Where is My Medical Record?

Your health records contain a lot of information. Here’s just some of what is in them:

  • Your medical history
  • Your family medical history
  • Your test results
  • Your doctor’s, nurse’s, and other clinicians’ notes
  • Findings from check-ups and examinations
  • Medicines that have been prescribed for you
  • Suggestions and next steps for your care
A law called HIPAA gives you the right to see all of your records from doctors, hospitals, pharmacies, and other places. HIPAA stands for “Health Insurance Portability and Accountability Act” and was passed by the U.S. Congress in 1996.

In 2016, the U.S. Congress strengthened that right by passing the 21st Century Cures Act. The Cures Act requires hospitals and doctors to make it easy for you to see your health records.

As of April 2021, “blocking” you from your health records is against the law and may result in fines for hospitals and doctors.

The new law requires doctors and hospitals to give patients access to their electronic medical records. You have the right to see them quickly and at no cost.

The problem is that some people are having a hard time finding or getting their medical records. If that is true for you, you may be experiencing medical record “information blocking.”

Review the 3 Common Situations below to figure out whether you are experiencing information blocking. Most people fall into Situations 1 and 2. When you decide which applies to you, follow our prompts to find help.

Accessing, downloading, and reading your medical record is your right. If a doctor or hospital is preventing you from getting your record, it is important to report it.

You can report information blocking to the Office of the National Coordinator for Health Information Technology (ONC), part of the U.S. Department of Health & Human Services. The ONC is responsible for reviewing public complaints of information blocking.

The reporting website is a bit complicated, but it is worth checking out.

Why would you want your medical information?

You may want to access your medical records for many reasons.
Here are just a few:


Sharing information with other doctors or health care providers. You are going to see a new doctor or other clinician. Before you do that, make sure you review information from your current or former providers. That will help you be fully prepared to discuss your health care.
Reminders from past visits. You might want to review what was discussed in a recent healthcare visit. Studies show that patients forget more than half of what is discussed with their doctors and other clinicians. Reading your doctor’s notes from a visit can help remind you of the next steps and instructions.
Sharing information with others helping with your health care. It is easy to share information with others who help you with your health care when you have a copy of your health records.
Find errors. Review the information in your record to make sure it’s correct. Some errors may have little or no impact on your health care, but others may be important to point out to your doctor.
Getting ready for future visits. Reading information from medical visits can help you prepare questions and topics you want to be sure to cover.
Decision-making. You might want to review past visits and notes from your doctors and nurses. Reviewing can help you make informed choices about your health care.
Better understand and manage your health and health care. Your records help you remember your past and current health care. They make it easier to ask good questions. They guide you in making choices for your care and following suggestions.
Keeping your old records. Laws about health records differ from state to state. In general, doctors’ offices are not required to keep copies of your old medical records indefinitely, and some shred or dispose of records after a few years. Keep a copy of your records in a safe place; old records can be helpful if you develop a serious medical condition in the future.

The law is clear that the information in your medical records belongs to you, not to the hospital or your providers, and you have a right to access it. Blocking your access violates federal law and may result in fines.

The punishment for violating the Health Insurance Portability and Accountability Act (HIPAA) can vary depending on the severity and nature of the violation. Here are some of the potential penalties:

1. Civil Penalties:
   - Penalties range from $100 to $50,000 per violation, with a maximum of $1.5 million per year for all violations of an identical provision.
   - Penalties are tiered based on the level of negligence, from "did not know" to "willful neglect."

2. Criminal Penalties:
   - Fines up to $50,000 and/or imprisonment up to 1 year for knowingly obtaining or disclosing protected health information (PHI).
   - Fines up to $100,000 and/or imprisonment up to 5 years for obtaining PHI under false pretenses.
   - Fines up to $250,000 and/or imprisonment up to 10 years for obtaining PHI with the intent to sell, transfer, or use it for commercial advantage, personal gain, or malicious harm.

3. Other Consequences:
   - Corrective action plans and ongoing monitoring by the Department of Health and Human Services (HHS) Office for Civil Rights.
   - Potential loss of medical licenses or exclusion from federal healthcare programs.
   - Lawsuits from affected individuals or entities.

The severity of the penalties depends on factors such as the nature of the PHI disclosed, whether the violation was intentional or due to negligence, and the steps taken to mitigate the breach and prevent future occurrences. It's important for healthcare organizations and professionals to fully understand and comply with HIPAA regulations to avoid these potential consequences.



Monday, June 10, 2024

As Medical Care Evolves... Is your doctor a dinosaur?

Robert Pearl says that most advances in life are due to serendipity, not vision. He outlines his own unexpected career path and the inevitability of such advances.


Will ChatGPT be the new MD?

Top 40 Digital Health Trends In One Complex Infographic


Digestible sensors? Artificial organs? Medical tricorders? Do any of these already exist, is their development in progress, or do they appear only in our imagination? The world of medical innovation is complex and diverse, full of promising technologies but also hype and marketing. That’s why we collected the most relevant trends shaping digital health in one infographic, which also explains at which stage each innovation is delivered and which medical process and actor it influences. Check out the infographic here!

How to analyze digital health trends?



An infographic about 40 trends shaping digital health. It analyzes how promising each trend is, whether it benefits patients or doctors, and whether it improves prevention, diagnostics, treatment, or long-term outcomes. The illustration became so popular that we keep updating it; it is currently in its third iteration.

Despite the updates, the basis for visually explaining the trends hasn’t changed. We still take three perspectives, answering three questions, to interpret the forces shaping the world of medical innovation:

Does the trend affect patients or healthcare professionals?
Which stage of healthcare delivery and the practice of medicine is affected by the trend? Does it appear in prevention, data input and diagnostics, therapy, and follow-up, or does it rather impact patient outcomes and the consequences of certain conditions?
Is the outcome of the trend already available, is its development in progress, or does it still need some time to materialize?

For instance, along these lines, the infographic shows that direct-to-consumer (DTC) artificial intelligence would impact prevention and diagnostics and would make a huge difference for patients, but its availability and applicability are further down the road. We may have to wait years, if not decades, for DTC AI to appear in the family physician’s office. DTC models are already common in many other industries, and physicians are often late adopters of technology, as they were with electronic health records (EHRs).

DTC Brand Examples

Allbirds
Away
Casper
Dollar Shave Club
Glossier
Harry’s
Hims & Hers
Rent the Runway
Stitch Fix
Warby Parker

In Silico Clinical Trials:

Imagine a clinical trial performed by artificial intelligence or a large language model.



HumMod is one of the most advanced simulations in this respect. It provides a top-down model of human physiology, from whole organs to individual molecules, featuring more than 1,500 equations and 10,000 variables such as bodily fluids, circulation, electrolytes, hormones, metabolism, and skin temperature. HumMod aims to simulate how human physiology works and claims to be the most sophisticated mathematical model of human physiology ever created. It has been in development for decades and is still far from completion; finishing it may take decades more.
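At its core, a whole-body simulator like HumMod is a large system of coupled differential equations stepped forward in time. The toy loop below sketches that approach only; the variables, constants, and feedback law are invented for illustration and are not HumMod’s actual equations.

```python
# A sketch of the modeling approach behind simulators like HumMod:
# physiology as coupled ordinary differential equations integrated over
# time. The constants and feedback law here are invented for illustration.

def simulate(minutes=60, dt=0.01):
    """Crude baroreflex loop: heart rate adjusts to hold mean arterial
    pressure near a set point, starting from an elevated heart rate."""
    hr = 90.0         # heart rate, beats/min (perturbed above baseline)
    map_ = 90.0       # mean arterial pressure, mmHg
    set_point = 90.0  # target pressure for the reflex
    for _ in range(int(minutes / dt)):
        # Pressure rises with heart rate and decays toward baseline.
        dmap = 0.05 * (hr - 70.0) - 0.1 * (map_ - 90.0)
        # Baroreflex: heart rate falls when pressure exceeds the set point.
        dhr = -0.2 * (map_ - set_point)
        map_ += dmap * dt
        hr += dhr * dt
    return hr, map_

hr, map_ = simulate()  # both settle back toward 70 bpm and 90 mmHg
```

A real model like HumMod has thousands of such equations interacting, which is part of why building and validating it takes decades.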

Most of these will take considerable time to translate to daily clinical medicine; at present they are used for testing in nonclinical settings, and they will all require vetting by the Food and Drug Administration.


A new mobility guide for the blind

The 'white cane' in the digital age: an autonomous cane for the blind, fashioned after the self-driving automobile. It uses sensors such as infrared, cameras, and lidar to move through the environment, avoiding obstacles and sensing stairs and people.




Much like an autonomous vacuum or automobile, the system can follow a map to guide a blind user around their environment.


The sensor package is miniaturized to fit in the head of the cane. The wheels in the base rotate to steer around obstructions, and vibration motors in the head give the user tactile guidance.
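The sense-and-feedback loop described above can be sketched in a few lines. Everything here is hypothetical: the direction labels, distance thresholds, and vibration scaling stand in for real lidar/infrared readings and motor drivers.

```python
# A minimal sketch of the cane's sense-and-feedback loop. Thresholds and
# direction labels are hypothetical; a real device would read distances
# from lidar/infrared hardware rather than take them as an argument.

def guidance_step(distances):
    """Map obstacle distances (meters) per direction to a steering choice
    and per-direction vibration intensity (0.0-1.0) for the handle motors."""
    DANGER = 0.5   # vibrate at full strength inside this range
    CAUTION = 1.5  # begin vibrating inside this range
    feedback = {}
    for direction, d in distances.items():
        if d <= DANGER:
            feedback[direction] = 1.0
        elif d <= CAUTION:
            # Scale vibration up linearly as the obstacle gets closer.
            feedback[direction] = (CAUTION - d) / (CAUTION - DANGER)
        else:
            feedback[direction] = 0.0
    # Steer the base wheels toward the clearest direction.
    steer = max(distances, key=distances.get)
    return steer, feedback

steer, fb = guidance_step({"left": 0.4, "center": 2.0, "right": 1.0})
```

Here the cane would steer toward the open center while the handle vibrates hardest on the left, where the nearest obstacle sits.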

NATIONAL HEALTH SERVICE TO OFFER PHYSIOTHERAPY USING A.I.

The NHS will open its first "AI-run physiotherapy clinic" to cut waiting times amid growing demand and staff shortages. An inevitable step that many other health providers will also have to take.

The new platform (the first to be approved by the health regulator) will provide same-day automated video appointments with a digital physiotherapist via an app that responds to information provided by a patient in real-time.

In short, a digital platform, a generative AI agent, and a video database.

What do patients think about it?

"All of those surveyed patients from the NHS pilot said their experience with Flok had been at least equivalent to seeing a human physiotherapist, and 57% of patients said they thought the AI experience was better."

The provider behind the platform is called @Flok.

It is the first platform of its kind to be approved by the health regulator, the Care Quality Commission, as a registered healthcare provider.

Patients seeking physiotherapy for issues such as back pain can be referred to the platform Flok Health through a community or primary care healthcare setting, such as their GP. They can also self-refer directly into the service.

The service aims to provide faster care and reduce waiting times and pressure on clinicians, those behind it say.


HOW IT WORKS

Get assessed

Where we’ve been contracted by the NHS in your area (or your insurer) you can self-refer straight to our service - no need to go via your GP. Your first appointment with Flok will be with our AI-powered digital doctor, who will ask you some questions to assess your back pain and ensure our digital treatment is right for you.


Get treatment

If physiotherapy is likely to be effective for you, we'll prescribe you a series of weekly appointments with our digital physio. Each appointment is like a 30-minute video call, except our side of the call is created by our AI engine in real-time, just for you. You can answer questions and your digital physio will respond to you live, in a continuously generated personal video stream.




Get better

During each of your appointments, your digital physio will prescribe a set of exercises for you to practice for the coming week before your next appointment. These exercises are specifically selected for you based on a detailed analysis of your symptoms and movement patterns. Our app guides you through practicing your exercises between appointments and helps you see your progress and stay on track.

An A.I.-guided physiotherapy session provides not only visual interaction but also an interactive audio session. The therapist asks the patient questions such as "Where does it hurt?", "Is it tender?", and "Does it hurt more when you move it?" The patient can ask questions in return, such as "How long will this last?"


Here is a basic outline of how a chatbot for physiotherapy, facilitating communication between a patient and a therapist, could be structured:

1. **Introduction and Assessment**
   - The chatbot greets the patient and introduces itself as a physiotherapy assistant.
   - It asks the patient to provide some basic information, such as their name, age, and any relevant medical history.
   - The chatbot may also ask the patient about their current symptoms, pain levels, and any specific areas of concern.

2. **Educational Resources**
   - The chatbot can provide educational resources on various physiotherapy topics, such as common injuries, rehabilitation exercises, and proper posture.
   - It can share articles, videos, or infographics to help the patient understand their condition and the therapeutic process better.

3. **Treatment Plan and Exercises**
   - Based on the information provided by the patient, the chatbot can provide a general overview of the treatment plan.
   - It can suggest specific exercises or stretches tailored to the patient's condition, along with instructions and visual aids.
   - The chatbot can also remind the patient about the importance of consistency and adherence to the treatment plan.

4. **Progress Tracking**
   - The chatbot can periodically check in with the patient to assess their progress and gather feedback.
   - It may ask questions about pain levels, functional improvements, or any difficulties encountered during exercises.
   - The chatbot can maintain a record of the patient's progress and update the therapist accordingly.

5. **Appointment Scheduling and Reminders**
   - If the patient needs to schedule or reschedule an appointment, the chatbot can provide available time slots and assist in booking.
   - It can send automated reminders about upcoming appointments, ensuring the patient doesn't miss any sessions.

6. **FAQs and Support**
   - The chatbot can have a database of frequently asked questions and provide instant answers to common queries.
   - If the patient requires additional support or has specific concerns, the chatbot can offer guidance and suggest reaching out to the therapist for further assistance.

7. **Referrals and Emergency Situations**
   - If the chatbot detects any emergency situations or symptoms that require immediate attention, it can provide instructions to contact emergency services.
   - In cases where the patient's condition falls outside the scope of the chatbot's capabilities, it can recommend seeking in-person medical advice or referring to a specialist.

It's important to note that while a chatbot can be a useful tool, it should not replace the expertise and personalized care provided by a qualified physiotherapist. The chatbot should always encourage the patient to consult with their therapist for a comprehensive evaluation and individualized treatment plan.
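As a minimal sketch of steps 1 and 7 of this outline, intake answers can drive a simple triage rule before any exercises are prescribed. The red-flag symptoms and response messages below are illustrative only, not clinical guidance and not Flok’s actual logic.

```python
# A toy triage step for a physiotherapy chatbot (outline steps 1 and 7):
# route the patient based on reported symptoms and a 0-10 pain level.
# The red-flag list and messages are invented for illustration.

RED_FLAGS = {"chest pain", "numbness", "loss of bladder control"}

def triage(symptoms, pain_level):
    """Return a routing message for the patient's intake answers."""
    reported = {s.lower() for s in symptoms}
    if reported & RED_FLAGS:
        # Outline step 7: escalate emergencies out of the chatbot.
        return "Contact emergency services or see a clinician immediately."
    if pain_level >= 8:
        return "Refer to an in-person physiotherapist for assessment."
    return "Proceed to digital assessment and exercise plan."

msg = triage(["lower back ache"], pain_level=4)
```

A production system would layer the remaining steps (education, exercise plans, progress tracking, scheduling) on top of this routing decision.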


https://www.linkedin.com/news/story/nhs-to-offer-patients-ai-physio-6794986/

Friday, June 7, 2024

New Tool Detects Dementia up to Nine Years before Onset and with High Accuracy | Inside Precision Medicine

A new method for predicting dementia has over 80% accuracy and works up to nine years before a diagnosis, researchers at Queen Mary University of London report. They say the new method is more accurate than memory tests or measurements of brain shrinkage, two commonly used approaches to diagnosing the condition. An estimated 55 million people worldwide have dementia, and early, accurate diagnosis has long been a challenge.


“Predicting who is going to get dementia in the future will be vital for developing treatments that can prevent the irreversible loss of brain cells that causes the symptoms of dementia,” said Charles Marshall, Professor, and Honorary Consultant Neurologist, who led the research team within the Centre for Preventive Neurology at Queen Mary’s Wolfson Institute of Population Health. 


The team developed the test by analyzing functional MRI (fMRI) scans for changes in the brain’s “default mode network (DMN).” The DMN connects regions of the brain to perform specific cognitive functions and is the first neural network to be affected by Alzheimer’s disease. 

They used fMRI scans from over 1,100 volunteers in the UK Biobank, a large-scale biomedical database and research resource containing genetic and health information from half a million U.K. participants, to estimate the effective connectivity between the ten regions of the brain that constitute the default mode network. 


The researchers assigned each patient a “probability of dementia” value based on how closely their effective connectivity pattern conformed to a pattern indicating dementia. They then compared these predictions to each patient’s medical records on file with the UK Biobank. 

They found the model accurately predicted the onset of dementia up to nine years before an official diagnosis was made, and with greater than 80% accuracy. Where participants went on to develop dementia, the model could also predict, within a two-year margin of error, how long it would take that diagnosis to be made.  
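Schematically, the approach turns each participant’s effective-connectivity pattern into a single score. The sketch below illustrates the idea with a logistic score over a handful of connections; the weights and reference values are invented, whereas the study fit its model to UK Biobank data (and a full ten-region DMN model has 90 directed connections).

```python
# A schematic of the study's idea: compare a participant's effective-
# connectivity vector to a dementia-associated reference pattern and
# squash the weighted deviation into a probability. All numbers below
# are invented for illustration.

import math

def dementia_probability(connectivity, reference, weights):
    """Logistic score of weighted deviation from a reference pattern."""
    z = sum(w * (c - r) for c, r, w in zip(connectivity, reference, weights))
    return 1.0 / (1.0 + math.exp(-z))

# Toy 4-connection example (a real 10-region DMN model has 10*9 = 90).
reference = [0.2, -0.1, 0.4, 0.0]
weights = [1.5, -2.0, 1.0, 0.5]
p = dementia_probability([0.8, -0.5, 0.4, 0.1], reference, weights)
```

A vector matching the reference exactly scores 0.5; deviations in the dementia-associated directions push the score toward 1.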

The researchers also examined whether changes to the DMN might be caused by known risk factors for dementia. They found that genetic risk for Alzheimer’s disease was strongly associated with connectivity changes in the DMN, supporting the idea that these changes are specific to Alzheimer’s disease. They also found that social isolation was likely to increase the risk of dementia through its effect on connectivity in the DMN. 

It is now more important than ever to detect signs of dementia early. Life expectancy has increased, and many people live into their late 90s. Prevention and early diagnosis can delay disability and allow people to 'age in place,' preserving independent living, avoiding expensive nursing homes, and improving socialization for older patients.




Saturday, May 25, 2024

Implant by Elon Musk's Neuralink suffers setback after threads retract from patient's brain

NEW MEDICAL DEVICES always require careful study. Do the benefits outweigh possible side effects or complications? We always ask this of drugs.


Elon Musk’s brain technology startup, Neuralink, said Wednesday that an issue cropped up with its first human brain implant weeks after it was inserted into a patient.

The company revealed in a blog post that in the weeks following the patient’s surgery in January, a number of the implant’s connective threads retracted from the brain, causing a reduction in the signals the device could capture.

Neuralink provided few other details about the problem and did not disclose what might have caused the threads to retract.

It did say, however, that it modified an algorithm “to be more sensitive to neural population signals,” meaning it was able to improve how the patient’s brain signals were detected and translated.

It is not the first device implanted in the brain. There have been decades of research and lofty ambitions to meld minds with computers.





Neuralink also uses innovative robotic surgery, rather than a specialized neurosurgeon, to implant the device.

“That’s way different from what people have done before,” said Sergey Stavisky, an assistant professor in the Department of Neurological Surgery at the University of California, Davis, and co-director of the UC Davis Neuroprosthetics Lab.

Stavisky said automating the procedure with a robot could make it more efficient and effective down the road.

There are many other companies developing brain interfaces.

The process requires electrodes inserted in the brain, a communication link, and a computer algorithm that interprets the micro-electroencephalographic signals the electrodes detect.

The technique can also be applied to vision.
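The decoding step of that pipeline, turning recorded neural activity into a usable output, can be sketched as a linear decoder mapping per-channel firing rates to a 2-D cursor velocity. The weights below are invented; real decoders are fit to calibration recordings from the patient.

```python
# A minimal sketch of a brain-computer interface's decoding stage:
# per-electrode firing rates -> (vx, vy) cursor velocity via a linear
# decoder. The weights are invented; real systems fit them to
# calibration data recorded from the patient.

def decode_velocity(firing_rates, weights):
    """Map per-channel firing rates (Hz) to a 2-D cursor velocity.
    weights: one (wx, wy) contribution pair per channel."""
    vx = sum(r * wx for r, (wx, _) in zip(firing_rates, weights))
    vy = sum(r * wy for r, (_, wy) in zip(firing_rates, weights))
    return vx, vy

# Three hypothetical channels with their fitted contribution pairs.
weights = [(0.1, 0.0), (0.0, 0.1), (-0.05, 0.05)]
vx, vy = decode_velocity([20.0, 10.0, 4.0], weights)
```

This is also why Neuralink’s retracted threads mattered: fewer usable channels means fewer firing-rate inputs, degrading the decoded signal until the algorithm is retuned.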


No One Truly Knows How AI Systems Work. A New Discovery Could Change That


Today’s artificial intelligence is often described as a “black box.” AI developers don’t write explicit rules for these systems; instead, they feed in vast quantities of data and the systems learn on their own to spot patterns. But the inner workings of the AI models remain opaque, and efforts to peer inside them to check exactly what is happening haven’t progressed very far. Beneath the surface, neural networks—today’s most powerful type of AI—consist of billions of artificial “neurons” represented as decimal-point numbers. Nobody truly understands what they mean, or how they work.

For those concerned about risks from AI, this fact looms large. If you don’t know exactly how a system works, how can you be sure it is safe?

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

On Tuesday, the AI lab Anthropic announced it had made a breakthrough toward solving this problem. Researchers developed a technique for essentially scanning the “brain” of an AI model, allowing them to identify collections of neurons, called “features,” corresponding to different concepts. And for the first time, they successfully used this technique on a frontier large language model: Anthropic’s Claude Sonnet, the lab’s second-most powerful system.

In one example, Anthropic researchers discovered a feature inside Claude representing the concept of “unsafe code.” By stimulating those neurons, they could get Claude to generate code containing a bug that could be exploited to create a security vulnerability. But by suppressing the neurons, the researchers found, Claude would generate harmless code.

The findings could have big implications for the safety of both present and future AI systems. The researchers found millions of features inside Claude, including some representing bias, fraudulent activity, toxic speech, and manipulative behavior. And they discovered that by suppressing each of these collections of neurons, they could alter the model’s behavior.
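Conceptually, “stimulating” or “suppressing” a feature amounts to shifting the model’s activations along a direction in activation space. The toy vectors below illustrate that operation only; Anthropic found its features with a sparse-autoencoder technique in a model with billions of activations, not in three dimensions.

```python
# A toy version of feature "steering": treat a feature as a direction in
# activation space, and stimulate or suppress it by adding or subtracting
# a multiple of that direction. Dimensions and vectors are invented;
# this is an illustration of the concept, not Anthropic's implementation.

def steer(activations, feature_direction, strength):
    """Shift an activation vector along a feature direction.
    strength > 0 stimulates the feature; strength < 0 suppresses it."""
    return [a + strength * f for a, f in zip(activations, feature_direction)]

def feature_activation(activations, feature_direction):
    """Dot product: how strongly the feature is currently expressed."""
    return sum(a * f for a, f in zip(activations, feature_direction))

acts = [0.5, -0.2, 0.1]
direction = [1.0, 0.0, 0.5]   # a hypothetical feature direction
boosted = steer(acts, direction, strength=2.0)
```

Steering with a positive strength raises the feature’s activation and with a negative strength lowers it, which is the lever the researchers used to elicit or prevent behaviors like generating unsafe code.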

As well as helping to address current risks, the technique could also help with more speculative ones. For years, the primary method available to researchers trying to understand the capabilities and risks of new AI systems has simply been to chat with them. This approach, sometimes known as “red-teaming,” can help catch a model being toxic or dangerous, allowing researchers to build in safeguards before the model is released to the public. But it doesn’t help address one type of potential danger that some AI researchers are worried about: the risk of an AI system becoming smart enough to deceive its creators, hiding its capabilities from them until it can escape their control and potentially wreak havoc.

“If we could really understand these systems—and this would require a lot of progress—we might be able to say when these models actually are safe, or whether they just appear safe,” Chris Olah, the head of Anthropic’s interpretability team who led the research, tells TIME.

“The fact that we can do these interventions on the model suggests to me that we're starting to make progress on what you might call an X-ray, or an MRI [of an AI model],” Anthropic CEO Dario Amodei adds. “Right now, the paradigm is: let's talk to the model, let's see what it does. But what we'd like to be able to do is look inside the model as an object—like scanning the brain instead of interviewing someone.”

The research is still in its early stages, Anthropic said in a summary of the findings. But the lab struck an optimistic tone that the findings could soon benefit its AI safety work. “The ability to manipulate features may provide a promising avenue for directly impacting the safety of AI models,” Anthropic said. By suppressing certain features, it may be possible to prevent so-called “jailbreaks” of AI models, a type of vulnerability where safety guardrails can be disabled, the company added.


Researchers in Anthropic’s “interpretability” team have been trying to peer into the brains of neural networks for years. But until recently, they had mostly been working on far smaller models than the giant language models currently being developed and released by tech companies.

One of the reasons for this slow progress was that individual neurons inside AI models would fire even when the model was discussing completely different concepts. “This means that the same neuron might fire on concepts as disparate as the presence of semicolons in computer programming languages, references to burritos, or discussion of the Golden Gate Bridge, giving us little indication as to which specific concept was responsible for activating a given neuron,” Anthropic said in its summary of the research.

To get around this problem, Olah’s team of Anthropic researchers zoomed out. Instead of studying individual neurons, they began to look for groups of neurons that would all fire in response to a specific concept. This technique worked—and allowed them to graduate from studying smaller “toy” models to larger models like Anthropic’s Claude Sonnet, which has billions of neurons. 

Although the researchers said they had identified millions of features inside Claude, they cautioned that this number was nowhere near the true number of features likely present inside the model. Identifying all the features, they said, would be prohibitively expensive using their current techniques, because doing so would require more computing power than it took to train Claude in the first place. (Costing somewhere in the tens or hundreds of millions of dollars.) The researchers also cautioned that although they had found some features they believed to be related to safety, more study would still be needed to determine whether those features could reliably be manipulated to improve a model’s safety.

For Olah, the research is a breakthrough that proves the utility of his esoteric field, interpretability, to the broader world of AI safety research. “Historically, interpretability has been this thing on its own island, and there was this hope that someday it would connect with [AI] safety—but that seemed far off,” Olah says. “I think that’s no longer true.”

https://time.com/6980210/anthropic-interpretability-ai-safety-research/