Listen Up

Friday, June 7, 2024

New Tool Detects Dementia up to Nine Years before Onset and with High Accuracy | Inside Precision Medicine

A new method for predicting dementia has over 80% accuracy and works up to nine years before a diagnosis, researchers at Queen Mary University of London report. They say the new method provides a more accurate way to predict dementia than memory tests or measurements of brain shrinkage, two commonly used methods for diagnosing this condition. An estimated 55 million people worldwide suffer from dementia, and early, accurate diagnosis has long been a challenge.


“Predicting who is going to get dementia in the future will be vital for developing treatments that can prevent the irreversible loss of brain cells that causes the symptoms of dementia,” said Charles Marshall, professor and honorary consultant neurologist, who led the research team within the Centre for Preventive Neurology at Queen Mary’s Wolfson Institute of Population Health.


The team developed the test by analyzing functional MRI (fMRI) scans for changes in the brain’s “default mode network (DMN).” The DMN connects regions of the brain to perform specific cognitive functions and is the first neural network to be affected by Alzheimer’s disease. 

They used fMRI scans from over 1,100 volunteers from the UK Biobank, a large-scale biomedical database and research resource containing genetic and health information from half a million U.K. participants, to estimate the effective connectivity between ten regions of the brain that constitute the default mode network.


The researchers assigned each patient a “probability of dementia” value based on how closely their effective connectivity pattern conformed to a pattern indicating dementia. They then compared these predictions against each patient’s medical records held by the UK Biobank.

They found the model accurately predicted the onset of dementia up to nine years before an official diagnosis was made, and with greater than 80% accuracy. Where participants went on to develop dementia, the model could also predict, within a two-year margin of error, how long it would take that diagnosis to be made.  
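The article does not describe the model's internals, but the idea of mapping an effective-connectivity pattern to a "probability of dementia" value can be sketched as a simple classifier over the directed connections between the ten DMN regions. Everything below (the synthetic data, the logistic model, and the weights) is an illustrative assumption, not the study's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions = 10                         # DMN regions, as in the study
n_feat = n_regions * (n_regions - 1)   # directed connections, no self-loops

# Synthetic stand-in data: connectivity vectors for 200 "participants".
# A hypothetical dementia-related pattern shifts some connections.
pattern = rng.normal(size=n_feat)
X = rng.normal(size=(200, n_feat))
y = (X @ pattern + rng.normal(scale=2.0, size=200)) > 0   # toy labels

# Logistic regression fit by gradient descent (illustrative only)
w = np.zeros(n_feat)
for _ in range(500):
    z = np.clip(X @ w, -30, 30)        # clip to avoid overflow in exp
    p = 1 / (1 + np.exp(-z))
    w -= 0.1 * X.T @ (p - y) / len(y)

def probability_of_dementia(conn_vector):
    """Map a participant's connectivity vector to a probability in [0, 1]."""
    return float(1 / (1 + np.exp(-np.clip(conn_vector @ w, -30, 30))))

acc = float(np.mean((1 / (1 + np.exp(-np.clip(X @ w, -30, 30))) > 0.5) == y))
```

On this separable synthetic data the toy model easily clears the 80% accuracy the article reports, but that is a property of the fake data, not evidence about the real study.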

The researchers also examined whether changes to the DMN might be caused by known risk factors for dementia. They found that genetic risk for Alzheimer’s disease was strongly associated with connectivity changes in the DMN, supporting the idea that these changes are specific to Alzheimer’s disease. They also found that social isolation was likely to increase the risk of dementia through its effect on connectivity in the DMN. 

It is now more important than ever to detect signs of dementia early. Life expectancy has increased, and many people live into their late 90s. Prevention and early diagnosis can delay disability and allow people to 'age in place,' which supports independent living, avoids expensive nursing homes, and improves socialization for aged patients.



New Tool Detects Dementia up to Nine Years before Onset and with High Accuracy | Inside Precision Medicine

Saturday, May 25, 2024

Implant by Elon Musk's Neuralink suffers setback after threads retract from patient's brain

New medical devices always require careful study: do the benefits outweigh possible side effects or complications? We always do this with drugs.


Elon Musk’s brain technology startup, Neuralink, said Wednesday that an issue cropped up with its first human brain implant weeks after it was inserted into a patient.

The company revealed in a blog post that in the weeks following the patient’s surgery in January, a number of the implant’s connective threads retracted from the brain, causing a reduction in the signals the device could capture.

Neuralink provided few other details about the problem and did not disclose what might have caused the threads to retract.

It did say, however, that it modified an algorithm “to be more sensitive to neural population signals,” meaning it was able to improve how the patient’s brain signals were detected and translated.

It is not the first device implanted in the brain

 Decades of research and lofty ambitions to meld minds with computers





Neuralink also uses innovative robotic surgery, rather than a specialized neurosurgeon, to implant the device.

“That’s way different from what people have done before,” said Sergey Stavisky, an assistant professor in the Department of Neurological Surgery at the University of California, Davis, and co-director of the UC Davis Neuroprosthetics Lab.

Stavisky said automating the procedure with a robot could make it more efficient and effective down the road.

There are many other companies developing brain interfaces.

The process requires electrodes inserted into the brain, a communication link, and a computer algorithm that interprets the micro-electroencephalogram signals the electrodes detect from the brain.
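A minimal sketch of that pipeline, using random stand-in data in place of real electrode recordings. The threshold-crossing feature and linear readout below are common choices in the brain-computer interface literature generally, not Neuralink's disclosed algorithm, and the channel count and weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(raw, threshold=3.0):
    """Per-channel threshold-crossing counts, a common proxy for spike rates."""
    z = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, keepdims=True)
    return (z > threshold).sum(axis=1)

def decode(features, weights):
    """Linear readout mapping channel features to a 2-D cursor velocity."""
    return weights @ features

n_channels, n_samples = 16, 1000
raw = rng.normal(size=(n_channels, n_samples))          # stand-in electrode data
weights = rng.normal(scale=0.1, size=(2, n_channels))   # hypothetical trained weights

features = extract_features(raw)
velocity = decode(features, weights)
```

The retraction problem described in the article would show up here as channels whose threshold crossings drop toward zero, starving the decoder of signal; retuning the algorithm's sensitivity, as Neuralink described, is one way to compensate.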

The technique can also be applied to vision.

Implant by Elon Musk's Neuralink suffers setback after threads retract from patient's brain

No One Truly Knows How AI Systems Work. A New Discovery Could Change That


Today’s artificial intelligence is often described as a “black box.” AI developers don’t write explicit rules for these systems; instead, they feed in vast quantities of data and the systems learn on their own to spot patterns. But the inner workings of the AI models remain opaque, and efforts to peer inside them to check exactly what is happening haven’t progressed very far. Beneath the surface, neural networks—today’s most powerful type of AI—consist of billions of artificial “neurons” represented as decimal-point numbers. Nobody truly understands what they mean, or how they work.

For those concerned about risks from AI, this fact looms large. If you don’t know exactly how a system works, how can you be sure it is safe?

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

On Tuesday, the AI lab Anthropic announced it had made a breakthrough toward solving this problem. Researchers developed a technique for essentially scanning the “brain” of an AI model, allowing them to identify collections of neurons—called “features”—corresponding to different concepts. And for the first time, they successfully used this technique on a frontier large language model: Anthropic’s Claude Sonnet, the lab’s second-most powerful system.

In one example, Anthropic researchers discovered a feature inside Claude representing the concept of “unsafe code.” By stimulating those neurons, they could get Claude to generate code containing a bug that could be exploited to create a security vulnerability. But by suppressing the neurons, the researchers found, Claude would generate harmless code.

The findings could have big implications for the safety of both present and future AI systems. The researchers found millions of features inside Claude, including some representing bias, fraudulent activity, toxic speech, and manipulative behavior. And they discovered that by suppressing each of these collections of neurons, they could alter the model’s behavior.
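Conceptually, "stimulating" or "suppressing" a feature amounts to adjusting the model's hidden activations along a learned direction. The sketch below shows that intervention in miniature; the vector size, the random vectors, and the steering function are illustrative assumptions, not Anthropic's implementation:

```python
import numpy as np

def steer(activations, feature_direction, scale):
    """Clamp a feature's activation: remove its current contribution along
    the (unit-norm) feature direction, then set it to `scale`.
    scale=0 suppresses the feature; larger values stimulate it."""
    current = activations @ feature_direction
    return activations + (scale - current) * feature_direction

rng = np.random.default_rng(2)
d = 64                                   # toy hidden-state width
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)   # unit vector for the learned feature

h = rng.normal(size=d)                   # stand-in hidden state
suppressed = steer(h, direction, scale=0.0)
amplified = steer(h, direction, scale=5.0)
```

After steering, the activation's projection onto the feature direction equals exactly the requested scale, which is the sense in which researchers can dial a concept like "unsafe code" up or down.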

As well as helping to address current risks, the technique could also help with more speculative ones. For years, the primary method available to researchers trying to understand the capabilities and risks of new AI systems has simply been to chat with them. This approach, sometimes known as “red-teaming,” can help catch a model being toxic or dangerous, allowing researchers to build in safeguards before the model is released to the public. But it doesn’t help address one type of potential danger that some AI researchers are worried about: the risk of an AI system becoming smart enough to deceive its creators, hiding its capabilities from them until it can escape their control and potentially wreak havoc.

“If we could really understand these systems—and this would require a lot of progress—we might be able to say when these models actually are safe, or whether they just appear safe,” Chris Olah, the head of Anthropic’s interpretability team who led the research, tells TIME.

“The fact that we can do these interventions on the model suggests to me that we're starting to make progress on what you might call an X-ray, or an MRI [of an AI model],” Anthropic CEO Dario Amodei adds. “Right now, the paradigm is: let's talk to the model, let's see what it does. But what we'd like to be able to do is look inside the model as an object—like scanning the brain instead of interviewing someone.”

The research is still in its early stages, Anthropic said in a summary of the findings. But the lab struck an optimistic tone that the findings could soon benefit its AI safety work. “The ability to manipulate features may provide a promising avenue for directly impacting the safety of AI models,” Anthropic said. By suppressing certain features, it may be possible to prevent so-called “jailbreaks” of AI models, a type of vulnerability where safety guardrails can be disabled, the company added.


Researchers in Anthropic’s “interpretability” team have been trying to peer into the brains of neural networks for years. But until recently, they had mostly been working on far smaller models than the giant language models currently being developed and released by tech companies.

One of the reasons for this slow progress was that individual neurons inside AI models would fire even when the model was discussing completely different concepts. “This means that the same neuron might fire on concepts as disparate as the presence of semicolons in computer programming languages, references to burritos, or discussion of the Golden Gate Bridge, giving us little indication as to which specific concept was responsible for activating a given neuron,” Anthropic said in its summary of the research.

To get around this problem, Olah’s team of Anthropic researchers zoomed out. Instead of studying individual neurons, they began to look for groups of neurons that would all fire in response to a specific concept. This technique worked—and allowed them to graduate from studying smaller “toy” models to larger models like Anthropic’s Claude Sonnet, which has billions of neurons. 
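Anthropic's published interpretability work implements this "groups of neurons" idea with dictionary learning via sparse autoencoders trained on a model's activations. The toy version below, run on random stand-in activations, shows the basic mechanics (an overcomplete ReLU encoder trained for reconstruction plus an L1 sparsity penalty); all sizes and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
d_model, d_features = 32, 128   # overcomplete dictionary of candidate features

W_enc = rng.normal(scale=0.1, size=(d_features, d_model))
W_dec = rng.normal(scale=0.1, size=(d_model, d_features))
b_enc = np.zeros(d_features)

def encode(x):
    """Sparse, non-negative feature activations for hidden states x: (n, d_model)."""
    return np.maximum(x @ W_enc.T + b_enc, 0.0)   # ReLU

def decode(f):
    return f @ W_dec.T

X = rng.normal(size=(256, d_model))   # stand-in model activations
loss0 = float(np.mean((decode(encode(X)) - X) ** 2))

l1, lr = 1e-3, 0.01
for _ in range(200):
    F = encode(X)
    err = decode(F) - X
    # Gradients for reconstruction error + L1 sparsity (ReLU mask via F > 0)
    dF = (err @ W_dec + l1 * np.sign(F)) * (F > 0)
    W_dec -= lr * (err.T @ F) / len(X)
    W_enc -= lr * (dF.T @ X) / len(X)
    b_enc -= lr * dF.mean(axis=0)

recon_loss = float(np.mean((decode(encode(X)) - X) ** 2))
```

Each row of `W_dec` plays the role of a candidate "feature" direction; on real activations, rows that reconstruct well and fire sparsely are the ones researchers then try to interpret.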

Although the researchers said they had identified millions of features inside Claude, they cautioned that this number was nowhere near the true number of features likely present inside the model. Identifying all the features, they said, would be prohibitively expensive using their current techniques, because doing so would require more computing power than it took to train Claude in the first place. (Costing somewhere in the tens or hundreds of millions of dollars.) The researchers also cautioned that although they had found some features they believed to be related to safety, more study would still be needed to determine whether those features could reliably be manipulated to improve a model’s safety.

For Olah, the research is a breakthrough that proves the utility of his esoteric field, interpretability, to the broader world of AI safety research. “Historically, interpretability has been this thing on its own island, and there was this hope that someday it would connect with [AI] safety—but that seemed far off,” Olah says. “I think that’s no longer true.”

https://time.com/6980210/anthropic-interpretability-ai-safety-research/

Friday, May 24, 2024

The Merger of AI voice input with the EHR

Physician burnout is often attributed to the additional time of entering clinical notes manually into an EHR.

Adding an EHR to your clinical practice can take time away from face-to-face interaction with patients. Some physicians even take work home at night to enter data.

Some EHR vendors now incorporate AI voice transcription in desktop applications and even mobile apps.


Vendors such as NexGen have integrated voice AI into the mobile app version of their platform.

NexGen Mobile App Demo


The complete note can then be copied to your EHR seamlessly.

Download NexGen Brochure

The Future is Mobile (pdf)

https://play.goconsensus.com/b52e97d5c

Medical Forecasting - by Eric Topol - Ground Truths

In an outstanding publication, Eric Topol explores medical forecasting in his newsletter Ground Truths.


Diagnostic prediction is one form of artificial intelligence now being investigated for health care.


Here is an overview of some key risk factors for cancer that can be analyzed using AI:

**Genetics and Family History**
- AI can analyze DNA sequencing and genetic data to identify inherited genetic mutations that increase cancer risk.
- Machine learning models can assess family medical histories to determine familial cancer risk patterns.

**Environmental Exposures**
- AI can process and interpret large datasets on environmental pollutants, chemicals, radiation, etc. to identify links to cancer development.
- Computer vision techniques can analyze satellite/aerial imagery to map environmental risk factors like air pollution levels.

**Lifestyle Factors**
- AI can process data from wearable devices, electronic health records, and surveys to model the impact of diet, physical activity, smoking, alcohol use, and other lifestyle behaviors on cancer risk.
- Natural language processing can be used to analyze social media and online data to understand how social determinants of health influence cancer risk.

**Early Detection**
- AI-powered image analysis of medical scans can aid in the early detection of cancerous tumors or pre-cancerous lesions.
- Machine learning models can integrate multiple data sources to provide personalized cancer screening recommendations.

**Treatment Response Prediction**
- AI algorithms can analyze tumor genomics, medical history, and other data to predict how individual patients may respond to different cancer therapies.
- This can help optimize and personalize cancer treatment plans.

What cancers thus far have been studied using AI?
AI has shown promising capabilities in predicting and assessing risk for a wide range of cancer types. Here are some of the key cancer types where AI is making significant advancements:

1. Breast Cancer
   - AI can analyze mammograms and other breast imaging data to detect early signs of tumors and microcalcifications.
   - Machine learning models can integrate genetic, lifestyle, and demographic data to estimate a patient's individualized breast cancer risk.
   - AI-powered digital pathology tools can assist pathologists in analyzing biopsy samples to guide treatment decisions.

2. Lung Cancer
   - Computer vision AI can detect subtle abnormalities in chest CT scans that may indicate early-stage lung cancer.
   - Predictive models can assess an individual's risk of lung cancer based on factors like smoking history, environmental exposures, and genetic markers.
   - AI is being used to improve lung cancer screening programs and enhance early detection efforts.

3. Prostate Cancer
   - AI can analyze MRI scans and pathology slides to identify cancerous lesions in the prostate with high accuracy.
   - Predictive algorithms can integrate PSA levels, family history, and other clinical data to determine a man's risk of developing prostate cancer.
   - AI-assisted tools are being used to guide prostate biopsy procedures and optimize treatment planning.

4. Colorectal Cancer
   - Computer vision AI can detect precancerous polyps and lesions during colonoscopy procedures with improved sensitivity.
   - Predictive models can assess an individual's colorectal cancer risk based on factors like diet, physical activity, family history, and genetic markers.
   - AI is being used to improve colorectal cancer screening adherence and optimize surveillance strategies.

5. Skin Cancer
   - AI-powered dermatology apps can analyze images of moles and lesions to detect potential signs of melanoma and other skin cancers.
   - Machine learning models can integrate patient demographics, medical history, and imaging data to estimate individualized skin cancer risk.
   - AI is being used to improve skin cancer screening, especially in underserved populations with limited access to dermatologists.

It should be noted that many biological markers in the blood can also assist in assessing cancer risk.

The key is that AI systems can rapidly process and find patterns in massive, complex datasets that would be impossible for humans to analyze manually, allowing for more comprehensive, data-driven cancer prevention, early detection, and personalized risk assessment across a wide spectrum of cancer types. As the technology continues to evolve, the impact of AI on cancer care is likely to grow significantly.
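As a toy illustration of how several such risk factors might be combined into a single probability, here is a hypothetical logistic risk score. The factor names, weights, and baseline are invented for illustration and are not clinically validated:

```python
import math

# Hypothetical, illustrative weights -- not clinically validated.
WEIGHTS = {
    "brca_mutation": 1.4,        # genetics
    "family_history": 0.8,
    "smoking_pack_years": 0.04,  # lifestyle (per pack-year)
    "bmi_over_30": 0.3,
    "age_over_50": 0.6,
}
BASELINE = -4.0  # log-odds intercept for a low-risk individual

def cancer_risk(factors):
    """Combine risk factors into a probability via a logistic model."""
    logit = BASELINE + sum(WEIGHTS[k] * v for k, v in factors.items())
    return 1 / (1 + math.exp(-logit))

low = cancer_risk({"brca_mutation": 0, "family_history": 0,
                   "smoking_pack_years": 0, "bmi_over_30": 0, "age_over_50": 0})
high = cancer_risk({"brca_mutation": 1, "family_history": 1,
                    "smoking_pack_years": 30, "bmi_over_30": 1, "age_over_50": 1})
```

Real AI risk models replace the hand-picked weights with parameters learned from large datasets, but the structure (many weak signals combined into one calibrated probability) is the same.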

It must be noted that these applications of AI are still at an early stage, and these early results may prove inaccurate.



Medical Forecasting - by Eric Topol - Ground Truths

Friday, May 17, 2024

AR/VR using Hololens in Neurosurgery


INTRODUCTION

Over the last century, we have moved from plain X-rays, CAT scans, and MRIs to what we think will be the final frontier, which is mixed reality. My name is Osama Chowdhury; I'm a neurosurgeon and one of the co-founders of Medivis. I'm Chris Morley, co-founder of Medivis; I'm a radiologist by training, and on the radiology side we're really interested in CT-guided procedures.

A recent study on ablation showed the average error rate in the placement of the catheter to that position in three-dimensional coordinate space was 2.2 centimeters. That was in spite of using 10 CT scans over the course of the two-and-a-half-hour procedure. With surgical AR on HoloLens, we could potentially scan the patient just once and place the catheter with millimeter accuracy in a fraction of the time.
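The 2.2 cm figure is simply the Euclidean distance between the intended and actual catheter-tip positions in the scanner's 3-D coordinate space. A quick sketch with made-up coordinates (the target and placed points below are hypothetical):

```python
import numpy as np

def placement_error_cm(target, placed):
    """Euclidean distance between intended and actual catheter tip positions."""
    return float(np.linalg.norm(np.asarray(placed) - np.asarray(target)))

# Illustrative coordinates (cm) in the scanner's 3-D coordinate space
target = (10.0, 4.0, 7.0)
placed = (11.2, 5.5, 6.0)   # an error comparable to the reported 2.2 cm average
err = placement_error_cm(target, placed)
```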

Some of my favorite cases to do using the technology are what we call cerebrovascular bypass procedures: one blood vessel, or part of the brain, isn't getting enough blood supply, so you route in another blood vessel and stitch it in using the finest sutures possible. Before, we were using ultrasound technology to find where that blood vessel was, but here we can take a CT scan of a patient and overlay it directly onto them. At least 200 operations have now been performed using this technology. That's what really excites us: when we can do some of these routine procedures in an inherently superior way, we can get our patients in and out of the operating room and safely back to their families. I can't explain to you how fortunate we are as clinicians to be alive when this sort of technology is available; this is the work of science fiction.




More uses of Microsoft Teams and Hololens being used by NHS in U.K.

Introduction to mixed reality development

HOLOLENS 2 EMULATOR

https://youtu.be/C2V27QSv7O0?si=TxSgkyLvymDoA9ai

Wednesday, May 15, 2024

Can ovarian cancer be detected by genetic analysis of cervical cancer screening samples?





The best way to reduce the risk of dying from cancer is early detection and diagnosis. For cancers like colon and breast cancers, screening can often detect disease before perceivable symptoms, which often portend advanced cancer and hence a worse prognosis. However, many cancers do not have reliable screening methods, including ovarian cancer, one of the most lethal gynecologic cancers. Even when ovarian cancer becomes symptomatic, the symptoms are nonspecific, which often causes further delays in diagnosis. Indeed, most ovarian cancer is not diagnosed until stage III (when it has invaded the abdominal cavity) or stage IV (when it has become metastatic and spread to distant organs), at which point the five-year survival rate is less than 30%. In contrast, the five-year survival rate for stage I ovarian cancer is above 90%.

Clinical trials have tested whether modalities typically used to make ovarian cancer diagnoses (e.g., plasma biomarker cancer antigen 125, either alone or combined with transvaginal ultrasounds) could be used for screening, but these tests were not sufficiently sensitive to early-stage disease.1,2 Since earlier detection is crucial for improving ovarian cancer prognosis, a proof-of-principle study recently explored whether high-grade ovarian cancer could be detected years earlier by analyzing the DNA from cervical cancer screening Papanicolaou (Pap) tests.3 (Brace yourselves – we’re about to dive into some technical details of cancer development and screening test performance metrics. For a little more background in these areas, see AMA #56.)

Previous trials that used modalities typically used to make ovarian cancer diagnoses have failed to reduce ovarian cancer mortality because the methods of detection were not sensitive enough to early-stage disease.1,2 By contrast, the genomic analyses employed in this study were able to detect cancerous changes nearly a decade before diagnosis – an enormous advantage in combating a disease for which the five-year survival rate at stage I is over three times higher than the survival rate at stage IV. Since the DNA analysis can be done on samples that are already routinely collected for cervical cancer screening, this may be a feasible way to add ovarian cancer screening to the current standard of care in populations most vulnerable to this type of cancer. While the test, as it currently stands, is far too susceptible to false positives to justify use in the general female population of average ovarian cancer risk, its use as a screening method for women with especially high baseline risk has the potential to improve clinical outcomes associated with this disease. 
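The false-positive concern in the last paragraph follows directly from Bayes' rule: at low disease prevalence, even a fairly specific test produces mostly false positives. A quick calculation with hypothetical sensitivity, specificity, and prevalence values (not figures from the study) makes the point:

```python
def screening_ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical performance figures and prevalences, for illustration only
general = screening_ppv(0.80, 0.95, 0.0005)   # average-risk population  ~0.008
high_risk = screening_ppv(0.80, 0.95, 0.02)   # elevated-risk population ~0.25
```

With the same test, fewer than 1 in 100 positives in the average-risk population would be true cancers, versus roughly 1 in 4 in the high-risk group, which is why the authors see more promise in screening women with especially high baseline risk.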










Can ovarian cancer be detected by genetic analysis of cervical cancer screening samples?

Saturday, May 11, 2024

New mRNA cancer vaccine triggers fierce immune response to fight malignant brain tumor-Glioblastoma


In a first-ever human clinical trial of four adult patients, an mRNA cancer vaccine developed at the University of Florida quickly reprogrammed the immune system to attack glioblastoma, the most aggressive and lethal brain tumor.

Glioblastoma is among the most devastating diagnoses, with median survival around 15 months. The current standard of care involves surgery, radiation and some combination of chemotherapy.

Microscopy of Glioblastoma

 

There was, and is, some controversy about mRNA biochemistry. mRNA has been studied for many years; however, the COVID-19 pandemic brought it to prominence as an innovation for mass-producing vaccines on short notice.

 Reported May 1 in the journal Cell, the discovery represents a potential new way to recruit the immune system to fight notoriously treatment-resistant cancers using an iteration of mRNA technology and lipid nanoparticles, similar to COVID-19 vaccines, but with two key differences: use of a patient's own tumor cells to create a personalized vaccine, and a newly engineered complex delivery mechanism within the vaccine.

"Instead of us injecting single particles, we're injecting clusters of particles that are wrapping around each other like onions, like a bag full of onions," said senior author Elias Sayour, M.D., Ph.D., a UF Health pediatric oncologist who pioneered the new vaccine, which like other immunotherapies attempts to "educate" the immune system that a tumor is foreign.

"And the reason we've done that in the context of cancer is these clusters alert the immune system in a much more profound way than single particles would."

Among the most impressive findings was how quickly the new method, delivered intravenously, spurred a vigorous immune-system response to reject the tumor, said Sayour, principal investigator of the RNA Engineering Laboratory within UF's Preston A. Wells Jr. Center for Brain Tumor Therapy and a UF Health Cancer Center and McKnight Brain Institute investigator who led the multi-institution research team.

"In less than 48 hours, we could see these tumors shifting from what we refer to as 'cold'—immune cold, very few, very silenced immune response—to 'hot,' very active immune response," he said.

"That was very surprising given how quick this happened, and what that told us is we were able to activate the early part of the immune system very rapidly against these cancers, and that's critical to unlock the later effects of the immune response."


New mRNA cancer vaccine triggers fierce immune response to fight malignant brain tumor

Sunday, May 5, 2024

Doctors are getting on board with genAI, survey shows | Healthcare IT News

In a swift reversal since OpenAI's ChatGPT was released in late 2022, surveys of physicians reveal growing acceptance of its use, while patients remain less confident.

In an online survey of 100 practicing physicians who work in a large U.S. hospital or health system and use clinical decision support tools, four in five providers – 81% – agreed that generative artificial intelligence can improve care team interactions with patients.

The doctors surveyed by Wolters Kluwer also indicated high standards for selecting genAI tools – with 89% reporting they need vendors to be transparent about the sources of CDS data and want to be sure it comes from practicing medical experts before they use it for their clinical decisions.

However, they overestimated American health consumers' openness to AI-supported medical advice compared to a previous genAI in healthcare survey of those consumers that the company conducted in November. 

The gap between physician and patient readiness for the role of artificial intelligence in care is noteworthy, Wolters Kluwer said in a statement.

Wolters Kluwer survey: Over two-thirds of U.S. physicians have changed their mind, now viewing GenAI as beneficial in healthcare.

Forty percent ready to use GenAI this year at point of care but 89% of doctors need content source transparency for confident adoption. 

A new Wolters Kluwer Health survey¹ released today finds that 40% of U.S. physicians are ready to use generative AI (GenAI) this year when interacting with patients at the point-of-care. The findings reflect a rapid acceptance of the new technology more broadly, with 68% saying they have changed their views over the last year, and are now more likely to think that GenAI would be beneficial to healthcare.

Physicians, however, are wary of which GenAI tools they would be comfortable using, with 91% of respondents saying they need to know the GenAI sourced materials were created by doctors and medical experts before using it in clinical decisions. Similarly, 89% report they need vendors to be transparent about where information came from, who created it, and how it was sourced.


Transformative tech: GenAI viewed as helping to save time and optimize care teams 

With healthcare facing challenges with staffing shortages and burnout, physicians see many benefits to applying GenAI in the care continuum. When asked how GenAI could support decision making or improve interactions at the point-of-care:

  • Four in five (81%) physicians say GenAI will improve care team interaction with patients.
  • Over half believe GenAI will save them 20% or more time. 
  • Over two-thirds (68%) say it can save time by quickly searching medical literature.
  • Three in five (59%) say it can save time by summarizing data about patients from the electronic health record (EHR).
  • Only 3% do not believe GenAI will improve interactions with patients.

Doctors and patients diverge on GenAI in care 

Comparing results of this survey with a Wolters Kluwer survey of U.S. consumers conducted in late 2023 shows that consumers have different views on the integration of GenAI into the physician/patient interaction. Two-thirds of physicians say that patients would be confident in GenAI results to make clinical decisions while just over half of patients report they would be confident. When physicians were asked if they believe patients would be concerned about the use of GenAI in a diagnosis, only one out of five physicians said yes. Conversely, when asked directly, four out of five Americans reported they would be concerned, suggesting a wide gap in perceptions about GenAI readiness among health consumers.

Doctors set high transparency and content source standards for GenAI guidelines

Physicians’ responses reflect a landscape that is still developing clear guidance or policies on using GenAI. Over a third (37%) say there are currently no guidelines in place at their organizations about using GenAI while almost half (46%) say they don’t know of any guidelines. 

Still, physicians have concerns about the source of content and want transparency. For the majority of physicians (58%), the number one most important factor when selecting a GenAI tool is knowing the content it is trained on was created by medical professionals.

Nine out of 10 (89%) report they would be more likely to use GenAI in clinical decisions if the vendor was transparent about where the information came from, who created it, and how it was sourced. Knowing that the technology is from a well-known, trusted company was also a priority: 76% would be more comfortable using GenAI knowing it came from established vendors in the healthcare sector.

A responsible approach to Clinical GenAI

Wolters Kluwer Health recently expanded the beta of AI Labs, its collaborative solution to explore the experimental use of Clinical GenAI, to 100 U.S. hospitals. AI Labs has access to the complete set of UpToDate® evidence-based clinical content and graded recommendations across more than 25 medical specialties. It is the only large language model (LLM) exclusively powered by UpToDate trusted content. UpToDate is used by more than two million users at more than 44,000 healthcare organizations in over 190 countries. Watch this video to learn more about Wolters Kluwer’s mission for Clinical GenAI.


Hospitals report shortened stays when UpToDate, using the beta of AI Labs, was used for patient care.

Doctors are getting on board with genAI, survey shows | Healthcare IT News

How much time should a person spend exercising?


When it comes to exercise, we should only concern ourselves with duration insofar as it influences what we really care about: results. Exercise is not a goal in itself – rather, it is a means of achieving good cardiorespiratory fitness, strength, and metabolic health, and the ultimate indicators of sufficient exercise are therefore a good VO2 max and adequate muscle mass.

Why do we care about VO2 max and muscle mass?

The rationale for emphasizing VO2 max and muscle mass is simple: these are the metrics with the greatest implications for healthspan and lifespan.

VO2 max, a measure of the body’s maximal ability to utilize oxygen during intense exercise, is indicative of overall cardiorespiratory fitness and, as discussed in detail in a recent premium newsletter, has been shown in multiple large-scale studies to be strongly and inversely associated with all-cause mortality risk across all adult age groups. Indeed, a low VO2 max is reported to be a far better predictor of mortality than diabetes, cancer, cardiovascular disease, smoking, or even age¹. The strength of this association can in large part be attributed to the fact that, in contrast to many other performance metrics, a high VO2 max requires consistency in training over relatively long spans of time and reflects not only aerobic fitness but body composition as well, underscoring the importance of maintaining this metric at a high level throughout life.

In addition to VO2 max, muscle mass – and to a greater extent, muscle strength – is also inversely correlated with mortality. As explained in AMA #27, low muscle mass (meeting clinical definitions of sarcopenia) has been reported to be associated with a 60% increase in risk of mortality relative to the absence of sarcopenia². Inadequate muscle leads to metabolic dysfunction and frailty, both of which can limit both quality and duration of life, but unfortunately, gradual losses of muscle mass and muscle strength are features of the aging process that cannot fully be avoided. Therefore, the most effective strategy for preventing sarcopenia in our latter decades of life is to build and maintain as much muscle as possible leading up to (and including) those decades – in other words, to ensure that we have enough muscle to buffer against the inevitable losses over time.

Focus on outputs over inputs

Many research studies claim to identify an “optimal” amount of time to devote to exercise each day or week in order to maximize health and lifespan benefits, but in the short video below, I explain why these results can be deceiving – and why focusing on outcomes of exercise, i.e., VO2 max and muscle mass, is a better strategy for setting goals around exercise and determining what is or isn’t “enough.”


If the goal for exercise is to improve health and extend lifespan, then the metrics that matter are those most closely related to health and lifespan – i.e., VO2 max and muscle strength. Exercise duration is another step removed from that aim and is only one of the various “inputs” – along with exercise intensity and consistency over time, for instance – that impact the more relevant “outputs” of cardiorespiratory fitness and muscle mass. While setting goals for exercise duration may be valuable for some as a means of keeping oneself accountable on a day-to-day level, it’s critical to keep in mind that such short-term “goals” are merely stepping stones in a longer path; they are not ends in themselves. And to keep our eyes on that ultimate end of a longer, healthier life, we must focus instead on metrics that are indicative of how exercise is affecting our overall fitness and strength.