FDA Convenes Experts to Address Knowledge Gaps in Noninvasive Testing for NASH

“Silent killer” is an apt description for a potentially fatal disease that progresses slowly, without any obvious symptoms in the early stages. When you couple this insidious type of pathology with a rapidly increasing prevalence, the disease is often described in terms akin to an epidemic. Nonalcoholic steatohepatitis (NASH)—a disease characterized by fat accumulation in the liver, inflammation, and fibrosis—is one such assassin, with an estimated US prevalence of 5% (~17 million Americans).1 Alarmingly, most people don’t know they have NASH: an estimated 80% of individuals with NASH remain undiagnosed.2 Liver biopsy is the historical gold standard for diagnosis; however, the procedure is invasive and is rarely performed in clinical practice. So, NASH has a testing problem.

There is therefore considerable interest in developing noninvasive tests for detecting fibrosis, including precise staging across the increasing severities of fibrosis and cirrhosis. These candidate noninvasive measures encompass a range of blood tests, which measure circulating biomarkers, and imaging techniques.

The FDA’s timely workshop—“Use of Biomarkers for Diagnosing and Assessing Treatment Response in Noncirrhotic NASH Trials”—on September 18 and 19, 2023, was intended to engage the top academic minds in the field to discuss state-of-the-art biomarkers and noninvasive tests in the context of NASH clinical trials, as well as to identify ways to improve the accuracy and utility of liver biopsy. The focus was on noncirrhotic NASH patients with advanced liver fibrosis, with an additional discussion theme related to identifying “progression to cirrhosis” using biomarkers.

Asymptomatic Early Course Followed by Progressive Deterioration 

NASH is characterized by fat accumulation in the liver, leading to inflammation and liver cell damage. It progresses from nonalcoholic fatty liver disease (NAFLD) and worsens on a spectrum of deterioration starting with early-stage NASH, followed by NASH with liver fibrosis, and finally NASH with cirrhosis. Early-stage cirrhosis (compensated cirrhosis) can often be managed more effectively than late-stage cirrhosis (decompensated cirrhosis)—wherein complications are more severe and frequent, and can include liver failure, portal hypertension, variceal bleeding, ascites, hepatic encephalopathy, and an increased risk of hepatocellular carcinoma.

Need for NASH Biomarkers to Screen, Diagnose, Assess Prognosis, and Monitor

There is an unmet need for validated noninvasive biomarkers of NASH that encompass multiple utilities along the relatively long disease course: screening, diagnosis, prognosis, and monitoring.

Liver biopsy, the historical gold standard for diagnosing NASH, is invasive, carries procedural risks, and is prone to sampling variability. Moreover, biopsy fails to capture dynamic disease progression over time, an aspect crucial for ongoing management and understanding of the multifaceted nature of NASH.

The asymptomatic pathology of NASH in the early stages also compromises patient recruitment into clinical trials, leading to challenges in identifying and enrolling a population that is representative of the disease—potential participants may be unaware of their condition, and thus less likely to seek or be referred for trial participation.

Therefore, testing for NASH requires advances that facilitate (1) Screening (identify individuals at risk of developing a disease before symptoms appear), (2) Diagnosis (confirm or rule out the presence of the disease), (3) Prognosis (predict the disease’s progression), and (4) Monitoring (monitor disease progression, effectiveness of treatment, and safety).

Highlights From the FDA NASH Biomarker Workshop

The workshop aided the FDA in pinpointing knowledge gaps in utilizing noninvasive tests as diagnostic biomarkers and reasonably likely surrogates. It also offered a structure for gathering additional data to address these gaps. The FDA aimed to determine whether existing noninvasive tests meet the Agency’s standards for providing primary evidence of clinical efficacy.

Surrogate Endpoint Development and the Utility of Biomarkers

Measuring clinical outcomes as “endpoints” in a clinical trial is the most reliable way of evaluating the efficacy and safety of a new drug. Surrogate endpoints are measures of disease activity that are known to predict clinical outcomes. In NASH, clinical outcomes of interest include progression to liver cirrhosis, liver failure, or death. Surrogate endpoints are particularly useful for diseases that require long-term studies to observe clinical outcomes, as they can provide earlier indications of treatment efficacy. Biotech companies are increasingly using surrogate endpoints—biomarkers, imaging, physical signs, or other measures—of clinical benefit as primary efficacy endpoints for the approval of new drugs. The FDA has compiled a list of surrogate endpoints that have been used as the basis for drug approval or licensure.

The workshop concluded that surrogate endpoints must be carefully selected and validated to ensure they are on the causal pathway of NASH and are sensitive enough to detect changes in the disease state. The FDA is particularly interested in understanding the context of use for these surrogate endpoints in NASH. This involves a thorough evaluation of the endpoint’s predictive validity, reliability, and clinical relevance.

The agency’s ongoing dialogue with stakeholders in the NASH field aims to refine the use of surrogate endpoints, aligning them with regulatory standards and ensuring they meet the criteria for drug approval.

Fibrosis in NASH Is Complex and Dynamic

The progression of fibrosis in NASH is nonlinear, presenting difficulties in its evaluation. Certain stages may persist for varying durations, and there is also an observable fluctuation in NASH intensity over time—the workshop emphasized the need for an expanded scoring system to capture these nuances. A number of laboratory tests and imaging modalities have been developed to assess various features of liver fibrosis, and they are being used in clinical trials and, to a limited extent, in clinical practice.

Imaging Biomarkers Are Showing Promise, Circulating Biomarkers Are a Work-In-Progress

Liver stiffness measurement by vibration-controlled transient elastography (VCTE) and magnetic resonance elastography (MRE) is showing promise for assessing liver fibrosis. Regulatory agencies are in the process of qualifying these diagnostic enrichment tools, and prospective, longitudinal studies are needed to validate these methods further.

In terms of circulating biomarkers, two tests garnered some consensus at the workshop as valuable in the early detection and monitoring of NASH: (1) FIB-4 (Fibrosis-4), a noninvasive score based on age, liver enzymes, and platelet count that is used to estimate the level of scarring in the liver; and (2) ELF (enhanced liver fibrosis), a blood test that measures certain biomarkers to assess liver fibrosis.
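
To make the FIB-4 score concrete, here is a minimal sketch of the published formula, age (years) × AST (U/L) divided by platelet count (10⁹/L) × √ALT (U/L). The function name and the example values are our own illustrations, not patient data, and the sketch is not a clinical tool.

```python
import math

def fib4_score(age_years, ast_u_per_l, alt_u_per_l, platelets_10e9_per_l):
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT))."""
    return (age_years * ast_u_per_l) / (
        platelets_10e9_per_l * math.sqrt(alt_u_per_l)
    )

# Illustrative values only: a 61-year-old with AST 60 U/L, ALT 40 U/L,
# and a platelet count of 150 x 10^9/L. Higher scores suggest more
# advanced fibrosis; published risk cutoffs vary by context.
score = fib4_score(61, 60, 40, 150)
print(round(score, 2))
```

Because the inputs are routine laboratory values, a score like this can be computed opportunistically at scale, which is part of its appeal for screening.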

Drug Approvals Will Enrich Large-Scale NASH Biomarker Projects

Looking toward the future, significant advancements in the development of noninvasive biomarkers for NASH are emerging, driven by large-scale biomarker initiatives and the expected approval of new drugs. At the forefront of these developments are the “Non-Invasive Biomarkers of Metabolic Liver Disease” (NIMBLE) consortium in the US and the Europe-based “Liver Investigation: Testing Marker Utility in Steatohepatitis” (LITMUS) project. Both NIMBLE and LITMUS are dedicated to identifying and validating biomarkers for NASH.

Artificial Intelligence in Liver Biopsy Interpretation

Although liver biopsy has its limitations in clinical practice, it persists as a gold standard in clinical trials. Artificial intelligence (AI) and machine learning (ML) tools are emerging to increase the accuracy and reliability of liver biopsy interpretation. Although these tools offer the potential to analyze tissue samples with high precision and consistency, there are still hurdles to overcome.

One such hurdle is establishing what is called a “ground truth” for these AI algorithms: the absolute, verified reality against which the performance of these AI tools can be measured. Think of it like an answer key at the back of a textbook; just as students’ answers are compared against this key to check for correctness, AI predictions are compared against the ground truth to assess their accuracy.

In the context of liver tissue samples, this ground truth would typically come from expert pathologists who provide definitive diagnoses and analyses. By establishing a reliable ground truth, researchers aim to train AI and ML tools more effectively, ensuring they can replicate and, ideally, enhance the diagnostic accuracy of human experts.
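
The answer-key analogy can be sketched in a few lines of code. The fibrosis stages and predictions below are made up for illustration; the point is simply how an AI tool’s output is scored against pathologist-consensus labels, here with both exact agreement and a looser within-one-stage agreement that respects the ordinal nature of staging.

```python
# Hypothetical data: pathologist-consensus fibrosis stages (F0-F4)
# and AI-predicted stages for ten biopsy samples.
ground_truth = [0, 1, 1, 2, 2, 3, 3, 3, 4, 4]
ai_predicted = [0, 1, 2, 2, 2, 3, 2, 3, 4, 3]

def exact_agreement(truth, pred):
    """Fraction of samples where the AI matches the ground truth exactly."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def within_one_stage(truth, pred):
    """Fraction of samples within one fibrosis stage of the ground truth."""
    return sum(abs(t - p) <= 1 for t, p in zip(truth, pred)) / len(truth)

print(exact_agreement(ground_truth, ai_predicted))
print(within_one_stage(ground_truth, ai_predicted))
```

In practice, agreement between the tool and multiple expert readers is what matters, since even pathologists disagree on staging; the sketch only shows the basic mechanics of the comparison.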

The FDA has demonstrated that it is increasingly open to surrogate endpoint development and noninvasive testing in the field of NASH. The expected approval of drugs such as resmetirom by Madrigal Pharmaceuticals—poised to be the first targeted therapy approved specifically for NASH with fibrosis—will address a significant unmet need in the clinic. Resmetirom holds the potential to significantly alter the disease course of NASH, particularly with a toolkit of noninvasive tests that accurately identify patients for treatment. In addition, data from long-term outcomes trials for resmetirom—and ultimately postapproval data from the clinic—will provide a rich source of information on noninvasive biomarkers, enriching projects such as NIMBLE and LITMUS.


  1. Younossi ZM, Golabi P, Paik JM, Henry A, Van Dongen C, Henry L. The global epidemiology of nonalcoholic fatty liver disease (NAFLD) and nonalcoholic steatohepatitis (NASH): a systematic review. Hepatology 2023;77(4):1335-1347. DOI: 10.1097/HEP.0000000000000004.
  2. Alexander M, Loomis AK, Fairburn-Beech J, et al. Real-world data reveal a diagnostic gap in non-alcoholic fatty liver disease. BMC Med 2018;16(1):130. DOI: 10.1186/s12916-018-1103-x.

Muzamil Saleem, PhD
Associate Scientific Director

Muz is passionate about helping biotech and pharma companies communicate the journey of their groundbreaking medicine from lab bench to patient. He combines diverse experience from a neuroscience research career, scientific consulting, and a tenure in healthcare equity research on Wall Street into his current scientific services role at ProEd Regulatory. A philosophy of tailoring effective scientific communication to all types of audiences is central to Muz’s work.

Connect with Muz on LinkedIn.


PNAS Spearheads Effort to Streamline Authorship Transparency

Authorship is a hot topic in the scientific and medical publishing world. Who qualifies as an author? Who is the senior author? What are the responsibilities of the corresponding author? Opinions vary across disciplines and cultures. Whereas medical publications generally follow the recommendations of the International Committee of Medical Journal Editors (ICMJE; http://www.icmje.org/icmje-recommendations.pdf),1 academic publications may follow other guidance, or none at all. Is there a way to impose universal authorship criteria and quantify the work of authors so that their actual contributions can be tracked, giving them more than just their name on an article in the modern publish-or-perish environment?

A recent article by McNutt et al2 in Proceedings of the National Academy of Sciences of the United States of America (PNAS) seeks to create a framework for doing just that. As part of the global push toward greater transparency, with the goal of increasing integrity and trust in scientific publications, this article proposes that journals develop standardized authorship requirements and reporting, documented through ORCID identifiers (https://orcid.org) and the CRediT system (http://docs.casrai.org/CRediT).

Continue reading

What the New ICMJE Requirement for Data Sharing Statements Really Means for Data Sharing

As of July 1, 2018, manuscripts submitted to International Committee of Medical Journal Editors (ICMJE)-member journals must be accompanied by a data sharing statement. What is the new requirement, how did it evolve, and what does it mean for data sharing?

In January 2016, the ICMJE proposed that authors of all clinical trial manuscripts published in member journals share de-identified individual-patient data (IPD) underlying their results within 6 months of publication.1 The proposal included data in tables, figures, appendices, and supplemental materials. The ICMJE invited comments on its proposal and a firestorm ensued. Although many individuals and groups applauded the ICMJE proposal, others raised legitimate concerns. Some were concerned about inappropriate analyses of data without statistical rigor, and authors were concerned about competition and losing control and/or credit for their work. Others voiced concerns about the practical aspects of how to share the huge amounts of data generated by some studies, particularly large, phase 3, randomized trials. Still others raised persistent concerns about patients’ right to privacy, particularly in the rare disease setting, where, despite de-identification efforts, patients still might be identifiable. Continue reading

The Target Product Profile—Your Blueprint for Drug Development

When utilized to its full potential, the Target Product Profile (TPP) is a dynamic, living document that ensures all stakeholders—clinical, regulatory, quality and manufacturing, commercial, market access, and medical affairs—are working from the same blueprint. Unfortunately, the TPP often has a bad rap within industry because many people think it is too rigid for today’s drug development environment. But often that reflects a failure to truly collaborate or a tendency to let the TPP get stale. To be effective, the TPP must be continually updated based on changes in the data and the competitive landscape. When companies take a balanced approach to developing the TPP and have a dynamic process that allows them to monitor and adapt it, as needed, they build agility into their drug development program that allows them to make critical go/no-go decisions or course corrections when necessary.

By using the TPP to ensure everyone is on the same page, drug developers can avoid costly delays when, for example, manufacturing isn’t ready to scale up to commercial production when the phase 3 data comes in ahead of schedule. Keeping a close eye on the evolving therapeutic landscape helps the development team anticipate what data will be needed to support labeling claims that may serve as a key differentiator from the competition and provide added value in the marketplace. So let’s look at how a dynamic TPP—one that is proactively updated—can help achieve the critical success factors introduced in the last installment. Continue reading

Common Protocol Template—Streamlining Protocol Implementation

Study protocols are required for every clinical trial. Approximately 20,000 are submitted and posted to www.clinicaltrials.gov every year1—each one different. The format and core content can vary from sponsor to sponsor, costing the US Food and Drug Administration (FDA) time and resources to interpret, review, and ultimately, approve each uniquely complex protocol. This process, as it stands, slows down progress for new drug development. Clearly, there is a need to accelerate the pace at which protocols are approved so that new clinical studies can be initiated. In a world where technology continues to offer a platform for efficiency and accuracy, the development of the Common Protocol Template (CPT) is a welcome addition to the medical field. A CPT can lead to faster review times, simplified trial startup, and prompt execution of clinical trials. Although the use of a CPT is not required for all new clinical trials, it is only a matter of time before its use becomes commonplace in drug development. Continue reading

Mapping a Successful Path to Label Optimization

Bringing a drug to market is a long and expensive process. An analysis by the Tufts Center for the Study of Drug Development estimated the total cost of development from discovery to commercialization at $2.6 billion over the course of about 10 years (based primarily on big pharma companies). This represents more than a 10‑fold increase since the 1970s, when a drug could be developed from bench to bedside for less than $200 million (Figure 1).1 Others estimate the cost to commercialize a drug to be much lower (< $1 billion when they consider small biotech companies),2 yet it is generally accepted that the cost of drug development is on the rise. A major driver of those rising costs is the money spent on drug candidates that never make it to market because of safety concerns or lack of efficacy. The bottom line is that there is no room for costly mistakes, miscalculations, or inefficiencies in the drug development process.
Continue reading

How To: 5 Critical Steps to a Successful Scientific Symposium

Scientific symposia at medical conferences are a great way to educate physicians on the current treatment landscape and on how new agents can improve patient care. But your symposium is often competing with many others for attendees’ limited time and attention. If you want your content to be seen, you need to find creative ways to draw an audience and offer a meaningful educational experience. Since 1991, ProEd Communications has helped medical affairs teams create novel and engaging scientific symposia. Success requires creativity, diligent preparation, and outstanding organizational skills. Below are my top 5 steps to creating a successful symposium.

1. Develop a theme that will capture attention

A symposium can only be successful if it reaches its target audience, and that requires capturing the interest of congress attendees. So the number one success factor is to come up with a creative theme that will grab the attention of potential attendees. Continue reading

GPP3: Is It a Better Guidance?

The International Society for Medical Publication Professionals (ISMPP) recently released its latest guidance—GPP3, or Good Publication Practice 3. This is the first update of the ISMPP guidance since GPP2 was released in 2009. A steering committee first met to draft the guidance, and then ProEd colleagues Laura McCormick, PhD; Heather Hlousek; and Jim Cozzarin, ELS, had the privilege, along with 91 reviewers (from agencies and sponsors), of providing critical feedback before the new guidance was published in Annals of Internal Medicine.1

So, what are the important changes from GPP2? In addition to being more user-friendly than its predecessor—with an overall simplification of language and format, a new Guiding Principles section, and quick reference tables that address guidance on authorship criteria and common authorship issues—GPP3 also reflects some important updates and new elements1:

  • Updated International Committee of Medical Journal Editors (ICMJE) 2013 authorship criteria
  • Common issues regarding authorship
  • Improved clarity on author payment and reimbursement
  • Additional clarity on what constitutes ghost or guest authorship
  • Expanded information on the role and benefit of professional medical writers
  • Guidance for appropriate data sharing

Continue reading