CGC 2022 Key Takeaways

Last week, we were in St. Louis to attend the Cancer Genomics Consortium Annual Meeting. It was great to be back in person, listening to all the presentations and discussions about work and advancements being made in the field of cancer genomics.

Here are the key takeaways we took from the meeting:

Interpretation and reporting are key bottlenecks of NGS testing
A common issue raised in several presentations was the challenge of next-generation sequencing (NGS) data reporting, as well as the interpretation of reports. Kilannin Krysiak from Clinical Interpretation of Variants in Cancer (CIViC) gave a presentation on the need to update interpretation resources because of the complex nature of variant relationships. Not only is the data complex, but clinical guidelines are constantly being updated. Knowledge bases therefore need regular updating to ensure proper clinical reporting.

In addition to data reporting, the interpretation of these reports poses its own challenges. This can be attributed to the individual expertise of the person interpreting the data, but also to the fact that not everyone is using the same data source. Valerie Barbie from the Swiss Institute of Bioinformatics provided an overview of a government-funded project in Switzerland to build a clinical infrastructure that lets researchers leverage clinical data. The goal is for hospitals to be onboarded and gain access to this central database. They have also developed the Swiss Variant Interpretation Platform (SVIP) for Oncology in an effort to help clinicians. The platform does not just push information out to clinicians; it takes input from a panel of experts during a review cycle. In her presentation, Valerie Barbie explained that they recognized the value of the clinicians’ experience, but that this expertise and knowledge was not being documented or captured in any way. The platform enables clinicians to challenge and complement the data for better interpretation.

Is whole genome sequencing (WGS) the future?
It’s difficult to say whether WGS will be the future testing modality for cancer care, but several attendees of the meeting believe so. Not only do they believe this to be the case, but they are already performing WGS for solid tumor profiling. When asked, “why not whole exome sequencing first?”, their response was that a whole exome sequencing (WES) test would give only incrementally more information than a 500-gene panel, and they believe a sub-1,000-gene panel would capture the majority of short-range, coding-region variations that may impact cancer. WGS, on the other hand, provides an important view into structural and large-scale variations that WES cannot, which could better inform clinicians about a patient’s cancer. Marcin Imieliński from the New York Genome Center demonstrated how his team’s novel bioinformatics approaches were able to identify 90% of structural variants using short-read sequencing technology, so the bag of tools for making sense of WGS data is clearly growing.

Advancements in the last five years are making in-house testing more desirable
The topic of in-house vs. send-out testing was covered in a presentation by Ravindra Kolhe from Augusta University. NGS testing has advanced significantly in the last five years, and as more cancer centers and oncologists get comfortable with the data, there is a greater desire to establish precision medicine or precision oncology in-house. The cost of sequencing has also gone down and clinical utility continues to grow, strengthening the case for bringing testing in-house. One point Ravindra Kolhe made in his presentation is that NGS testing is no longer a cost center: many institutions are realizing that there is revenue to be made by bringing testing in-house.

Thuy Phung from the University of South Alabama outlined why they made the decision to work with Imagia Canexia Health to bring cancer testing in-house. In addition to being more cost-effective, they avoid the long wait times associated with sending tests out. Internal testing and processing capabilities at a local health center also simplify the process and benefit underserved populations by bringing testing to a community that may not otherwise have access to it, bridging the health equity gap.

To learn more about our solution and how we can help your organization bring liquid biopsy testing in-house, contact us at

Health Canada Authorizes Imagia Canexia Health Insights Platform to be Manufactured and Distributed Across Canada

Imagia Canexia Health (ICH), a genomics-based cancer treatment testing company that accelerates access to precision care by combining AI expertise with advanced molecular biopsy solutions, today announced that Health Canada now permits the Imagia Canexia Health Insights Platform (ICHIP)—held under ICH’s Imagia Healthcare subsidiary—to be manufactured and deployed across Canada. Imagia Canexia Health offers a unique clinical solution that lets oncologists quickly generate reports, including therapeutic and clinical trial recommendations. ICHIP furthers this ability by providing intricate molecular and computational genome analysis, from targeted next-generation sequencing (NGS) data, for individual cancer patients. This new approval advances Imagia Canexia Health’s mission to combine groundbreaking genomics, oncology, artificial intelligence, and informatics so health systems can provide cost-effective testing in-house.

Clinicians receive next-generation sequencing (NGS) data sourced from human tissue or blood samples and generated on Illumina’s NextSeq and MiSeq instruments. ICHIP then uses AI to detect and analyze genomic variants, match interpretations, identify potential clinical trials, and generate a cancer-treatment results report to augment oncologists’ treatment decisions. This information is used in conjunction with other clinical and diagnostic findings to make the most informed care-management decisions.

“The ability to now implement ICHIP locally at our Cancer Center means clinicians will have a report informed by the clinical context, creating even more robust real-time-data insights to support their expertise,”

said Dr. Bryan Lo, Medical Director of the Molecular Oncology Diagnostics Lab at The Ottawa Hospital and Eastern Ontario Regional Laboratory Association, who is currently using ICH’s software to report out liquid biopsy cases.

“Cancer is a constant fight against time, and novel technology like ICHIP, which helps make faster decisions, is invaluable to our patients’ lives.”

“ICH is proud of our Health Canada Medical Device Establishment Licence, as it is important for commercializing ICHIP, which can now be manufactured and distributed across the country,”

said Imagia Canexia Health CEO Geralyn Ochab.

“This milestone reinforces ICH’s commitment to having our products reach patients and improve Canadians’ lives.”

About Imagia Canexia Health

Imagia Canexia Health (ICH) is a genomics-based cancer treatment testing company that accelerates access to precision care by combining AI expertise with advanced molecular biopsy solutions. Leveraging AI-based informatics for treatment selection and monitoring, oncologists now have leading clinical decision support right at their fingertips. With a network of over 20 hospitals and reference labs worldwide, ICH ensures that doctors have the right insights to deliver cost-effective cancer testing to patients no matter where they seek treatment. Join ICH in closing the health-equity gap in cancer:

Peter Weltman
(415) 340-2040

Imagia Canexia Health names Molecular Biologist Vincent Funari, PhD, as Chief Science Officer

Funari’s appointment gives Imagia Canexia Health a distinguished biology and data science leader to further the company’s precision cancer care mission.

Imagia Canexia Health, a genomics-based cancer treatment testing company that accelerates access to precision care by combining AI expertise with advanced molecular biopsy solutions, today announced Vincent Funari, PhD, as the company’s new Chief Science Officer (CSO). Vincent will apply his two decades of expertise at the intersection of molecular biology and advanced data science to deliver precision cancer treatments for patients no matter where they live.

Throughout his career, he has co-authored more than 45 peer-reviewed genomics papers in publications including Nature Immunology, JAMA, Science, Nature Genetics, AJHG, and Genomics. Vincent joins ICH to realize its vision of combining advanced genomics, oncology, artificial intelligence, and informatics so health systems can provide cost-effective cancer testing in-house.

“I’ve spent my entire career leveraging technology to create the biggest changes in bioinformatics, and it’s with great pleasure that I bring this experience to Imagia Canexia Health as the company’s Chief Science Officer,” said Vincent Funari, PhD. “Cancer remains one of the biggest battlegrounds of our time, and leading ICH’s scientific efforts to provide treatment testing for more people is incredibly motivating.”

“Vincent brings an unprecedented level of institutional knowledge to Imagia Canexia Health, especially his influential work at the intersection of science and technology,” said Imagia Canexia Health CEO Geralyn Ochab. “As we continue to close the health equity gap through data-driven cancer care, it’s with great confidence that we have Vincent leading our scientific strategy.”


The Evolution of Digital Health



Health is becoming digital health, encompassing everything from electronic patient records to telemedicine and mobile health — spurred on by the pandemic. But the next evolution will involve artificial intelligence. While the potential of AI is enormous, there are still a number of challenges to delivering impactful solutions for clinical adoption.

While digital health isn’t new, there’s still a big gap in adoption, which has been slow and disjointed. We can already do a lot today, from digital diagnostics and remote patient monitoring to software-based therapeutic interventions. So where does AI fit in?

Artificial intelligence refers to the ability of a computer system to make a prediction or decision on a specific task that would historically require human cognition. Most capabilities available today fall under Artificial Narrow Intelligence (ANI): systems that can assist with or take over specific, focused tasks but cannot expand their own functionality.

Machine learning (ML), in turn, is a category within AI that allows a computer system to act without being explicitly programmed: it acquires knowledge through data, observations and interactions, and uses that knowledge to generalize to new settings.
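To make the distinction concrete, here is a purely illustrative toy sketch of “learning from data”: the program is never given the rule y = 2x, but estimates it from noisy examples and then generalizes to an input it has never seen. (Clinical ML systems, of course, rely on far richer models and data.)

```python
# Toy illustration of machine learning: the program is never told the
# rule y = 2x; it estimates a slope from noisy examples and then
# generalizes to an input it has never seen.

def fit_slope(examples, lr=0.01, steps=2000):
    """Fit y ~ w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad
    return w

training_data = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.0)]  # noisy y ≈ 2x
w = fit_slope(training_data)
print(round(w, 2))       # learned slope, close to 2.0
print(round(w * 10, 1))  # prediction for the unseen input x = 10
```

The point is only that the “rule” emerges from the examples rather than from explicit programming, which is exactly what distinguishes ML from conventional software.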

How AI fits into digital health

As part of the AI-driven collaborative discovery process, health-care organizations need to first access data from silos across departments. This data then requires AI-assisted contextualization, exploration and annotation so it can be used for data-driven study design and AI model development. It’s critical to standardize this discovery process, making it repeatable and reproducible. Throughout the process, health-care organizations should consider privacy and potential bias.

Each of these steps, however, has its challenges. In preparing medical imaging data for machine learning, for example, there are challenges with availability of annotated data and potential biases that could affect generalizability of AI algorithms, according to an article in Radiology. New approaches such as federated learning and interactive reporting may ultimately improve data availability for AI applications in imaging.

In the U.S., there’s been a big push for the clinical adoption of electronic health records (EHRs), which starts with digitizing health records to provide insights at a patient level and, eventually, at a population level. Recommendations can then be pushed back to EHRs for clinical decision support.

In Canada, we’re further behind; EHRs aren’t widely used in all aspects of care. One of the biggest barriers to the more widespread use of EHRs is that physicians spend more than half their time on data entry and administrative tasks rather than face-to-face visits with patients, which results in declining quality of care. But these digital tools are becoming more user friendly, particularly as the pandemic accelerates the transition to digital health.

With mobile health, we’re also getting self-serve tools into the hands of patients — so we’re moving from encounter-based medicine to patient-centric care. For example, a monitoring tool on a patient’s wearable device could monitor blood pressure 24/7 in between appointments.

But these types of digital tools produce a lot of data. And there are related challenges. What’s the context of that data? What’s the quality of that data? Are there inherent biases? Digitalization of big data creates new challenges when it comes to interpreting data and making predictions or decisions.


The challenges ahead

Humans can only consider five to 15 metrics when making a decision. So with three months’ worth of data and millions of data points, it’s beyond the capacity of a single individual to make an informed recommendation. AI is trained on specific data and ‘learns’ from new data, providing a level of automation that’s narrow in scope but extremely high speed.

That’s the promise of AI: to offload the manual data crunching and provide high-speed recommendations across multiple variables, ultimately enabling more patient-centric care. But we’re not there yet. Health-care institutes have an abundance of new data, but they’re unsure of its value. And uneven access to these tools could reduce, rather than improve, health equity.

While the quality and quantity of AI research in health care is rapidly growing, the number and scope of usable products are still limited. When we consider how much of that research is being translated into physician use or patient care, we’ve seen a very limited number of FDA-approved algorithms. Of those, the majority have a very narrow spectrum of utility. And they’ve already been flagged for risks because there’s a known lack of complete data, meaning they’re not diverse enough for the real world.



While we’re seeing interesting applications of AI across industries, in health care it’s not only lagging but there are fundamental issues that still need to be addressed. According to an article in Digital Medicine about the “inconvenient truth” of AI in health care, to realize the potential of AI across health systems two fundamental issues need to be addressed: the issues of data ownership and trust, and the investment in data infrastructure to work at scale.


Data ownership, trust and scale

We need to strengthen data quality, governance, security and interoperability for AI systems to work. But the infrastructure to realize this at scale doesn’t currently exist. Health data sits in silos; it’s not interoperable and it’s of varying quality. Because of this variability, it’s difficult for physicians to ‘mine’ that data and make equitable, patient-centred decisions.

A deep learning health-care system first requires a digital knowledge base (including patient demographics and clinical data, as well as past clinical outcomes), followed by AI analyses for diagnosis and treatment selection, as well as clinical decision support where the recommendations are discussed between patient and clinician. This data is then added to the knowledge base to continue the process.
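That closed loop can be sketched in a few lines of code. Everything here is a hypothetical illustration — the data structures, function names and the deliberately naive “model” are assumptions made for the example, not any real clinical system or API:

```python
# Hypothetical sketch of the closed loop described above: a knowledge base
# feeds an analysis step, the suggestion is reviewed with the clinician,
# and the confirmed outcome returns to the knowledge base.

from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    records: list = field(default_factory=list)  # (case, outcome) pairs

    def add(self, case, outcome):
        self.records.append((case, outcome))

def suggest_treatment(kb, case):
    """Naive stand-in for AI analysis: echo the outcome of the most
    recent past case with the same diagnosis."""
    matches = [out for c, out in kb.records if c["diagnosis"] == case["diagnosis"]]
    return matches[-1] if matches else "refer to specialist"

kb = KnowledgeBase()
kb.add({"diagnosis": "NSCLC", "age": 61}, "EGFR inhibitor")

new_case = {"diagnosis": "NSCLC", "age": 58}
suggestion = suggest_treatment(kb, new_case)  # AI analysis step
decision = suggestion                         # clinician reviews (here, confirms)
kb.add(new_case, decision)                    # outcome feeds the knowledge base
print(suggestion)
```

A real deep learning system replaces the lookup with a trained model, but the structure — knowledge base in, recommendation out, outcome fed back — is the same.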

But there are several issues with this process. On the data side, scientific data isn’t FAIR (findable, accessible, interoperable and reusable), which means AI models have an inherent bias toward the data set from the institute where the parameters were applied — without a mechanism to ‘train’ the AI somewhere else or let different systems learn from each other. As a result, the model can’t overcome its inherent biases.


From a business perspective, it’s also hard to sell institutional transformation. Most health-care institutes are relegated to using a software-as-a-service solution with a pre-trained model, which applies to a very limited data set. These algorithms have a utility in a particular setting — but that’s where the buck stops. And that means there’s no resulting structural or long-term change within health-care institutes.


Adopting a deep learning approach

For organizations to truly adopt a deep learning approach, it needs to be deeply embedded in their infrastructure so it can answer multiple narrow questions in a scalable way. Data needs to be accessible, searchable and usable, whether on-premises or in the cloud, and it requires quality control, structuring and labelling.

But each of these steps is slow and labour-intensive. To train a single AI model, it’s necessary to first ingest relevant data, process it to make it accessible and searchable, and then allow users to annotate and contextualize it. While there are AI-assisted mechanisms that can speed up this process, those mechanisms need to be part of the infrastructure.

On top of that, data in health-care institutes is typically low quality; it’s not anonymized and has no context, so it’s not always usable. And when an AI model is designed at a single site, de-risking institutional or geographical bias requires a way to repeat the process at other institutes so the model can train on, and learn from, that diversity.

It’s a big challenge, to say the least. While each of these tasks can be addressed by technology, if they’re not standardized and interoperable — across technologies and institutes — then they’re not scalable. And that’s where we are today, which means many of these FDA-approved algorithms are failing in the wild.


Overcoming bias in AI models

So how do you overcome these limitations and ensure you’re not introducing bias into your models? Our approach is to provide a collaborative framework, bringing the tools to the experts and allowing the AI models to overcome current limitations or friction points at each step in the process.

At each health-care institute, we provide a data hub that ingests and indexes data, making it searchable and accessible. We use language processing to sift through the data, contextualize it and make it easier and faster for clinicians to search for an appropriate group of patients they want to use in their studies.

When clinicians are looking at this data, they’re also being asked for their expertise on a particular use case — and that knowledge becomes available to everyone else. This allows institutes to leverage their expertise and translate the knowledge of domain experts, while at the same time speeding up data maturation.

And because this entire process is standardized and reproducible across different institutions, two different hospitals — even in two different languages — are able to benefit. If we allow the learnings to be exchanged, rather than the data itself, we’re able to maintain patient privacy and data ownership, solving two critical issues with AI in health-care settings.

Through these learnings, health-care institutes can develop a meta model that performs a task in a way that allows them to see the variability of a patient population. This meta model not only understands bias, but it can be redeployed with parameters that can be adjusted for a particular practice.

This, in turn, can help to address the issue of digital health equity. Clinical trials are typically run out of a few Centres of Excellence, which means data is only collected on people within a certain radius of those centres. In a distributed learning framework, infrastructure is provided to all institutes, reducing the bias of those Centres of Excellence. That means if health data is captured in Nunavut, for example, it can be included in the learnings, even without AI experts based in Nunavut.


The future of AI in healthcare

When it comes to AI, there’s still a big delta between the most advanced institutes and the average institute. But the pandemic has brought to light many of the inefficiencies in our health-care system. Many departments still have to manually calculate the best way to deal with supply and demand, optimize schedules and deal with backlogs.

We’re already seeing the use of statistical or machine learning models by insurance companies to predict things such as hospital readmission risks or understand high-risk patients based on socioeconomic factors. This can help to ensure patients get the specialty care they need and don’t get bounced around until they land on the right set of care providers.

We’re just starting to tap the potential of AI in health-care settings. In an emergency room, for example, it can be used to triage high-risk patients faster. During appointments, it can be used by clinicians to get a more complete picture of exam results to ensure nothing is missed. This reduces risk, and also helps us move toward more personalized health care.

Artificial intelligence is not meant to replace clinicians, but rather to help them focus on what matters: patients, rather than manual data entry and administrative tasks. When properly implemented, it can help clinicians better serve their patients while reducing burnout.

But AI in health care isn’t a magic bullet. It’s more of a digital elixir: a medical solution that brings together data science, machine learning and deep learning that can help clinicians transform data into better patient care.

How AI can help unlock the clinical power of genomic data


Article’s message in a nutshell

Genomic data has the potential to be clinically useful, but its use today is very limited – this potential has not been realized. Imagia Canexia Health is filling this gap by applying cutting-edge AI/ML/DL technologies to enable multi-site analysis of genomic data, contextualized in patients’ real-life clinical journeys, while respecting their privacy.


The power of genomics


From my uncontrollable distaste for cilantro to my family’s increased risk of developing breast cancer, a lot of information is encoded in my genome. This 3.2-billion-letter sequence of A, T, C and Gs, contained in every single cell in my body, not only shapes how I taste food and my health risks; it is also completely unique to me, and can therefore be used to identify me. It is also partly shared within my family, the ethnic groups I belong to, and the entire human population. Because this information is so unique and so powerful, it was once thought that accessing it could eradicate disease altogether. So how is genetic data used in healthcare today?


Genetic testing is available in a variety of clinical scenarios. From prenatal genetic testing to newborn screening, hereditary cancer screening, rare disease diagnosis, or determining a patient’s likelihood of responding well to specific treatments, genetics has permeated many domains of medicine. In my family, for instance, where many women were affected by breast and other cancers, doctors decided to investigate whether genetics played a role in our family’s health. They first identified a BRCA2 mutation in my great aunt, which gave her an increased risk of developing breast or ovarian cancer. They recommended that all women in the family get tested, and those who tested positive for the mutation were offered frequent follow-ups, and even preventative surgeries, to minimize their risk of developing the disease. The predictive power of this genetic information is just one of the ways in which genomics can impact cancer care. Indeed, all cancerous cells harbor genetic alterations that, if identified and understood properly, can help us detect cancer early, predict how a specific tumor will respond to a treatment, and match a patient with a specific drug.


Genetic sequencing technologies are most commonly used in oncology, cardiology and immunology, and are continuously improved. From testing for a specific letter change or “single nucleotide variant” at a precise location of the genome, to the analysis of the 3.2 billion letters or “base pairs” that compose the entire human genome, technologies have improved dramatically, and the cost (in time and money) of producing this data has dropped at a remarkable rate. To demonstrate this, we geneticists like to compare what it took to first sequence the human genome (over 2 billion dollars, an international team of hundreds of scientists, and a total of 13 years, starting in 1990[1]) to what it takes today (a whole genome sequence can be produced on one machine in a couple of days, for less than $2,000).


However, one may argue that there is still a lot of progress to be made. Indeed, we are far from having solved all major health issues, and the prognosis for most patients diagnosed with cancer today is still grim. Of course, genetics can’t solve it all, and many other factors – such as our environment, diet and lifestyle – play a major role in our likelihood of developing diseases. Still, we have virtually no understanding of what the majority of our genome actually does (the sum of all 20,000 genes represents only around 2% of our full genome sequence!), and we have only scratched the surface of how our genes interact with each other and with other elements in our bodies. So, let’s recap. Each of the 30 trillion cells in a human body contains a 3.2-billion-letter code, composed of 4 letters, within which 20,000 genes account for roughly 64 million letters – about 2% of the total. Every second, each cell activates a specific combination of genes to perform its core functions. When errors accumulate in a cell, they can produce tumors and lead to cancer. In the end, understanding genomics looks very much like a “big data” problem – so, could AI help move the needle?
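The numbers in that recap are easy to verify with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope arithmetic behind the numbers above.
genome_length = 3_200_000_000   # base pairs in one human genome
coding_fraction = 0.02          # genes cover roughly 2% of the sequence

coding_letters = int(genome_length * coding_fraction)
print(coding_letters)           # 64,000,000 letters inside genes

# Each letter is one of 4 symbols (A, T, C, G), i.e. 2 bits of information,
# so one raw genome fits in under a gigabyte.
raw_gigabytes = genome_length * 2 / 8 / 1e9
print(raw_gigabytes)            # 0.8
```

Note that real sequencing output is far larger than this raw 0.8 GB figure, because each position is read many times over and stored with per-base quality scores – which is why a clinical whole genome sequence occupies several gigabytes.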



Current limitations, and how we at Imagia Canexia Health are addressing them


First, even though it is becoming more common in Canada, generating clinical-grade genomic data is still expensive, not part of routine clinical care for most patients, and available only in large research hospitals. The data produced is so large (several GB per patient for a whole genome sequence) that it is not stored in hospital Electronic Health Records (EHR), and is therefore not readily available for research. The genetic information stored in patients’ health records is often in the form of a text report from a clinical geneticist describing the presence or absence of genetic mutations, with an interpretation of how this affects patient care.

To address this gap, at Imagia Canexia Health we have launched research projects that look at ways to infer genetic status by analysing standard-of-care clinical images. For instance, we are analysing Computed Tomography (CT) and Positron Emission Tomography (PET) scan images of patients with lung cancer who have had a genetic test (RNAseq, or the sequencing of RNA, which is the product of active genes). In these cancers, which are the most common in adults in Canada[2], genetic tests are used in the clinic to define the most appropriate treatment. However, testing requires a biopsy of the tumor, which is an invasive procedure, and results can be slow to arrive. If our machine learning algorithms can find markers on the image that predict genetic test results, this could allow a faster, more efficient matching of patients with the best treatment. Generating genetic insights without ever having to run a full range of expensive genetic tests could mean increasing access to personalized medicine in Canada.


Second, clinically generated genomic data that is accessible for research is not only scarce, but it also critically lacks diversity. Indeed, most genomic data generated to date comes from people of European ancestry[3], and this heavily impacts our ability to interpret genetic mutations, whose frequency and mechanism of action sometimes differ across populations[4].



Just like the genomics community, the Artificial Intelligence community is heavily discussing the issue of bias, and researchers globally are grappling with this problem[5]. One way to address it is to share data – because more powerful, reproducible and generalizable results can be achieved when more, and more diverse, data are produced and shared across institutions and across provinces.


However, two issues arise when you want to share patient data: competition is fierce, and there are privacy and other legal concerns. Indeed, sharing patient data broadly can be perceived as risky: complex federal and provincial regulations are at play to protect patient privacy, especially if the data contains personal information or is considered “identifiable”, such as a whole genome sequence. As data custodians, healthcare institutions are in charge of ensuring this data is secured and appropriately protected, which sometimes generates a reluctance to share. For companies, this data may also contain information and knowledge protected by intellectual property provisions. And for researchers, who rely on scarce and extremely competitive funding to produce data and generate publishable results, sharing data can mean losing one’s competitive advantage.


To address these problems, Imagia has developed a technological solution. Our EVIDENS platform is based on the concept of federated learning, where raw patient data always remains within the institution in which it was produced, and only insights on the data are shared. Clinicians and researchers are able to collaborate across multiple institutes without ever sharing any raw data, which allows us to overcome the lack of diversity and sample size while alleviating major privacy concerns (for more details, you can refer to our previous blog post).
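A minimal sketch of the federated idea – local training, with only model updates leaving each site – might look like this. It is a generic FedAvg-style illustration under toy assumptions, not the EVIDENS implementation:

```python
# Generic federated-averaging sketch: each site computes a model update on
# its own private data, and only the updated weight -- never the raw
# data -- is shared with the server and averaged.

def local_update(w, local_data, lr=0.05):
    """One gradient step of y ~ w * x on a site's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, sites):
    # Each site trains locally; the server only sees the resulting weights.
    local_weights = [local_update(global_w, data) for data in sites]
    return sum(local_weights) / len(local_weights)

# Two "hospitals" whose private datasets follow the same trend y ≈ 3x.
site_a = [(1, 3.0), (2, 6.1)]
site_b = [(2, 5.9), (4, 12.0)]

w = 0.0
for _ in range(200):
    w = federated_round(w, [site_a, site_b])
print(round(w, 2))  # jointly learned slope, close to 3.0, without pooling data
```

Neither site could estimate the trend as reliably alone, yet neither ever reveals its raw records – only weights cross institutional boundaries, which is what makes the approach compatible with privacy regulations.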


This innovative approach is also used in the new Digital Health and Discovery Platform (DHDP), a federally funded, pan-Canadian initiative co-led by Imagia and the Terry Fox Research Institute. The DHDP aims to accelerate precision medicine by bringing together leaders in the fields of Artificial Intelligence and healthcare. Partners in the DHDP are also developing ways to engage public and private partners in mutually beneficial projects to stimulate innovation and the commercialization of clinical products.


Third, there is a lack of standardization in genomic data generation, analysis and interpretation. Although the great majority of genomic data to date is produced on machines engineered by the global industry leader Illumina, the way clinicians and researchers go from sequencing machine outputs to clinical interpretation varies greatly. This is not to say that there are no standardization efforts in progress, the most notable being led by the Global Alliance for Genomics and Health (GA4GH). Imagia is actively participating in this effort by working directly with Illumina on a project aimed notably at generating and testing the efficiency of standard genomic pipelines (see our press release here). The problem with standards is that even when they exist, it is challenging to get large groups to use them. To incentivize the community to adopt a standardized approach, the state-of-the-art technological and software tools that we and others are developing will be baked directly into the DHDP platform, and we will help fund projects that use them, giving Canadian researchers a strong incentive to include them in their research practices.


Finally, genomic data is most useful when interpreted in the context of a patient’s clinical journey. Genetic data alone is often not enough to gain a full understanding of a patient’s condition, which can only be achieved by combining multiple sources of data: patient records, clinician reports, medical test results (e.g., laboratory blood tests), imaging data, and more.



In response to this challenge, our EVIDENS platform supports ingestion of multiple sources of data, and we have developed advanced artificial intelligence methods to efficiently and reliably combine these rich datasets. For instance, we are working on a project to develop a machine learning (ML) algorithm that can process a combination of clinical data, pathology reports, genomic data, and clinical imaging data from lung cancer patients. This allows us to generate more powerful models and increases our potential for discoveries.
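One common way to combine such heterogeneous sources is late fusion: encode each modality into a shared representation, then concatenate before prediction. The sketch below is purely illustrative; the feature names, dimensions, and random "encoders" are hypothetical and do not describe Imagia's actual models.

```python
# Illustrative late-fusion sketch for multimodal patient data.
# All features, dimensions, and weights are hypothetical stand-ins.
import numpy as np

def encode(features, weight):
    # Stand-in per-modality encoder: linear projection into a shared space.
    return np.tanh(features @ weight)

rng = np.random.default_rng(1)
# Hypothetical per-patient feature vectors from each data source.
clinical = rng.normal(size=8)    # e.g., age, stage, lab values
genomic  = rng.normal(size=32)   # e.g., variant / expression summaries
imaging  = rng.normal(size=64)   # e.g., radiomic descriptors

shared = 16  # common embedding size for every modality
embeddings = [
    encode(clinical, rng.normal(size=(8, shared))),
    encode(genomic,  rng.normal(size=(32, shared))),
    encode(imaging,  rng.normal(size=(64, shared))),
]
fused = np.concatenate(embeddings)           # one joint representation
risk_head = rng.normal(size=fused.shape[0])  # stand-in classifier weights
risk_score = 1 / (1 + np.exp(-fused @ risk_head))  # sigmoid output in (0, 1)
print(float(risk_score))
```

In practice the encoders would be learned networks trained end-to-end, but the structural idea, projecting each modality into a common space before fusing, is the same.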


Our hope for the future


Our vision at Imagia Canexia Health is that genomic data, combined with other clinical data and analysed via cutting-edge AI/ML technologies, has the potential to help more patients affected by high-burden diseases in Canada. To take on this challenge, we are partnering with Canadian and global leaders in genomics. Because patients are at the center of everything we do, our team has developed technological solutions to ensure that patient data is always secure and protected, and that patient privacy is respected throughout our pipeline. We are actively developing methods to generate discoveries that will be translated into better diagnosis and treatment for all patients, even those who have not had a genetic test. There is still a long way to go until whole genome sequencing is routine clinical practice, but in the meantime, we believe that AI/ML methods can help unlock the clinical potential of genomic data.


[2] Canadian Cancer Statistics Advisory Committee. Canadian Cancer Statistics: A 2020 special report on lung cancer. Toronto, ON: Canadian Cancer Society; 2020. Available at: (accessed March 26, 2021).

[3] Abul-Husn NS, Kenny EE. Personalized Medicine and the Power of Electronic Health Records. Cell. 2019 Mar 21;177(1):58-69. doi: 10.1016/j.cell.2019.02.039. PMID: 30901549; PMCID: PMC6921466.

[4] Bien SA, Wojcik GL, Hodonsky CJ, Gignoux CR, Cheng I, Matise TC, Peters U, Kenny EE, North KE. The Future of Genomic Studies Must Be Globally Representative: Perspectives from PAGE. Annu Rev Genomics Hum Genet. 2019 Aug 31;20:181-200.

[5] McDermott MBA, Wang S, Marinsek N, Ranganath R, Foschini L, Ghassemi M. Reproducibility in machine learning for health research: Still a ways to go. Sci Transl Med. 2021 Mar 24.

eBook: Delivering Cancer Care Equity via In-House NGS Testing

Ensuring all cancer patients have access to next-generation cancer treatment selection and recurrence monitoring requires a fundamental shift in how and where care is delivered. We need to move resources and capabilities to where the majority of patients receive care—within their own communities—by bringing the technology for innovative testing directly into their institutions at the point of care.


An eBook published by Inside Precision Medicine provides an in-depth look at how to deliver precision oncology close to home, where patients live, without needing to refer to third parties or academic hospitals in major urban centers. 


Featured in the publication:


  • Interviews with directors at USA Health in Alabama, Sutter Health in California, and Protean BioDiagnostics in Florida about their efforts to bring care to community settings;
  • A how-to guide for labs wanting to bring liquid biopsy testing in-house;
  • A primer on assessing the real-world accuracy of ctDNA diagnostics by Dr. Samuel Aparicio; and
  • Expert insights from Dr. David Huntsman, whose vision for equitable testing access continues to propel Canexia Health.


Download the eBook here.

AMP 2021 Live Workshop: Establishing an In-House Molecular Profiling Program for Cancer Treatment Selection

Molecular profiling is quickly becoming a critical tool for identifying genomic mutations that indicate a potential response to targeted cancer therapy. Recently, Canexia Health hosted a workshop at the Association for Molecular Pathology Annual Conference (AMP 2021) focused on local strategies and experiences from USA Health in Alabama in bringing oncology molecular capabilities in-house.

Thuy Phung, MD, PhD, discussed the requirements for successfully setting up and implementing cancer molecular profiling at USA Health, where she is the medical director of molecular pathology. She explained that faster turnaround times and better management of patient data were among the advantages, along with improved access to clinical trials, enabling precision oncology to be broadly accessible within the community practices where most cancer patients are treated.

Key takeaways from the workshop:

  • Recent studies have shown that less than 20% of patients are tested for the seven recommended biomarkers for non-small cell lung cancer, and close to 85% of those diagnosed are not receiving their test results in a timely manner. This is where community-based testing approaches can really support patients.
  • At USA Health, the process for selecting and implementing an in-house oncology test that delivers the molecular information oncologists want, and that is meaningful to the patient community, began with consulting internal stakeholders, from the CEO to medical technologists. Dr. Phung called this a “top-down, vertical, and bottom-up approach,” followed by researching the market for the organization’s needs and evaluating the clinical settings of their environment.
  • The focus should be on tests that are high-yield, high-volume, and high-need, with reasonable or lower costs, and that can have a direct impact on clinical management. The next step is choosing the best testing technology or platform to provide molecular data, whether it is single gene, PCR, NGS, or other modalities.
  • Formulating a business plan and an operational strategy is critical and should include capital costs, reagents, staffing, reimbursement, and return on investment over a two- or three-year period.
  • Finally, a partnership with a company with a proven track record for helping other labs bring NGS testing in-house contributes to the success of in-house implementation, in terms of cost, efficiencies, and expertise.
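The business-plan arithmetic in these takeaways can be sketched as a simple break-even calculation. Every figure below is a hypothetical assumption for illustration only, not actual USA Health or Canexia data; the structure (capital cost recovered by per-test margin net of staffing) is the point.

```python
# Hypothetical break-even sketch for an in-house NGS assay.
# All dollar figures and volumes are illustrative assumptions.
capital_cost = 250_000           # instrument + validation (one-time)
reagent_cost_per_test = 300      # consumables per reported case
staff_cost_per_year = 120_000    # technologist + sign-out coverage
reimbursement_per_test = 800     # blended expected payment
tests_per_year = 600             # projected annual volume

# Annual margin: per-test contribution minus fixed staffing cost.
annual_margin = (
    tests_per_year * (reimbursement_per_test - reagent_cost_per_test)
    - staff_cost_per_year
)
breakeven_years = capital_cost / annual_margin
print(round(breakeven_years, 2))  # ≈ 1.39 years under these assumptions
```

Under these illustrative numbers, the capital cost is recovered well within the two- to three-year horizon mentioned above; in a real plan, volume ramp-up, payer mix, and send-out cost avoidance would all shift the result.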

 To learn more, 

Video: Canexia Panel at PMWC21 on In-house Molecular Profiling

The FDA is now approving biomarker-informed therapies on a routine basis, yet there is significant disparity in patient access to the molecular profiling that would match cancer patients to these targeted treatments.


Canexia Health’s Brady Davis recently moderated a panel at the Precision Medicine World Conference (PMWC) in Philadelphia, leading a discussion on providing equitable access to molecular testing within health system settings, from making the business case to reimbursement.


The panelists were:


  • Thuy Phung, M.D., Ph.D. | University of South Alabama
  • Michael B. Datto, M.D., Ph.D. | Duke University Health System Clinical Laboratories
  • Peter Hulick, M.D., FACMG | Mark R. Neaman Center for Personalized Medicine, NorthShore University HealthSystem


For key takeaways, read our blog post or watch the full discussion here. 

Canexia Panel at PMWC21: Barriers & Solutions to Implementing Molecular Profiling In-house

Everyone should have access to the benefits of precision oncology. However, bringing molecular profiling into clinical care within community settings remains a challenge. Although more and more biomarker-informed therapies are being developed, FDA-approved, and launched in the market, not all patients have access due to numerous barriers, ranging from access to genetic testing to an uneven reimbursement landscape.

To further explore these issues, as well as solutions that health systems have started implementing, Canexia Health’s Brady Davis recently moderated a panel at the Precision Medicine World Conference (PMWC) in Philadelphia, leading a discussion on providing equitable access to molecular testing, from making the business case to reimbursement.

The panelists were:

  • Thuy Phung, M.D., Ph.D. | University of South Alabama
  • Michael B. Datto, M.D., Ph.D. | Duke University Health System Clinical Laboratories
  • Peter Hulick, M.D., FACMG | Mark R. Neaman Center for Personalized Medicine, NorthShore University HealthSystem

Key takeaways from these experts include:


  • Patients living in rural areas or smaller cities have much less access to genomic testing: one panelist found that bringing testing in-house increased the chances of a patient getting tested to 85% (versus 55% for send-out testing).
  • Education and awareness are critical within every institution. Molecular Tumor Boards can be part of the solution and can be a powerful tool for informing decisions around what assays to bring in-house.
  • While bringing in large panels can be tempting, it is a challenge for resource-limited institutions. One panelist’s health system was willing and able to adopt a more focused, cost-effective panel to get started.
  • Integration of NGS data into EMR systems can take time (almost nine months for one panelist), which delays the validation process. Speeding up EMR integration would be of great value.
  • Regarding reimbursement, payers are starting to understand that data is impactful, but more is needed to make the case for consistent policies.


At the end of the day, institutions looking to bring testing in-house must start with a clearly defined mission. Be strategic and don’t try to chase all of the different innovations in this rapidly evolving field. Gain support from the top, horizontally, and from the bottom up. And don’t be afraid to try. “It’s a dirt path, and you’re paving it as you go.”


For a deeper dive, watch Canexia’s Dr. Melissa McConechy discuss “Best Practices for Bringing Plasma Biopsy In-house” here.

Canexia Health to Expand Project ACTT Globally Starting with Thailand Collaboration

Canexia Health CEO Michael Ball joined Bio Asia Pacific 2021 to discuss the global expansion of the company’s liquid biopsy initiative, Project ACTT (Access to Cancer Testing & Treatment). Beginning with a collaboration in Thailand, the expansion will build on the program’s success in Canada over the last year.


Through Project ACTT Canada, more than 2,000 cancer patients have received biomarker testing via a simple blood draw to identify targeted treatment options or clinical trials during the pandemic, helping overcome tissue biopsy delays and minimizing risk of exposure to COVID-19.


The Bio Asia Pacific 2021 panel followed the signing of a Memorandum of Understanding between the Chulabhorn Royal Academy and the Canadian Commercial Corporation, wherein Thailand and Canada will explore opportunities to collaborate to facilitate cancer research. Project ACTT Thailand is expected to be the first initiative in this collaboration.


“We are happy to announce Project ACTT Global as we work to democratize access to cancer testing and leverage our experience with ACTT in a first implementation outside of Canada with the support of Thailand,” said Ball. “We think this will be a program that will not only be successful in Thailand, but also in other geographies in the future.”


Participants in the panel also included:


  • Prof. Chirayu Auewarakul, MD, PhD, Dean, Faculty of Medicine and Public Health, Chulabhorn Royal Academy
  • Barbara Melosky, MD, PhD, Clinical Professor, Department of Medicine, University of British Columbia
  • Sirasak Teparkum, PhD, CEO, Thailand Center of Excellence for Life Sciences (TCELS)


“Liquid biopsy is not currently available in Thailand. If we can set it up at our cancer center, it will be a first in the country,” said Professor Auewarakul. “I think in the first phase we will test a few thousand patients, focusing on the most common cancer types in Thailand. This is also a good opportunity to do research, together with Canexia Health, to explore use with other types of cancer prevalent in Thailand.”


Added Ball, “Access to local liquid biopsy testing means that patients will have shorter wait times, patient samples don’t leave the country, and countries can build the precision medicine infrastructure necessary to sustain testing during and beyond pandemic conditions. We believe this will be transformative for patients and for healthcare systems.”


For an excerpt of the discussion, watch here.

To watch the full video, click here.