Dynamic Graph Representation for WSI Classification: A Knowledge-Aware Attention Mechanism for Enhanced Computational Pathology
Saved in:
| Published in: | 2025 International Conference on Cognitive Computing in Engineering, Communications, Sciences and Biomedical Health Informatics (IC3ECSBHI), pp. 1084-1088 |
|---|---|
| Main authors: | , , , , |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 16.01.2025 |
| Subjects: | |
| Online access: | Get full text |
| Summary: | Computational pathologists are increasingly focused on deep learning methods for histopathological whole-slide image (WSI) analysis. The prevailing paradigm is largely built on multiple instance learning (MIL), within which Transformer-based methods are widely discussed. By using sequence tokens to represent patches, these methods recast WSI tasks as sequence tasks. However, the gigapixel scale and high heterogeneity of WSIs produce complex features, so Transformer-based MIL suffers from slow inference, large memory usage, and degraded performance. Furthermore, existing graph-based approaches cannot mine the intricate structural relationships between biological entities in the WSI (such as the varied interactions between different cell types). The research proposes a solution by outlining a dynamic graph representation algorithm that views WSIs as a form of knowledge graph structure. In particular, it uses the head-tail interactions between instances to dynamically build directed edge embeddings and neighbourhoods. A knowledge-aware attention mechanism then refreshes the features of each head node by computing a joint attention score over every neighbour and edge. Finally, the Puma optimiser performs fine-tuning, and global pooling of the updated head features yields a graph-level embedding that serves as an implicit representation for WSI classification. Extensive testing on three public TCGA benchmark datasets shows that the system significantly outperforms state-of-the-art methods on multiple tasks. |
|---|---|
| DOI: | 10.1109/IC3ECSBHI63591.2025.10991229 |
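The abstract's core update rule can be sketched in plain NumPy: each head node attends jointly over its neighbours and the corresponding directed edge embeddings, and the updated head features are then mean-pooled into a graph-level embedding. This is a minimal illustration under stated assumptions, not the paper's implementation; the scoring form `head @ W @ (neighbour + edge)` and the function names are hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def knowledge_aware_attention(head, neighbours, edges, W):
    """Refresh one head node's features (assumed form of the paper's mechanism).

    head:       (d,) feature vector of the head node
    neighbours: list of (d,) tail-node feature vectors
    edges:      list of (d,) directed edge embeddings, aligned with neighbours
    W:          (d, d) learnable scoring matrix (hypothetical parameterisation)
    """
    # joint attention score for each (neighbour, edge) pair against the head
    scores = np.array([head @ W @ (t + e) for t, e in zip(neighbours, edges)])
    alpha = softmax(scores)
    # attention-weighted mix of neighbour-plus-edge messages becomes the new head
    return sum(a * (t + e) for a, t, e in zip(alpha, neighbours, edges))

def graph_embedding(updated_heads):
    # global mean pooling over all updated head nodes -> graph-level embedding
    return np.mean(np.stack(updated_heads), axis=0)
```

The graph-level vector returned by `graph_embedding` would then feed a standard classifier head for WSI classification, as the abstract describes.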