Dynamic fog computing for enhanced LLM execution in medical applications
Saved in:

| Published in: | Smart health (Amsterdam) Vol. 36; p. 100577 |
|---|---|
| Main authors: | , , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc, 01.06.2025 |
| Subjects: | |
| ISSN: | 2352-6483 |
| Online access: | Full text |
| Abstract: | The ability of large language models (LLMs) to process, interpret, and comprehend vast amounts of heterogeneous data presents a significant opportunity to enhance data-driven care delivery. However, the sensitive nature of protected health information (PHI) raises concerns about data privacy and trust in remote LLM platforms. Additionally, the cost of cloud-based artificial intelligence (AI) services remains a barrier to widespread adoption. To address these challenges, we propose shifting the LLM execution environment from centralized, opaque cloud providers to a decentralized and dynamic fog computing architecture. By running open-weight LLMs in more trusted environments, such as a user’s edge device or a fog layer within a local network, we aim to mitigate the privacy, trust, and financial concerns associated with cloud-based LLMs. We introduce SpeziLLM, an open-source framework designed to streamline LLM execution across multiple layers, facilitating seamless integration into digital health applications. To demonstrate its versatility, we showcase SpeziLLM across six digital health applications, highlighting its broad applicability in various healthcare settings. |
|---|---|
| Highlights: | • Cloud-based LLMs raise privacy, trust, and cost concerns, especially in healthcare. • Fog computing enables more decentralized, trusted, and cost-effective LLM execution. • SpeziLLM manages complexity with a unified LLM interface across edge, fog, and cloud. • Inference tasks are dynamically allocated to layers based on complexity and sensitivity. • SpeziLLM’s versatility was evaluated across six diverse digital health applications. |
| ISSN: | 2352-6483 |
| DOI: | 10.1016/j.smhl.2025.100577 |
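The highlights describe dynamically allocating inference tasks across the edge, fog, and cloud layers based on task complexity and data sensitivity. The routing policy below is a minimal illustrative sketch of that idea only; the class names, thresholds, and heuristics are assumptions for exposition and do not reflect SpeziLLM's actual API.

```python
from dataclasses import dataclass
from enum import Enum


class Layer(Enum):
    EDGE = "edge"    # user's device: most trusted, least capable
    FOG = "fog"      # local-network node: trusted, mid-size models
    CLOUD = "cloud"  # remote provider: most capable, least trusted


@dataclass
class InferenceTask:
    prompt: str
    contains_phi: bool  # does the prompt carry protected health information?
    complexity: float   # 0.0 (trivial) .. 1.0 (hard), e.g. from a heuristic


def route(task: InferenceTask) -> Layer:
    """Route a task to the least-remote layer that can plausibly serve it.

    Illustrative policy: PHI never leaves the local network; only
    complex, non-sensitive tasks are escalated to the cloud.
    """
    if task.contains_phi:
        # Sensitive tasks stay local: simple ones on-device,
        # harder ones on a fog node within the local network.
        return Layer.EDGE if task.complexity < 0.5 else Layer.FOG
    # Non-sensitive tasks go to the cloud only when they are hard
    # enough to justify the cost and trust trade-off.
    return Layer.CLOUD if task.complexity >= 0.8 else Layer.EDGE


# Example: a PHI-bearing, moderately complex task is kept in the fog layer.
task = InferenceTask("Summarize this note.", contains_phi=True, complexity=0.7)
print(route(task).value)  # -> fog
```

The point of the sketch is the ordering of checks: sensitivity is evaluated before complexity, so privacy constraints always dominate capability and cost considerations.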