Dynamic fog computing for enhanced LLM execution in medical applications
Saved in:

| Published in: | Smart health (Amsterdam), Volume 36, p. 100577 |
|---|---|
| Main authors: | , , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc, 01.06.2025 |
| Subjects: | |
| ISSN: | 2352-6483 |
| Online access: | Get full text |
| Summary: | The ability of large language models (LLMs) to process, interpret, and comprehend vast amounts of heterogeneous data presents a significant opportunity to enhance data-driven care delivery. However, the sensitive nature of protected health information (PHI) raises concerns about data privacy and trust in remote LLM platforms. Additionally, the cost of cloud-based artificial intelligence (AI) services remains a barrier to widespread adoption. To address these challenges, we propose shifting the LLM execution environment from centralized, opaque cloud providers to a decentralized and dynamic fog computing architecture. By running open-weight LLMs in more trusted environments, such as a user's edge device or a fog layer within a local network, we aim to mitigate the privacy, trust, and financial concerns associated with cloud-based LLMs. We introduce SpeziLLM, an open-source framework designed to streamline LLM execution across multiple layers, facilitating seamless integration into digital health applications. To demonstrate its versatility, we showcase SpeziLLM across six digital health applications, highlighting its broad applicability in various healthcare settings. Highlights: • Cloud-based LLMs raise privacy, trust, and cost concerns, especially in healthcare. • Fog computing enables more decentralized, trusted, and cost-effective LLM execution. • SpeziLLM manages complexity with a unified LLM interface across edge, fog, and cloud. • Inference tasks are dynamically allocated to layers based on complexity and sensitivity. • SpeziLLM's versatility was evaluated across six diverse digital health applications. |
|---|---|
| ISSN: | 2352-6483 |
| DOI: | 10.1016/j.smhl.2025.100577 |
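The dynamic allocation idea from the highlights — routing each inference task to the edge, fog, or cloud layer based on its sensitivity and complexity — could be sketched as follows. This is a minimal illustrative policy, not SpeziLLM's actual API: the `InferenceTask` fields, the `allocate` routine, and the thresholds are all hypothetical assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Layer(Enum):
    EDGE = "edge"    # user's own device: most trusted, least compute
    FOG = "fog"      # local-network node: trusted, moderate compute
    CLOUD = "cloud"  # remote provider: most compute, least trusted


@dataclass
class InferenceTask:
    contains_phi: bool  # does the prompt include protected health information?
    complexity: float   # rough compute estimate, normalized to 0..1


def allocate(task: InferenceTask) -> Layer:
    """Pick an execution layer for a task (illustrative thresholds)."""
    if task.contains_phi:
        # PHI must stay within trusted environments: prefer the edge,
        # fall back to the local fog layer for heavier tasks.
        return Layer.EDGE if task.complexity < 0.3 else Layer.FOG
    # Non-sensitive tasks: only the heaviest need the cloud.
    return Layer.CLOUD if task.complexity >= 0.7 else Layer.FOG
```

A policy like this keeps sensitive prompts off remote platforms entirely while still reserving cloud capacity for large, non-sensitive workloads.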