TV-CCANM: a transformer variational inference in confounding cascade additive noise model for causal effect estimation

Bibliographic Details
Published in: Journal of Statistical Computation and Simulation, Vol. 95, No. 14, pp. 3048-3076
Main Authors: Ahmad, Sohail; Wang, Hong
Format: Journal Article
Language: English
Published: Taylor & Francis, 22 September 2025
ISSN: 0094-9655, 1563-5163
Description
Summary: Understanding causal mechanisms in complex systems requires methodologies capable of disentangling non-linear interactions across high-dimensional variables. While the Confounding Cascade Nonlinear Additive Noise Model (CCANM) coupled with variational autoencoders (VAEs) provides a foundation for latent causal structure learning, its capacity to model sophisticated dependency patterns remains constrained. In this study, we introduce TV-CCANM, a transformer-enhanced variational architecture that synergizes the CCANM framework with hierarchical self-attention mechanisms. Unlike conventional sequential encoding, our encoder-decoder design employs transformer layers to concurrently capture cross-variable dependencies and non-additive confounding effects, enabling precise recovery of latent causal graphs. Simulation studies demonstrate that both CCANM and the proposed TV-CCANM consistently recover the ground-truth causal direction ($X \rightarrow Y$), with TV-CCANM exhibiting enhanced convergence rates and stability across training epochs. When applied to benchmark stock market datasets, TV-CCANM outperforms CCANM in interpretability and precision, achieving causal direction assignments that align closely with established domain knowledge.
DOI: 10.1080/00949655.2025.2516793
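
The summary above describes an encoder-decoder design in which transformer self-attention infers a latent confounder while the decoder keeps the additive-noise cascade structure. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: all class names, hyperparameters, the pooling choice, and the direction-selection heuristic at the end are illustrative assumptions layered on a generic transformer VAE with an additive-noise decoder.

```python
# Minimal sketch (assumed architecture, not the paper's released code):
# a transformer-based variational encoder infers a latent confounder z from
# paired observations (x, y); an additive-noise decoder models y = f(x, z) + noise.
import torch
import torch.nn as nn


class TransformerVariationalEncoder(nn.Module):
    def __init__(self, d_model=32, nhead=4, num_layers=2, latent_dim=1):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # embed each scalar variable as a token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)

    def forward(self, x, y):
        # Treat x and y as a length-2 token sequence so self-attention can
        # capture their cross-variable dependency.
        tokens = torch.stack([x, y], dim=1).unsqueeze(-1)   # (batch, 2, 1)
        h = self.encoder(self.embed(tokens))                 # (batch, 2, d_model)
        pooled = h.mean(dim=1)                                # pool over the two tokens
        return self.to_mu(pooled), self.to_logvar(pooled)


class AdditiveNoiseDecoder(nn.Module):
    """Decoder for the assumed cascade y = f(x, z) + noise."""
    def __init__(self, latent_dim=1, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(1 + latent_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.f(torch.cat([x.unsqueeze(-1), z], dim=-1)).squeeze(-1)


def elbo_loss(x, y, encoder, decoder):
    mu, logvar = encoder(x, y)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
    recon = torch.mean((y - decoder(x, z)) ** 2)              # Gaussian reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    return recon + kl


if __name__ == "__main__":
    # Toy data with ground truth x -> y; in additive-noise causal discovery the
    # model is typically fitted in both directions and the lower-loss fit wins.
    x = torch.randn(256)
    y = torch.tanh(x) + 0.1 * torch.randn(256)
    enc, dec = TransformerVariationalEncoder(), AdditiveNoiseDecoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = elbo_loss(x, y, enc, dec)
        loss.backward()
        opt.step()
```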