---
icon: 🔀
title: Causal Inference
description: Bayesian networks with epistemic types for causal discovery and treatment effects
---

# Causal Inference with Sounio

Sounio provides tools for treatment effect estimation, causal discovery, and counterfactual reasoning, all with explicit uncertainty quantification.
## Why Epistemic Types for Causality?
Causal inference involves deep uncertainties:
- Is the causal structure correct?
- Are confounders identified?
- How certain are effect estimates?
- What are the bounds on counterfactuals?
Sounio makes these uncertainties explicit.
## Treatment Effect Estimation

```sounio
use std::causal::{CausalModel, Treatment, Outcome}
use std::epistemic::{Knowledge, Provenance}

fn estimate_ate(
    model: &CausalModel,
    treatment: &Treatment,
    outcome: &Outcome,
    data: &DataFrame
) -> Knowledge<f64> with IO {
    // Propensity score estimation
    let propensity = model.estimate_propensity(treatment, data)

    // IPW estimator with uncertainty
    let ate = inverse_propensity_weighting(
        treatment: treatment,
        outcome: outcome,
        weights: propensity,
        data: data
    )

    // Check overlap assumption
    if propensity.overlap_violation > 0.10 {
        perform IO::warn(
            "Positivity violation detected: " +
            (propensity.overlap_violation * 100.0).to_string() +
            "% of units have extreme propensity scores"
        )
    }

    // Return with full uncertainty
    Knowledge {
        value: ate.point_estimate,
        uncertainty: ate.standard_error,
        confidence: confidence_from_overlap(propensity.overlap_violation),
        provenance: vec!["ipw_estimator", model.specification],
    }
}
```
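To make the inverse-propensity-weighting (IPW) idea concrete outside of Sounio, here is a minimal Python sketch on simulated data. The data-generating process, the true propensity score `0.2 + 0.6*x`, and the true ATE of 2.0 are all assumptions of this toy example; a real analysis would estimate the propensity from data.

```python
import random

random.seed(0)

# Simulate data where a confounder x drives both treatment and outcome.
n = 20000
data = []
for _ in range(n):
    x = random.random()
    e = 0.2 + 0.6 * x                  # true propensity score, bounded away from 0 and 1
    t = 1 if random.random() < e else 0
    y = 2.0 * t + 3.0 * x + random.gauss(0.0, 1.0)   # true ATE = 2.0
    data.append((x, t, y))

# Naive difference in means is confounded (biased upward: x raises both e and y).
naive = (sum(y for x, t, y in data if t) / sum(t for _, t, _ in data)
         - sum(y for x, t, y in data if not t) / sum(1 - t for _, t, _ in data))

# Horvitz-Thompson IPW estimate: reweight each unit by its inverse propensity.
ipw = sum(t * y / (0.2 + 0.6 * x) - (1 - t) * y / (1.0 - (0.2 + 0.6 * x))
          for x, t, y in data) / n

print(f"naive={naive:.2f}  ipw={ipw:.2f}")  # ipw lands near the true ATE of 2.0
```

Because the propensity is bounded in [0.2, 0.8], the overlap (positivity) check above would pass; the warning branch in the Sounio code guards against the extreme weights that arise when propensities approach 0 or 1.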
## Causal Discovery

```sounio
fn discover_structure(
    data: &DataFrame,
    prior: Option<StructuralPrior>
) -> Knowledge<CausalGraph> with IO, Prob {
    // PC algorithm with bootstrap uncertainty
    var edge_probabilities = HashMap::new()
    for _ in 0..1000 {
        let bootstrap_sample = data.bootstrap()
        let graph = pc_algorithm(bootstrap_sample, alpha: 0.05)
        for edge in graph.edges {
            edge_probabilities.entry(edge)
                .or_insert(0)
                .increment()
        }
    }

    // Include only confident edges
    let confident_edges = edge_probabilities
        .filter(|(_, count)| count > 500)  // >50% bootstrap support
        .map(|(edge, count)| (edge, count as f64 / 1000.0))

    if confident_edges.len() < edge_probabilities.len() / 2 {
        perform IO::warn("High structural uncertainty - many edges not confirmed")
    }

    Knowledge {
        value: CausalGraph::from_edges(confident_edges.keys()),
        uncertainty: structural_entropy(edge_probabilities),
        confidence: confident_edges.values().mean(),
        provenance: vec!["pc_bootstrap"],
    }
}
```
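The bootstrap-stability idea can be sketched in plain Python. This toy uses a three-variable chain and a partial-correlation test as a crude stand-in for the conditional-independence tests of a full PC implementation; the data-generating chain, the 0.1 threshold, and the 50% support cutoff are all assumptions of the sketch.

```python
import random

random.seed(1)

# Toy data from the chain x -> y -> z. The direct adjacencies should be
# stable under bootstrap resampling; the indirect x-z adjacency should not.
n = 1000
rows = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = x + random.gauss(0, 0.6)
    z = y + random.gauss(0, 0.6)
    rows.append((x, y, z))

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

def skeleton(cols, thresh=0.1):
    """PC-style skeleton for 3 variables: keep edge i-j only if i and j
    remain dependent after conditioning on the remaining variable k."""
    edges = set()
    for i, j, k in [(0, 1, 2), (1, 2, 0), (0, 2, 1)]:
        rij, rik, rjk = corr(cols[i], cols[j]), corr(cols[i], cols[k]), corr(cols[j], cols[k])
        partial = (rij - rik * rjk) / (((1 - rik**2) * (1 - rjk**2)) ** 0.5)
        if abs(rij) > thresh and abs(partial) > thresh:
            edges.add((i, j))
    return edges

# Bootstrap: how often does each edge survive on a resampled dataset?
B = 200
counts = {(0, 1): 0, (1, 2): 0, (0, 2): 0}
for _ in range(B):
    sample = [random.choice(rows) for _ in range(n)]
    cols = list(zip(*sample))
    for edge in skeleton(cols):
        counts[edge] += 1

support = {edge: c / B for edge, c in counts.items()}
confident = sorted(e for e, s in support.items() if s > 0.5)
print(confident)  # the spurious x-z adjacency drops below 50% support
```

The edge-support fractions computed here play the role of `edge_probabilities` above: they quantify structural uncertainty rather than committing to a single graph.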
## Counterfactual Reasoning

```sounio
fn counterfactual_outcome(
    model: &CausalModel,
    unit: &Individual,
    intervention: Intervention
) -> Knowledge<f64> with Prob {
    // Abduction: infer latent variables
    let latents = model.infer_latents(unit)

    // Action: apply intervention
    let intervened_model = model.do(intervention)

    // Prediction: compute counterfactual
    let cf_outcome = intervened_model.predict(unit, latents)

    // Uncertainty from latent inference and model
    Knowledge {
        value: cf_outcome.value,
        uncertainty: (
            latents.uncertainty.pow(2) +
            intervened_model.structural_uncertainty.pow(2)
        ).sqrt(),
        confidence: latents.confidence * intervened_model.confidence,
        provenance: unit.provenance.append("counterfactual"),
    }
}
```
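The abduction-action-prediction recipe is easiest to see in a deterministic linear structural causal model. This Python sketch uses an assumed structural equation `y = 2t + 3x + u` and an assumed observed unit; real latents are inferred as posteriors, not point values.

```python
# Linear SCM: y = 2*t + 3*x + u, with u an exogenous latent noise term.
# Observed unit: x = 0.5, treated with t = 0, observed outcome y = 2.0.
x, t_obs, y_obs = 0.5, 0, 2.0

# Abduction: infer the latent noise consistent with the observation.
u = y_obs - (2.0 * t_obs + 3.0 * x)   # u = 0.5

# Action: intervene, setting do(t = 1) regardless of how t arose.
t_new = 1

# Prediction: re-evaluate the structural equation with the inferred noise.
y_cf = 2.0 * t_new + 3.0 * x + u
print(y_cf)  # 4.0 -- "what y would have been for this unit under t = 1"
```

When the latents are only identified up to a posterior distribution, the counterfactual inherits that spread, which is exactly the `latents.uncertainty` term combined in quadrature above.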
## Sensitivity Analysis

```sounio
fn sensitivity_to_confounding(
    ate: Knowledge<f64>,
    model: &CausalModel
) -> SensitivityResult with IO {
    // E-value for unmeasured confounding
    let e_value = compute_e_value(ate)

    // Rosenbaum bounds
    let gamma_bounds = rosenbaum_sensitivity(model, gamma_range: 1.0..3.0)

    perform IO::log(
        "E-value: " + e_value.to_string() +
        " (confounder must be this strong to explain away effect)"
    )

    if e_value < 2.0 {
        perform IO::warn(
            "Effect could be explained by modest unmeasured confounding"
        )
    }

    SensitivityResult {
        e_value: e_value,
        gamma_at_zero: gamma_bounds.null_crossing,
        robust_to_confounding: e_value > 2.5,
    }
}
```
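The E-value itself has a closed form. For an observed risk ratio RR ≥ 1, the VanderWeele-Ding E-value is RR + sqrt(RR × (RR − 1)); a short Python sketch:

```python
import math

def e_value(rr):
    """E-value (VanderWeele & Ding) for an observed risk ratio:
    the minimum strength of association an unmeasured confounder must
    have with both treatment and outcome to explain the effect away."""
    if rr < 1.0:
        rr = 1.0 / rr          # symmetric treatment of protective effects
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed risk ratio of 2.0 requires a confounder associated with both
# treatment and outcome at a risk ratio of about 3.41 to nullify it.
print(round(e_value(2.0), 2))  # 3.41
```

A risk ratio of exactly 1.0 gives an E-value of 1.0: a null effect needs no confounding to "explain".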
## Bayesian Causal Networks

```sounio
fn bayesian_network_inference(
    network: &BayesianNetwork,
    evidence: HashMap<Node, Value>,
    query: Node
) -> Knowledge<Distribution> with Prob {
    // Inference with parameter uncertainty
    let posterior = network.query(query, given: evidence)

    // Uncertainty from finite data
    let parameter_uncertainty = network.parameter_posterior_variance()

    Knowledge {
        value: posterior,
        uncertainty: posterior.entropy() + parameter_uncertainty,
        confidence: network.fit_quality,
        provenance: evidence.keys().map(|k| k.to_string()).collect(),
    }
}
```
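For a small network, the query above reduces to exact inference by enumeration: sum the joint probability over every assignment to the hidden nodes. A Python sketch on the classic rain/sprinkler/wet-grass network (conditional probability tables are the textbook values, assumed here for illustration):

```python
# Rain -> Sprinkler, and both Rain and Sprinkler -> WetGrass.
p_rain = 0.2
p_sprinkler = {True: 0.01, False: 0.4}             # P(Sprinkler | Rain)
p_wet = {(True, True): 0.99, (True, False): 0.90,  # P(Wet | Sprinkler, Rain)
         (False, True): 0.80, (False, False): 0.0}

# Query P(Rain | WetGrass = true) by enumerating the hidden Sprinkler node.
num = 0.0   # joint mass with Rain = true and WetGrass = true
den = 0.0   # total mass with WetGrass = true
for rain in (True, False):
    pr = p_rain if rain else 1.0 - p_rain
    for sprinkler in (True, False):
        ps = p_sprinkler[rain] if sprinkler else 1.0 - p_sprinkler[rain]
        mass = pr * ps * p_wet[(sprinkler, rain)]
        den += mass
        if rain:
            num += mass

posterior = num / den
print(round(posterior, 3))  # 0.358
```

Enumeration is exponential in the number of hidden nodes; larger networks use variable elimination or sampling, but the posterior being computed is the same.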
## Features for Causal Inference

- **Effect bounds**: Automatic partial identification
- **Sensitivity analysis**: Built-in E-values and Rosenbaum bounds
- **Structure uncertainty**: Edge confidence in DAGs
- **Counterfactual uncertainty**: Propagates through reasoning
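Partial identification, the first feature above, can be illustrated with Manski's no-assumption bounds: for an outcome bounded in [0, 1], the ATE is bounded even with zero assumptions about confounding, because each missing counterfactual can be imputed at its worst case. A Python sketch (the inputs are made-up illustrative numbers):

```python
def manski_bounds(p_treat, mean_y_treated, mean_y_control):
    """No-assumption (Manski) bounds on the ATE for an outcome in [0, 1].

    Each unobserved potential-outcome mean is bracketed by imputing the
    missing counterfactuals at 0 (lower) or 1 (upper).
    """
    lo1 = p_treat * mean_y_treated          # E[Y(1)] if untreated units would score 0
    hi1 = lo1 + (1.0 - p_treat)             # ... or 1
    lo0 = (1.0 - p_treat) * mean_y_control  # E[Y(0)] likewise
    hi0 = lo0 + p_treat
    return (lo1 - hi0, hi1 - lo0)

lo, hi = manski_bounds(0.5, 0.7, 0.4)
print(lo, hi)  # the interval always has width exactly 1.0
```

The bounds' width is always 1.0 regardless of the data, which is the point: anything narrower must come from assumptions (randomization, overlap, monotonicity), and those assumptions are where the uncertainty bookkeeping matters.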
## Get Started

```shell
sounio new causal-project --template causal
cd causal-project
sounio run examples/treatment_effect.sio
```