VI API¶
deepuq.methods.vi ¶
Variational-inference primitives for Bayes-by-Backprop.
The implementation here follows a mean-field setup:

- each Bayesian parameter tensor has an independent Gaussian posterior,
- Bayesian layers sample weights during stochastic forward passes,
- KL(q || p) against a Gaussian prior is computed analytically,
- the ELBO helper combines the data-fit loss with a scaled KL term.
BayesByBackpropMLP ¶
Bases: Module
Convenience MLP composed from BayesianLinear layers.
BayesianLinear ¶
Bases: Module
Fully-connected layer with Bayesian weights and biases.
During `sample=True` forward passes, weights are sampled from the posterior. During `sample=False` passes, the posterior means are used instead.
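The two forward modes can be sketched as follows. This is an illustrative stand-in, not the library's implementation: the function name `bayes_linear_forward` and its parameter layout are hypothetical, and numpy stands in for the tensor backend.

```python
import numpy as np

def bayes_linear_forward(x, w_mu, w_sigma, b_mu, b_sigma, sample=True, rng=None):
    """Illustrative sketch of a Bayesian linear layer's forward pass.

    sample=True  -> draw weights from the diagonal Gaussian posterior
    sample=False -> use the posterior means (deterministic pass)
    """
    rng = rng or np.random.default_rng(0)
    if sample:
        # Reparameterized draw: w = mu + sigma * eps, eps ~ N(0, I)
        w = w_mu + w_sigma * rng.standard_normal(w_mu.shape)
        b = b_mu + b_sigma * rng.standard_normal(b_mu.shape)
    else:
        w, b = w_mu, b_mu
    return x @ w.T + b

x = np.ones((2, 3))
w_mu, w_sigma = np.zeros((4, 3)), np.full((4, 3), 0.1)
b_mu, b_sigma = np.zeros(4), np.full(4, 0.1)

det = bayes_linear_forward(x, w_mu, w_sigma, b_mu, b_sigma, sample=False)
sto = bayes_linear_forward(x, w_mu, w_sigma, b_mu, b_sigma, sample=True)
```

With zero posterior means, the deterministic pass returns zeros, while the sampled pass is perturbed by the posterior scale, which is exactly the spread that Monte Carlo prediction later averages over.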
kl ¶
Analytic KL(q || p) for diagonal Gaussian posterior vs Gaussian prior.
For each scalar component:

    KL = log(sigma_p / sigma_q) + (sigma_q^2 + (mu_q - mu_p)^2) / (2 * sigma_p^2) - 1/2

Here mu_p = 0, so the squared-mean term reduces to mu_q^2.
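The per-component formula can be checked directly. This is a minimal scalar helper for illustration (the name `kl_diag_gaussian` is hypothetical, not part of the module):

```python
import math

def kl_diag_gaussian(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """Analytic KL(q || p) for one scalar Gaussian component."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

# KL vanishes when the posterior matches the prior exactly.
zero = kl_diag_gaussian(0.0, 1.0)

# Shifting the posterior mean by 1 (sigma_q = sigma_p = 1) costs mu_q^2 / 2.
shifted = kl_diag_gaussian(1.0, 1.0)
```

The full-tensor KL in the module is the sum of this expression over all scalar components.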
GaussianPosterior ¶
Bases: Module
Diagonal Gaussian variational posterior over one parameter tensor.
The posterior is parameterized by `mu` and `rho`. We transform `rho` with a softplus to obtain `sigma`, guaranteeing strictly positive standard deviations.
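The softplus transform is what lets `rho` range over all reals while `sigma` stays positive; a minimal sketch:

```python
import math

def softplus(rho):
    """softplus(rho) = log(1 + exp(rho)); strictly positive for any real rho."""
    return math.log1p(math.exp(rho))

# An unconstrained rho always maps to a valid standard deviation.
sigma_at_zero = softplus(0.0)    # log(2)
sigma_small = softplus(-10.0)    # tiny but still > 0
```

Because the map is smooth and invertible, gradients flow through `rho` during training while `sigma` never collapses to zero or goes negative.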
GaussianPrior ¶
Isotropic Gaussian prior used by Bayesian layers.
predict_vi_uq ¶
predict_vi_uq(
model: Module,
x: Tensor,
n_samples: int = 50,
apply_softmax: bool = False,
aleatoric_var: Optional[Tensor] = None,
) -> UQResult
Monte Carlo predictive summary for Bayes-by-Backprop models.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | Bayesian model supporting stochastic (sampled) forward passes. | *required* |
| `x` | `Tensor` | Inputs. | *required* |
| `n_samples` | `int` | Number of stochastic weight samples. | `50` |
| `apply_softmax` | `bool` | If `True`, treat outputs as logits and return probability moments. | `False` |
| `aleatoric_var` | `Optional[Tensor]` | Optional additive aleatoric variance term for regression. | `None` |
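The predictive summary amounts to running the model several times with freshly sampled weights and taking moments. The sketch below is a simplified stand-in for illustration, assuming the model is represented by a callable `stochastic_forward`; the function name and shapes are hypothetical:

```python
import numpy as np

def mc_predictive_summary(stochastic_forward, x, n_samples=50, aleatoric_var=None):
    """Illustrative Monte Carlo predictive mean and variance.

    Each call to `stochastic_forward` stands in for a forward pass with
    newly sampled Bayesian weights.
    """
    draws = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    mean = draws.mean(axis=0)
    epistemic = draws.var(axis=0)  # spread due to weight uncertainty
    # Regression case: add the (optional) aleatoric noise variance on top.
    total_var = epistemic if aleatoric_var is None else epistemic + aleatoric_var
    return mean, total_var

rng = np.random.default_rng(0)
noisy_model = lambda x: x + 0.1 * rng.standard_normal(x.shape)
mean, var = mc_predictive_summary(noisy_model, np.zeros((4, 2)), n_samples=200)
```

For classification with `apply_softmax=True`, the same averaging would be applied to softmax probabilities rather than raw outputs.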
vi_elbo_step ¶
vi_elbo_step(
model,
x,
y,
num_batches: Optional[int] = None,
n_batches: Optional[int] = None,
criterion=None,
kl_weight: float = 1.0,
mc_samples: int = 1,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Compute one Bayes-by-Backprop ELBO step.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | | Bayesian model exposing an analytic KL term. | *required* |
| `x` | | Mini-batch inputs. | *required* |
| `y` | | Mini-batch targets. | *required* |
| `num_batches` | `Optional[int]` | Canonical number of optimizer steps per epoch, usually the number of mini-batches (e.g. `len(train_loader)`); used to scale the KL term. | `None` |
| `n_batches` | `Optional[int]` | Deprecated alias for `num_batches`. | `None` |
| `criterion` | | Data-fit loss; a default criterion is used when `None`. | `None` |
| `kl_weight` | `float` | Multiplicative weight for the scaled KL term. | `1.0` |
| `mc_samples` | `int` | Number of stochastic forward passes used to Monte Carlo-average NLL and KL for a lower-variance ELBO estimate. | `1` |
Returns:
| Type | Description |
|---|---|
| `Tuple[Tensor, Tensor, Tensor]` | `(loss, nll, kl)`: the total objective to backpropagate, the data-fit term, and the KL term. |
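The way the three returned values fit together can be shown with a small arithmetic sketch. This illustrates the Bayes-by-Backprop convention of spreading the full-dataset KL penalty evenly over the optimizer steps of one epoch; the helper name `elbo_objective` is illustrative, not the module's API:

```python
def elbo_objective(nll, kl, num_batches, kl_weight=1.0):
    """Illustrative ELBO step: data-fit NLL plus a per-batch-scaled KL term.

    Dividing `kl` by `num_batches` means that, summed over one epoch of
    mini-batches, the full KL(q || p) is applied exactly once.
    """
    loss = nll + kl_weight * kl / num_batches
    return loss, nll, kl

# With nll = 1.0, kl = 10.0 over 5 batches: loss = 1.0 + 10.0 / 5 = 3.0
loss, nll, kl = elbo_objective(1.0, 10.0, num_batches=5)
```

With `mc_samples > 1`, the `nll` and `kl` fed into this combination would be averages over several stochastic forward passes, which lowers the variance of the gradient estimate without changing its expectation.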