Continuous Families

Families of continuous-valued distributions.

families

Parameterized distribution families as continuous morphisms.

Each family is a ContinuousMorphism whose codomain is a continuous space and whose conditional distribution p(y | x) belongs to a specific parametric family. The parameters are learnable functions of x:

  • For discrete domains (FinSet): parameters are looked up from a table.
  • For continuous domains (ContinuousSpace): parameters are produced by a small neural network.

This module wraps each reparameterizable distribution in torch.distributions as a conditional morphism and adds custom families (TruncatedNormal, MultivariateNormal, etc.).

Architecture

Most distributions that factor independently across dimensions are built on a shared generic base, _IndependentConditional, which handles the parameter source, the transform, and the torch.distributions plumbing. The _make_family class factory generates named classes from a specification. Distributions that need special handling (MultivariateNormal, Dirichlet, TruncatedNormal, etc.) are implemented as standalone classes.

ConditionalNormal

ConditionalNormal(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional normal (Gaussian) distribution.

For each input x, produces an independent normal distribution on each dimension of the codomain:

y_i ~ Normal(mu_i(x), sigma_i(x))

Parameters are learnable: mu and log(sigma) are functions of x, implemented as lookup tables (discrete domain) or neural networks (continuous domain).

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space.

TYPE: Euclidean

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Examples:

>>> from quivers import FinSet
>>> from quivers.continuous.spaces import Euclidean
>>> A = FinSet(name="context", cardinality=5)
>>> Y = Euclidean(name="response", dim=3)
>>> f = ConditionalNormal(A, Y)
>>> x = torch.tensor([0, 1, 2])
>>> samples = f.rsample(x)  # shape (3, 3)
Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim

    # param_dim = d (mu) + d (log_sigma)
    self.param_source = _make_source(domain, 2 * d, hidden_dim)
    self._d = d

ConditionalLogitNormal

ConditionalLogitNormal(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional logit-normal distribution on (0, 1)^d.

If z ~ Normal(mu(x), sigma(x)), then y = sigmoid(z) ~ LogitNormal. Useful for modeling probabilities and bounded quantities.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (should have bounds [0, 1]).

TYPE: Euclidean

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    self.param_source = _make_source(domain, 2 * d, hidden_dim)
    self._d = d
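The logit-normal construction can be sketched with the standard library alone (the real class delegates to torch.distributions; `logit_normal_sample` here is a hypothetical helper for illustration):

```python
import math
import random

def logit_normal_sample(mu, sigma, rng):
    # Draw z ~ Normal(mu, sigma), then squash through the sigmoid.
    z = rng.gauss(mu, sigma)
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(0)
ys = [logit_normal_sample(0.0, 1.0, rng) for _ in range(1000)]
assert all(0.0 < y < 1.0 for y in ys)  # every draw lies in (0, 1)
```

Because the sigmoid is invertible and differentiable, gradients flow through the sample, which is what makes the family reparameterizable.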

ConditionalBeta

ConditionalBeta(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional beta distribution on (0, 1)^d.

For each input x, produces an independent Beta(alpha_i(x), beta_i(x)) on each dimension of the codomain.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (should have bounds [0, 1]).

TYPE: Euclidean

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    self.param_source = _make_source(domain, 2 * d, hidden_dim)
    self._d = d
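The 2 * d unconstrained network outputs must be mapped to positive (alpha, beta) pairs before constructing the Beta. A softplus map is one common choice (the exact transform used in families.py may differ):

```python
import math

def softplus(t):
    # Numerically stable softplus: log(1 + exp(t)), always > 0.
    return math.log1p(math.exp(-abs(t))) + max(t, 0.0)

raw = [-3.0, 0.0, 2.5, -0.1]  # 2*d unconstrained outputs (here d = 2)
d = len(raw) // 2
alpha = [softplus(r) for r in raw[:d]]
beta = [softplus(r) for r in raw[d:]]
assert all(a > 0 for a in alpha) and all(b > 0 for b in beta)
```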

ConditionalTruncatedNormal

ConditionalTruncatedNormal(domain: AnySpace, codomain: Euclidean, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional truncated normal on [low, high]^d.

A normal distribution restricted to a bounded interval. Uses rejection-free sampling via the inverse CDF (Phi-based) method.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (must be bounded).

TYPE: Euclidean

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: Euclidean,
    hidden_dim: int = 64,
) -> None:
    if codomain.low is None or codomain.high is None:
        raise ValueError("ConditionalTruncatedNormal requires a bounded codomain")

    super().__init__(domain, codomain)
    d = codomain.dim
    self.param_source = _make_source(domain, 2 * d, hidden_dim)
    self._d = d
    self._low = codomain.low
    self._high = codomain.high
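The inverse-CDF (Phi-based) sampling mentioned above can be sketched with the standard library: map a uniform draw into the CDF mass between the bounds, then invert the standard normal CDF. This is an illustrative sketch (a crude bisection inverse), not the actual implementation:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_icdf(p, lo=-10.0, hi=10.0):
    # Bisection inverse of the standard normal CDF (adequate for a sketch).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def truncnorm_sample(mu, sigma, low, high, u):
    # Squeeze u into the CDF mass between the bounds, then invert.
    a = norm_cdf((low - mu) / sigma)
    b = norm_cdf((high - mu) / sigma)
    return mu + sigma * norm_icdf(a + u * (b - a))

rng = random.Random(0)
ys = [truncnorm_sample(0.0, 1.0, -0.5, 2.0, rng.random()) for _ in range(500)]
assert all(-0.5 - 1e-6 <= y <= 2.0 + 1e-6 for y in ys)
```

Because no draw is ever rejected, the sample is a deterministic, differentiable function of u and the parameters, which is what makes rsample possible.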

ConditionalDirichlet

ConditionalDirichlet(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional Dirichlet distribution on a probability simplex.

For each input x, produces a Dirichlet(alpha(x)) distribution on the simplex.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target simplex.

TYPE: Simplex

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    self.param_source = _make_source(domain, d, hidden_dim)
    self._d = d
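The classic Dirichlet sampling path (normalize independent Gamma draws) can be sketched with the standard library; note that the actual class uses torch.distributions.Dirichlet, whose rsample is reparameterized via implicit gradients rather than this naive path:

```python
import random

def dirichlet_sample(alpha, rng):
    # Draw independent Gamma(alpha_i, 1) variates and normalize onto the simplex.
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

rng = random.Random(0)
y = dirichlet_sample([2.0, 3.0, 5.0], rng)
assert all(c > 0 for c in y)
assert abs(sum(y) - 1.0) < 1e-9  # lands on the probability simplex
```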

ConditionalUniform

ConditionalUniform(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional uniform distribution on a learnable interval.

Parameterized as Uniform(loc - width/2, loc + width/2) where loc is unconstrained and width is positive. This ensures low < high is always satisfied.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space.

TYPE: ContinuousSpace

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    # param_dim = d (loc) + d (raw_width)
    self.param_source = _make_source(domain, 2 * d, hidden_dim)
    self._d = d

ConditionalMultivariateNormal

ConditionalMultivariateNormal(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional multivariate normal with full covariance.

Parameterized via Cholesky factor: the parameter source outputs loc (d values) and the lower-triangular entries of L (d*(d+1)/2 values), where Sigma = L @ L^T.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (d-dimensional).

TYPE: ContinuousSpace

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    n_tril = d * (d + 1) // 2
    self.param_source = _make_source(domain, d + n_tril, hidden_dim)
    self._d = d
    self._n_tril = n_tril
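The Cholesky parameterization can be checked with a small stdlib sketch: fill a lower-triangular L from the n_tril unconstrained outputs, keep the diagonal positive, and Sigma = L @ L^T comes out symmetric positive-definite by construction (the softplus diagonal map here is an assumption, not necessarily the transform in families.py):

```python
import math

def softplus(t):
    return math.log1p(math.exp(-abs(t))) + max(t, 0.0)

d = 3
n_tril = d * (d + 1) // 2
raw = [0.3, -0.2, 0.5, 0.1, -0.4, 0.7]  # n_tril unconstrained outputs
assert len(raw) == n_tril

# Fill L row by row; a positive diagonal makes L a valid Cholesky factor.
L = [[0.0] * d for _ in range(d)]
k = 0
for i in range(d):
    for j in range(i + 1):
        L[i][j] = softplus(raw[k]) if i == j else raw[k]
        k += 1

# Sigma = L @ L^T
Sigma = [[sum(L[i][m] * L[j][m] for m in range(d)) for j in range(d)] for i in range(d)]
assert all(abs(Sigma[i][j] - Sigma[j][i]) < 1e-12 for i in range(d) for j in range(d))
assert all(Sigma[i][i] > 0 for i in range(d))
```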

ConditionalLowRankMVN

ConditionalLowRankMVN(domain: AnySpace, codomain: ContinuousSpace, rank: int = 2, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional low-rank multivariate normal.

Parameterized as a location plus a low-rank factor and a diagonal term: Sigma = W @ W^T + diag(v), where W is d x rank and v holds d positive entries.

This is more parameter-efficient than full MVN for high dimensions.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (d-dimensional).

TYPE: ContinuousSpace

rank

Rank of the low-rank factor W.

TYPE: int DEFAULT: 2

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    rank: int = 2,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    self._d = d
    self._rank = rank

    # loc (d) + factor (d * rank) + diag (d)
    total = d + d * rank + d
    self.param_source = _make_source(domain, total, hidden_dim)

ConditionalRelaxedBernoulli

ConditionalRelaxedBernoulli(domain: AnySpace, codomain: ContinuousSpace, temperature: float = 0.5, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional relaxed Bernoulli (concrete) distribution.

Outputs continuous values in (0, 1) that approximate Bernoulli samples. The temperature controls the relaxation: lower temperatures yield samples closer to discrete {0, 1} values.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (should be 1-d per Bernoulli component).

TYPE: ContinuousSpace

temperature

Relaxation temperature.

TYPE: float DEFAULT: 0.5

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    temperature: float = 0.5,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    self.param_source = _make_source(domain, d, hidden_dim)
    self._d = d
    self._temperature = temperature

ConditionalRelaxedOneHotCategorical

ConditionalRelaxedOneHotCategorical(domain: AnySpace, codomain: ContinuousSpace, temperature: float = 0.5, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional relaxed one-hot categorical (Gumbel-Softmax).

Outputs continuous vectors on the simplex that approximate one-hot categorical samples.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space (simplex or d-dimensional).

TYPE: ContinuousSpace

temperature

Relaxation temperature.

TYPE: float DEFAULT: 0.5

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    temperature: float = 0.5,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    self.param_source = _make_source(domain, d, hidden_dim)
    self._d = d
    self._temperature = temperature
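The Gumbel-Softmax sampling path (which torch.distributions.RelaxedOneHotCategorical implements) can be sketched with the standard library: perturb the logits with Gumbel noise, then apply a tempered softmax.

```python
import math
import random

def gumbel_softmax_sample(logits, temperature, rng):
    # Gumbel(0, 1) noise via the inverse CDF -log(-log(u)).
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + n) / temperature for l, n in zip(logits, g)]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

rng = random.Random(0)
y = gumbel_softmax_sample([1.0, 0.0, -1.0], 0.5, rng)
assert abs(sum(y) - 1.0) < 1e-9  # lies on the simplex
assert all(c > 0 for c in y)
```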

ConditionalWishart

ConditionalWishart(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional Wishart distribution over positive-definite matrices.

Produces random d x d positive-definite matrices. Parameterized by degrees of freedom df(x) and a scale matrix V(x).

The codomain dimension is interpreted as d, and outputs are d x d matrices flattened to d^2.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space. dim is the matrix size d (output is d x d).

TYPE: ContinuousSpace

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    n_tril = d * (d + 1) // 2
    # df (1) + lower-triangular scale (n_tril)
    self.param_source = _make_source(domain, 1 + n_tril, hidden_dim)
    self._d = d
    self._n_tril = n_tril
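A Wishart over d x d matrices requires df > d - 1, so the single raw df output must be shifted into the valid range; one plausible constraint mapping (the actual transform in families.py may differ):

```python
import math

def softplus(t):
    return math.log1p(math.exp(-abs(t))) + max(t, 0.0)

def wishart_params(d, raw_df):
    n_tril = d * (d + 1) // 2
    # df must exceed d - 1 for a valid Wishart; the softplus offset enforces it.
    df = (d - 1) + softplus(raw_df)
    return 1 + n_tril, df  # (parameter dimension, constrained df)

param_dim, df = wishart_params(d=4, raw_df=-2.0)
assert param_dim == 11  # 1 df output + 10 lower-triangular scale entries
assert df > 3
```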

ConditionalBernoulli

ConditionalBernoulli(domain: AnySpace, codomain: AnySpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional Bernoulli: continuous probability -> discrete truth value.

Takes a continuous input x and produces learnable logits that parameterize a Bernoulli distribution. The output is a discrete sample in {0, 1}, returned as a LongTensor.

This is the key bridge used in PDS (Grove & White) for the Bern x pattern, where a LogitNormal draw x in (0,1) parameterizes a Bernoulli over truth values.

The codomain must be a FinSet of size 2 (representing {False, True} or {0, 1}).

Note

Sampling from Bernoulli is NOT reparameterizable. Gradients do not flow through the discrete samples. Use score function estimators (REINFORCE) or the Gumbel-Softmax trick if differentiable samples are needed.

PARAMETER DESCRIPTION
domain

Source space (typically UnitInterval or a FinSet).

TYPE: SetObject or ContinuousSpace

codomain

Target FinSet of size 2.

TYPE: SetObject

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: AnySpace,
    hidden_dim: int = 64,
) -> None:
    from quivers.core.objects import SetObject

    if not isinstance(codomain, SetObject) or codomain.size != 2:
        raise ValueError(
            f"ConditionalBernoulli requires a FinSet(2) codomain, got {codomain!r}"
        )

    super().__init__(domain, codomain)

    # one logit per input
    self.param_source = _make_source(domain, 1, hidden_dim)

log_prob

log_prob(x: Tensor, y: Tensor) -> Tensor

Log-probability of discrete output y given input x.

PARAMETER DESCRIPTION
x

Input tensor.

TYPE: Tensor

y

Discrete output in {0, 1}. Shape (batch,).

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

Log-probabilities. Shape (batch,).

Source code in src/quivers/continuous/families.py
def log_prob(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Log-probability of discrete output y given input x.

    Parameters
    ----------
    x : torch.Tensor
        Input tensor.
    y : torch.Tensor
        Discrete output in {0, 1}. Shape (batch,).

    Returns
    -------
    torch.Tensor
        Log-probabilities. Shape (batch,).
    """
    probs = self._get_probs(x)
    dist = D.Bernoulli(probs=probs)
    return dist.log_prob(y.float())

rsample

rsample(x: Tensor, sample_shape: Size = Size()) -> Tensor

Sample from Bernoulli (not reparameterizable).

PARAMETER DESCRIPTION
x

Input tensor.

TYPE: Tensor

sample_shape

Additional leading sample dimensions.

TYPE: Size DEFAULT: Size()

RETURNS DESCRIPTION
Tensor

Discrete samples in {0, 1}. Shape (*sample_shape, batch).

Source code in src/quivers/continuous/families.py
def rsample(
    self,
    x: torch.Tensor,
    sample_shape: torch.Size = torch.Size(),
) -> torch.Tensor:
    """Sample from Bernoulli (not reparameterizable).

    Parameters
    ----------
    x : torch.Tensor
        Input tensor.
    sample_shape : torch.Size
        Additional leading sample dimensions.

    Returns
    -------
    torch.Tensor
        Discrete samples in {0, 1}. Shape (*sample_shape, batch).
    """
    probs = self._get_probs(x)
    dist = D.Bernoulli(probs=probs)
    return dist.sample(sample_shape).long()
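Since gradients do not flow through the discrete samples, a score-function (REINFORCE) estimator is one way to train through this morphism, as the Note above suggests. A stdlib sketch estimating d/dtheta E[f(y)] for a logit-parameterized Bernoulli (for f(y) = y the exact answer is p(1 - p)):

```python
import math
import random

def bernoulli_score_grad(theta, f, n, rng):
    p = 1.0 / (1.0 + math.exp(-theta))
    total = 0.0
    for _ in range(n):
        y = 1 if rng.random() < p else 0
        # d/dtheta log p(y) = y - sigmoid(theta) for a logit-parameterized Bernoulli
        total += f(y) * (y - p)
    return total / n

rng = random.Random(0)
theta = 0.3
p = 1.0 / (1.0 + math.exp(-theta))
est = bernoulli_score_grad(theta, lambda y: y, 200_000, rng)
assert abs(est - p * (1.0 - p)) < 0.01  # Monte Carlo estimate matches the analytic gradient
```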

ConditionalCategorical

ConditionalCategorical(domain: AnySpace, codomain: AnySpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional Categorical: continuous input -> discrete category.

Generalizes ConditionalBernoulli to k > 2 categories. Takes a continuous input and produces learnable logits over k categories. The output is a discrete sample in {0, ..., k-1}.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target FinSet of size k.

TYPE: SetObject

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: AnySpace,
    hidden_dim: int = 64,
) -> None:
    from quivers.core.objects import SetObject

    if not isinstance(codomain, SetObject):
        raise ValueError(
            f"ConditionalCategorical requires a FinSet codomain, got {codomain!r}"
        )

    super().__init__(domain, codomain)
    self._k = codomain.size
    self.param_source = _make_source(domain, self._k, hidden_dim)

log_prob

log_prob(x: Tensor, y: Tensor) -> Tensor

Log-probability of discrete output y given input x.

PARAMETER DESCRIPTION
x

Input tensor.

TYPE: Tensor

y

Discrete output in {0, ..., k-1}. Shape (batch,).

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

Log-probabilities. Shape (batch,).

Source code in src/quivers/continuous/families.py
def log_prob(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Log-probability of discrete output y given input x.

    Parameters
    ----------
    x : torch.Tensor
        Input tensor.
    y : torch.Tensor
        Discrete output in {0, ..., k-1}. Shape (batch,).

    Returns
    -------
    torch.Tensor
        Log-probabilities. Shape (batch,).
    """
    logits = self._get_logits(x)
    dist = D.Categorical(logits=logits)
    return dist.log_prob(y.long())

rsample

rsample(x: Tensor, sample_shape: Size = Size()) -> Tensor

Sample from Categorical (not reparameterizable).

PARAMETER DESCRIPTION
x

Input tensor.

TYPE: Tensor

sample_shape

Additional leading sample dimensions.

TYPE: Size DEFAULT: Size()

RETURNS DESCRIPTION
Tensor

Discrete samples in {0, ..., k-1}. Shape (*sample_shape, batch).

Source code in src/quivers/continuous/families.py
def rsample(
    self,
    x: torch.Tensor,
    sample_shape: torch.Size = torch.Size(),
) -> torch.Tensor:
    """Sample from Categorical (not reparameterizable).

    Parameters
    ----------
    x : torch.Tensor
        Input tensor.
    sample_shape : torch.Size
        Additional leading sample dimensions.

    Returns
    -------
    torch.Tensor
        Discrete samples in {0, ..., k-1}. Shape (*sample_shape, batch).
    """
    logits = self._get_logits(x)
    dist = D.Categorical(logits=logits)
    return dist.sample(sample_shape).long()
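The logits-to-category path can be sketched with the standard library: softmax the k logits, then invert the cumulative distribution with a uniform draw (the actual class delegates this to torch.distributions.Categorical):

```python
import math
import random

def categorical_sample(logits, rng):
    m = max(logits)  # subtract the max for numerical stability
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    probs = [v / s for v in e]
    u, acc = rng.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if u < acc:
            return k
    return len(probs) - 1  # guard against float round-off

rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(30_000):
    counts[categorical_sample([2.0, 1.0, 0.0], rng)] += 1
# Empirical frequencies should track softmax([2, 1, 0]) ≈ [0.665, 0.245, 0.090]
assert abs(counts[0] / 30_000 - 0.665) < 0.02
```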

ConditionalGeneralizedPareto

ConditionalGeneralizedPareto(domain: AnySpace, codomain: ContinuousSpace, hidden_dim: int = 64)

Bases: ContinuousMorphism

Conditional generalized Pareto distribution, a heavy-tailed family commonly used to model exceedances over a threshold. For each input x, the location, scale, and concentration (shape) parameters are learnable functions of x.

PARAMETER DESCRIPTION
domain

Source space.

TYPE: SetObject or ContinuousSpace

codomain

Target space.

TYPE: ContinuousSpace

hidden_dim

Hidden layer width for neural parameter source.

TYPE: int DEFAULT: 64

Source code in src/quivers/continuous/families.py
def __init__(
    self,
    domain: AnySpace,
    codomain: ContinuousSpace,
    hidden_dim: int = 64,
) -> None:
    super().__init__(domain, codomain)
    d = codomain.dim
    # loc + scale + concentration
    self.param_source = _make_source(domain, 3 * d, hidden_dim)
    self._d = d
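The generalized Pareto has a closed-form inverse CDF, so reparameterized sampling reduces to transforming a uniform draw; a stdlib sketch for the concentration != 0 case (illustrative only, not the actual implementation):

```python
import random

def gpd_sample(loc, scale, concentration, u):
    # Inverse CDF of the generalized Pareto; valid for concentration != 0.
    return loc + scale * ((1.0 - u) ** (-concentration) - 1.0) / concentration

rng = random.Random(0)
ys = [gpd_sample(1.0, 2.0, 0.3, rng.random()) for _ in range(1000)]
assert all(y >= 1.0 for y in ys)  # support starts at loc for concentration > 0
```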