probula

package probula

Members list

Type members

Classlikes

class Bernoulli[T](val name: Name)(p: Double, success: T, failure: T) extends Dist[T], HasDensity[T]

Attributes

Companion
object
Supertypes
trait HasDensity[T]
trait Dist[T]
trait CanSample[T]
trait Named
class Object
trait Matchable
class Any
object Bernoulli

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
Bernoulli.type
trait CanSample[T]

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
trait Dist[T]
class Bernoulli[T]
class Dirac[T]
trait DistD
class Gaussian
class UniformC
class Uniform[T]
class Dirac[T](val name: Name)(val value: T) extends Dist[T], HasDensity[T]

Attributes

Companion
object
Supertypes
trait HasDensity[T]
trait Dist[T]
trait CanSample[T]
trait Named
class Object
trait Matchable
class Any
object Dirac

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
Dirac.type
trait Dist[T] extends Named, CanSample[T]

A representation of probabilistic models as multivariate distributions, effectively hierarchical Bayesian models.

Attributes

Companion
object
Supertypes
trait CanSample[T]
trait Named
class Object
trait Matchable
class Any
Known subtypes
class Bernoulli[T]
class Dirac[T]
trait DistD
class Gaussian
class UniformC
class Uniform[T]
Self type
Dist[T]
object Dist

Attributes

Companion
trait
Supertypes
class Object
trait Matchable
class Any
Self type
Dist.type
trait DistD extends Dist[Double]

Base trait for distributions over doubles.

Attributes

Supertypes
trait Dist[Double]
trait CanSample[Double]
trait Named
class Object
trait Matchable
class Any
Known subtypes
class Gaussian
class UniformC
class Doubles(val from: Double, val to: Double, val step: Double) extends IndexedSeq[Double]

An inclusive range of Double values with a fixed step.

Usage:

Doubles(0.0 -> 1.0) by 0.1
Doubles(0.0, 1.0) points 100
50 doubles (0.0 -> 1.0)

Attributes

Companion
object
Supertypes
object Doubles

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
Doubles.type
class Gaussian(val name: Name)(mean: Double, stdDev: Double) extends DistD, HasDensity[Double]

Attributes

Companion
object
Supertypes
trait DistD
trait Dist[Double]
trait CanSample[Double]
trait Named
class Object
trait Matchable
class Any
object Gaussian

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
Gaussian.type
trait HasDensity[T]

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
class Bernoulli[T]
class Dirac[T]
class Gaussian
class Uniform[T]
class UniformC
class IData[+T](val name: Name, val chain: Chain[T]) extends Named

Representation of Inference Data. The intention is to make it compatible with the inference data in the Python world (one day...)

Attributes

Supertypes
trait Named
class Object
trait Matchable
class Any
object LogScore

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
LogScore.type
enum Name

Attributes

Supertypes
trait Enum
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
trait Named

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
trait Dist[T]
class Bernoulli[T]
class Dirac[T]
trait DistD
class Gaussian
class UniformC
class Uniform[T]
class IData[T]

trait NumericDecomposition[T]

Witness that all components of T can be decomposed into Doubles. Works for bare numeric scalars (via Numeric) and tuples of numerics (recursively).

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
Known subtypes

object NumericDecomposition

Attributes

Companion
trait
Supertypes
class Object
trait Matchable
class Any
Self type
NumericDecomposition.type
object Probula

The Probula object is the entry point to the framework. One starts building a probula probabilistic model by using factory methods from this object. Once the model is initiated (the first variable is created), one can use the model methods (see probula.Dist) to add new variables, hierarchical dependencies, and observations.

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
Probula.type
abstract class Uniform[T](val name: Name) extends Dist[T], HasDensity[T]

Attributes

Companion
object
Supertypes
trait HasDensity[T]
trait Dist[T]
trait CanSample[T]
trait Named
class Object
trait Matchable
class Any
object Uniform

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
Uniform.type
class UniformC(val name: Name)(lower: Double, upper: Double) extends DistD, HasDensity[Double]

Attributes

Companion
object
Supertypes
trait DistD
trait Dist[Double]
trait CanSample[Double]
trait Named
class Object
trait Matchable
class Any
object UniformC

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
UniformC.type

Types

opaque type Chain[+T]
opaque type LogScore
opaque type Prob
type RNG = Generator
opaque type SampleSize
opaque type Scored[+T]

Value members

Concrete methods

def chain[T](l: Seq[Scored[T]]): Chain[T]
def scored[T](value: T, logScore: LogScore): Scored[T]
def unScored[T](value: T): Scored[T]

Extensions

extension (idata: IData[_])
def csv: String

Export inference data as a CSV string. Columns: sample, one per variable, log_weight. Variable names are derived from this IData's Name.

Attributes

extension (n: SampleSize)
infix def *(m: Int): SampleSize
def toInt: Int
extension (n: Int)
infix def doubles(range: (Double, Double)): Doubles

50 doubles (0.0 -> 1.0)

Attributes

extension (names: (Name, Name))
def toName: Name
extension (names: (Name, Name, Name))
def toName: Name
extension (p: Double)
def pr: Prob
extension (p: Prob)
infix def +(q: Prob): Prob
infix def -(q: Prob): Prob
def about(q: Prob, ε: Prob): Boolean
def log: LogScore
extension (p: LogScore)
infix def +(q: LogScore): LogScore
infix def -(q: LogScore): LogScore
def about(q: LogScore, ε: Double): Boolean
def exp: Double

Holds iff the value is zero

Attributes

def pr: Prob

The double value of this log score

Attributes

extension (s: String)
def toName: Name
extension [S, T](self: Dist[(S, T)])
def _1: Dist[S]
def _2: Dist[T]
extension [T](self: Chain[T])
def expectedValue[S >: T : Numeric]: Double

Same as mean.

Attributes

def mean[S >: T](using num: Numeric[S]): Double

Compute a mean for a numeric sample. Computed in a numerically stable way, similar to LogSumExp, exploiting that the max cancels out in the numerator and denominator.

Attributes
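The max-subtraction trick described above can be sketched as a standalone helper (a hypothetical illustration, not probula's actual implementation; the name stableWeightedMean and its signature are made up here):

```scala
// Numerically stable weighted mean from log-weights.
// Shifting by the max log-weight before exponentiating avoids underflow;
// the shift cancels between numerator and denominator.
def stableWeightedMean(xs: Seq[Double], logWeights: Seq[Double]): Double =
  val m = logWeights.max
  val ws = logWeights.map(lw => math.exp(lw - m)) // all in (0, 1]
  xs.zip(ws).map((x, w) => x * w).sum / ws.sum

// With log-weights this negative, naive exp would underflow to 0.0:
val result = stableWeightedMean(Seq(1.0, 2.0, 3.0), Seq(-1000.0, -1000.0, -1000.0))
// result == 2.0 (uniform weights, so the plain mean)
```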

def median[S >: T : Ordering]: S

Compute a median of a sample with a defined Ordering.

For simplicity we drop one element if the sample is of even length. Typically to be used on a univariate sample of numbers (then the ordering exists).

It requires that the chain is finite! Otherwise it will crash.

Attributes

def percentile[S >: T](q: Double)(using num: Numeric[S]): Double

Compute a weighted percentile for a numeric sample. Returns the value below which fraction q of the weighted mass falls. Linearly interpolates between the two bracketing values when the threshold falls between samples.

Uses the same weighted cumulative mass approach as median, generalized to an arbitrary threshold.

Value parameters

q

the quantile in [0, 1] (e.g. 0.5 for the median, 0.055 for the 5.5th percentile)

Attributes
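The weighted cumulative-mass approach with linear interpolation can be sketched as follows (a standalone sketch with plain linear weights; probula's exact interpolation convention and its use of log-scores may differ):

```scala
// Weighted percentile: the value below which fraction q of the weight mass
// falls, linearly interpolating between the two bracketing sample values.
def weightedPercentile(xs: Seq[Double], ws: Seq[Double], q: Double): Double =
  val sorted = xs.zip(ws).sortBy(_._1)
  val cum = sorted.scanLeft(0.0)(_ + _._2).tail // cumulative weight mass
  val target = q * cum.last
  val i = cum.indexWhere(_ >= target)
  if i < 0 then sorted.last._1       // guard against fp round-off at q = 1
  else if i == 0 then sorted.head._1
  else
    val (x0, x1) = (sorted(i - 1)._1, sorted(i)._1)
    val (c0, c1) = (cum(i - 1), cum(i))
    if c1 == c0 then x1
    else x0 + (x1 - x0) * (target - c0) / (c1 - c0)
```

With uniform weights and q = 0.5 this reproduces a median-like value.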

def stdDev[S >: T : Numeric]: Double

Compute the standard deviation for a numeric sample. This is the square root of the variance.

Attributes

def variance[S >: T](using num: Numeric[S]): Double

Compute a sample variance for a numeric sample. Uses the unbiased weighted estimator (a generalization of Bessel's correction to weighted samples; see, for example, reliability weights in https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Related_concepts): Var = sum wi (xi - mu)^2 * V1 / (V1^2 - V2), where V1 = sum wi and V2 = sum wi^2. For uniform weights this reduces to sum (xi - mu)^2 / (n - 1). Uses the max-subtraction trick (as in mean) for numerical stability when log-scores are very negative.

Attributes
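The estimator quoted above, written with plain linear weights for clarity (a hypothetical standalone sketch; probula additionally works from log-scores with the max-subtraction trick):

```scala
// Unbiased weighted variance: Var = sum wi (xi - mu)^2 * V1 / (V1^2 - V2),
// with V1 = sum wi and V2 = sum wi^2.
def weightedVariance(xs: Seq[Double], ws: Seq[Double]): Double =
  val v1 = ws.sum
  val v2 = ws.map(w => w * w).sum
  val mu = xs.zip(ws).map((x, w) => x * w).sum / v1
  val s  = xs.zip(ws).map((x, w) => w * (x - mu) * (x - mu)).sum
  s * v1 / (v1 * v1 - v2)

// Uniform weights reduce to the familiar sum (xi - mu)^2 / (n - 1):
val v = weightedVariance(Seq(1.0, 2.0, 3.0), Seq(1.0, 1.0, 1.0))
// v == 1.0, the sample variance of 1, 2, 3
```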

extension [T1, T2, T3](self: Dist[(T1, T2, T3)])
def _1: Dist[T1]
def _2: Dist[T2]
def _3: Dist[T3]
extension [T](self: Scored[T])

The log score of this sample

Attributes

def map[S](f: T => S): Scored[S]
def reScored(adjustment: LogScore): Scored[T]

This scored sample adjusted with a new score (logScores are added, scores are multiplied)

Attributes

def score: Double

The non-log score of this sample

Attributes

def tupled: (T, LogScore)

Extract a value-score pair from a scored value.

Attributes

def value: T
extension [T1, T2, T3, T4](self: Dist[(T1, T2, T3, T4)])
def _1: Dist[T1]
def _2: Dist[T2]
def _3: Dist[T3]
def _4: Dist[T4]
extension [T](self: Chain[T])
def drop(n: SampleSize): Chain[T]
def exists(p: T => Boolean): Boolean
def filter(p: T => Boolean): Chain[T]
def forall(p: T => Boolean): Boolean
def head: T
def headScored: Scored[T]
def map[S](f: T => S): Chain[S]

Does not rescore in any way

Attributes

def mapScored[S](f: Scored[T] => Scored[S]): Chain[S]
def reScored[S](f: Scored[T] => LogScore): Chain[T]

Rescore the chain by adding the log score returned by f to the log score of each sample. Corresponds to multiplying/scaling in the linear space.

Attributes
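The log/linear correspondence behind this rescoring is just the exponential identity (plain arithmetic, no probula API):

```scala
// Adding log-scores is multiplying linear-space scores:
// exp(a + b) == exp(a) * exp(b)
val a = math.log(0.5)
val b = math.log(0.25)
val combined = math.exp(a + b) // == 0.5 * 0.25 == 0.125
```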

Reduce the chain by combining the scores of the same values. The values are grouped by their value, and the scores are summed using logSumExp.

Attributes
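Summing the scores of grouped values via logSumExp can be sketched as a standalone helper (an illustration of the technique, not probula's internal code):

```scala
// Stable log(sum(exp(l))) via max-subtraction: shifting by the max keeps
// the exponentials in a representable range.
def logSumExp(logs: Seq[Double]): Double =
  val m = logs.max
  m + math.log(logs.map(l => math.exp(l - m)).sum)

// Two equal log-scores combine to log(2) above either of them,
// even where naive math.exp(-1000.0) would underflow to 0.0:
val merged = logSumExp(Seq(-1000.0, -1000.0))
// merged == -1000.0 + math.log(2.0)
```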

Returns the sample size. Note that this may be wrong if the chain has been reduced by value. (This is easy to fix by storing the size in the Chain object; one day, if we see that it matters.)

Attributes

def take(n: SampleSize): Chain[T]
def zip[S](that: Chain[S]): Chain[(T, S)]
extension [S, T](self: IData[(S, T)])
def _1: IData[S]
def _2: IData[T]
extension [T1, T2, T3](self: IData[(T1, T2, T3)])
def _1: IData[T1]
def _2: IData[T2]
def _3: IData[T3]
extension [T1, T2, T3, T4](self: IData[(T1, T2, T3, T4)])
def _1: IData[T1]
def _2: IData[T2]
def _3: IData[T3]
def _4: IData[T4]
extension [T](self: IData[T])
def expectedValue[S >: T : Numeric]: Double

Same as mean.

Attributes

def mean[S >: T : Numeric]: Double

Delegator to Chain.mean.

Attributes

def median[S >: T : Ordering]: S

Delegator to Chain.median.

Attributes

def percentile[S >: T : Numeric](q: Double): Double

Delegator to Chain.percentile.

Attributes

def stdDev[S >: T : Numeric]: Double

Delegator to Chain.stdDev.

Attributes

def variance[S >: T : Numeric]: Double

Delegator to Chain.variance.

Attributes

extension [T](self: Chain[T])(using nd: NumericDecomposition[T])
def project(i: Int): Chain[Double]

Extract the i-th numeric variable as a Chain[Double], preserving log-scores.

Attributes

extension [T](self: IData[T])(using nd: NumericDecomposition[T])
def precis(histogram: Boolean): String

A summary table of the posterior, showing mean, standard deviation, the 89% percentile interval (5.5% to 94.5%), and a sparkline histogram for each variable. Inspired by McElreath's precis.

Value parameters

histogram

whether to include a sparkline histogram column (default: true)

Attributes

extension [T <: Tuple](self: Dist[T] & HasDensity[T])
def bernoulli[U](name: String)(p: Double, success: U, failure: U)(using NotGiven[U <:< Tuple]): Dist[Append[T, U]] & HasDensity[Append[T, U]]
def bernoulli[U](p: Double, success: U, failure: U)(using NotGiven[U <:< Tuple]): Dist[Append[T, U]] & HasDensity[Append[T, U]]
def gaussian(name: String)(mean: Double, stdDev: Double): Dist[Append[T, Double]] & HasDensity[Append[T, Double]]
def gaussian(mean: Double, stdDev: Double): Dist[Append[T, Double]] & HasDensity[Append[T, Double]]
def likelihood[D](data: Iterable[D])(f: D => T => LogScore): Dist[T] & HasDensity[T]
def uniform[U](name: String)(values: U*)(using NotGiven[U <:< Tuple]): Dist[Append[T, U]] & HasDensity[Append[T, U]]
def uniform[U](values: U*)(using NotGiven[U <:< Tuple]): Dist[Append[T, U]] & HasDensity[Append[T, U]]
def uniformC(name: String)(lower: Double, upper: Double): Dist[Append[T, Double]] & HasDensity[Append[T, Double]]
def uniformC(lower: Double, upper: Double): Dist[Append[T, Double]] & HasDensity[Append[T, Double]]
extension [T](self: Dist[T] & HasDensity[T])