Opinion Pooling
PortfolioOptimisers.LinearOpinionPooling Type
struct LinearOpinionPooling <: OpinionPoolingAlgorithm end
Linear opinion pooling algorithm for consensus prior estimation.
LinearOpinionPooling is a concrete subtype of OpinionPoolingAlgorithm that combines multiple prior probability distributions using a weighted linear average. This algorithm produces a consensus prior by averaging the input opinions according to their specified weights, resulting in a pooled distribution that reflects the collective beliefs of all contributors.
Details
The consensus weights are computed as a linear combination of the individual prior weights, weighted by opinion confidence.
The pooled distribution is the weighted arithmetic mean of the individual opinions.
Suitable for scenarios where opinions are independent and additive.
The only way to force a zero in the final opinion is for all opinions to assign it a zero probability, as the sketch below illustrates.
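A minimal sketch of the rule, with made-up numbers, for two opinions over three scenarios; the first scenario keeps positive probability because only the second opinion assigns it zero:

pw = [0.5 0.0;   # scenario 1: only opinion 2 assigns zero probability
      0.3 0.6;
      0.2 0.4]   # observations × opinions, each column is a prior distribution
ow = [0.7, 0.3]  # opinion probabilities (confidences), summing to 1

w = pw * ow      # consensus weights: [0.35, 0.39, 0.26]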
Related
PortfolioOptimisers.LogarithmicOpinionPooling Type
struct LogarithmicOpinionPooling <: OpinionPoolingAlgorithm end
Logarithmic opinion pooling algorithm for consensus prior estimation.
LogarithmicOpinionPooling is a concrete subtype of OpinionPoolingAlgorithm that combines multiple prior probability distributions using a weighted geometric mean. This algorithm produces a consensus prior by minimising the average Kullback-Leibler divergence from the individual opinions to the pooled distribution, resulting in an information-theoretically optimal consensus.
Details
The consensus weights are computed as the weighted geometric mean of the individual prior weights, weighted by opinion confidence.
Robust to extremes, as it down-weights divergent or extreme views.
If any opinion assigns zero probability to an event, the pooled opinion will also assign zero probability.
Minimises the average Kullback-Leibler divergence from the individual opinions to the consensus.
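A minimal sketch of the rule, using the same made-up numbers as in the linear pooling sketch above; the zero in the first scenario propagates to the consensus, illustrating the zero-forcing property noted above:

pw = [0.5 0.0;
      0.3 0.6;
      0.2 0.4]             # observations × opinions, each column is a prior distribution
ow = [0.7, 0.3]            # opinion probabilities

v = log.(pw) * ow          # weighted sum of log-probabilities (log of the geometric mean)
w = exp.(v .- maximum(v))  # shift by the maximum for numerical stability; scenario 1 becomes exactly zero
w ./= sum(w)               # renormalise to a probability vector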
Related
PortfolioOptimisers.OpinionPoolingPrior Type
struct OpinionPoolingPrior{T1, T2, T3, T4, T5, T6, T7} <: AbstractLowOrderPriorEstimator_AF
pes::T1
pe1::T2
pe2::T3
p::T4
w::T5
alg::T6
threads::T7
end
Opinion pooling prior estimator for asset returns.
OpinionPoolingPrior is a low order prior estimator that computes the mean and covariance of asset returns by combining multiple prior estimations into a consensus prior using opinion pooling algorithms. It supports both linear and logarithmic pooling, flexible weighting of opinions, and optional pre- and post-processing estimators.
Fields
pes: Vector of prior estimators to be pooled.
pe1: Optional pre-processing prior estimator.
pe2: Post-processing prior estimator.
p: Penalty parameter for penalising opinions which deviate from the consensus.
w: Vector of opinion probabilities.
alg: Opinion pooling algorithm.
threads: Parallel execution strategy.
Constructor
OpinionPoolingPrior(; pes, pe1, pe2, p, w, alg, threads)
Keyword arguments correspond to the fields above. All arguments are validated for type and value consistency.
Validation
pes must be a non-empty vector of prior estimators.
If w is provided, !isempty(w), length(w) == length(pes), all(x -> 0 <= x <= 1, w), and sum(w) <= 1.
If p is provided, p > 0.
Details
If w is provided and sum(w) < 1, the remaining weight is assigned to the uniform prior (see the sketch below). Otherwise, all opinions are assumed to be equally weighted.
If p is nothing, the opinion probabilities are used as given. Otherwise they are adjusted according to their Kullback-Leibler divergence from the consensus.
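For concreteness, a sketch of the weighting rule with made-up numbers, carried out by hand with linear pooling (the estimator performs this internally):

pw = [0.40 0.10;
      0.30 0.30;
      0.20 0.30;
      0.10 0.30]                       # prior weights from two opinions over T = 4 observations
w_op = [0.5, 0.3]                      # supplied opinion probabilities; sum(w_op) = 0.8 < 1
T = size(pw, 1)
w = pw * w_op .+ (1 - sum(w_op)) / T   # the remaining 0.2 goes to the uniform prior 1/T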
Examples
julia> sets = AssetSets(; key = "nx", dict = Dict("nx" => ["A", "B", "C"]));
julia> OpinionPoolingPrior(;
pes = [EntropyPoolingPrior(; sets = sets,
mu_views = LinearConstraintEstimator(;
val = ["A == 0.03",
"B + C == 0.04"])),
EntropyPoolingPrior(; sets = sets,
mu_views = LinearConstraintEstimator(;
val = ["A == 0.05",
"B + C >= 0.06"]))])
OpinionPoolingPrior
pes ┼ EntropyPoolingPrior{EmpiricalPrior{PortfolioOptimisersCovariance{Covariance{SimpleExpectedReturns{Nothing}, GeneralCovariance{StatsBase.SimpleCovariance, Nothing}, Full}, DefaultMatrixProcessing{Posdef{UnionAll}, Nothing, Nothing, Nothing}}, SimpleExpectedReturns{Nothing}, Nothing}, LinearConstraintEstimator{Vector{String}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, AssetSets{String, Dict{String, Vector{String}}}, Nothing, Nothing, OptimEntropyPooling{Tuple{}, @NamedTuple{}, Int64, Float64, ExpEntropyPooling}, Nothing, H1_EntropyPooling}[EntropyPoolingPrior
│ pe ┼ EmpiricalPrior
│ │ ce ┼ PortfolioOptimisersCovariance
│ │ │ ce ┼ Covariance
│ │ │ │ me ┼ SimpleExpectedReturns
│ │ │ │ │ w ┴ nothing
│ │ │ │ ce ┼ GeneralCovariance
│ │ │ │ │ ce ┼ StatsBase.SimpleCovariance: StatsBase.SimpleCovariance(true)
│ │ │ │ │ w ┴ nothing
│ │ │ │ alg ┴ Full()
│ │ │ mp ┼ DefaultMatrixProcessing
│ │ │ │ pdm ┼ Posdef
│ │ │ │ │ alg ┴ UnionAll: NearestCorrelationMatrix.Newton
│ │ │ │ denoise ┼ nothing
│ │ │ │ detone ┼ nothing
│ │ │ │ alg ┴ nothing
│ │ me ┼ SimpleExpectedReturns
│ │ │ w ┴ nothing
│ │ horizon ┴ nothing
│ mu_views ┼ LinearConstraintEstimator
│ │ val ┴ Vector{String}: ["A == 0.03", "B + C == 0.04"]
│ var_views ┼ nothing
│ cvar_views ┼ nothing
│ sigma_views ┼ nothing
│ sk_views ┼ nothing
│ kt_views ┼ nothing
│ rho_views ┼ nothing
│ var_alpha ┼ nothing
│ cvar_alpha ┼ nothing
│ sets ┼ AssetSets
│ │ key ┼ String: "nx"
│ │ dict ┴ Dict{String, Vector{String}}: Dict("nx" => ["A", "B", "C"])
│ ds_opt ┼ nothing
│ dm_opt ┼ nothing
│ opt ┼ OptimEntropyPooling
│ │ args ┼ Tuple{}: ()
│ │ kwargs ┼ @NamedTuple{}: NamedTuple()
│ │ sc1 ┼ Int64: 1
│ │ sc2 ┼ Float64: 1000.0
│ │ alg ┴ ExpEntropyPooling()
│ w ┼ nothing
│ alg ┴ H1_EntropyPooling()
│ , EntropyPoolingPrior
│ pe ┼ EmpiricalPrior
│ │ ce ┼ PortfolioOptimisersCovariance
│ │ │ ce ┼ Covariance
│ │ │ │ me ┼ SimpleExpectedReturns
│ │ │ │ │ w ┴ nothing
│ │ │ │ ce ┼ GeneralCovariance
│ │ │ │ │ ce ┼ StatsBase.SimpleCovariance: StatsBase.SimpleCovariance(true)
│ │ │ │ │ w ┴ nothing
│ │ │ │ alg ┴ Full()
│ │ │ mp ┼ DefaultMatrixProcessing
│ │ │ │ pdm ┼ Posdef
│ │ │ │ │ alg ┴ UnionAll: NearestCorrelationMatrix.Newton
│ │ │ │ denoise ┼ nothing
│ │ │ │ detone ┼ nothing
│ │ │ │ alg ┴ nothing
│ │ me ┼ SimpleExpectedReturns
│ │ │ w ┴ nothing
│ │ horizon ┴ nothing
│ mu_views ┼ LinearConstraintEstimator
│ │ val ┴ Vector{String}: ["A == 0.05", "B + C >= 0.06"]
│ var_views ┼ nothing
│ cvar_views ┼ nothing
│ sigma_views ┼ nothing
│ sk_views ┼ nothing
│ kt_views ┼ nothing
│ rho_views ┼ nothing
│ var_alpha ┼ nothing
│ cvar_alpha ┼ nothing
│ sets ┼ AssetSets
│ │ key ┼ String: "nx"
│ │ dict ┴ Dict{String, Vector{String}}: Dict("nx" => ["A", "B", "C"])
│ ds_opt ┼ nothing
│ dm_opt ┼ nothing
│ opt ┼ OptimEntropyPooling
│ │ args ┼ Tuple{}: ()
│ │ kwargs ┼ @NamedTuple{}: NamedTuple()
│ │ sc1 ┼ Int64: 1
│ │ sc2 ┼ Float64: 1000.0
│ │ alg ┴ ExpEntropyPooling()
│ w ┼ nothing
│ alg ┴ H1_EntropyPooling()
│ ]
pe1 ┼ nothing
pe2 ┼ EmpiricalPrior
│ ce ┼ PortfolioOptimisersCovariance
│ │ ce ┼ Covariance
│ │ │ me ┼ SimpleExpectedReturns
│ │ │ │ w ┴ nothing
│ │ │ ce ┼ GeneralCovariance
│ │ │ │ ce ┼ StatsBase.SimpleCovariance: StatsBase.SimpleCovariance(true)
│ │ │ │ w ┴ nothing
│ │ │ alg ┴ Full()
│ │ mp ┼ DefaultMatrixProcessing
│ │ │ pdm ┼ Posdef
│ │ │ │ alg ┴ UnionAll: NearestCorrelationMatrix.Newton
│ │ │ denoise ┼ nothing
│ │ │ detone ┼ nothing
│ │ │ alg ┴ nothing
│ me ┼ SimpleExpectedReturns
│ │ w ┴ nothing
│ horizon ┴ nothing
p ┼ nothing
w ┼ nothing
alg ┼ LinearOpinionPooling()
threads ┴ Transducers.ThreadedEx{@NamedTuple{}}: Transducers.ThreadedEx()
Related
PortfolioOptimisers.prior Function
prior(pe::OpinionPoolingPrior, X::AbstractMatrix;
F::Union{Nothing, <:AbstractMatrix} = nothing, dims::Int = 1, strict::Bool = false,
      kwargs...)
Compute opinion pooling prior moments for asset returns.
prior estimates the mean and covariance of asset returns by combining multiple prior estimations into a consensus prior using opinion pooling algorithms. Supports both linear and logarithmic pooling, robust opinion probability adjustment, and optional pre- and post-processing estimators.
Arguments
pe: Opinion pooling prior estimator.
X: Asset returns matrix (observations × assets).
F: Optional factor matrix (default: nothing).
dims: Dimension along which to compute moments (1 = columns/assets, 2 = rows). Default is 1.
strict: If true, throws an error for missing assets; otherwise, issues warnings. Default is false.
kwargs...: Additional keyword arguments passed to underlying estimators and solvers.
Returns
pr::LowOrderPrior: Result object containing asset returns, posterior mean vector, posterior covariance matrix, consensus weights, entropy, Kullback-Leibler divergence, opinion probabilities, and optional factor moments.
Validation
dims in (1, 2).
Details
If the optional pre-processing estimator pe.pe1 is provided, it is applied to the asset returns before pooling; otherwise the original returns are used.
Each prior estimator in pe.pes is applied to the asset returns, producing individual prior weights.
Opinion probabilities ow are initialised from pe.w, or set uniformly if it is nothing; if their sum is less than 1, the remainder is assigned to a uniform prior.
Robust opinion probabilities are computed using robust_probabilities if a penalty parameter pe.p is provided.
Consensus posterior weights are computed using compute_pooling according to the specified pooling algorithm pe.alg.
The post-processing estimator pe.pe2 is applied using the consensus weights.
The result includes the effective number of scenarios, the Kullback-Leibler divergence to each opinion, the robust opinion probabilities, and optional factor moments.
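A hedged usage sketch on simulated returns, reusing only the constructors shown in the Examples above and assuming the default entropy-pooling optimiser needs no extra configuration; the asset names, view values, and sample size are arbitrary:

using PortfolioOptimisers, Random

sets = AssetSets(; key = "nx", dict = Dict("nx" => ["A", "B", "C"]))
ope = OpinionPoolingPrior(;
          pes = [EntropyPoolingPrior(; sets = sets,
                                     mu_views = LinearConstraintEstimator(; val = ["A == 0.03"])),
                 EntropyPoolingPrior(; sets = sets,
                                     mu_views = LinearConstraintEstimator(; val = ["A == 0.05"]))])

Random.seed!(123)
X = randn(200, 3) / 10   # simulated returns, observations × assets

pr = prior(ope, X)       # LowOrderPrior with the consensus mean, covariance, and weights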
Related
PortfolioOptimisers.OpinionPoolingAlgorithm Type
abstract type OpinionPoolingAlgorithm <: AbstractAlgorithm end
Abstract supertype for opinion pooling algorithms.
OpinionPoolingAlgorithm is the base type for all algorithms that combine multiple prior estimations into a consensus prior using opinion pooling. All concrete opinion pooling algorithms should subtype this type to ensure a consistent interface for consensus formation in portfolio optimisation workflows.
Related
PortfolioOptimisers.robust_probabilities Function
robust_probabilities(ow::AbstractVector, args...)
robust_probabilities(ow::AbstractVector, pw::AbstractMatrix, p::Real)
Compute robust opinion probabilities for consensus formation in opinion pooling.
robust_probabilities adjusts the vector of opinion probabilities (ow) used in opinion pooling algorithms to account for robustness against outlier or extreme opinions. If a penalty parameter p is provided, the method penalises opinions that diverge from the consensus by down-weighting them according to their Kullback-Leibler divergence from the pooled distribution. If no penalty parameter is set, the original opinion probabilities are returned unchanged.
Arguments
ow: Vector of opinion probabilities (length = number of opinions).
pw: Matrix of prior weights for each opinion (observations × opinions).
p: Robustness penalty parameter.
Returns
ow::AbstractVector: Opinion probabilities for pooling.
Details
If p is nothing, i.e. the method called with args..., the original opinion probabilities are returned.
If p is provided, the consensus distribution is computed, the Kullback-Leibler divergence of each opinion from it is calculated, and an exponential penalty is applied to each probability. The adjusted probabilities are normalised to sum to 1.
Used internally by OpinionPoolingPrior to ensure robust aggregation of opinions.
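A minimal sketch of one way to implement the described adjustment; the helper name and the specific exp(-p * KL) penalty are illustrative assumptions, not necessarily the package's exact formula:

kl(a, b) = sum(ai * log(ai / bi) for (ai, bi) in zip(a, b) if ai > 0)  # KL(a || b)

function robust_probabilities_sketch(ow, pw, p)
    w = pw * ow                                        # provisional consensus (linear pooling)
    d = [kl(view(pw, :, i), w) for i in axes(pw, 2)]   # divergence of each opinion from it
    ow2 = ow .* exp.(-p .* d)                          # exponential penalty on divergent opinions
    return ow2 / sum(ow2)                              # renormalise to sum to 1
end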
Related
PortfolioOptimisers.compute_pooling Function
compute_pooling(::LinearOpinionPooling, ow::AbstractVector, pw::AbstractMatrix)
compute_pooling(::LogarithmicOpinionPooling, ow::AbstractVector, pw::AbstractMatrix)
Compute the consensus posterior return distribution from individual prior distributions using opinion pooling.
compute_pooling aggregates multiple prior probability distributions (pw) into a single consensus posterior distribution according to the specified opinion pooling algorithm and opinion probabilities (ow). Supports both linear and logarithmic pooling.
Arguments
alg: Opinion pooling algorithm (LinearOpinionPooling or LogarithmicOpinionPooling).
ow: Vector of opinion probabilities (length = number of opinions).
pw: Matrix of prior weights for each opinion (observations × opinions).
Returns
w::ProbabilityWeights: Consensus posterior probability weights.
Details
For LinearOpinionPooling, computes the weighted arithmetic mean of the individual prior weights: w = pw * ow.
For LogarithmicOpinionPooling, computes the weighted geometric mean of the individual prior weights: w = exp.(log.(pw) * ow .- logsumexp(log.(pw) * ow)).
Used internally by OpinionPoolingPrior to form the consensus prior distribution.
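A plain-Julia sketch of both formulas, returning ordinary vectors rather than the package's ProbabilityWeights:

linear_pool(ow, pw) = pw * ow     # weighted arithmetic mean of the opinion columns

function log_pool(ow, pw)
    v = log.(pw) * ow             # weighted sum of log-probabilities
    w = exp.(v .- maximum(v))     # shift by the maximum for numerical stability
    return w / sum(w)             # renormalise; overall equivalent to exp.(v .- logsumexp(v))
end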
Related