The source files for all examples can be found in /examples.
Example 6: Multiple risk measures
This example shows how to use multiple risk measures.
```julia
using PortfolioOptimisers, PrettyTables
# Formatters for pretty tables.
tsfmt = (v, i, j) -> begin
    if j == 1
        return Date(v)
    else
        return v
    end
end;
resfmt = (v, i, j) -> begin
    if j == 1
        return v
    else
        return isa(v, Number) ? "$(round(v*100, digits=3)) %" : v
    end
end;
```

1. ReturnsResult data
We will use the same data as the previous example.
```julia
using CSV, TimeSeries, DataFrames
X = TimeArray(CSV.File(joinpath(@__DIR__, "SP500.csv.gz")); timestamp = :Date)[(end - 252):end]
pretty_table(X[(end - 5):end]; formatters = [tsfmt])
```
```julia
# Compute the returns.
rd = prices_to_returns(X)
```

ReturnsResult
  nx   ┼ 20-element Vector{String}
  X    ┼ 252×20 Matrix{Float64}
  nf   ┼ nothing
  F    ┼ nothing
  ts   ┼ 252-element Vector{Dates.Date}
  iv   ┼ nothing
  ivpa ┴ nothing

2. Preparatory steps
We'll provide a vector of continuous solvers as a failsafe.
```julia
using Clarabel
slv = [Solver(; name = :clarabel1, solver = Clarabel.Optimizer,
              settings = Dict("verbose" => false),
              check_sol = (; allow_local = true, allow_almost = true)),
       Solver(; name = :clarabel3, solver = Clarabel.Optimizer,
              settings = Dict("verbose" => false, "max_step_fraction" => 0.9),
              check_sol = (; allow_local = true, allow_almost = true)),
       Solver(; name = :clarabel5, solver = Clarabel.Optimizer,
              settings = Dict("verbose" => false, "max_step_fraction" => 0.8),
              check_sol = (; allow_local = true, allow_almost = true)),
       Solver(; name = :clarabel7, solver = Clarabel.Optimizer,
              settings = Dict("verbose" => false, "max_step_fraction" => 0.70),
              check_sol = (; allow_local = true, allow_almost = true))];
```

3. Multiple risk measures

3.1 Equally weighted sum
Some risk measures can use precomputed prior statistics, which take precedence over those in the PriorResult. We can exploit this to minimise the variance with several different covariance matrices simultaneously.
We will also precompute the prior statistics to avoid redundant work. First, let's create a vector of Variance risk measures onto which we will push the different variances. We'll use five variance estimators, and their equally weighted sum:
- Denoised covariance using the spectral algorithm.
- Gerber 1 covariance.
- Smyth Broby 1 covariance.
- Mutual information covariance.
- Distance covariance.
- Equally weighted sum of all the above covariances.
For the multi-risk-measure optimisation, we will weight each risk measure equally. This should give the same result as adding all the covariance matrices together, but not the same as averaging the weights of the individual optimisations.
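Why the equally weighted sum of variance terms matches the variance under the summed covariance matrix follows directly from the quadratic form: w'Σ₁w + w'Σ₂w = w'(Σ₁ + Σ₂)w, whereas averaging the minimisers of the individual problems is a different operation. A minimal sketch, using only the standard library and illustrative matrices (not the example data):

```julia
# Sketch: the sum of variance risks under different covariance matrices
# equals the variance under the summed covariance matrix.
# Σ1, Σ2 and w below are illustrative, not from the example data.
using LinearAlgebra

Σ1 = [0.04 0.01; 0.01 0.09]
Σ2 = [0.05 0.02; 0.02 0.03]
w = [0.6, 0.4]

sum_of_risks = dot(w, Σ1, w) + dot(w, Σ2, w)   # w'Σ₁w + w'Σ₂w
risk_of_sum = dot(w, Σ1 + Σ2, w)               # w'(Σ₁ + Σ₂)w
sum_of_risks ≈ risk_of_sum                     # true for any w
```

Since the identity holds for every candidate w, the two objectives share the same minimiser; averaging the weights of the individual optimisations has no such guarantee.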
```julia
pr = prior(HighOrderPriorEstimator(), rd.X)
ces = [PortfolioOptimisersCovariance(;
           mp = DenoiseDetoneAlgMatrixProcessing(;
               dn = Denoise(; alg = SpectralDenoise()))),
       PortfolioOptimisersCovariance(; ce = GerberCovariance()),
       PortfolioOptimisersCovariance(; ce = SmythBrobyCovariance(; alg = SmythBroby1())),
       PortfolioOptimisersCovariance(; ce = MutualInfoCovariance()),
       PortfolioOptimisersCovariance(; ce = DistanceCovariance())]
```

5-element Vector{PortfolioOptimisersCovariance}:
PortfolioOptimisersCovariance
ce ┼ Covariance
│ me ┼ SimpleExpectedReturns
│ │ w ┴ nothing
│ ce ┼ GeneralCovariance
│ │ ce ┼ SimpleCovariance: SimpleCovariance(true)
│ │ w ┴ nothing
│ alg ┴ Full()
mp ┼ DenoiseDetoneAlgMatrixProcessing
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ dn ┼ Denoise
│ │ alg ┼ SpectralDenoise()
│ │ args ┼ Tuple{}: ()
│ │ kwargs ┼ @NamedTuple{}: NamedTuple()
│ │ kernel ┼ typeof(AverageShiftedHistograms.Kernels.gaussian): AverageShiftedHistograms.Kernels.gaussian
│ │ m ┼ Int64: 10
│ │ n ┼ Int64: 1000
│ │ pdm ┼ Posdef
│ │ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ dt ┼ nothing
│ alg ┼ nothing
│ order ┴ DenoiseDetoneAlg()
PortfolioOptimisersCovariance
ce ┼ GerberCovariance
│ ve ┼ SimpleVariance
│ │ me ┼ SimpleExpectedReturns
│ │ │ w ┴ nothing
│ │ w ┼ nothing
│ │ corrected ┴ Bool: true
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ t ┼ Float64: 0.5
│ alg ┴ Gerber1()
mp ┼ DenoiseDetoneAlgMatrixProcessing
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ dn ┼ nothing
│ dt ┼ nothing
│ alg ┼ nothing
│ order ┴ DenoiseDetoneAlg()
PortfolioOptimisersCovariance
ce ┼ SmythBrobyCovariance
│ me ┼ SimpleExpectedReturns
│ │ w ┴ nothing
│ ve ┼ SimpleVariance
│ │ me ┼ SimpleExpectedReturns
│ │ │ w ┴ nothing
│ │ w ┼ nothing
│ │ corrected ┴ Bool: true
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ t ┼ Float64: 0.5
│ c1 ┼ Float64: 0.5
│ c2 ┼ Float64: 0.5
│ c3 ┼ Int64: 4
│ n ┼ Int64: 2
│ alg ┼ SmythBroby1()
│ ex ┴ Transducers.ThreadedEx{@NamedTuple{}}: Transducers.ThreadedEx()
mp ┼ DenoiseDetoneAlgMatrixProcessing
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ dn ┼ nothing
│ dt ┼ nothing
│ alg ┼ nothing
│ order ┴ DenoiseDetoneAlg()
PortfolioOptimisersCovariance
ce ┼ MutualInfoCovariance
│ ve ┼ SimpleVariance
│ │ me ┼ SimpleExpectedReturns
│ │ │ w ┴ nothing
│ │ w ┼ nothing
│ │ corrected ┴ Bool: true
│ bins ┼ HacineGharbiRavier()
│ normalise ┴ Bool: true
mp ┼ DenoiseDetoneAlgMatrixProcessing
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ dn ┼ nothing
│ dt ┼ nothing
│ alg ┼ nothing
│ order ┴ DenoiseDetoneAlg()
PortfolioOptimisersCovariance
ce ┼ DistanceCovariance
│ dist ┼ Distances.Euclidean: Distances.Euclidean(0.0)
│ args ┼ Tuple{}: ()
│ kwargs ┼ @NamedTuple{}: NamedTuple()
│ w ┼ nothing
│ ex ┴ Transducers.ThreadedEx{@NamedTuple{}}: Transducers.ThreadedEx()
mp ┼ DenoiseDetoneAlgMatrixProcessing
│ pdm ┼ Posdef
│ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ dn ┼ nothing
│ dt ┼ nothing
│ alg ┼ nothing
│ order ┴ DenoiseDetoneAlg()

Let's define a vector of variance risk measures, one using each of the different covariance matrices.
```julia
rs = [Variance(; sigma = cov(ce, rd.X)) for ce in ces]
all_sigmas = zeros(length(rd.nx), length(rd.nx))
for r in rs
    all_sigmas .+= r.sigma
end
push!(rs, Variance(; sigma = all_sigmas))
```

6-element Vector{Variance{RiskMeasureSettings{Float64, Nothing, Bool}, Matrix{Float64}, Nothing, Nothing, SquaredSOCRiskExpr}}:
Variance
settings ┼ RiskMeasureSettings
│ scale ┼ Float64: 1.0
│ ub ┼ nothing
│ rke ┴ Bool: true
sigma ┼ 20×20 Matrix{Float64}
chol ┼ nothing
rc ┼ nothing
alg ┴ SquaredSOCRiskExpr()
Variance
settings ┼ RiskMeasureSettings
│ scale ┼ Float64: 1.0
│ ub ┼ nothing
│ rke ┴ Bool: true
sigma ┼ 20×20 Matrix{Float64}
chol ┼ nothing
rc ┼ nothing
alg ┴ SquaredSOCRiskExpr()
Variance
settings ┼ RiskMeasureSettings
│ scale ┼ Float64: 1.0
│ ub ┼ nothing
│ rke ┴ Bool: true
sigma ┼ 20×20 Matrix{Float64}
chol ┼ nothing
rc ┼ nothing
alg ┴ SquaredSOCRiskExpr()
Variance
settings ┼ RiskMeasureSettings
│ scale ┼ Float64: 1.0
│ ub ┼ nothing
│ rke ┴ Bool: true
sigma ┼ 20×20 Matrix{Float64}
chol ┼ nothing
rc ┼ nothing
alg ┴ SquaredSOCRiskExpr()
Variance
settings ┼ RiskMeasureSettings
│ scale ┼ Float64: 1.0
│ ub ┼ nothing
│ rke ┴ Bool: true
sigma ┼ 20×20 Matrix{Float64}
chol ┼ nothing
rc ┼ nothing
alg ┴ SquaredSOCRiskExpr()
Variance
settings ┼ RiskMeasureSettings
│ scale ┼ Float64: 1.0
│ ub ┼ nothing
│ rke ┴ Bool: true
sigma ┼ 20×20 Matrix{Float64}
chol ┼ nothing
rc ┼ nothing
alg ┴ SquaredSOCRiskExpr()

We'll minimise the variance for each individual risk measure, and then minimise the equally weighted sum of all risk measures.
```julia
results = [optimise(MeanRisk(; r = r, opt = JuMPOptimiser(; pr = pr, slv = slv)))
           for r in rs]
mean_w = zeros(length(results[1].w))
for res in results[1:5]
    mean_w .+= res.w
end
mean_w ./= 5
res = optimise(MeanRisk(; r = rs, opt = JuMPOptimiser(; pr = pr, slv = slv)))
pretty_table(DataFrame(:assets => rd.nx, :denoise => results[1].w, :gerber1 => results[2].w,
                       :smyth_broby1 => results[3].w, :mutual_info => results[4].w,
                       :distance => results[5].w, :mean_w => mean_w,
                       :sum_covs => results[6].w, :multi_risk => res.w);
             formatters = [resfmt])
```

┌────────┬──────────┬──────────┬──────────────┬─────────────┬──────────┬────────
│ assets │ denoise │ gerber1 │ smyth_broby1 │ mutual_info │ distance │ mea ⋯
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Floa ⋯
├────────┼──────────┼──────────┼──────────────┼─────────────┼──────────┼────────
│ AAPL │ 0.0 % │ 0.0 % │ 0.0 % │ 1.263 % │ 0.0 % │ 0.25 ⋯
│ AMD │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ BAC │ 0.0 % │ 2.988 % │ 0.0 % │ 2.166 % │ 2.279 % │ 1.48 ⋯
│ BBY │ 0.0 % │ 0.0 % │ 0.0 % │ 0.74 % │ 0.0 % │ 0.14 ⋯
│ CVX │ 17.488 % │ 9.462 % │ 15.048 % │ 4.007 % │ 9.961 % │ 11.19 ⋯
│ GE │ 0.0 % │ 2.287 % │ 0.0 % │ 2.702 % │ 4.347 % │ 1.86 ⋯
│ HD │ 0.0 % │ 0.0 % │ 0.0 % │ 2.713 % │ 3.792 % │ 1.30 ⋯
│ JNJ │ 76.031 % │ 23.934 % │ 56.706 % │ 17.458 % │ 17.28 % │ 38.28 ⋯
│ JPM │ 0.0 % │ 0.0 % │ 0.0 % │ 2.859 % │ 1.284 % │ 0.82 ⋯
│ KO │ 0.0 % │ 14.145 % │ 0.0 % │ 9.807 % │ 9.243 % │ 6.63 ⋯
│ LLY │ 0.0 % │ 0.0 % │ 0.0 % │ 4.874 % │ 0.241 % │ 1.02 ⋯
│ MRK │ 0.0 % │ 17.816 % │ 0.0 % │ 14.056 % │ 18.66 % │ 10.10 ⋯
│ MSFT │ 0.0 % │ 0.0 % │ 0.0 % │ 0.805 % │ 0.0 % │ 0.16 ⋯
│ PEP │ 0.0 % │ 13.315 % │ 28.245 % │ 8.543 % │ 8.089 % │ 11.63 ⋯
│ PFE │ 0.0 % │ 1.735 % │ 0.0 % │ 4.166 % │ 0.0 % │ 1.1 ⋯
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋱
└────────┴──────────┴──────────┴──────────────┴─────────────┴──────────┴────────
3 columns and 5 rows omitted

For extra credit, we can do the same but maximise the risk-adjusted return ratio.
```julia
results = [optimise(MeanRisk(; r = r, obj = MaximumRatio(),
                             opt = JuMPOptimiser(; pr = pr, slv = slv))) for r in rs]
mean_w = zeros(length(results[1].w))
for res in results[1:5]
    mean_w .+= res.w
end
mean_w ./= 5
res = optimise(MeanRisk(; r = rs, obj = MaximumRatio(),
                        opt = JuMPOptimiser(; pr = pr, slv = slv)))
pretty_table(DataFrame(:assets => rd.nx, :denoise => results[1].w, :gerber1 => results[2].w,
                       :smyth_broby1 => results[3].w, :mutual_info => results[4].w,
                       :distance => results[5].w, :mean_w => mean_w,
                       :sum_covs => results[6].w, :multi_risk => res.w);
             formatters = [resfmt])
```

┌────────┬──────────┬──────────┬──────────────┬─────────────┬──────────┬────────
│ assets │ denoise │ gerber1 │ smyth_broby1 │ mutual_info │ distance │ mea ⋯
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Floa ⋯
├────────┼──────────┼──────────┼──────────────┼─────────────┼──────────┼────────
│ AAPL │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ AMD │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ BAC │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ BBY │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ CVX │ 0.0 % │ 3.321 % │ 0.0 % │ 9.888 % │ 0.0 % │ 2.64 ⋯
│ GE │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ HD │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ JNJ │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ JPM │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ KO │ 0.0 % │ 0.002 % │ 0.0 % │ 5.688 % │ 0.0 % │ 1.13 ⋯
│ LLY │ 0.0 % │ 8.099 % │ 0.0 % │ 14.783 % │ 1.936 % │ 4.96 ⋯
│ MRK │ 67.803 % │ 59.727 % │ 69.393 % │ 46.763 % │ 50.398 % │ 58.81 ⋯
│ MSFT │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ PEP │ 0.0 % │ 0.001 % │ 0.0 % │ 0.002 % │ 0.0 % │ 0.00 ⋯
│ PFE │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0. ⋯
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋱
└────────┴──────────┴──────────┴──────────────┴─────────────┴──────────┴────────
3 columns and 5 rows omitted

3.2 Different weights and scalarisers
All optimisations accept multiple risk measures in the same way. We can also provide different weights for each measure. There are four scalarisers: SumScalariser, MaxScalariser and LogSumExpScalariser, which work for all optimisation estimators, and MinScalariser, which only works for hierarchical ones.
For clustering optimisations, the scalarisers apply to each sub-optimisation, so the risk measure that drives the "minimisation" for one cluster may not be the one that drives it for another cluster, or for the overall portfolio. This inconsistency is unavoidable, but it should not be a problem in practice: the point of hierarchical optimisations is not to attain the absolute minimum risk, but a good trade-off between risk and diversification.
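As a rough numeric sketch of the combining rules (the actual scalarisers operate on the optimiser's risk expressions and may carry extra parameters; the risk values and weights below are made up):

```julia
# Illustrative sketch of the four scalarisation rules applied to
# per-measure risk values already multiplied by their weights.
# All numbers are hypothetical.
risks = [0.012, 0.030]       # e.g. a variance and a negative skewness value
scale = [1.0, 0.1]           # per-measure weights (risk measure `scale`)
scaled = scale .* risks      # weighted risk terms

sum_sca = sum(scaled)                # SumScalariser: weighted sum of risks
max_sca = maximum(scaled)            # MaxScalariser: largest weighted term
min_sca = minimum(scaled)            # MinScalariser: smallest weighted term
lse_sca = log(sum(exp.(scaled)))     # LogSumExpScalariser: smooth upper bound on the max
```

The weights therefore decide which measure dominates the max and min scalarisers: here the variance term (1.0 × 0.012) dominates the scaled negative skewness (0.1 × 0.030).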
It is also possible to mix any and all compatible risk measures. We will demonstrate this by mixing the variance with the negative skewness.
In this example we have tuned the weight of the negative skewness to demonstrate how clusters may end up with different risk measures due to the choice of scalariser.
We will use the hierarchical equal risk contribution optimisation, precomputing the clustering results using the direct bubble hierarchy tree (DBHT) algorithm.
The [HierarchicalEqualRiskContribution](@ref) optimisation estimator accepts inner and outer risk measures, and inner and outer scalarisers.
```julia
clr = clusterise(ClustersEstimator(; alg = DBHT()), pr.X)
r = [Variance(), NegativeSkewness(; settings = RiskMeasureSettings(; scale = 0.1))]
results = [optimise(HierarchicalEqualRiskContribution(; ri = r[1], # inner (intra-cluster) risk measure
                                                      ro = r[1],  # outer (inter-cluster) risk measure
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r[2], ro = r[2],
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = SumScalariser(),  # inner (intra-cluster)
                                                      scao = SumScalariser(),  # outer (inter-cluster)
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = MaxScalariser(),
                                                      scao = MaxScalariser(),
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = MinScalariser(),
                                                      scao = MinScalariser(),
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = LogSumExpScalariser(),
                                                      scao = LogSumExpScalariser(),
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr)))]
pretty_table(DataFrame(:assets => rd.nx, :variance => results[1].w,
                       :neg_skew => results[2].w, :sum_sca => results[3].w,
                       :max_sca => results[4].w, :min_sca => results[5].w,
                       :log_sum_exp => results[6].w); formatters = [resfmt])
```

┌────────┬──────────┬──────────┬──────────┬─────────┬──────────┬─────────────┐
│ assets │ variance │ neg_skew │ sum_sca │ max_sca │ min_sca │ log_sum_exp │
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │
├────────┼──────────┼──────────┼──────────┼─────────┼──────────┼─────────────┤
│ AAPL │ 1.847 % │ 4.367 % │ 2.949 % │ 3.286 % │ 2.571 % │ 3.152 % │
│ AMD │ 0.627 % │ 2.3 % │ 1.056 % │ 1.115 % │ 1.354 % │ 1.069 % │
│ BAC │ 2.221 % │ 6.575 % │ 3.636 % │ 3.95 % │ 3.871 % │ 3.789 % │
│ BBY │ 1.138 % │ 2.166 % │ 1.781 % │ 2.024 % │ 1.275 % │ 1.942 % │
│ CVX │ 4.525 % │ 4.41 % │ 5.338 % │ 6.436 % │ 3.246 % │ 11.606 % │
│ GE │ 1.924 % │ 2.356 % │ 2.923 % │ 3.423 % │ 1.387 % │ 3.284 % │
│ HD │ 2.386 % │ 2.995 % │ 3.629 % │ 4.244 % │ 1.763 % │ 4.071 % │
│ JNJ │ 10.746 % │ 7.487 % │ 10.587 % │ 7.821 % │ 10.413 % │ 6.708 % │
│ JPM │ 2.623 % │ 5.682 % │ 4.153 % │ 4.666 % │ 3.345 % │ 4.476 % │
│ KO │ 13.86 % │ 7.188 % │ 9.593 % │ 7.509 % │ 13.431 % │ 9.748 % │
│ LLY │ 4.388 % │ 7.217 % │ 4.727 % │ 7.539 % │ 4.253 % │ 2.739 % │
│ MRK │ 8.205 % │ 7.77 % │ 8.283 % │ 8.117 % │ 7.951 % │ 5.122 % │
│ MSFT │ 1.886 % │ 4.4 % │ 3.008 % │ 3.356 % │ 2.59 % │ 3.219 % │
│ PEP │ 14.175 % │ 6.616 % │ 9.727 % │ 6.911 % │ 13.736 % │ 9.969 % │
│ PFE │ 4.473 % │ 5.51 % │ 4.639 % │ 5.756 % │ 4.334 % │ 2.792 % │
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │
└────────┴──────────┴──────────┴──────────┴─────────┴──────────┴─────────────┘
5 rows omitted

When the weights are different enough that one risk measure dominates the other in all contexts, the results of the max and min scalarisers will be as expected, i.e. as if only one risk measure had been used.
```julia
r = [Variance(), NegativeSkewness()]
results = [optimise(HierarchicalEqualRiskContribution(; ri = r[1], # inner (intra-cluster) risk measure
                                                      ro = r[1],  # outer (inter-cluster) risk measure
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r[2], ro = r[2],
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = SumScalariser(),  # inner (intra-cluster)
                                                      scao = SumScalariser(),  # outer (inter-cluster)
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = MaxScalariser(),
                                                      scao = MaxScalariser(),
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = MinScalariser(),
                                                      scao = MinScalariser(),
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr))),
           optimise(HierarchicalEqualRiskContribution(; ri = r, ro = r,
                                                      scai = LogSumExpScalariser(),
                                                      scao = LogSumExpScalariser(),
                                                      opt = HierarchicalOptimiser(; pr = pr,
                                                                                  clr = clr)))]
pretty_table(DataFrame(:assets => rd.nx, :variance => results[1].w,
                       :neg_skew => results[2].w, :sum_sca => results[3].w,
                       :max_sca => results[4].w, :min_sca => results[5].w,
                       :log_sum_exp => results[6].w); formatters = [resfmt])
```

┌────────┬──────────┬──────────┬─────────┬─────────┬──────────┬─────────────┐
│ assets │ variance │ neg_skew │ sum_sca │ max_sca │ min_sca │ log_sum_exp │
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │
├────────┼──────────┼──────────┼─────────┼─────────┼──────────┼─────────────┤
│ AAPL │ 1.847 % │ 4.367 % │ 3.946 % │ 4.367 % │ 1.847 % │ 3.155 % │
│ AMD │ 0.627 % │ 2.3 % │ 1.73 % │ 2.3 % │ 0.627 % │ 1.07 % │
│ BAC │ 2.221 % │ 6.575 % │ 5.377 % │ 6.575 % │ 2.221 % │ 3.792 % │
│ BBY │ 1.138 % │ 2.166 % │ 2.181 % │ 2.166 % │ 1.138 % │ 1.943 % │
│ CVX │ 4.525 % │ 4.41 % │ 4.959 % │ 4.41 % │ 4.525 % │ 11.589 % │
│ GE │ 1.924 % │ 2.356 % │ 3.063 % │ 2.356 % │ 1.924 % │ 3.286 % │
│ HD │ 2.386 % │ 2.995 % │ 3.833 % │ 2.995 % │ 2.386 % │ 4.074 % │
│ JNJ │ 10.746 % │ 7.487 % │ 8.942 % │ 7.487 % │ 10.746 % │ 6.714 % │
│ JPM │ 2.623 % │ 5.682 % │ 5.356 % │ 5.682 % │ 2.623 % │ 4.479 % │
│ KO │ 13.86 % │ 7.188 % │ 7.697 % │ 7.188 % │ 13.86 % │ 9.745 % │
│ LLY │ 4.388 % │ 7.217 % │ 5.76 % │ 7.217 % │ 4.388 % │ 2.742 % │
│ MRK │ 8.205 % │ 7.77 % │ 7.869 % │ 7.77 % │ 8.205 % │ 5.127 % │
│ MSFT │ 1.886 % │ 4.4 % │ 4.002 % │ 4.4 % │ 1.886 % │ 3.221 % │
│ PEP │ 14.175 % │ 6.616 % │ 7.491 % │ 6.616 % │ 14.175 % │ 9.967 % │
│ PFE │ 4.473 % │ 5.51 % │ 4.935 % │ 5.51 % │ 4.473 % │ 2.795 % │
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │
└────────┴──────────┴──────────┴─────────┴─────────┴──────────┴─────────────┘
5 rows omitted

Note how the max scalariser produced the same weights as the negative skewness, and the min scalariser the same weights as the variance. This is because, in all cases, the value of the negative skewness was greater than that of the variance. Similar behaviour can be observed with other clustering optimisers. [NearOptimalCentering](@ref) can also behave unintuitively when computing the risk bounds of the efficient frontier with MaxScalariser and MinScalariser, because each point on the efficient frontier can have a different risk measure dominating the others.
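The dominance effect can be sketched with a toy one-dimensional "portfolio" and two hypothetical risk functions, one of which is everywhere larger than the other:

```julia
# Toy sketch: when r2 dominates r1 everywhere, minimising the max-scalarised
# risk recovers r2's minimiser, and minimising the min-scalarised risk
# recovers r1's minimiser. Both risk functions are hypothetical.
r1(w) = (w - 0.3)^2          # "variance-like" risk, minimised at w = 0.3
r2(w) = (w - 0.7)^2 + 1.0    # always larger than r1 on [0, 1], minimised at w = 0.7

grid = range(0, 1; length = 101)
argmin_on(f) = grid[argmin(f.(grid))]

argmin_on(w -> max(r1(w), r2(w)))   # same as minimising r2 alone
argmin_on(w -> min(r1(w), r2(w)))   # same as minimising r1 alone
```

When neither risk dominates everywhere, as in the first table of this section, the max- and min-scalarised minimisers need not coincide with either single-measure solution.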
This page was generated using Literate.jl.