The source files for all examples can be found in /examples.
Example 10: Cross validation
Cross validation is a powerful technique for evaluating the performance of a model on unseen data. In this example, we will showcase the different cross validation methods available in PortfolioOptimisers.jl and how to use them to evaluate the performance of our portfolio optimisation models.
Cross validation can be used as a standalone method to evaluate the performance of a model, or in conjunction with other techniques like hyperparameter tuning or model selection. It can also be used in [NestedClustered](@ref) and [Stacking](@ref) optimisation estimators to optimise the outer estimator on the out-of-sample performance of the inner estimators.
This example focuses only on showcasing the different cross validation methods, with examples of how to use them and what metrics can be computed. Further analyses such as plots or grid searches have not been implemented yet, but they are a top priority for future development.
using PortfolioOptimisers, PrettyTables
# Format for pretty tables.
tsfmt = (v, i, j) -> begin
if j == 1
return Date(v)
else
return v
end
end;
resfmt = (v, i, j) -> begin
if j == 1
return v
else
return isa(v, Number) ? "$(round(v*100, digits=3)) %" : v
end
end;
1. Setting up
For this example, we will use 5 years of daily data. This gives us enough observations to form meaningfully sized training and testing sets for cross validation.
Unlike previous examples, cross validation cannot use precomputed values. This is because the training and testing sets are generated on the fly, and the performance metrics are computed from the results of the optimisation on these sets.
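Conceptually, the cross validation workflow looks something like the following sketch. The `fit` and `evaluate` callables (and `cross_validate` itself) are hypothetical placeholders for illustration, not the package API:

```julia
# Minimal sketch of a generic cross-validation loop, assuming `splits` is a
# collection of (train_idx, test_idx) pairs. `fit` and `evaluate` are
# hypothetical placeholders for an optimiser and a performance metric.
function cross_validate(fit, evaluate, X, splits)
    map(splits) do (train_idx, test_idx)
        model = fit(X[train_idx, :])     # fit on the training slice only
        evaluate(model, X[test_idx, :])  # score on the held-out slice
    end
end

# Toy usage: one score per train/test split.
X = randn(100, 3)
splits = [(1:80, 81:100), (21:100, 1:20)]
scores = cross_validate(x -> vec(sum(x; dims = 1)), (m, x) -> sum(x * m), X, splits)
length(scores)  # 2
```

The key point is that the model only ever sees the training indices; the test indices are reserved for out-of-sample evaluation.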
using CSV, TimeSeries, DataFrames, Clarabel, Statistics
X = TimeArray(CSV.File(joinpath(@__DIR__, "SP500.csv.gz")); timestamp = :Date)[(end - 252 * 5):end]
pretty_table(X[(end - 5):end]; formatters = [tsfmt])
# Compute the returns
rd = prices_to_returns(X)
slv = [Solver(; name = :clarabel1, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false),
check_sol = (; allow_local = true, allow_almost = true)),
Solver(; name = :clarabel2, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false, "max_step_fraction" => 0.95),
check_sol = (; allow_local = true, allow_almost = true)),
Solver(; name = :clarabel3, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false, "max_step_fraction" => 0.9),
check_sol = (; allow_local = true, allow_almost = true)),
Solver(; name = :clarabel4, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false, "max_step_fraction" => 0.85),
check_sol = (; allow_local = true, allow_almost = true)),
Solver(; name = :clarabel5, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false, "max_step_fraction" => 0.8),
check_sol = (; allow_local = true, allow_almost = true)),
Solver(; name = :clarabel6, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false, "max_step_fraction" => 0.75),
check_sol = (; allow_local = true, allow_almost = true)),
Solver(; name = :clarabel7, solver = Clarabel.Optimizer,
settings = Dict("verbose" => false, "max_step_fraction" => 0.70),
check_sol = (; allow_local = true, allow_almost = true))];
┌────────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬──────
│ timestamp │ AAPL │ AMD │ BAC │ BBY │ CVX │ GE │ ⋯
│ Date │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Flo ⋯
├────────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼──────
│ 2022-12-20 │ 131.916 │ 65.05 │ 31.729 │ 77.371 │ 169.497 │ 62.604 │ 310 ⋯
│ 2022-12-21 │ 135.057 │ 67.68 │ 32.212 │ 78.729 │ 171.49 │ 64.67 │ 314 ⋯
│ 2022-12-22 │ 131.846 │ 63.86 │ 31.927 │ 78.563 │ 168.918 │ 63.727 │ 311 ⋯
│ 2022-12-23 │ 131.477 │ 64.52 │ 32.005 │ 79.432 │ 174.14 │ 63.742 │ 314 ⋯
│ 2022-12-27 │ 129.652 │ 63.27 │ 32.065 │ 79.93 │ 176.329 │ 64.561 │ 314 ⋯
│ 2022-12-28 │ 125.674 │ 62.57 │ 32.301 │ 78.279 │ 173.728 │ 63.883 │ 31 ⋯
└────────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴──────
14 columns omitted
For this tutorial we will use the basic [MeanRisk](@ref) estimator, but cross validation works with all optimisation estimators, even when computing Pareto fronts.
mr = MeanRisk(; opt = JuMPOptimiser(; slv = slv))
MeanRisk
opt ┼ JuMPOptimiser
│ pe ┼ EmpiricalPrior
│ │ ce ┼ PortfolioOptimisersCovariance
│ │ │ ce ┼ Covariance
│ │ │ │ me ┼ SimpleExpectedReturns
│ │ │ │ │ w ┼ nothing
│ │ │ │ │ idx ┴ nothing
│ │ │ │ ce ┼ GeneralCovariance
│ │ │ │ │ ce ┼ SimpleCovariance: SimpleCovariance(true)
│ │ │ │ │ w ┼ nothing
│ │ │ │ │ idx ┴ nothing
│ │ │ │ alg ┴ Full()
│ │ │ mp ┼ DenoiseDetoneAlgMatrixProcessing
│ │ │ │ pdm ┼ Posdef
│ │ │ │ │ alg ┼ UnionAll: NearestCorrelationMatrix.Newton
│ │ │ │ │ kwargs ┴ @NamedTuple{}: NamedTuple()
│ │ │ │ dn ┼ nothing
│ │ │ │ dt ┼ nothing
│ │ │ │ alg ┼ nothing
│ │ │ │ order ┴ DenoiseDetoneAlg()
│ │ me ┼ SimpleExpectedReturns
│ │ │ w ┼ nothing
│ │ │ idx ┴ nothing
│ │ horizon ┴ nothing
│ slv ┼ 7-element Vector{Solver{Symbol, UnionAll, T3, @NamedTuple{allow_local::Bool, allow_almost::Bool}, Bool} where T3}
│ wb ┼ WeightBounds
│ │ lb ┼ Float64: 0.0
│ │ ub ┴ Float64: 1.0
│ bgt ┼ Float64: 1.0
│ sbgt ┼ nothing
│ lt ┼ nothing
│ st ┼ nothing
│ lcse ┼ nothing
│ cte ┼ nothing
│ gcarde ┼ nothing
│ sgcarde ┼ nothing
│ smtx ┼ nothing
│ sgmtx ┼ nothing
│ slt ┼ nothing
│ sst ┼ nothing
│ sglt ┼ nothing
│ sgst ┼ nothing
│ tn ┼ nothing
│ fees ┼ nothing
│ sets ┼ nothing
│ tr ┼ nothing
│ ple ┼ nothing
│ ret ┼ ArithmeticReturn
│ │ ucs ┼ nothing
│ │ lb ┼ nothing
│ │ mu ┴ nothing
│ sca ┼ SumScalariser()
│ ccnt ┼ nothing
│ cobj ┼ nothing
│ sc ┼ Int64: 1
│ so ┼ Int64: 1
│ ss ┼ nothing
│ card ┼ nothing
│ scard ┼ nothing
│ nea ┼ nothing
│ l1 ┼ nothing
│ l2 ┼ nothing
│ linf ┼ nothing
│ lp ┼ nothing
│ strict ┴ Bool: false
r ┼ Variance
│ settings ┼ RiskMeasureSettings
│ │ scale ┼ Float64: 1.0
│ │ ub ┼ nothing
│ │ rke ┴ Bool: true
│ sigma ┼ nothing
│ chol ┼ nothing
│ rc ┼ nothing
│ alg ┴ SquaredSOCRiskExpr()
obj ┼ MinimumRisk()
wi ┼ nothing
fb ┴ nothing
2. Cross validation
2.1 KFold
The simplest form of cross validation is KFold. This method splits the data into K folds, and then iteratively trains on K-1 folds and tests on the remaining fold. This process is repeated K times, with each fold being used as the test set once.
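To make the mechanics concrete, here is a sketch of how contiguous K-fold indices can be generated for N observations. `kfold_indices` is a hypothetical helper for illustration, not the package's implementation:

```julia
# Sketch of contiguous K-fold index generation: each of the K folds serves
# as the test set once; the training set is everything else.
# `kfold_indices` is a hypothetical helper, not the package function.
function kfold_indices(N::Int, K::Int)
    fold = div(N, K)
    test_sets = [((k - 1) * fold + 1):(k == K ? N : k * fold) for k in 1:K]
    train_sets = [setdiff(1:N, t) for t in test_sets]
    return train_sets, test_sets
end

train_sets, test_sets = kfold_indices(1260, 5)
test_sets  # [1:252, 253:504, 505:756, 757:1008, 1009:1260]
```

With 1260 daily observations and 5 folds, each test set is one contiguous block of 252 days, and the corresponding training set is the remaining four blocks.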
The `KFold` indices can be generated independently of the optimisation. Let's say we want to perform 5-fold cross validation; with 5 years of daily data, this works out to roughly one fold per year.
kfold = KFold(; n = 5)
KFold
n ┼ Int64: 5
purged_size ┼ Int64: 0
embargo_size ┴ Int64: 0
For demonstration purposes, we can generate the splits using the [split](@ref) method. This is not necessary, as the cross validation will generate them internally.
kfold_res = split(kfold, rd)
show(kfold_res.train_idx)
show(kfold_res.test_idx)
[[253, 254, 255, …, 1258, 1259, 1260], [1, 2, 3, …, 252, 505, 506, …, 1260], [1, 2, 3, …, 504, 757, 758, …, 1260], [1, 2, 3, …, 756, 1009, 1010, …, 1260], [1, 2, 3, …, 1006, 1007, 1008]]
UnitRange{Int64}[1:252, 253:504, 505:756, 757:1008, 1009:1260]
Let's perform the cross validation.
kfold_pred = cross_val_predict(mr, rd, kfold)
MultiPeriodPredictionResult
pred ┼ PredictionResult[PredictionResult
│ res ┼ MeanRiskResult
│ │ oe ┼ DataType: DataType
│ │ pa ┼ ProcessedJuMPOptimiserAttributes
│ │ │ pr ┼ LowOrderPrior
│ │ │ │ X ┼ 1008×20 SubArray{Float64, 2, Matrix{Float64}, Tuple{Vector{Int64}, Base.Slice{Base.OneTo{Int64}}}, false}
│ │ │ │ mu ┼ 20-element Vector{Float64}
│ │ │ │ sigma ┼ 20×20 Matrix{Float64}
│ │ │ │ chol ┼ nothing
│ │ │ │ w ┼ nothing
│ │ │ │ ens ┼ nothing
│ │ │ │ kld ┼ nothing
│ │ │ │ ow ┼ nothing
│ │ │ │ rr ┼ nothing
│ │ │ │ f_mu ┼ nothing
│ │ │ │ f_sigma ┼ nothing
│ │ │ │ f_w ┴ nothing
│ │ │ wb ┼ WeightBounds
│ │ │ │ lb ┼ 20-element StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
│ │ │ │ ub ┴ 20-element StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
│ │ │ lt ┼ nothing
│ │ │ st ┼ nothing
│ │ │ lcsr ┼ nothing
│ │ │ ctr ┼ nothing
│ │ │ gcardr ┼ nothing
│ │ │ sgcardr ┼ nothing
│ │ │ smtx ┼ nothing
│ │ │ sgmtx ┼ nothing
│ │ │ slt ┼ nothing
│ │ │ sst ┼ nothing
│ │ │ sglt ┼ nothing
│ │ │ sgst ┼ nothing
│ │ │ tn ┼ nothing
│ │ │ fees ┼ nothing
│ │ │ plr ┼ nothing
│ │ │ ret ┼ ArithmeticReturn
│ │ │ │ ucs ┼ nothing
│ │ │ │ lb ┼ nothing
│ │ │ │ mu ┴ 20-element Vector{Float64}
│ │ retcode ┼ OptimisationSuccess
│ │ │ res ┴ Dict{Any, Any}: Dict{Any, Any}()
│ │ sol ┼ JuMPOptimisationSolution
│ │ │ w ┴ 20-element Vector{Float64}
│ │ model ┼ A JuMP Model
│ │ │ ├ solver: Clarabel
│ │ │ ├ objective_sense: MIN_SENSE
│ │ │ │ └ objective_function_type: JuMP.QuadExpr
│ │ │ ├ num_variables: 21
│ │ │ ├ num_constraints: 4
│ │ │ │ ├ JuMP.AffExpr in MOI.EqualTo{Float64}: 1
│ │ │ │ ├ Vector{JuMP.AffExpr} in MOI.Nonnegatives: 1
│ │ │ │ ├ Vector{JuMP.AffExpr} in MOI.Nonpositives: 1
│ │ │ │ └ Vector{JuMP.AffExpr} in MOI.SecondOrderCone: 1
│ │ │ └ Names registered in the model
│ │ │ └ :G, :bgt, :dev_1, :dev_1_soc, :k, :lw, :obj_expr, :ret, :risk, :risk_vec, :sc, :so, :variance_flag, :variance_risk_1, :w, :w_lb, :w_ub
│ │ fb ┴ nothing
│ rd ┼ PredictionReturnsResult
│ │ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ │ X ┼ 252-element Vector{Float64}
│ │ nf ┼ nothing
│ │ F ┼ nothing
│ │ ts ┼ 252-element SubArray{Date, 1, Vector{Date}, Tuple{UnitRange{Int64}}, true}
│ │ iv ┼ nothing
│ │ ivpa ┴ nothing
│ ⋮ (4 more PredictionResult entries with the same structure, one per remaining fold)
│ ]
mrd ┼ PredictionReturnsResult
│ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ X ┼ 1260-element Vector{Float64}
│ nf ┼ nothing
│ F ┼ nothing
│ ts ┼ 1260-element Vector{Date}
│ iv ┼ nothing
│ ivpa ┴ nothing
id ┴ nothing

The result is a [MultiPeriodPredictionResult](@ref) object, which wraps a vector of [PredictionResult](@ref) objects, one per fold. Each [PredictionResult](@ref) contains the optimisation result based on the training set, and a [PredictionReturnsResult](@ref) containing the predicted returns of the optimised portfolio evaluated on its corresponding test set.
We can access the result of each fold individually by indexing into the pred field of the [MultiPeriodPredictionResult](@ref) object, but we can also access the mrd and mres properties directly, which stand for multi-rd and multi-res. mrd concatenates the predicted returns into a single [PredictionReturnsResult](@ref). Since the embargo and purged sizes are zero, the timestamps of the predicted returns should be the same as the timestamps of the original returns result.
println("isequal(kfold_pred.mrd.ts, rd.ts) = $(isequal(kfold_pred.mrd.ts, rd.ts))")
isequal(kfold_pred.mrd.ts, rd.ts) = true

We can also compute performance metrics (risk measures) on the predicted returns. However, we can only use risk measures that take the returns series as an input. This means [StandardDeviation](@ref), [NegativeSkewness](@ref), [TurnoverRiskMeasure](@ref), [TrackingRiskMeasure](@ref) with WeightsTracking, [Variance](@ref), [UncertaintySetVariance](@ref), [EqualRiskMeasure](@ref), [ExpectedReturn](@ref) and [ExpectedReturnRiskRatio](@ref), as well as any risk measure that uses any of these, cannot be used. There are ways around this, for example:
- For the variance and standard deviation, we can use [LowOrderMoment](@ref) with the appropriate algorithms.
- For [NegativeSkewness](@ref) we can use [HighOrderMoment](@ref), or [Skewness](@ref).
- For [ExpectedReturn](@ref) and [ExpectedReturnRiskRatio](@ref) we can use [MeanReturn](@ref) and [MeanReturnRiskRatio](@ref) respectively.
Here we will compute the variance.
println("KFold(5) prediction variance = $(expected_risk(LowOrderMoment(; alg = SecondMoment()), kfold_pred))")
KFold(5) prediction variance = 0.00013266913989921016

2.2 Combinatorial
The CombinatorialCrossValidation method generates all possible combinations of the data into training and testing sets. This method is computationally expensive, but provides a more comprehensive evaluation of the model's performance on unseen data.
There is also a way to compute the optimal number of folds and training folds given a user-defined desired training and test set lengths, as well as the relative weight between the training size and number of test paths.
T = size(rd.X, 1)
target_train_size = 200
target_test_size = 70
n_folds, n_test_folds = optimal_number_folds(T, target_train_size, target_test_size)
cfold = CombinatorialCrossValidation(; n_folds = n_folds, n_test_folds = n_test_folds)
CombinatorialCrossValidation
n_folds ┼ Int64: 13
n_test_folds ┼ Int64: 11
purged_size ┼ Int64: 0
embargo_size ┴ Int64: 0

Let's see the indices this produces.
cfold_res = split(cfold, rd)
CombinatorialCrossValidationResult
train_idx ┼ 78-element Vector{Vector{Int64}}
test_idx ┼ 78-element Vector{Vector{Vector{Int64}}}
path_ids ┴ 11×78 Matrix{Int64}

Here we have 78 splits, with each testing path split into 11 folds. This means we have 78 * 11 = 858 total test folds, a significant increase over the 5 folds we had in KFold. This is the trade-off for a more comprehensive evaluation of the model's performance on unseen data.
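The split and path counts above follow directly from the combinatorics of combinatorial (purged) cross validation. A small counting sketch (Python for illustration; `cpcv_counts` is a hypothetical helper, not part of PortfolioOptimisers.jl):

```python
from math import comb

def cpcv_counts(n_folds, n_test_folds):
    """Number of train/test splits and unique backtest paths in a
    combinatorial cross validation scheme."""
    n_splits = comb(n_folds, n_test_folds)
    # Each split tests n_test_folds folds; assembling a path uses one
    # tested instance of each of the n_folds folds, so the total number
    # of tested folds divides evenly into full paths.
    n_paths = n_splits * n_test_folds // n_folds
    return n_splits, n_paths

print(cpcv_counts(13, 11))  # → (78, 66)
```

With 13 folds and 11 test folds we recover the 78 splits and 66 unique paths shown below.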
It also means we need a way to find a good representative of the predictions in order to evaluate the out-of-sample performance. There is some nuance to this approach, in that the splits do not all map to distinct paths; in fact there are only 66 unique paths, which can be seen from cfold_res.path_ids.
cfold_res.path_ids
11×78 Matrix{Int64}:
1 2 3 4 5 6 7 8 9 10 11 12 … 59 60 61 62 63 64 65 66 66
1 2 3 4 5 6 7 8 9 10 11 12 59 60 61 62 63 64 65 65 66
1 2 3 4 5 6 7 8 9 10 11 12 59 60 61 62 63 64 64 65 66
1 2 3 4 5 6 7 8 9 10 11 12 59 60 61 62 63 63 64 65 66
1 2 3 4 5 6 7 8 9 10 11 12 59 60 61 62 62 63 64 65 66
1 2 3 4 5 6 7 8 9 10 11 12 … 59 60 61 61 62 63 64 65 66
1 2 3 4 5 6 7 8 9 10 11 12 59 60 60 61 62 63 64 65 66
1 2 3 4 5 6 7 8 9 10 7 8 59 59 60 61 62 63 64 65 66
1 2 3 4 5 6 4 5 6 6 7 8 58 59 60 61 62 63 64 65 66
1 2 3 2 3 3 4 5 5 6 7 8 58 59 60 61 62 63 64 65 66
1 1 1 2 2 3 4 4 5 6 7 7 … 58 59 60 61 62 63 64 65 66

We can now perform the cross validation.
cfold_pred = cross_val_predict(mr, rd, cfold)
PopulationPredictionResult
pred ┴ 66-element Vector{MultiPeriodPredictionResult{Vector{PredictionResult}, PredictionReturnsResult{SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}, Vector{Float64}, Nothing, Nothing, Vector{Date}, Nothing, Nothing}, Int64}}

We can see that there are indeed 66 predictions, each a valid representative of the out-of-sample performance of the model. To evaluate performance, we can use a sample or the median of the predictions. The median is a good representative because it is a robust measure of central tendency, unaffected by outliers. We can do this with a custom function, or with a functor subtyping [PredictionScorer](@ref). We've implemented a simple one called [NearestQuantilePrediction](@ref), which takes the prediction nearest to the desired quantile of the distribution of predictions; it defaults to the median.
We will use the risk-return ratio of the variance as our performance metric. The paths are sorted according to their expected risk: return-based risk measures sort them in descending order, while true risk measures sort them in ascending order.
sharpe_scorer = NearestQuantilePrediction(;
r = MeanReturnRiskRatio(;
rk = LowOrderMoment(;
alg = SecondMoment())))
NearestQuantilePrediction
r ┼ MeanReturnRiskRatio
│ rt ┼ MeanReturn
│ │ w ┼ nothing
│ │ flag ┴ Bool: false
│ rk ┼ LowOrderMoment
│ │ settings ┼ RiskMeasureSettings
│ │ │ scale ┼ Float64: 1.0
│ │ │ ub ┼ nothing
│ │ │ rke ┴ Bool: true
│ │ w ┼ nothing
│ │ mu ┼ nothing
│ │ alg ┼ SecondMoment
│ │ │ ve ┼ SimpleVariance
│ │ │ │ me ┼ nothing
│ │ │ │ w ┼ nothing
│ │ │ │ corrected ┴ Bool: true
│ │ │ alg1 ┼ Full()
│ │ │ alg2 ┴ SquaredSOCRiskExpr()
│ rf ┴ Float64: 0.0
q ┼ Float64: 0.5
kwargs ┴ @NamedTuple{}: NamedTuple()

The scorer is a functor which takes a population as input and returns the single prediction matching the desired quantile of the distribution of predictions, along with its index in the population. In this case, we are using the mean return risk ratio with the variance as the risk measure, and we are looking for the prediction nearest to quantile 0.5, which is the median.
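The selection rule itself is simple; a minimal sketch of nearest-quantile selection (Python for illustration; `nearest_quantile_index` is a hypothetical helper, not the package implementation):

```python
def nearest_quantile_index(scores, q=0.5):
    """Index of the score closest to the q-quantile of the scores."""
    s = sorted(scores)
    # Linear-interpolation quantile between order statistics.
    pos = q * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    target = s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])
    # Pick the member of the population nearest to that target.
    return min(range(len(scores)), key=lambda i: abs(scores[i] - target))

scores = [0.8, 0.2, 0.5, 0.9, 0.1]
print(nearest_quantile_index(scores))  # → 2 (0.5 is the median score)
```

With q = 0.5 this picks the median-performing prediction, which is what the scorer below does over the population of paths.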
median_pred_max_sharpe = sharpe_scorer(cfold_pred)
MultiPeriodPredictionResult
pred ┼ 13-element Vector{PredictionResult}
mrd ┼ PredictionReturnsResult
│ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ X ┼ 1260-element Vector{Float64}
│ nf ┼ nothing
│ F ┼ nothing
│ ts ┼ 1260-element Vector{Date}
│ iv ┼ nothing
│ ivpa ┴ nothing
id ┴ Int64: 18

The prediction id corresponds to the index/path id of the prediction in the population.
median_pred_max_sharpe === cfold_pred.pred[median_pred_max_sharpe.id]
true

Similarly to the KFold case, the timestamps of the predicted returns should be the same as the timestamps of the original returns result, since the embargo and purged sizes are zero.
isequal(median_pred_max_sharpe.mrd.ts, rd.ts)
true

We can further verify this by computing the risk-return ratio of the variance for all predictions and checking that the prediction with a risk value closest to the median is indeed the one found by the scorer. Note that the scorer also filters out predictions whose optimisations failed, so to be truly rigorous we'd need to skip NaN values in the array of risks while keeping the indices aligned; for demonstration purposes this is sufficient.
sharpe_ratios = expected_risk(MeanReturnRiskRatio(;
rk = LowOrderMoment(;
alg = SecondMoment())),
cfold_pred)
argmin(abs.(sharpe_ratios .- median(sharpe_ratios))) == median_pred_max_sharpe.id
true

We can choose any compatible risk measure as outlined above. For demonstration purposes we will now rank the predictions based on the variance.
variance_scorer = NearestQuantilePrediction(; r = LowOrderMoment(; alg = SecondMoment()))
median_pred_min_variance = variance_scorer(cfold_pred)
MultiPeriodPredictionResult
pred ┼ 13-element Vector{PredictionResult}
mrd ┼ PredictionReturnsResult
│ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ X ┼ 1260-element Vector{Float64}
│ nf ┼ nothing
│ F ┼ nothing
│ ts ┼ 1260-element Vector{Date}
│ iv ┼ nothing
│ ivpa ┴ nothing
id ┴ Int64: 32

Again the id matches the prediction with the nearest quantile to the median of the distribution of predictions.
median_pred_min_variance === cfold_pred.pred[median_pred_min_variance.id]
true

As always, the timestamps match.
isequal(median_pred_min_variance.mrd.ts, rd.ts)
true

2.3 WalkForward
We offer two different walkforward estimators, IndexWalkForward and DateWalkForward. The former splits the data based on the number of observations, while the latter splits the data based on the timestamps, and can be used with Julia's Dates module to adjust periods to specific times.
The walkforward method is a more realistic evaluation of the model's performance on unseen data, as it mimics the way the model would be used in practice. It can also dynamically use the previous optimisation weights in constraints and risk measures if so desired.
2.3.1 IndexWalkForward
The simpler estimator is IndexWalkForward, so we will start with it. We will use training sets of one full year and test sets of 3 months; a year has roughly 252 trading days. We will again not use any purging, meaning the test set immediately follows the training set with no gap between them. The timestamps of the predicted returns should therefore be the same as the timestamps of the original returns result minus the first 252 entries.
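The windowing logic behind this kind of estimator is easy to state generically. A sketch (Python, 0-based half-open ranges; `index_walk_forward` is a hypothetical helper, not the package API):

```python
def index_walk_forward(T, train_size, test_size):
    """Rolling train/test index windows over T observations: each
    window trains on `train_size` points and tests on the next
    `test_size`, then slides forward by one test period."""
    splits = []
    start = 0
    while start + train_size + test_size <= T:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        splits.append((train, test))
        start += test_size  # advance by one test period
    return splits

splits = index_walk_forward(1260, 252, 63)
print(len(splits))  # → 16
```

With 1260 observations, a 252-day training window, and a 63-day test window, this yields the 16 splits we will see from the package estimator below.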
idx_walk_forward = IndexWalkForward(252, round(Int, 252 / 4))
idx_walk_forward_res = split(idx_walk_forward, rd)
show(idx_walk_forward_res.train_idx)
show(idx_walk_forward_res.test_idx)
UnitRange{Int64}[1:252, 64:315, 127:378, 190:441, 253:504, 316:567, 379:630, 442:693, 505:756, 568:819, 631:882, 694:945, 757:1008, 820:1071, 883:1134, 946:1197]
UnitRange{Int64}[253:315, 316:378, 379:441, 442:504, 505:567, 568:630, 631:693, 694:756, 757:819, 820:882, 883:945, 946:1008, 1009:1071, 1072:1134, 1135:1197, 1198:1260]

We can generate the prediction now.
idx_walkforward_pred = cross_val_predict(mr, rd, idx_walk_forward)
MultiPeriodPredictionResult
pred ┼ 16-element Vector{PredictionResult}
mrd ┼ PredictionReturnsResult
│ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ X ┼ 1008-element Vector{Float64}
│ nf ┼ nothing
│ F ┼ nothing
│ ts ┼ 1008-element Vector{Date}
│ iv ┼ nothing
│ ivpa ┴ nothing
id ┴ nothing

Let's check the timestamps.
isequal(idx_walkforward_pred.mrd.ts, rd.ts[253:end])
true

Now let's see the evolution of the weights across the different splits.
pretty_table(hcat(DataFrame(:tickers => rd.nx),
DataFrame(reduce(hcat, getproperty.(idx_walkforward_pred.res, :w)),
Symbol.(1:16))); formatters = [resfmt])
┌─────────┬──────────┬──────────┬──────────┬──────────┬──────────┬──────────┬───
│ tickers │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ ⋯
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ ⋯
├─────────┼──────────┼──────────┼──────────┼──────────┼──────────┼──────────┼───
│ AAPL │ 3.104 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ ⋯
│ AMD │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ ⋯
│ BAC │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ ⋯
│ BBY │ 3.634 % │ 3.773 % │ 4.216 % │ 1.757 % │ 0.0 % │ 0.0 % │ ⋯
│ CVX │ 2.537 % │ 5.289 % │ 2.886 % │ 6.628 % │ 11.588 % │ 0.0 % │ ⋯
│ GE │ 5.126 % │ 2.433 % │ 0.684 % │ 1.042 % │ 0.0 % │ 0.0 % │ ⋯
│ HD │ 0.0 % │ 2.601 % │ 3.602 % │ 2.514 % │ 6.347 % │ 0.0 % │ ⋯
│ JNJ │ 0.0 % │ 0.017 % │ 0.797 % │ 7.841 % │ 14.529 % │ 7.359 % │ ⋯
│ JPM │ 6.409 % │ 10.765 % │ 10.995 % │ 7.026 % │ 3.402 % │ 0.0 % │ ⋯
│ KO │ 49.958 % │ 26.25 % │ 22.85 % │ 18.299 % │ 12.377 % │ 24.085 % │ ⋯
│ LLY │ 2.11 % │ 9.026 % │ 10.324 % │ 5.688 % │ 4.636 % │ 0.0 % │ ⋯
│ MRK │ 6.173 % │ 1.687 % │ 2.78 % │ 6.572 % │ 5.126 % │ 23.968 % │ ⋯
│ MSFT │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ ⋯
│ PEP │ 8.202 % │ 9.381 % │ 11.103 % │ 12.631 % │ 5.716 % │ 0.0 % │ ⋯
│ PFE │ 0.007 % │ 2.932 % │ 0.0 % │ 0.0 % │ 1.14 % │ 2.727 % │ ⋯
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋱
└─────────┴──────────┴──────────┴──────────┴──────────┴──────────┴──────────┴───
10 columns and 5 rows omitted

As we can see, the weights can evolve in a fairly volatile manner. We can temper this by adding a non-fixed turnover constraint, fee, risk measure, or weight-based tracking. For demonstration purposes we will use a turnover constraint with a maximum turnover of 2% per period for all assets, starting from an equal-weight portfolio. We will provide the Turnover directly, which is non-fixed by default, meaning it will be updated every period.
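The property this constraint buys us is easy to check after the fact: each asset's weight moves by at most 2% between consecutive periods. A minimal sketch of that check (Python for illustration; `max_turnover` is a hypothetical helper, not the package API):

```python
def max_turnover(w_new, w_prev):
    """Largest per-asset weight change between consecutive portfolios."""
    return max(abs(a - b) for a, b in zip(w_new, w_prev))

N = 20
w_prev = [1 / N] * N  # equal-weight starting point
# Shift 2% into the first five assets and out of the next five,
# keeping the weights summing to one.
w_new = [1 / N + 0.02] * 5 + [1 / N - 0.02] * 5 + [1 / N] * 10
print(round(max_turnover(w_new, w_prev), 6))  # → 0.02
```

Applying this to consecutive columns of the weight tables below would confirm the 2% cap holds every period.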
N = size(rd.X, 2)
tn = Turnover(; w = range(; start = 1 / N, stop = 1 / N, length = N), val = 0.02)
Turnover
w ┼ 20-element StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
val ┼ Float64: 0.02
fixed ┴ Bool: false

We can generate the optimiser with the turnover constraint and then perform the walkforward cross validation again.
mr = MeanRisk(; opt = JuMPOptimiser(; slv = slv, tn = tn))
idx_tn_walkforward_pred = cross_val_predict(mr, rd, idx_walk_forward)
MultiPeriodPredictionResult
pred ┼ 16-element Vector{PredictionResult}
mrd ┼ PredictionReturnsResult
│ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ X ┼ 1008-element Vector{Float64}
│ nf ┼ nothing
│ F ┼ nothing
│ ts ┼ 1008-element Vector{Date}
│ iv ┼ nothing
│ ivpa ┴ nothing
id ┴ nothing

Now let's see the evolution of the weights across the different splits. We can see that the weights change by at most 2% per period.
pretty_table(hcat(DataFrame(:tickers => rd.nx),
DataFrame(reduce(hcat, getproperty.(idx_tn_walkforward_pred.res, :w)),
Symbol.(1:16))); formatters = [resfmt])
┌─────────┬─────────┬─────────┬─────────┬─────────┬──────────┬──────────┬───────
│ tickers │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ ⋯
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Flo ⋯
├─────────┼─────────┼─────────┼─────────┼─────────┼──────────┼──────────┼───────
│ AAPL │ 3.0 % │ 1.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ AMD │ 3.0 % │ 1.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ BAC │ 3.0 % │ 1.354 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ BBY │ 3.0 % │ 2.903 % │ 3.93 % │ 1.93 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ CVX │ 3.0 % │ 4.681 % │ 3.399 % │ 5.398 % │ 7.398 % │ 5.398 % │ 3.3 ⋯
│ GE │ 4.288 % │ 2.345 % │ 0.808 % │ 0.718 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ HD │ 3.0 % │ 5.0 % │ 5.948 % │ 3.948 % │ 5.947 % │ 3.947 % │ 1.9 ⋯
│ JNJ │ 7.0 % │ 5.987 % │ 4.334 % │ 6.334 % │ 8.333 % │ 10.333 % │ 12.3 ⋯
│ JPM │ 5.712 % │ 7.712 % │ 9.712 % │ 8.063 % │ 6.152 % │ 4.152 % │ 2.1 ⋯
│ KO │ 7.0 % │ 9.0 % │ 11.0 % │ 13.0 % │ 12.391 % │ 14.391 % │ 16.3 ⋯
│ LLY │ 7.0 % │ 8.987 % │ 9.091 % │ 7.091 % │ 5.423 % │ 3.786 % │ 4.6 ⋯
│ MRK │ 7.0 % │ 8.75 % │ 6.751 % │ 7.327 % │ 6.199 % │ 8.199 % │ 10.1 ⋯
│ MSFT │ 3.0 % │ 1.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ PEP │ 7.0 % │ 9.0 % │ 11.0 % │ 13.0 % │ 11.015 % │ 9.015 % │ 7.0 ⋯
│ PFE │ 7.0 % │ 5.0 % │ 3.0 % │ 1.001 % │ 2.12 % │ 4.12 % │ 6. ⋯
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋱
└─────────┴─────────┴─────────┴─────────┴─────────┴──────────┴──────────┴───────
10 columns and 5 rows omitted

2.3.2 DateWalkForward
The DateWalkForward estimator is similar to the IndexWalkForward estimator, but it allows us to specify the training and test periods in terms of dates. This can be useful if we want to align our training and test sets with specific calendar periods, such as fiscal years or quarters.
The Dates module provides a large amount of functionality for manipulating dates, but we will keep it simple. We will define an adjuster function that takes a date range and generates a new one made up only of the last day of each month.
function ldm(x)
val = lastdayofmonth.(x)
while !isempty(val)
if val[end] > x[end]
val = val[1:(end - 1)]
else
break
end
end
return val
end;

This estimator takes a few options. The first argument can be a date period or compound period, but if we leave it as an integer it will be interpreted as that many periods. The second argument is always an integer counting that many periods. Combining both gives us a fully determined mixture of training and test set lengths, as both can be set to arbitrarily defined training and testing periods.
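As an aside, the semantics of the ldm adjuster above can be mirrored in a few lines of Python (hypothetical `ldm_py`/`last_day_of_month` helpers, for illustration only):

```python
import calendar
from datetime import date

def last_day_of_month(d):
    """Snap a date to the final calendar day of its month."""
    return date(d.year, d.month, calendar.monthrange(d.year, d.month)[1])

def ldm_py(dates):
    """Analogue of the `ldm` adjuster: snap every date to its month-end,
    then drop trailing dates that fall past the end of the range."""
    val = [last_day_of_month(x) for x in dates]
    while val and val[-1] > dates[-1]:
        val.pop()
    return val

print(ldm_py([date(2024, 2, 10), date(2024, 3, 5)]))  # → [datetime.date(2024, 2, 29)]
```

Note how March is dropped entirely: its month-end (March 31) lies beyond the last date in the input range.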
To keep it simple, we will keep the period unit the same for both. We will again train for a year and test for a quarter, but the dates will now align with the end of calendar months. The date boundaries are determined by searching the timestamps for the last date not exceeding each value in the date boundary. If the boundary date is not found among the timestamps, the previous flag determines whether to take the last date found (previous = true) or the next available date (previous = false). This means that to guarantee the first date of each test set aligns with the last day of the month, we need to set previous to true.
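The boundary lookup just described can be sketched with a sorted search (Python's bisect; `locate` is a hypothetical helper illustrating the described rule, not the package implementation):

```python
from bisect import bisect_right

def locate(timestamps, boundary, previous=True):
    """Align a boundary date to an observation index. When the exact
    date is absent, take the last earlier date (previous=True) or the
    next available one (previous=False)."""
    i = bisect_right(timestamps, boundary)
    return i - 1 if previous else i

ts = [1, 3, 5, 9]  # stand-ins for sorted trading-day timestamps
print(locate(ts, 6, previous=True), locate(ts, 6, previous=False))  # → 2 3
```

When the boundary is itself a timestamp, both settings resolve to it; they only differ when the boundary falls in a gap, which is exactly the month-end-on-a-weekend case.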
date_walk_forward = DateWalkForward(12, 3; period = Month(1), adjuster = ldm,
previous = true)
DateWalkForward
train_size ┼ Int64: 12
test_size ┼ Int64: 3
period ┼ Month: Month(1)
period_offset ┼ nothing
purged_size ┼ Int64: 0
adjuster ┼ typeof(Main.ldm): Main.ldm
previous ┼ Bool: true
expend_train ┼ Bool: false
reduce_test ┴ Bool: false

We can see what the splits look like.
date_walk_forward_res = split(date_walk_forward, rd)
show(date_walk_forward_res.train_idx)
show(date_walk_forward_res.test_idx)
UnitRange{Int64}[3:253, 64:314, 128:377, 191:441, 254:505, 315:567, 378:630, 442:694, 506:758, 568:819, 631:882, 695:946, 759:1010, 820:1072, 883:1134]
UnitRange{Int64}[254:314, 315:377, 378:441, 442:505, 506:567, 568:630, 631:694, 695:758, 759:819, 820:882, 883:946, 947:1010, 1011:1072, 1073:1134, 1135:1198]

We will once more use the turnover constraint, but with this new cross validation method.
date_tn_walkforward_pred = cross_val_predict(mr, rd, date_walk_forward)
MultiPeriodPredictionResult
pred ┼ 15-element Vector{PredictionResult}
mrd ┼ PredictionReturnsResult
│ nx ┼ 20-element SubArray{String, 1, Vector{String}, Tuple{Base.Slice{Base.OneTo{Int64}}}, true}
│ X ┼ 945-element Vector{Float64}
│ nf ┼ nothing
│ F ┼ nothing
│ ts ┼ 945-element Vector{Date}
│ iv ┼ nothing
│ ivpa ┴ nothing
id ┴ nothing

We can see the evolution of the weights across the different splits. Again, the weights change by at most 2% per period.
pretty_table(hcat(DataFrame(:tickers => rd.nx),
DataFrame(reduce(hcat, getproperty.(date_tn_walkforward_pred.res, :w)),
Symbol.(1:15))); formatters = [resfmt])
┌─────────┬─────────┬─────────┬─────────┬─────────┬──────────┬──────────┬───────
│ tickers │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ ⋯
│ String │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Flo ⋯
├─────────┼─────────┼─────────┼─────────┼─────────┼──────────┼──────────┼───────
│ AAPL │ 3.0 % │ 1.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ AMD │ 3.0 % │ 1.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ BAC │ 3.0 % │ 1.388 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ BBY │ 3.0 % │ 2.859 % │ 4.005 % │ 2.005 % │ 0.005 % │ 0.0 % │ 0 ⋯
│ CVX │ 3.0 % │ 4.56 % │ 3.364 % │ 5.364 % │ 7.364 % │ 5.364 % │ 3.3 ⋯
│ GE │ 4.296 % │ 2.36 % │ 0.788 % │ 0.688 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ HD │ 3.0 % │ 4.999 % │ 5.964 % │ 3.965 % │ 5.964 % │ 3.965 % │ 1.9 ⋯
│ JNJ │ 7.0 % │ 5.998 % │ 4.248 % │ 6.247 % │ 8.247 % │ 10.247 % │ 12.2 ⋯
│ JPM │ 5.704 % │ 7.704 % │ 9.703 % │ 7.982 % │ 6.127 % │ 4.127 % │ 2.1 ⋯
│ KO │ 7.0 % │ 9.0 % │ 11.0 % │ 13.0 % │ 12.603 % │ 14.603 % │ 16.6 ⋯
│ LLY │ 7.0 % │ 8.931 % │ 9.187 % │ 7.187 % │ 5.495 % │ 3.87 % │ 4.7 ⋯
│ MRK │ 7.0 % │ 8.699 % │ 6.7 % │ 7.22 % │ 6.239 % │ 8.239 % │ 10.2 ⋯
│ MSFT │ 3.0 % │ 1.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0.0 % │ 0 ⋯
│ PEP │ 7.0 % │ 9.0 % │ 11.0 % │ 13.0 % │ 11.002 % │ 9.002 % │ 7.0 ⋯
│ PFE │ 7.0 % │ 5.001 % │ 3.001 % │ 1.001 % │ 1.961 % │ 3.961 % │ 5.9 ⋯
│ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋮ │ ⋱
└─────────┴─────────┴─────────┴─────────┴─────────┴──────────┴──────────┴───────
9 columns and 5 rows omitted

The splits are different from the index walkforward method, so the weights are also different, but there is not too much variation because the training periods are roughly the same. The turnover constraint also helps stabilise the weights.
There is another cross validation method called [MultipleRandomised](@ref) which uses a walk forward estimator, but also randomly samples the asset universe. Since it is more complex to analyse and understand, we will cover it in a future example.
This page was generated using Literate.jl.