Your problem: `LinearRegressor()` expects a table, but here it is getting a matrix, so you need to insert `MLJ.table`.
Another problem: you can't apply `inverse_transform` directly to `ẑ`, because the GLM model makes probabilistic predictions, so you need to insert a `mean` first. Sadly, `mean` is not overloaded for nodes, so you need to use the `node` constructor to do this yourself.
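In isolation, the two fixes are just these lines (a restatement of the relevant bits of the full example below; `node(f, N...)` builds a node that lazily applies `f` to the values of the argument nodes):

```julia
R = MLJ.table(hcat(ẑ₁, ẑ₂))        # wrap the matrix-valued node as a table
ẑ = node(v -> mean.(v), ẑ_prob)    # collapse each probabilistic prediction to its mean
```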
The following code works:
```julia
using MLJ

Xr, yr = @load_boston
zr = rand(length(yr))

X = source(Xr)
y = source(yr)
z = source(zr)              # overwritten by the Box-Cox node below

W = X |> Standardizer()
z = y |> UnivariateBoxCoxTransformer()

ẑ₁ = (W, z) |> @load RidgeRegressor pkg=MultivariateStats
ẑ₂ = (W, z) |> @load DecisionTreeRegressor

# LinearRegressor expects a table, not a matrix, hence MLJ.table:
R = MLJ.table(hcat(ẑ₁, ẑ₂))
ẑ_prob = (R, z) |> @load LinearRegressor pkg=GLM

# mean is not overloaded for nodes, so build the node by hand:
ẑ = node(v -> mean.(v), ẑ_prob)
ŷ = ẑ |> inverse_transform(z)

fit!(ŷ)
ŷ()
```
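As a follow-up, and if I'm reading the learning-network API right, nodes can also be fit and called on row subsets, which gives a quick train/test sanity check of the stack:

```julia
train, test = partition(eachindex(yr), 0.7, shuffle=true, rng=123)
fit!(ŷ, rows=train)   # retrain every machine in the network on the training rows only
ŷ(rows=test)          # stacked predictions, on the original target scale, for the held-out rows
```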
Side comment: A great non-trivial example is stacking, as suggested by the diagram on the MLJ Readme.
@ablaom it would be nice to have a quick example of something that can't be done with a pipeline (see the pipeline sketch at the end for contrast); I was thinking something like:
L1: standardizer and Box-Cox
L2: Ridge and DTR
L3: hcat the predictions and feed them to a LinearRegressor
It should look something like this, but it looks like there's an issue with fitting the `R` node; could you comment on it?
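On the "can't be done with a pipeline" point, the contrast is easy to sketch: a pipeline is a single linear chain of components, so it can cover L1 plus one regressor, but it has no step that can take the predictions of two models, hcat them, and feed the result to a third model. A rough sketch of that limitation, assuming the present-day `Pipeline` constructor and `@load`-returns-a-type behaviour (both postdate the API used above):

```julia
using MLJ

# One chain, one regressor: this much a pipeline can do.
Ridge = @load RidgeRegressor pkg=MultivariateStats
pipe = Pipeline(Standardizer(), Ridge())

# What it cannot express: running Ridge and a tree in parallel on the same
# input, hcat-ing their predictions, and passing that table to a
# LinearRegressor. That branching step is what the node/source network
# above provides.
```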