
Model specific analysis features #216

Open
mitchelloharawild opened this issue Jan 7, 2020 · 4 comments
Labels
help wanted Extra attention is needed


@mitchelloharawild
Member

Many models have tests or analyses specific to that model.

A method for exposing this model-specific functionality is required.

Related: #199, #200

@mitchelloharawild mitchelloharawild added the help wanted Extra attention is needed label Mar 27, 2020
@nathancday

If you are still looking for help, I would like to give this a go.

@mitchelloharawild
Member Author

Help would be greatly appreciated.

The trouble I have with this is designing an appropriate interface that is sufficiently general for any model's tests and features. It can be assumed that each model stores enough information to compute the values of interest. Applying these functions to each model in a mable would need to be done with some other function, much like features() applies a feature function to each time series.

Any ideas for how model specific functions can be applied to models in a mable would be very helpful.
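One possible shape for this, purely a sketch: ordinary S3 dispatch on the model class, with each model returning the analyses it supports. Note that `model_analysis()` and its methods here are hypothetical names for illustration, not existing fabletools API.

```r
# Hypothetical generic: each model class declares which analyses it supports.
# Nothing here is real fabletools API; it is a sketch of the dispatch idea.
model_analysis <- function(object, ...) UseMethod("model_analysis")

# Fallback: models with no specific analyses fail informatively.
model_analysis.default <- function(object, ...) {
  stop("No model-specific analyses are available for this model class.")
}

# A VAR model, for example, might expose Granger causality.
model_analysis.VAR <- function(object, ...) {
  list(
    granger_causality = function(cause, effect) {
      # ...model-specific computation using information stored in `object`
    }
  )
}
```

Applying this across a mable could then mirror how features() maps over time series: a wrapper that calls `model_analysis()` on each fitted model in the model column and binds the results by key.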

@nathancday

I see how generalising this is difficult. It is ambitious, but it would be very valuable.

Do you think it's reasonable to identify a set of models (VAR, NNAR, ...) to use as a pilot set for developing a prototype of this? Beyond those two, are there any other model-specific inference capabilities you think should be considered here?

My thinking is that a defined sample of features to support would provide something to program against. It would also give us a good idea of how hard supporting any given model might be.

@mitchelloharawild
Member Author

Granger causality and variable importance would be a good start. I think portmanteau tests like the Ljung-Box test would also fit in this framework; these wouldn't be specific to any model, so long as the degrees of freedom are specified in the glance() output (#241).

I expect the possible analyses will vary substantially between models, and that some could be generalised to collections of models. For example, feasts::gg_arma() plots ARMA roots (designed for ARIMA()), but it will work with any model that outputs ar_roots and/or ma_roots in its glance() output (so it could also work with AR(), tbats, etc.).
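The "keyed off glance() columns" approach above could be sketched like this. The wrapper function name is hypothetical; glance() and feasts::gg_arma() are real, and this assumes a mable whose glance() output may contain ar_roots/ma_roots columns.

```r
# Sketch: a collection-generic analysis that keys off glance() output
# rather than the model class. `plot_arma_roots_if_available` is a
# hypothetical name for illustration, not fabletools/feasts API.
plot_arma_roots_if_available <- function(mbl) {
  g <- fabletools::glance(mbl)
  if (any(c("ar_roots", "ma_roots") %in% names(g))) {
    # Works for any model reporting AR/MA roots, not just ARIMA().
    feasts::gg_arma(mbl)
  } else {
    stop("No model in this mable reports ar_roots or ma_roots in glance().")
  }
}
```

The appeal of this design is that new models opt in simply by reporting the relevant columns in glance(), with no changes needed to the analysis function itself.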
