Package: lime
Type: Package
Title: Local Interpretable Model-Agnostic Explanations
Version: 0.5.3.9000
Authors@R:
c(person(given = "Emil",
family = "Hvitfeldt",
role = c("aut", "cre"),
email = "[email protected]",
comment = c(ORCID = "0000-0002-0679-1945")),
person(given = "Thomas Lin",
family = "Pedersen",
role = c("aut"),
email = "[email protected]",
comment = c(ORCID = "0000-0002-5147-4711")),
person(given = "Michaël",
           family = "Benesty",
           role = c("aut"),
           email = "[email protected]"))
Maintainer: Emil Hvitfeldt <[email protected]>
Description: When building complex models, it is often difficult to explain why
the model should be trusted. While global measures such as accuracy are
useful, they cannot be used for explaining why a model made a specific
prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for
explaining the outcome of black box models by fitting a local model around
    the point in question and perturbations of this point. The approach is
described in more detail in the article by Ribeiro et al. (2016)
<arXiv:1602.04938>.
License: MIT + file LICENSE
URL: https://lime.data-imaginist.com, https://github.com/thomasp85/lime
BugReports: https://github.com/thomasp85/lime/issues
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.2.1
Roxygen: list(markdown = TRUE)
VignetteBuilder: knitr
Imports: glmnet,
stats,
ggplot2,
tools,
stringi,
Matrix,
Rcpp,
assertthat,
methods,
grDevices,
gower
Suggests: xgboost,
testthat,
mlr,
h2o,
text2vec,
MASS,
covr,
knitr,
rmarkdown,
sessioninfo,
magick,
keras,
htmlwidgets,
shiny,
shinythemes,
ranger
LinkingTo: Rcpp,
RcppEigen