Commit babb4f8: merge

kkholst committed Jan 13, 2025
2 parents c196550 + 771d62a
Showing 13 changed files with 187 additions and 35 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/R-CMD-check.yaml
Original file line number Diff line number Diff line change
@@ -27,7 +27,7 @@ jobs:
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install -y pandoc pandoc-citeproc libcurl4-openssl-dev libharfbuzz-dev libfribidi-dev
sudo apt-get install -y pandoc libcurl4-openssl-dev libharfbuzz-dev libfribidi-dev
- uses: r-lib/actions/setup-r@v2
with:
95 changes: 95 additions & 0 deletions .github/workflows/rhub.yaml
@@ -0,0 +1,95 @@
# R-hub's generic GitHub Actions workflow file. Its canonical location is at
# https://github.com/r-hub/actions/blob/v1/workflows/rhub.yaml
# You can update this file to a newer version using the rhub2 package:
#
# rhub::rhub_setup()
#
# It is unlikely that you need to modify this file manually.

name: R-hub
run-name: "${{ github.event.inputs.id }}: ${{ github.event.inputs.name || format('Manually run by {0}', github.triggering_actor) }}"

on:
workflow_dispatch:
inputs:
config:
description: 'A comma separated list of R-hub platforms to use.'
type: string
default: 'linux,windows,macos'
name:
description: 'Run name. You can leave this empty now.'
type: string
id:
description: 'Unique ID. You can leave this empty now.'
type: string

jobs:

setup:
runs-on: ubuntu-latest
outputs:
containers: ${{ steps.rhub-setup.outputs.containers }}
platforms: ${{ steps.rhub-setup.outputs.platforms }}

steps:
# NO NEED TO CHECKOUT HERE
- uses: r-hub/actions/setup@v1
with:
config: ${{ github.event.inputs.config }}
id: rhub-setup

linux-containers:
needs: setup
if: ${{ needs.setup.outputs.containers != '[]' }}
runs-on: ubuntu-latest
name: ${{ matrix.config.label }}
strategy:
fail-fast: false
matrix:
config: ${{ fromJson(needs.setup.outputs.containers) }}
container:
image: ${{ matrix.config.container }}

steps:
- uses: r-hub/actions/checkout@v1
- uses: r-hub/actions/platform-info@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
- uses: r-hub/actions/setup-deps@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
- uses: r-hub/actions/run-check@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}

other-platforms:
needs: setup
if: ${{ needs.setup.outputs.platforms != '[]' }}
runs-on: ${{ matrix.config.os }}
name: ${{ matrix.config.label }}
strategy:
fail-fast: false
matrix:
config: ${{ fromJson(needs.setup.outputs.platforms) }}

steps:
- uses: r-hub/actions/checkout@v1
- uses: r-hub/actions/setup-r@v1
with:
job-config: ${{ matrix.config.job-config }}
token: ${{ secrets.RHUB_TOKEN }}
- uses: r-hub/actions/platform-info@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
- uses: r-hub/actions/setup-deps@v1
with:
job-config: ${{ matrix.config.job-config }}
token: ${{ secrets.RHUB_TOKEN }}
- uses: r-hub/actions/run-check@v1
with:
job-config: ${{ matrix.config.job-config }}
token: ${{ secrets.RHUB_TOKEN }}
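With the workflow above committed, checks are typically launched from R with the rhub v2 client. A minimal sketch (assumes the `rhub` package, version 2 or later, is installed and the repository has been prepared with `rhub::rhub_setup()` so that the `RHUB_TOKEN` secret exists):

```r
## Sanity-check the GitHub setup, then submit checks on the
## same default platforms configured in the workflow input above
rhub::rhub_doctor()
rhub::rhub_check(platforms = c("linux", "windows", "macos"))
```

Both calls are interactive by design; the check results appear as runs of the `R-hub` workflow in the repository's Actions tab.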
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -61,7 +61,7 @@ Suggests:
targeted (>= 0.4),
testthat (>= 0.11),
visNetwork
VignetteBuilder: knitr
VignetteBuilder: knitr,rmarkdown
ByteCompile: yes
Encoding: UTF-8
RoxygenNote: 7.3.2
2 changes: 2 additions & 0 deletions NAMESPACE
@@ -180,6 +180,7 @@ S3method(kappa,data.frame)
S3method(kappa,multinomial)
S3method(kappa,table)
S3method(kill,lvm)
S3method(labels,estimate)
S3method(labels,graphNEL)
S3method(labels,lvm)
S3method(labels,lvmfit)
@@ -357,6 +358,7 @@ S3method(summary,twostageCV)
S3method(summary,zibreg)
S3method(totaleffects,lvmfit)
S3method(tr,matrix)
S3method(transform,estimate)
S3method(transform,lvm)
S3method(twostage,estimate)
S3method(twostage,lvm)
5 changes: 2 additions & 3 deletions NEWS.md
@@ -1,8 +1,7 @@
# lava 1.8.1
- Development version
- `sim.default` now accepts the argument `R` to be a list (of list) of
- `sim.default` now accepts the argument `R` to be a list (of lists) of
arguments.
- `subset.estimate` method
- New methods `subset.estimate`, `transform.estimate`, `labels.estimate`

# lava 1.8.0
- New methods `estimate.mlm`, `IC.mlm`, `pars.mlm`, `estimate.array`,
10 changes: 10 additions & 0 deletions R/estimate.default.R
@@ -1009,6 +1009,16 @@ subset.estimate <- function(x, keep, ...) {
estimate(x, keep = keep, ...)
}

##' @export
transform.estimate <- function(`_data`, ...) {
estimate(`_data`, ...)
}

##' @export
labels.estimate <- function(object, str, ...) {
estimate(object, labels=str, ...)
}

##' @export
parameter.estimate <- function(x, ...) {
return(x$coefmat)
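The two new S3 methods above are thin wrappers that delegate to `estimate()`. A minimal usage sketch (the `lm` fit on the built-in `cars` data is illustrative, not from the commit): `transform` applies the supplied function to the parameters with delta-method standard errors, while `labels` renames them.

```r
library(lava)

## Wrap a fitted model as an 'estimate' object
e <- estimate(lm(dist ~ speed, data = cars))

## transform.estimate delegates to estimate(`_data`, ...), so this
## applies exp() to the coefficients via the delta method
transform(e, function(p) exp(p))

## labels.estimate delegates to estimate(object, labels = str)
labels(e, c("intercept", "slope"))
```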
5 changes: 4 additions & 1 deletion R/ksmooth.R
@@ -11,9 +11,10 @@
##' @param gridsize grid size of kernel smoother
##' @param ... Additional arguments to graphics routine (persp3d or persp)
##' @examples
##' if (requireNamespace("KernSmooth")) {
##' ksmooth2(rmvn0(1e4,sigma=diag(2)*.5+.5),c(-3.5,3.5),h=1,
##' rgl=FALSE,theta=30)
##'
##'
##' if (interactive()) {
##' ksmooth2(rmvn0(1e4,sigma=diag(2)*.5+.5),c(-3.5,3.5),h=1)
##' ksmooth2(function(x,y) x^2+y^2, c(-20,20))
@@ -34,6 +35,8 @@
##' f <- function(x,y) 1-sqrt(x^2+y^2)
##' ksmooth2(f,c(-1,1),rgl=FALSE,image=fields::image.plot)
##' }
##'
##' }
ksmooth2 <- function(x,data,h=NULL,xlab=NULL,ylab=NULL,zlab="",gridsize=rep(51L,2),...) {
if (is.function(x)) {
args <- c(list(f=x,h=h,xlab=xlab,ylab=ylab,zlab=zlab),list(...))
2 changes: 1 addition & 1 deletion R/lava-package.R
@@ -251,7 +251,7 @@ NULL
##' starter.multigroup
##' addattr modelPar modelVar matrices pars pars.lvm
##' pars.lvmfit pars.glm score.glm procdata.lvmfit modelPar modelVar
##' matrices reorderdata graph2lvm igraph.lvm subgraph finalize
##' mat.lvm matrices reorderdata graph2lvm igraph.lvm subgraph finalize
##' index.lvm index.lvmfit index reindex index<-
##' rmvn0 dmvn0 logit expit tigol
##' randomslope randomslope<- lisrel variances offdiags describecoef
14 changes: 10 additions & 4 deletions R/matrices.R
@@ -214,10 +214,16 @@ mat.lvm <- function(x,ii=index(x),...) {

parBelongsTo <- lapply(parBelongsTo,function(x) sort(unique(x)))


return(list(mean=cbind(idxM,pidxM),
reg=cbind(idxA,pidxA),
cov=cbind(idxP,pidxP),
meanl <- idxM
if (!is.null(meanl)) meanl <- cbind(meanl, pidxM)
regl <- idxA
if (!is.null(regl)) regl <- cbind(regl, pidxA)
covl <- idxP
if (!is.null(covl)) covl <- cbind(covl, pidxP)

return(list(mean=meanl,
reg=regl,
cov=covl,
epar=ee,
parval=parval,
constrain.idx=constrain.idx,
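The guards introduced above matter because `cbind()` silently drops a `NULL` argument: the unguarded version could turn an absent index block into a spurious one-column matrix instead of propagating `NULL`. An illustrative sketch of the two behaviors:

```r
idx  <- NULL
pidx <- 1:3

## Unguarded: cbind() discards the NULL and builds a 3x1 matrix
dim(cbind(idx, pidx))

## Guarded, as in mat.lvm above: NULL is propagated unchanged
out <- if (!is.null(idx)) cbind(idx, pidx) else idx
is.null(out)
```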
1 change: 1 addition & 0 deletions man/internal.Rd

Some generated files are not rendered by default.

5 changes: 4 additions & 1 deletion man/ksmooth2.Rd

Some generated files are not rendered by default.

31 changes: 17 additions & 14 deletions tests/testthat/test-influence.R
@@ -1,43 +1,44 @@
context("Influence functions")

test_that("GEE", {
require("geepack")
if (requireNamespace("geepack",quietly=TRUE)) {
d <- lvm(y ~ x, ~ id) |>
distribution(~id, uniform.lvm(value=seq(1:20))) |>
sim(100, seed=1)
d0 <- d[order(d$id), ]
g <- geeglm(y ~ x, data=d0, id=d0$id)
g <- geepack::geeglm(y ~ x, data=d0, id=d0$id)
V <- summary(g)$cov.scaled
g0 <- glm(y ~ x, data=d)
V0 <- vcov(estimate(g0, id = d$id))
testthat::expect_true(sum((V - V0)^2) < 1e-12)
}
})


test_that("merge, IC, estimate with 'id' argument", {
require("geepack")
d <- data.frame(id=c("a","a","b","b","b","b","c","c","d"),
id1=c("a","a","b1","b1","b2","b2","c","c","d"),
y=rnorm(9), x=rnorm(9))
d$id0 <- as.numeric(as.factor(d$id))

l <- glm(y ~ x, data=d)
V <- summary(geeglm(y ~ x, id=d$id0, data=d))$cov.scaled
V0 <- vcov(estimate(l, id=d$id))
testthat::expect_true(sum((V - V0)^2) < 1e-12)

e1 <- estimate(l, id=d$id1)
V1 <- vcov(estimate(e1, id=c(1,2,2,3,4)))
testthat::expect_true(sum((V - V1)^2) < 1e-12)
V1 <- vcov(e1)
e0 <- estimate(e1, id=c(1,2,2,3,4))
V0 <- vcov(e0)

if (requireNamespace("geepack",quietly=TRUE)) {
V <- summary(geepack::geeglm(y ~ x, id=d$id0, data=d))$cov.scaled
testthat::expect_true(sum((V - V0)^2) < 1e-12)
}

e <- merge(estimate(l), estimate(l), id=list(d$id, d$id1))
testthat::expect_true(sum((vcov(e1) - vcov(e)[3:4,3:4])^2) < 1e-12)
testthat::expect_true(sum((V1 - vcov(e)[1:2,1:2])^2) < 1e-12)
testthat::expect_true(sum((V1 - vcov(e)[3:4,3:4])^2) < 1e-12)
testthat::expect_true(sum((V0 - vcov(e)[1:2,1:2])^2) < 1e-12)

ee <- estimate(e, id=c(1,2,3,4,2,2))
VV <- vcov(ee)
testthat::expect_true(sum((VV[1:2,1:2] - V)^2) < 1e-12)
testthat::expect_true(sum((VV[3:4,3:4] - V)^2) < 1e-12)
testthat::expect_true(sum((VV[1:2,1:2] - V0)^2) < 1e-12)
testthat::expect_true(sum((VV[3:4,3:4] - V0)^2) < 1e-12)
})

test_that("negative binomial regression (glm.nb)", {
@@ -47,6 +48,7 @@ test_that("negative binomial regression (glm.nb)", {
x <- rnorm(n)
lam <- z * exp(x)
y <- rpois(n, lam)
if (requireNamespace("MASS",quietly=TRUE)) {
m <- MASS::glm.nb(y ~ x)
testthat::expect_true(abs(lava:::logL.glm(m) - logLik(m)) < 1e-6)
p <- coef(m)+1
@@ -57,6 +59,7 @@ test_that("negative binomial regression (glm.nb)", {
u1 <- as.vector(numDeriv::jacobian(function(p) lava:::logL.glm(m, p = p), p))
u2 <- score(m, p = p)
testthat::expect_true(sum((u1 - u2)^2) < 1e-6)
}
})
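The tests above switch from `require("geepack")` to wrapping the body in `if (requireNamespace(..., quietly = TRUE))`, so the suite passes when a suggested package is missing. A sketch of the equivalent testthat idiom (a suggestion, not what this commit does) reports such tests as skipped rather than silently passing:

```r
test_that("GEE (sketch)", {
  ## Marks the test as skipped when geepack is not installed
  testthat::skip_if_not_installed("geepack")
  ## ... body runs only when geepack is available ...
})
```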


48 changes: 39 additions & 9 deletions vignettes/influencefunction.Rmd
@@ -23,7 +23,9 @@ library("lava")
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
)
has_mets <- lava:::versioncheck('mets', 1)
has_geepack <- lava:::versioncheck('geepack', 1)
```

\newcommand{\arctanh}{\operatorname{arctanh}}
@@ -142,7 +144,13 @@ dw <- sim(m, n, seed = 1) |>
transform(y3 = y3 * ifelse(id > n / 2, NA, 1))
Print(dw)
## Data in long format
dl <- mets::fast.reshape(dw, varying = c("y", "x")) |> na.omit()
dl <- reshape(dw,
varying = list(paste0("y",1:4),
paste0("x",1:4)),
v.names=c("y", "x"), direction="long") |>
na.omit()
dl <- dl[order(dl$id), ]
## dl <- mets::fast.reshape(dw, varying = c("y", "x")) |> na.omit()
Print(dl)
```

@@ -290,7 +298,7 @@ data(pbc, package="survival")
The Cox proportional hazards model can be fitted with the `mets::phreg` method
which can estimate the IF for both the partial likelihood parameters and the
baseline hazard. Here we fit a survival model with right-censored event times
```{r phreg}
```{r phreg, eval=has_mets }
fit.phreg <- mets::phreg(Surv(time, status > 0) ~ age + sex, data = pbc)
fit.phreg
IC(fit.phreg) |> head()
@@ -300,8 +308,16 @@ The IF for the baseline cumulative hazard at a specific time point
\begin{align*}
\Lambda_0(t) = \int_0^t \lambda_0(u)\,du,
\end{align*}
where $\lambda_0(t)$ is the baseline hazard, can be estimated in a similar way:
```{r phreg-baseline, eval=has_mets}
baseline <- function(object, time, ...) {
ic <- mets::IC(object, baseline = TRUE, time = time, ...)
est <- mets::predictCumhaz(object$cumhaz, new.time = time)[1, 2]
@@ -313,7 +329,7 @@ baseline(fit.phreg, tt)

The `estimate` and `IF` methods are also available for parametric survival
models via `survival::survreg`, here a Weibull model:
```{r survreg}
```{r survreg, eval=has_mets }
survival::survreg(Surv(time, status > 0) ~ age + sex, data = pbc, dist="weibull") |>
estimate()
```
@@ -334,7 +350,7 @@ $$
\pr(Y_{ij} = 1 \mid U_{i}, W_{ij})=\Phi(\mu_{j} + \beta_{j} W_{ij} + U_{i}), \quad U_{i}\sim\mathcal{N}(0,\sigma_{u}^{2}),\quad j=1,2
$$
to the simulated dataset
```{r semfit}
```{r semfit, eval=has_mets }
sem <- lvm(y1 + y2 ~ 1 * u + w) |>
latent(~ u) |>
ordinal(K=2, ~ y1 + y2)
@@ -570,7 +586,7 @@ estimate(g0, id=dl$id)
```
We can confirm that this situation is equivalent to the variance estimates we
obtain from a GEE marginal model with working independence correlation structure [@r_geepack]
```{r geepack}
```{r geepack, eval=has_geepack }
gee0 <- geepack::geeglm(y ~ a + w + x, data = dl, id = dl$id, family=binomial)
summary(gee0)
```
@@ -793,7 +809,7 @@ where $\Phi$ is the standard Gaussian CDF. As described
in [@RitzPipper_multcomp] the joint distribution of $Z_{1},\ldots,Z_{p}$
can be estimated from the IFs. This is implemented in the `p.correct` method

```{r pcorrect}
```{r pcorrect, eval=has_mets }
gg0 <- estimate(gg, use="^a", regex=TRUE, null=rep(.8, 3))
p.correct(gg0)
```
@@ -811,7 +827,7 @@ adjusted $p$-values can here be obtained as the maximum $p$-value across all
the composite hypothesis tests. Unfortunately, this only works for relatively
few comparisons as the number of tests grows exponentially.

```{r closedtesting}
```{r closedtesting, eval=has_mets}
closed.testing(gg0)
```

@@ -882,8 +898,9 @@ ea
IC(ea) |> head()
```

## Example: Average Treatment Effects
<!-- ## Example: Average Treatment Effects -->

```{r targeted, cache=TRUE}
a1 <- targeted::cate(
response.model = y1 ~ x1+w+a,
@@ -896,6 +913,19 @@ IC(a1) |> head()

## Example: survival

<!-- ```{r targeted, cache=TRUE, eval=has_targeted } -->
<!-- a1 <- targeted::cate(a ~ 1, -->
<!-- data = dw, -->
<!-- response_model = y1 ~ x1+w+a, -->
<!-- propensity_model = a ~ x1*w -->
<!-- ) -->
<!-- a1 -->
<!-- IC(a1) |> head() -->
<!-- ``` -->

# SessionInfo

```{r sessionInfo}
