survival analysis

Be careful with standard errors in `survival::survfit`

(Mild) panic. In my previous post I looked into how `survival::survfit` produces standard errors and confidence intervals for a survival curve based on a Cox proportional hazards model. I discovered (I could also have just read it from the documentation) that when you ask for the standard error `fit_1$std.err` after `fit_1 <- survfit(...)`, it provides you not with the standard error of the estimator of the survival probability, but instead with the standard error of the estimator of the cumulative hazard.
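To make the distinction concrete, here is a minimal sketch on toy data of my own (not the post's example). `fit_1$std.err` from a `survfit` call on a Cox model is the standard error of the cumulative hazard; a delta-method approximation to the standard error of the survival probability is S(t) × se(H(t)):

```r
library(survival)

# Hypothetical toy data (not from the post)
set.seed(1)
toy <- data.frame(time  = rexp(200),
                  event = rbinom(200, 1, 0.8),
                  x     = rnorm(200))

cox_fit <- coxph(Surv(time, event) ~ x, data = toy)
fit_1   <- survfit(cox_fit, newdata = data.frame(x = 0))

# fit_1$std.err is the standard error of the cumulative hazard estimator,
# not of the survival probability. A delta-method approximation to the
# latter is S(t) * se(H(t)):
se_surv_approx <- fit_1$surv * fit_1$std.err

head(cbind(time      = fit_1$time,
           surv      = fit_1$surv,
           se_cumhaz = fit_1$std.err,
           se_surv   = se_surv_approx))
```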

Confidence interval for a survival curve based on a Cox model

A colleague caught me out recently when they asked about a confidence interval for a survival curve based on a Cox model. This can be done in R using `survival::survfit` after `survival::coxph`. But the question was: does this take into account the uncertainty in the baseline hazard? I had to admit that I wasn’t 100% sure. So here is an example to clear it up… 1. Understanding `survfit.coxph` standard errors: create a toy data set and apply `survfit`.
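As a rough sketch of the setup (hypothetical toy data of my own, not the post's), a model-based survival curve and its confidence interval can be requested like this:

```r
library(survival)

# Hypothetical toy data standing in for the post's example
set.seed(2)
toy <- data.frame(time  = rexp(100),
                  event = rbinom(100, 1, 0.7),
                  trt   = rep(0:1, 50))

cox_fit <- coxph(Surv(time, event) ~ trt, data = toy)

# Model-based survival curve for trt = 1, with a 95% confidence interval
sf <- survfit(cox_fit, newdata = data.frame(trt = 1), conf.type = "log-log")
summary(sf, times = c(0.5, 1, 2))
```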

Trouble with tau

This post is to express some minor frustration with some papers I’ve read recently evaluating the performance of restricted mean survival time as a summary measure in oncology studies. I should say that I’m not a saint when it comes to designing simulation studies. Consciously and/or unconsciously, it’s tempting to give our favourite methods an easier ride. Nevertheless, a couple of things bother me, and they’re related to each other.

Landmark/Milestone analysis under a Royston-Parmar flexible parametric survival model using the R package flexsurv

The aim of this post is to demonstrate a landmark/milestone analysis of RCT time-to-event data with a Royston-Parmar flexible parametric survival model. The original reference is: Royston P, Parmar M (2002). “Flexible Parametric Proportional-Hazards and Proportional-Odds Models for Censored Survival Data, with Application to Prognostic Modelling and Estimation of Treatment Effects.” Statistics in Medicine, 21(15), 2175–2197. doi:10.1002/sim.1203. This model has been expertly coded and documented by Chris Jackson in the R package flexsurv (https://www.
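For a flavour of what the flexsurv workflow looks like, here is a minimal sketch on simulated data of my own; the post's data, knot choice and scale may differ:

```r
library(flexsurv)
library(survival)

# Hypothetical RCT-style data (not the post's data set)
set.seed(3)
dat <- data.frame(time  = rexp(200, rate = 0.1),
                  event = rbinom(200, 1, 0.8),
                  arm   = rep(c("control", "treatment"), each = 100))

# Royston-Parmar flexible parametric model on the log cumulative hazard scale
rp_fit <- flexsurvspline(Surv(time, event) ~ arm, data = dat,
                         k = 2, scale = "hazard")

# Milestone (landmark) survival at 12 months, by arm, with 95% CI
summary(rp_fit, t = 12, type = "survival",
        newdata = data.frame(arm = c("control", "treatment")))
```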

Adjusting for covariates under non-proportional hazards

I’ve written a lot recently about non-proportional hazards in immuno-oncology. One aspect that I have unfortunately overlooked is covariate adjustment. Perhaps this is because it’s so easy to work with extracted data from published Kaplan-Meier plots, where the covariate data is not available. But we know from theoretical and empirical work that covariate adjustment can lead to big increases in power, and this may be just as important as, or even more important than, the power gains from using a weighted log-rank test matched to the anticipated non-proportional hazards.
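As a simple illustration of the adjustment point (hypothetical simulated data; not the weighted log-rank analysis the post has in mind), compare an unadjusted and a covariate-adjusted Cox model:

```r
library(survival)

# Hypothetical trial data with a prognostic covariate (ecog)
set.seed(5)
n    <- 300
arm  <- rep(0:1, each = n / 2)
ecog <- rbinom(n, 1, 0.4)
true_time <- rexp(n, rate = 0.10 * exp(-0.4 * arm + 0.7 * ecog))
cens      <- runif(n, 0, 30)
dat <- data.frame(arm, ecog,
                  time  = pmin(true_time, cens),
                  event = as.numeric(true_time <= cens))

# Unadjusted vs covariate-adjusted Cox model: adjusting for a strongly
# prognostic covariate typically increases power to detect the treatment
# effect (noting that conditional and marginal hazard ratios differ).
unadj <- coxph(Surv(time, event) ~ arm, data = dat)
adj   <- coxph(Surv(time, event) ~ arm + ecog, data = dat)
summary(unadj)$coefficients
summary(adj)$coefficients
```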

A Bayesian approach to non-proportional hazards

In this blog post I wanted to explore a Bayesian approach to non-proportional hazards. Take this data set as an example (the data is here). library(tidyverse) library(survival) library(brms) ########################## dat <- read_csv("IPD_both.csv") %>% mutate(arm = factor(arm)) km_est <- survfit(Surv(time, event) ~ arm, data = dat) p1 <- survminer::ggsurvplot(km_est, data = dat, risk.table = TRUE, break.x.by = 6, legend.labs = c("1", "2"), legend.title = "", xlab = "Time (months)", ylab = "Overall survival", risk.table.fontsize = 4, legend = c(0.
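The excerpt's code is cut off above. As one illustration of where a Bayesian treatment of non-proportional hazards could go (my own sketch on simulated data; not necessarily the model used in the post), a piecewise-exponential model with an interval-by-arm interaction can be fitted with brms via the Poisson trick:

```r
library(survival)
library(brms)

# Hypothetical data standing in for IPD_both.csv
set.seed(4)
n    <- 200
arm  <- factor(rep(1:2, each = n / 2))
true_time <- rexp(n, rate = 0.08 * ifelse(arm == "2", 0.7, 1))
cens      <- runif(n, 0, 36)
dat <- data.frame(arm,
                  time  = pmin(true_time, cens),
                  event = as.numeric(true_time <= cens))

# Piecewise-exponential model with an interval-by-arm interaction,
# which lets the hazard ratio vary over time (one way to relax PH).
cuts <- c(6, 12, 18)
long <- survSplit(Surv(time, event) ~ arm, data = dat, cut = cuts,
                  episode = "interval")
long$exposure <- long$time - long$tstart
long$interval <- factor(long$interval)

fit <- brm(event ~ interval * arm + offset(log(exposure)),
           family = poisson(), data = long,
           chains = 2, iter = 2000)
summary(fit)
```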

Non-proportional hazards in immuno-oncology: is an old perspective needed?

In my opinion, many phase III trials in immuno-oncology are 10–20% larger than they need (or ought) to be. This is because the method we use for the primary analysis doesn’t match what we know about how these drugs work. Fixing this doesn’t require anything fancy, just old-school stats from the 1960s. In this new preprint I try to explain how I think it should be done.