The COVID-19 pandemic led to unprecedented progress in linking National Health Service electronic health record data in England, and making these data available in secure, privacy-protecting environments. A collaborative group from the Universities of Bristol and Oxford, the London School of Hygiene and Tropical Medicine...
There are limited options to estimate treatment effects of variables which are continuous and measured at multiple time points (particularly if one is interested in the actual dose-response curve). However, these situations may be of relevance: in pharmacology, one may be interested in how outcomes...
Speakers:
Professor Michael Schomaker
Department of Statistics, Ludwig-Maximilians-Universität München, Germany
Background and Aims: Type 1 diabetes and its related complications have a significant impact on individuals and society across a wide spectrum. Our objective was to utilise machine learning techniques to predict diabetic ketoacidosis (DKA) and HbA1c>7%.
Meta-research, or “research on research”, is an emerging discipline focused on understanding and improving the current research enterprise. This field offers statisticians new opportunities to apply their technical expertise and propose improvements to how research is planned, conducted and reported.
This seminar will provide...
Speakers:
Nicole White
Australian Centre for Health Services Innovation, QUT
Prediction algorithms are often used to help decision making, for instance to evaluate whether the prognosis of an individual patient warrants a certain medical treatment. However, a major limitation is that most prediction algorithms ignore the role that treatments already played in the data on...
Speakers:
Nan Van Geloven
Biomedical Data Sciences, Leiden University Medical Center, Netherlands
Error in the measurement of time-to-event outcomes produces bias in estimates of descriptive and causal parameters. In this talk, I present several estimators to address measurement error in these settings. I will also discuss issues around use of validation data to inform such estimators and...
Meta-analyses are widely conducted and highly influential. Papers about methods for meta-analysis are among the most highly cited in health research. However, my observation is that many (possibly even most) citations of these papers are associated with misunderstanding or misuse of the method. These include...
Missing data is a widespread problem in epidemiological and clinical research. It is particularly pertinent in longitudinal studies where there are multiple opportunities for missed observations and dropout. Multiple imputation is becoming increasingly popular for handling missing data given its flexibility and its availability in...
The tools referred to as AI may assist, or eventually replace, health researchers who learn from data. To discuss the potential role of AI, this talk describes a taxonomy of learning tasks in science and explores the relationship between two of them:...
Speakers:
Miguel Hernán
Kolokotrones Professor of Biostatistics and Epidemiology, Harvard School of Public Health
Determining “what works for whom” is a key goal in prevention and treatment across a variety of areas, including mental health. Identifying effect moderators (factors that relate to the size of treatment effects) is crucial for delivery of treatment and prevention interventions, but doing so is incredibly...
Author credits: Gemma L Clayton, Emily Kawabata, Daniel Major-Smith, Chin Yang Shapland, Tim P Morris, Alice R Carter, Alba Fernández-Sanlés, Maria Carolina Borges, Kate Tilling, Gareth J Griffith, Louise AC Millard, George Davey Smith, Deborah A Lawlor, Rachael A Hughes
When estimating the time-varying average causal effect, challenges to meeting the positivity condition required for making causal inference can easily arise. In this talk, we use as an example the estimation of the per-protocol effect for the Effects of Aspirin in Gestation and Reproduction trial,...
Survival or time-to-event analysis is a key discipline in biostatistics, e.g., put to prominent use in trials on treatment of and vaccination against COVID-19. A defining characteristic is that participants have varying follow-up times and outcome status is not known for all individuals. This phenomenon...
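The product-limit idea behind handling varying follow-up and unknown outcome status can be sketched in a few lines (an illustrative sketch of the standard Kaplan-Meier estimator, not material from the talk):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time for each participant
    events : 1 if the event was observed, 0 if censored
    Returns a list of (time, S(t)) points at each event time.
    """
    surv = 1.0
    curve = []
    # step through the distinct follow-up times in order
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # still at risk at t
        if d > 0:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve
```

Censored participants contribute to the risk sets of earlier event times but never trigger a drop in the curve, which is how the estimator uses partial follow-up.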
Multiple imputation is an important technique for reducing the bias caused by missing data, but it only works if it is used. Automated predictive modelling techniques from machine learning are a promising way to support semi-automated multiple imputation of large data sets. I will talk...
Identifying causal risk factors for disease is of central importance in health research. However, when using observational data, causal inference is difficult due to the potential presence of unmeasured confounding. In this talk I will discuss how genetic data can be used to study questions...
Speakers:
Andrew Grant
Sydney School of Public Health, The University of Sydney
Clinical risk prediction models enable predictions of a person’s risk of an outcome (e.g. mortality) given their observed characteristics. It is often of interest to use risk predictions to inform whether a person should initiate a particular treatment. However, when standard clinical prediction models are...
Suicide prevention is a global public health concern. Suicide risk prediction models can identify individuals for targeted interventions or be used in suicide prevention research to adjust for confounding. I will discuss the development and evaluation of general suicide risk prediction models using clinical data...
Speakers:
Susan Shortreed
Kaiser Permanente Washington Health Research Institute
As a PhD student, I was puzzled when my supervisor told me that techniques are not the important things in statistics. What he meant is that a statistician who is collaborating with an empirical scientist should focus on gaining a deep understanding of the substantive...
A typical paper on a prediction model (or diagnostic test or marker) presents some accuracy metrics (say, an AUC of 0.75 and a calibration plot that doesn’t look too bad) and then recommends that the model (or test or marker) can be used...
Many statisticians use simulation studies to evaluate the performance of statistical methods. The ‘performance’ of a method can be quantified in many different ways. For example, bias and coverage are frequently evaluated. In this talk, I will focus attention on simulation studies where the method/s...
Speakers:
Tim Morris
MRC Clinical Trials Unit at University College London
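The two performance measures named above, bias and coverage, can be computed from simulation output roughly as follows (a hedged sketch; the function name and the 1.96 normal-interval convention are illustrative assumptions, not the talk's material):

```python
import statistics

def performance(estimates, ses, truth, z=1.96):
    """Summarise a simulation study over repetitions.

    estimates : point estimate from each simulated data set
    ses       : corresponding standard errors
    truth     : the true parameter value used to generate the data
    Returns (bias, empirical SE, confidence-interval coverage).
    """
    bias = statistics.mean(estimates) - truth
    emp_se = statistics.stdev(estimates)
    cover = statistics.mean(
        1 if (est - z * se) <= truth <= (est + z * se) else 0
        for est, se in zip(estimates, ses)
    )
    return bias, emp_se, cover
```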
The highest paid players in the English Premier League earn over £20 million per year, which is over 100 times the earnings of the top referees. Experienced referees are in short supply and too few junior referees are being trained to fill the shortage....
Speakers:
Adrian Barnett
Faculty of Health, School of Public Health and Social Work, Queensland University of Technology
A key ingredient in sample size calculations for cluster randomized trials is the intracluster correlation coefficient: a quantity that describes the similarity of the outcomes from participants in the same cluster. When considering designs like the stepped wedge, where clusters provide measurements in multiple time...
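The standard inflation alluded to here, the design effect DE = 1 + (m - 1) x ICC for a parallel cluster randomised design with clusters of size m, can be sketched as follows (illustrative only; the stepped wedge designs discussed in the talk need a more elaborate correlation structure):

```python
import math

def cluster_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size by the design
    effect for a parallel cluster randomised trial."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * deff)
```

For example, with clusters of 21 participants and an ICC of 0.05, the required sample size doubles.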
A key element of epidemic decision-making is situational awareness — that is, knowing the current and potential future status of the epidemic. Outputs from mathematical and statistical models have provided enhanced situational awareness to governments throughout the course of the COVID-19 pandemic. Key analyses include...
An increasing number of trials are using adaptive design methodology, seeking to improve study efficiency or flexibility. To date though, all of the adaptive trials that have been held up as gold-standard practical examples have had individually randomised designs. Whilst there is a growing literature...
Speakers:
Michael Grayling
Population Health Sciences Institute, Faculty of Medical Sciences, Newcastle University
Abstract: In this presentation I will discuss 'separable effects' as an alternative approach to causal mediation analyses with a view to time-dependent settings. The basic idea was introduced by Robins & Richardson (2011) and refers to a way of elaborating our causal...
Speakers:
Vanessa Didelez
Professor of Statistics, Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
The objective of this paper is to provide a practical framework to guide researchers and research ethics committees through informed consent issues in cluster randomised trials. We explicate a three-step framework: (1) identify research participants, (2) identify the study element(s) to which participants are exposed,...
The ability to reflect the complexity of the underlying clinical context, characterized by potentially conflicting multiple endpoints, through clinically meaningful design and valid analysis remains the focus of research effort in the area of clinical trials. In this presentation, we review relevant pairwise comparisons-based approaches...
Disease mapping is a powerful approach to monitor and understand diseases and health conditions. As is often the case in statistics, the modelling may be straightforward, but data issues can be time-consuming. This talk will examine three different disease mapping projects that use routinely collected...
Speakers:
Susanna Cramb
Strategic Senior Research Fellow, Faculty of Health, Queensland University of Technology
Diagnostic and prognostic models could provide an evidence-based approach for efficient triage of suspected or infected patients, and for vaccine prioritization. Since the COVID-19 outbreak, over 200 models have been proposed, and the number keeps growing. We performed a rigorous systematic review and standardized...
Speakers:
Laure Wynants
Assistant Professor of Epidemiology, Maastricht University, Netherlands & Postdoctoral fellow, Research Foundation Flanders, Belgium
Abstract: Multilevel regression and poststratification methods have been used in an increasing range of different application areas, from political science to psychology to biomedical areas. However, as applications have grown increasingly diverse it has become increasingly apparent that it is not only static cross-sectional survey data that...
In this talk I will discuss three current research projects in stepped wedge designs (SWD). In the first we develop a power calculation formula for SWD with either normal or non-normal outcomes in the context of generalized linear mixed models by adopting the Laplace approximation...
Targeted Maximum Likelihood Estimation (TMLE) provides an approach for estimating the causal effects of longitudinal interventions with several attractive properties. TMLE uses estimates of both the propensity score (as used in inverse probability weighting) and of a series of outcome regressions (as can be used...
Speakers:
A/Prof Maya L Petersen
Chair, Division of Biostatistics University of California, Berkeley
In this talk, we will consider two innovative trial designs relevant to the modern field of mobile health (mHealth), namely, the sequential multiple-assignment randomized trial (SMART), and the more recently developed micro-randomized trial (MRT). Both designs involve sequential, within-individual randomizations, but are different in their...
We recently fit a series of models to account for uncertainty and variation in coronavirus tests. I will talk about the background of this problem and our analysis, and then we will expand into a general discussion of Bayesian workflow.
Andrew is a professor...
Speakers:
Prof Andrew Gelman
Department of Statistics and Department of Political Science, Columbia University, New York
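As a simple non-Bayesian point of comparison (not the models from the talk), the classical Rogan-Gladen estimator corrects an observed test-positive proportion for imperfect sensitivity and specificity:

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Corrected prevalence from an observed positivity rate,
    given the test's sensitivity and specificity."""
    p = (p_obs + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(p, 0.0), 1.0)  # truncate to the [0, 1] range
```

The hard truncation at 0 when observed positivity falls below the false-positive rate is one of the awkward edges that motivates a fully Bayesian treatment of test uncertainty.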
Individualised treatment, which relies on the ability to identify and prescribe subject-specific treatments, is an important application of data-driven decision-making.
Speakers:
Professor Howard Bondell
Professor of Statistical Data Science and ARC Future Fellow Co-Director, Melbourne Centre for Data Science
Phase III randomised controlled trials (RCTs) are typically long and expensive, restricting their use and resulting in long lead times to answer important clinical questions. Researchers and funders have recognised the need for trials to become more efficient, yet the overwhelming majority of trials continue...
Big data. Multi-omics. Machine learning. Technology continues to increase our ability to both produce and summarize data on human health and function. Taking as a goal the production of knowledge about disease etiology or mechanisms for treatment effects in intensely followed observational cohorts, what are...
Our recent breakthroughs and advances in culture independent techniques (whole genome shotgun metagenomics, 16S rRNA amplicon sequencing) have dramatically changed the way we can examine microbial communities. But does the hype of microbiome outweigh the potential of our understanding of this ‘second genome’? There are...
Speakers:
Dr Kim-Anh Le Cao
University of Melbourne, School of Mathematics and Statistics
General purpose MCMC software packages like WinBUGS, JAGS, and Stan enable users to define and fit almost any statistical model without having to worry about implementation details, and have driven significant progress in applied Bayesian modelling. However, these existing tools are largely unable to make...
How can we understand the effects of DNA variation on gene expression in single cells? What sort of studies should we conduct and what computational tools do we need for them to succeed? Following a short introduction to the field of single-cell biology I will...
There is a pressing need to integrate innovative methodologies to improve clinical trials in the setting of small sample population groups. The objective of this talk is to present research that produces methods of general applicability as developed through multidisciplinary and close collaborations among researchers...
Speakers:
Prof. Kim-Anh Do
M.D. Anderson Cancer Center, The University of Texas
Electronic Health Record (EHR) data are used increasingly for comparative effectiveness research (CER). This growing source of rich clinical information provides a cost and time-effective opportunity to conduct retrospective cohort studies with large, representative samples of the diverse patient populations found in real-world clinical settings....
Speakers:
Dr Romain Neugebauer
Northern California Division of Research, Kaiser Permanente
Multi-state models are increasingly being used to model complex disease profiles. By modelling transitions between disease states, accounting for competing events at each transition, we can gain a much richer understanding of patient trajectories and how risk factors impact over the entire disease pathway. In...
Speakers:
Assoc. Prof. Michael Crowther
Department of Health Sciences, University of Leicester
Whether associations found by observational studies are causal, and in which direction, are important issues with clinical and aetiological implications. We have developed ICE FALCON (Inference on Causation from Examination of Familial Confounding), an analytical approach to make inference about causation using data of twin...
Speakers:
Dr Shuai Li
Centre for Epidemiology and Biostatistics, University of Melbourne
This talk starts with the problem of Cox’s proportional hazards model estimation for a dementia dataset where informative right censoring is likely to exist. We adopt the method of maximum penalized likelihood, where dependence between censoring and event time is modelled by a copula function...
Much research to date has found that exposure to ambient fine particles is associated with asthma development and morbidity, but there is little work examining the effects of long-term exposure to coarse particles on respiratory health. Because of this research gap, it is difficult for...
Speakers:
Prof. Roger Peng
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health
We consider the situation where there is a known established regression model that can be used to predict an important outcome, Y, from a set of commonly available predictor variables X. There are many examples of this in the medical and epidemiologic literature. A new...
Speakers:
Prof. Jeremy Taylor
Department of Biostatistics, University of Michigan
In 2003 I presented a Melbourne biostatistics seminar on an application of a newly emerging method – marginal structural modelling – to address time-varying confounding in observational studies of the effect of health care interventions. Penetrating questions revealed deficiencies in my understanding of the approach....
Propensity score methods are commonly used to estimate causal effects in non-experimental studies. Existing propensity score methods assume that covariates are measured without error, but covariate measurement error is likely common.
This talk will discuss the implications of measurement error in the covariates on...
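A minimal inverse-probability-weighting sketch with an estimated propensity score (illustrative only, assuming a single discrete covariate; the error-prone covariates that are the talk's subject would enter through x):

```python
from collections import defaultdict

def ipw_mean_difference(x, a, y):
    """IPW treatment-effect estimate, with the propensity score
    P(A=1 | X=x) estimated within levels of a discrete covariate x.

    x : covariate level per subject
    a : treatment indicator (0/1)
    y : outcome
    """
    n_x = defaultdict(int)
    n_treated = defaultdict(int)
    for xi, ai in zip(x, a):
        n_x[xi] += 1
        n_treated[xi] += ai
    ps = {xi: n_treated[xi] / n_x[xi] for xi in n_x}
    # weighted means of the outcome under treatment and control
    n = len(y)
    mu1 = sum(ai * yi / ps[xi] for xi, ai, yi in zip(x, a, y)) / n
    mu0 = sum((1 - ai) * yi / (1 - ps[xi]) for xi, ai, yi in zip(x, a, y)) / n
    return mu1 - mu0
```

If x is measured with error, the estimated propensity scores are wrong and the weighted contrast is generally biased, which is the phenomenon the talk examines.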
Cluster randomised trials have been implemented increasingly over the past 30 years or so. More recently, design variants such as stepped wedge and crossover designs have been gaining favour (and flavour) due to emerging results that they have efficiency advantages over conventional parallel arm cluster...
Australian diagnostic pathology laboratories operate under strict regimes of quality monitoring testing, with programs run by both the National Association of Testing Authorities (NATA) and the Quality Assurance Program of the Royal College of Pathologists of Australasia (RCPA-QAP). In this talk I will report on...
Speakers:
Dr Alice Richardson
National Centre for Epidemiology & Population Health, Australian National University
The potential for overestimation of the treatment effect when a clinical trial stops early has been discussed extensively in the literature. However, there has been much less attention paid to the converse issue, namely, that sequentially monitored clinical trials which do not stop early tend...
Speakers:
Prof. Ian Marschner
Faculty of Science and Engineering, Macquarie University
Tyler VanderWeele provided two separate definitions of effect heterogeneity, which he referred to as “effect modification in distribution” and “effect modification in measure”. The standard epidemiological approach, based on effect modification in measure, is associated with a number of well-described shortcomings, and no consensus exists...
As medical research continues to push into new frontiers of discovery and personalized patient care, it is imperative that clinical trial designs evolve to address the forthcoming challenges. One key innovation is the use of adaptive clinical trial designs. Such designs allow certain trial design...
In studies with multiple incomplete variables, it is widely understood that if the data are missing at random (MAR) then unbiased estimation is possible with appropriate methods. While the need to assess the plausibility of this assumption has been emphasised, the practical difficulty of this...
The ACTN3 gene, known as ‘the gene for speed’, encodes a protein expressed in fast-twitch muscle fibres. About 20% of the population do not express the protein due to a loss-of-function mutation. Presence of one or two copies of this mutation is associated with reduced...
This seminar is presented in conjunction with the School of Mathematics and Statistics, University of Melbourne
The European Randomised Study of Prostate Cancer Screening has shown a substantial reduction in prostate cancer mortality, but a similar trial in the United States showed essentially...
Speakers:
Stephen Walter
Clinical Epidemiology and Biostatistics, McMaster University
In the development of drugs or therapeutic interventions, the use of trials with different designs is frequent. In particular parallel group and cross-over trials are often used. When there is a need to pool the results of such studies into a meta-analysis, combining results from...
Sanjoy Paul is Professor of Clinical Biostatistics, Epidemiology and Health Sciences Research, University of Melbourne and Director of Melbourne EpiCentre, a collaborative research centre within the Royal Melbourne Hospital, Melbourne, Australia. Prior to establishing his clinical trials research group in Australia, he created the Statistics...
Speakers:
Professor Sanjoy Paul
Director, Melbourne EpiCentre, Royal Melbourne Hospital
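Once effects from different designs have been placed on a common scale, the usual fixed-effect inverse-variance pooling can be sketched as follows (illustrative background; actually combining cross-over with parallel-group results requires the adjustments discussed in the talk):

```python
import math

def inverse_variance_pool(effects, ses):
    """Fixed-effect meta-analysis: pooled estimate and its SE,
    weighting each study effect by the inverse of its variance."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se
```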
In demography the actuarial estimator of the survival function has been used for centuries, typically with one year time intervals. This estimator was also the default in medical science and epidemiology until the advent of computers, when the Kaplan‐Meier estimator and later the Cox model...
In randomized clinical trials with baseline variables that are prognostic for the primary outcome, there is potential to improve precision and reduce sample size by appropriately adjusting for these variables. A major challenge is that there are multiple statistical methods to adjust for baseline variables,...
In this talk Professor Allore will present the evolution of longitudinal modeling in the field of aging using ordinal data of functional disability. The presentation will demonstrate the progression from time to event models to multistate models and recurrent events, latent trajectory and growth mixture...
PLEASE NOTE: This is very preliminary work, as yet unpublished.
The volume of high-throughput data makes it a daunting prospect to plot, but relying primarily on false discovery rate adjusted p-values is not enough. Making plots of the data is essential...
Antenatal mental disorders are often unrecognized, despite frequent contact with healthcare professionals throughout pregnancy. The UK National Institute for Clinical Excellence recommends maternity professionals use the two Whooley questions to identify depressive disorders – the most common antenatal mental disorder - in the perinatal period....
The number of births (parity) clearly bears a relationship to a woman’s age. A negative binomial regression model for parity is developed, in which mean parity is modelled with two components relating to age. The first is a parametric growth curve which operates during the...
Modelling epidemics has become a commonplace activity over the past 40 years, since the publication of Professors Anderson and May's first pathbreaking papers.
Analytical techniques adapted from the hard sciences such as physics have given way to a field dominated by computational simulation models...
Heritability, as a measure of (relative) variation in genetic causes, has been well-defined for a continuous trait for nearly a century, with wise caveats from its inventor, R.A. Fisher. Extension of this concept to a binary trait by invoking an unmeasured ‘liability’ construct...
Speakers:
John Hopper
Melbourne School of Population and Global Health, The University of Melbourne
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error...
Speakers:
Benoit Liquet
Université de Pau et des Pays de l’Adour, LMAP; ARC Centre of Excellence for Mathematical and Statistical Frontiers, Queensland University of Technology (QUT)
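One standard family-wise error rate procedure, Holm's step-down method, can be sketched as follows (illustrative background on FWER control, not the talk's proposal for the at-least-r-of-m setting):

```python
def holm(pvalues, alpha=0.05):
    """Holm step-down procedure controlling the family-wise error rate.
    Returns one boolean per hypothesis: rejected or not."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for k, i in enumerate(order):
        # compare the k-th smallest p-value against alpha / (m - k)
        if pvalues[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one fails, all larger p-values also fail
    return reject
```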
Understanding treatment effect heterogeneity is an important aspect of randomised trials, and process variables describing the intervention content are crucial components of this. Frequently these variables can only be measured in intervention groups.
Principal stratification, whereby control group participants are assigned to the latent...
Speakers:
Richard Emsley
Centre for Biostatistics, Institute of Population Health, The University of Manchester, Manchester Academic Health Science Centre
In clinical trials one traditionally models the effect of treatment on the mean response. The underlying assumption is that treatment affects the response distribution through a mean location shift on a suitable scale, with other aspects of the distribution (shape/dispersion/variance) remaining the same. This work...
Speakers:
Stephane Heritier
Monash University, School of Public Health and Preventive Medicine
A lot of health data is ordinal: even if the measurement process is interval or ratio level, a 10mmHg blood pressure difference doesn't mean the same thing at different starting points. Tests that use only the ordinal structure of the data would seem to be...
The ROC curve is a popular graphical method used to study the diagnostic capacity of biomarkers. In its simplest form it plots true-positive rates against false-positive rates. Both practical and theoretical aspects of the properties of ROC curves have been extensively studied. Conventionally, it is...
Speakers:
Pablo Martínez-Camblor
Hospital Universitario Central de Asturias (HUCA), Oviedo, Asturias, Spain
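The basic construction, plotting true-positive against false-positive rates and computing the area under the curve via the rank (Mann-Whitney) statistic, can be sketched as follows (the conventional case only, not the extensions considered in the talk):

```python
def roc_points(scores, labels):
    """(FPR, TPR) points obtained by sweeping the score threshold
    from the highest observed value down."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= thr and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= thr and l == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def auc(scores, labels):
    """Area under the ROC curve: the probability that a random
    diseased subject scores higher than a random healthy one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```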
Stephen will present findings from the People of the British Isles project, recently published, with Stephen as first author, as a cover story in Nature, one of the world’s leading scientific journals. In particular, he will show that using newly developed statistical techniques one can...
In the last 10-15 years, there has been an explosion of prediction models developed to predict risk of various diseases. The main objective of a risk prediction model is to predict the absolute risk of a disease. For this reason, cohort study design has been...
Speakers:
Agus Salim
University of Melbourne, Melbourne School of Population and Global Health
Dynamic treatment regimens (DTRs) are sequential decision rules that specify how to adapt the type, dosage and timing of treatment according to an individual patient’s time-varying characteristics. DTRs offer a framework for operationalizing the multistage decision making in personalized clinical practice, thus providing an opportunity...
Speakers:
Bibhas Chakraborty
Centre for Quantitative Medicine, Duke-NUS Graduate Medical School, Singapore
In 2010, Jian Yang and Peter Visscher showed how by applying mixed models to genome-wide association study data, it was possible to estimate how much of a trait's heritability is explained by all (common) genetic variants, and to partition this by, say, chromosome or variant...
Rates of cancer are increased following low-dose radiation from computed tomography (CT) scans used for medical diagnosis. However, when the lag period between a CT scan and diagnosis of cancer is very short, it is likely that the cancer was not caused by the radiation,...
Speakers:
John Mathews
Melbourne School of Population and Global Health, University of Melbourne
Problem 1. How to evaluate the probability that bones found in a carpark are from a specified dead king: Although the evidence for bones found under a Leicester UK carpark to have been those of King Richard III seemed extremely strong even before...
Speakers:
David Balding
Schools of BioSciences and of Mathematics & Statistics, University of Melbourne
Data from individually-matched case-control studies, cohorts of twins and other paired designs provide a powerful resource that can be used to estimate the magnitude of exposure-outcome associations free from confounding by shared factors. For binary outcomes, these data are typically analysed using conditional logistic regression (CLR),...
Speakers:
Lyle Gurrin
University of Melbourne, Melbourne School of Population and Global Health
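In the simplest case of 1:1 matched pairs with a binary exposure, conditional logistic regression reduces to an odds ratio based on the discordant pairs; a sketch of that textbook reduction (illustrative background, not the talk's new material):

```python
import math

def matched_pair_or(pairs):
    """Conditional ML odds ratio for 1:1 matched binary exposure data.

    pairs : list of (case_exposed, control_exposed) 0/1 tuples.
    Only discordant pairs carry information; OR = b / c, where b and c
    count the two discordant configurations.
    """
    b = sum(1 for ce, co in pairs if ce == 1 and co == 0)
    c = sum(1 for ce, co in pairs if ce == 0 and co == 1)
    or_hat = b / c
    se_log_or = math.sqrt(1 / b + 1 / c)  # SE on the log-OR scale
    return or_hat, se_log_or
```

Concordant pairs drop out entirely because the shared matching factors are conditioned away, which is exactly the "free from confounding by shared factors" property the abstract describes.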
This talk describes two approaches used in economics to identify causal effects, and illustrates their use with applications drawn from health economics. The first approach described is the difference in difference estimator. This estimator is used widely in the economics literature that seeks to identify...
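The difference in difference estimator named above can be sketched from four group means (an illustrative toy version; applied work typically embeds this contrast in a regression with covariates):

```python
def diff_in_diff(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Difference-in-differences from four lists of outcomes:
    (treated post - treated pre) - (control post - control pre).
    The control group's change stands in for the counterfactual
    trend the treated group would have followed."""
    def mean(v):
        return sum(v) / len(v)
    return (mean(y_treat_post) - mean(y_treat_pre)) - (
        mean(y_ctrl_post) - mean(y_ctrl_pre))
```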
Spatial epidemiology is the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioral, socioeconomic, genetic, and infectious risk factors. Common familiar regression models are not sufficient to analyse such data, as they do not account for inherent spatial correlation in...
Speakers:
Arul Earnest
Monash University, School of Public Health and Preventive Medicine
Conservation is a key indicator of function in genomes, and can potentially be used to discover novel functional non‐protein‐coding RNAs and regulatory sequences. However, recent investigations have demonstrated that a simple dichotomy between conserved and non‐conserved sequence is too naïve a distinction to reflect the...
Speakers:
Jonathan Keith
School of Mathematical Sciences, Monash University
In this seminar we present the distribution of standardized difference of means (SMD) estimators under the assumption that the data are sampled from a normal distribution which includes a normally distributed random effect component. This distribution, a rescaled non-central t which is not conditional
Speakers:
Luke Prendergast
Department of Mathematics and Statistics, La Trobe University
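For reference, the usual sample SMD with a pooled standard deviation (the estimator whose sampling distribution the talk derives) can be computed as:

```python
import math
import statistics

def smd(group1, group2):
    """Standardized difference of means: the mean difference divided
    by the pooled sample standard deviation (Cohen's d)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```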
Recently there has been a number of drug safety concerns involving, for example, cyclooxygenase-2 (COX2) inhibitors such as Vioxx® (rofecoxib) and Celebrex® (celecoxib), erythropoiesis-stimulating agents such as Aranesp® (darbepoetin alfa), Epogen® (epoetin alfa) and Procrit® (epoetin alfa), and anti-diabetic drugs such as Avandia® (rosiglitazone maleate)....
Trajectory modelling provides a set of tools to analyse individual longitudinal data with a goal of yielding clusters of people who share 'similar' structure in the evolution of a variable of interest over time. In addition, the methods allow identification of covariates associated with separate...
Speakers:
Nicholas Jewell
Departments of Statistics & School of Public Health (Biostatistics), University of California, Berkeley
Variable selection is a common problem in regression modelling with a myriad of applications. This talk will present a new feature ranking algorithm (DEPTH) for variable selection in parametric regression based on permutation statistics and stability selection. DEPTH is: (i) applicable to any parametric regression...
Speakers:
Enes Makalic
Centre for Epidemiology and Biostatistics, University of Melbourne
Multilevel data are often incomplete, and may be missing either at individual level or at cluster level. For example, in an observational meta-analysis of individual participant data exploring the association between carotid intima media thickness and subsequent risk of cardiovascular events, some relevant confounders were...
Modern methods of psychometric analysis are model based. They can provide greater insight into the performance of items and scales than conventional, 'classical' approaches.
This talk will introduce Item Response Theory (IRT) and variants such as the Rasch model. The ways in which...
Speakers:
Andrew Mackinnon
Centre for Youth Mental Health, University of Melbourne
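The Rasch model mentioned above specifies the probability of endorsing an item as a logistic function of the gap between person ability and item difficulty; a minimal sketch:

```python
import math

def rasch_prob(theta, difficulty):
    """Rasch (one-parameter logistic) model: probability that a person
    with ability theta endorses an item of the given difficulty."""
    return 1 / (1 + math.exp(-(theta - difficulty)))
```

When ability equals difficulty the probability is exactly 0.5, and it rises monotonically as ability exceeds difficulty.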
Half of the world’s population is exposed to malaria, and with no vaccine for this disease, anti-malarial therapies are the first-line defence against malaria. Mechanistic within host models that characterize the relationship between the anti-malarial drug concentration and parasite-time profile are a valuable tool in...
Speakers:
Julie Simpson
University of Melbourne, Melbourne School of Population and Global Health
Once the sole preserve of psychologists and educationalists, psychometric scales are now used in a wide range of research to measure non-physical outcomes such as quality of life, preferences, satisfaction, attitudes and behaviour. However, the theoretical and practical underpinnings of these instruments are poorly understood....
Speakers:
Andrew Mackinnon
Centre for Youth Mental Health, University of Melbourne
Cluster randomised crossover (CRXO) designs are not uncommon in public health research, and bounded discrete endpoints, such as pain scales, are common. The justification of sample size (i.e. the number of clusters, periods and individuals) is generally based on power calculations that assume continuous-scale, normally-distributed primary endpoints....
Speakers:
John Reynolds
Monash University, School of Public Health and Preventive Medicine
Prof Ryan will discuss the use of Bayesian model averaging techniques for addressing uncertainty in how to adjust for covariates in the context of environmental risk assessment. The talk is motivated by a German study of the effects of prenatal exposure to PCBs on...
Like many other labs around the world, we are using next-generation sequencing data to identify disease causing variants, mainly for large effect size variants, such as those observed in single gene, or Mendelian disorders.
Whilst this approach has delivered an avalanche of genes it...
In joint work with Sue Finch, I draw on the seminal work of Edward Tufte and Bill Cleveland on excellence in statistical graphics to develop five simple principles for producing quality graphs. Our focus is on static graphics showing data, data summaries and inferences. We...
Speakers:
Ian Gordon
Statistical Consulting Centre, The University of Melbourne
Stroke is one of the three most common causes of death around the world and the sixth most common cause of disability worldwide. In this presentation we reflect on many facets of statistical and OR modelling for decision support in stroke care.
Large non-experimental studies are increasingly used to evaluate the benefits and harms of medical interventions. One of the principal challenges in such studies is confounding - systematic differences between patients exposed to an intervention of interest versus the chosen comparator.
Recently, instrumental variable approaches...
Speakers:
Alan Brookhart
Department of Epidemiology, UNC Gillings School of Global Public Health, UNC - Chapel Hill, USA
Decisions about whether to subsidise a new pharmaceutical should ideally be informed by evidence that rates highly on two criteria: quality and relevance.
Typically the available evidence is of high quality (internally valid) because high-quality randomised controlled trials (RCTs) are mandated for marketing approval....
Disease risk prediction tools, often based on statistical regression models, are used for a variety of research and clinical purposes. They have a role in decision making for clinical treatment of patients, they can aid communication among patients, carers and treating health professionals, and they...
Speakers:
Rory Wolfe
Monash University, School of Public Health and Preventive Medicine
In the past 15 years there has been an explosion of analytical methodological development based on counterfactuals (potential outcomes) and utilisation of causal diagrams to understand and inform non-randomised comparisons in epidemiology and related areas.
These largely stem from the seminal works of Rubin...
Speakers:
Andrew Forbes
Monash University, School of Public Health and Preventive Medicine
We describe selected artistic and statistical depictions of the force of mortality [hazard or mortality rate], a concept that has long pre-occupied actuaries, demographers and statisticians. We provide a more graphic form for the force of mortality function that makes the relationship between its constituents...
Speakers:
James Hanley
Department of Epidemiology & Biostatistics, McGill University
Patients with localized prostate cancer are frequently treated with radiation therapy. Following treatment, prostate-specific antigen (PSA) measurements are typically obtained at regular intervals for the purpose of monitoring and obtaining an early indication of disease recurrence.