SWE-REG Spring Meeting 2021

March 24, 9.00-15.00

“Machine learning methods in the medical and social sciences”  

Program with abstracts

Introduction 

9:00 – Opening remarks from Jonas Björk, Maria Brandén and Poorna Anand (moderator)

15-minute oral presentations

9:00

Title: Statistical modeling: the three cultures - Adel Daoud, Institute for Analytical Sociology, Linköping University & Department of Sociology, University of Gothenburg.

Abstract: 

Two decades ago, Leo Breiman (2001) identified two cultures for statistical modeling. The data modeling culture (DMC) refers roughly to practices aiming to conduct statistical inference, 𝛽, on one or several quantities of interest. The algorithmic modeling culture (AMC) refers to practices defining a machine-learning (ML) procedure that generates accurate predictions, Y, about an event of interest (outcome). Breiman argued that statisticians should give more attention to AMC than to DMC, because AMC provides novel procedures for prediction. While twenty years later, DMC has perhaps lost some of its dominant role in statistics because of the data-science revolution, we observe that this culture is still the modus operandi—the leading practice of a group—in the natural and social sciences. DMC is the modus operandi because of the influence of the established scientific method, called the hypothetico-deductive scientific method. Despite the incompatibilities of AMC with the hypothetico-deductive scientific method, among some research groups, AMC and DMC cultures mix intensely. We argue that this mixing has formed a fertile spawning pool for a mutated culture: a hybrid modeling culture (HMC) where prediction and inference have fused into new procedures where they reinforce one another. This article identifies key characteristics of HMC, thereby facilitating the scientific endeavor and fueling the evolution of statistical cultures towards better practices. By better, we mean increasingly reliable, valid, and efficient statistical practices in analyzing causal relationships. In combining inference and prediction, the result of HMC is that the distinction between 𝑌 and 𝛽—taken to its limit—melts away. We qualify our “melting-away” argument by describing three HMC practices, where each practice captures an aspect of the scientific cycle: ML for causal inference, ML for data acquisition, and ML for theory prediction.

9:20 

Title: Demonstrating the benefits of transfer learning when grading diabetic retinopathy in fundus photographs with deep learning – Adam Hulman, Steno Diabetes Center Aarhus, Aarhus University Hospital.

Abstract: 

Background: Training deep learning models requires large datasets. Transfer learning, i.e. recycling a pre-trained neural network by replacing the last layer and fine-tuning the network's parameters, might make deep learning feasible also in smaller epidemiological studies. This analysis aims to quantify the data needs of transfer learning compared to training a model from scratch.

Methods: The study was based on a dataset of 3662 fundus photographs (with diabetic retinopathy graded from 0 to 4) and the ResNet18 convolutional neural network architecture. The pre-trained network was fine-tuned on 80%, 64%, 48%, 32%, 16% and 8% of the images, while the rest were used for evaluation. The models' performance (cross entropy, accuracy and quadratic kappa) was compared with that of the ResNet18 architecture trained from scratch on 80% of the images (reference).
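
A minimal sketch of this fine-tuning setup is given below, assuming PyTorch/torchvision (recent versions); the data loader and the training hyperparameters are placeholders rather than the authors' actual implementation.

    # Hedged sketch: load an ImageNet-pretrained ResNet18, replace its final layer with a
    # 5-class head (retinopathy grades 0-4) and fine-tune all parameters.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 5)   # transfer learning: only the head is new

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def fine_tune(model, train_loader, epochs=10):
        """Fine-tune the whole network on a (sub)sample of fundus photographs."""
        model.train()
        for _ in range(epochs):
            for images, labels in train_loader:     # train_loader: hypothetical DataLoader
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()

Evaluation of agreement could, for instance, use scikit-learn's cohen_kappa_score with weights="quadratic" for the quadratically weighted kappa.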

Results: Transfer learning outperformed the reference model when trained on the same dataset (80%), with kappa=0.89 vs 0.74, accuracy=82% vs 72%, and cross entropy=0.62 vs 0.79. The approach needed only 8% of the dataset to achieve accuracy comparable to, and agreement (kappa) better than, the reference model. Slightly more data (16-32%) was needed to reach the same cross entropy as the reference model.

Conclusions: Transfer learning needed 10x fewer images to achieve the same accuracy and agreement as the reference model trained from scratch. Although transfer learning is most often used in computer vision and natural language processing, it clearly has the potential to impact epidemiological research, especially if the benefits are of similar magnitude when applied to tabular data.

9:35 

 Title: Rapid rule out of heart attacks in the emergency department using machine learning - Anders Björkelund, Department of Astronomy and Theoretical Physics, Lund University. 

Abstract:

Background: Chest pain is one of the most common chief complaints among patients at the emergency department (ED). Although chest pain may be an indicator for acute myocardial infarction (AMI), these cases constitute a minority and only a fraction of these patients require admission. Ruling out AMI swiftly can therefore be beneficial as it can decrease the load on ED units as well as reduce the time a patient must spend in the ED before discharge. 

Method: Using records from roughly 9,000 ED patients from two hospitals in Skåne, Sweden, collected during 2013 and 2014, we compare several machine learning models based on both logistic regression and artificial neural networks. The inputs vary between the models, including sex and age, blood markers, and ECG signal data. The models are adapted to identify patients without AMI such that very few mistakes are made (NPV>99.5%, Sensitivity>99%). The models are compared with a rule-based method relying on the biomarker troponin, which is the current practice recommended by the European Society of Cardiology. 
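
A schematic of how such a rule-out threshold can be chosen is sketched below, assuming scikit-learn; X and y are placeholder arrays for the ED inputs and the AMI label, and the threshold grid is illustrative rather than the authors' procedure.

    # Hedged sketch: fit a classifier and pick the most permissive rule-out threshold that
    # still satisfies NPV > 99.5% and sensitivity > 99% on held-out data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    p_ami = clf.predict_proba(X_val)[:, 1]

    def rule_out_stats(threshold):
        """Treat p_ami < threshold as 'rule out'; return NPV, sensitivity and rule-out fraction."""
        ruled_out = p_ami < threshold
        npv = np.mean(y_val[ruled_out] == 0) if ruled_out.any() else np.nan
        sensitivity = np.mean(p_ami[y_val == 1] >= threshold)
        return npv, sensitivity, ruled_out.mean()

    candidates = [(t, *rule_out_stats(t)) for t in np.linspace(0.001, 0.2, 200)]
    feasible = [c for c in candidates if c[1] > 0.995 and c[2] > 0.99]
    best = max(feasible, key=lambda c: c[3]) if feasible else None  # largest rule-out fraction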

Results: While the rule-based method is able to rule out 47% of patients, it fails to obtain the mandated levels of NPV and sensitivity. The corresponding fractions for the machine learning models range from 39% to 55% while maintaining the safety requirements. The trend for the machine learning models is that the more inputs provided, the better the performance.

Conclusion: The empirical results indicate that machine learning methods can be used to improve over current ED practice. This is a promising result warranting further validation and research. 

9:50

Title: Fast estimation of random effect models using quasi-Newton methods with variational approximations - Benjamin Christoffersen, RBR, Karolinska Institutet

Abstract:

Background: Independence can be too strong an assumption in many register-based applications, where dependence may be present, e.g., because of repeated measurements from the same individual, shared genes in families, shared environment, or receiving treatment at the same hospital. Random effect models are often used for such grouped data to account for heterogeneity, but the marginal likelihood is often intractable. Variational approximation is a promising method for approximating the marginal likelihood that is often used in machine learning applications but less often in statistical analysis. However, joint optimization over the model parameters and the variational parameters can easily involve a large number of variables, which makes generic non-linear optimization methods slow.

Methods: We exploit the fact that the lower bound from the variational approximation has a special structure for grouped data. In particular, we use the fact that the lower bound is partially separable in a generic quasi-Newton procedure we have implemented. As an example, we apply our software package to Gaussian variational approximations for generalized linear mixed models.
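
For grouped data with random effects u_1, ..., u_n, one way to write the lower bound and its separable structure (a schematic of the general idea, not necessarily the exact expression used in the software) is

    \log L(\theta) = \sum_{i=1}^{n} \log \int p(y_i \mid u_i, \theta)\, p(u_i \mid \theta)\, du_i
    \;\ge\; \sum_{i=1}^{n} \mathrm{E}_{q_{\lambda_i}}\!\left[ \log p(y_i, u_i \mid \theta) - \log q_{\lambda_i}(u_i) \right],

where q_{\lambda_i} is the variational distribution for group i. Each term of the lower bound depends only on (\theta, \lambda_i), so the objective is partially separable and the quasi-Newton Hessian approximation can be built block by block rather than over all variables jointly.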

Results and Conclusions: Our method can estimate generalized linear mixed models using a variational approximation, in some cases more than ten times faster than the Laplace approximation in the lme4 package, while having comparable bias. Our examples show that, once the structure of the lower bound is exploited, Gaussian variational approximations are a fast means of estimating generalized linear mixed models compared with approximations of similar bias.

10:05  Coffee break 20 min

10:25

Title: A Bayesian Network analysis of the interaction between smoking and social class in relation to risk of asthma and respiratory outcomes in adults - Rani Basna, Krefting Research Centre, University of Gothenburg.

Abstract: The risk of asthma increased greatly during the second half of the last century. Asthma remains a disease with heterogeneous underlying phenotypes, but several environmental and demographic risk factors have been identified, among them the independent effects of smoking and socioeconomic status. However, given the known association between socioeconomic status and smoking, the potential interaction between these two factors in relation to the risk of asthma is an important public health question. Elucidating such interactions is not statistically trivial. However, emerging Bayesian Network analysis is proving valuable in exploring complex data structures and offering new insights into the analysis of health-related data. We applied Bayesian Network analysis to address the question of the interaction of smoking and social class in relation to asthma and respiratory symptoms, based on data derived from the West Sweden Asthma Study and the Obstructive Lung Disease in Northern Sweden studies. In this talk, I will discuss the step-by-step approach to implementing the analysis.

Bayesian networks are probabilistic graphical models based on directed acyclic graphs (DAGs). They learn the underlying (in)dependency structure of the variables and represent it as a network with directed connections.

We learned the network structure of the variables, including the smoking and socioeconomic variables, using a data-driven hill-climbing algorithm with the bic-cg score (the corresponding Bayesian Information Criterion score for mixed datasets). We used bootstrap aggregation and model averaging to reduce the number of arcs that are incorrectly included in the network structure. We then fitted the Bayesian network model to learn the related parameters. Finally, we used approximate inference to estimate conditional probabilities by drawing samples of realizations of the modelled variables under specific conditions. We validated our model by (1) cross-validation and (2) simulating new data and comparing its statistical characteristics with the original data.
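
A compact sketch of the structure-learning and model-averaging steps is given below. It assumes Python and the pgmpy package with a plain BIC score, as a stand-in for the hill-climbing search with the bic-cg score (available, e.g., in the R package bnlearn) described above; df is a placeholder pandas DataFrame of the study variables.

    # Hedged sketch: hill-climbing structure search, bootstrap model averaging, parameter fitting.
    from collections import Counter
    from pgmpy.estimators import HillClimbSearch, BicScore
    from pgmpy.models import BayesianNetwork

    def learn_arcs(data):
        """One hill-climbing search scored by BIC; returns the set of directed arcs."""
        return set(HillClimbSearch(data).estimate(scoring_method=BicScore(data)).edges())

    # Bootstrap aggregation / model averaging: keep arcs that recur in at least half the resamples.
    B = 100
    arc_counts = Counter()
    for _ in range(B):
        arc_counts.update(learn_arcs(df.sample(frac=1.0, replace=True)))
    averaged_arcs = [arc for arc, n in arc_counts.items() if n / B >= 0.5]

    bn = BayesianNetwork(averaged_arcs)
    bn.fit(df)   # learn the conditional probability distributions (parameters)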

10:40

Title: Improving machine learning 30-day mortality prediction by discounting surprising deaths - Ellen Tolestam Heyman, Department of Clinical Sciences Lund, Faculty of Medicine, Lund University. 

Abstract:

Background: Machine learning (ML) is an emerging tool for predicting the need for end-of-life discussions and palliative care, using mortality as a proxy. However, deaths unrelated to the emergency department (ED) visit might cause prediction error. Therefore, we sought to develop an ML algorithm that predicts unsurprising deaths within 30 days.

Methods: In this retrospective registry study we examined all ED attendances within the Swedish region of Halland in 2015 and 2016. All registered deaths within 30 days after ED discharge were classified as either “surprising” or “unsurprising” by an adjudicating committee of three senior specialists in emergency medicine. ML algorithms were developed using logistic regression (LR), random forest (RF) and support vector machine (SVM) classifiers.
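
The comparison could be set up along the following lines, assuming scikit-learn; X and y_unsurprising are placeholders for the ED features and the committee-adjudicated outcome, not the authors' variable names.

    # Hedged sketch: cross-validated ROC-AUC for the three classifier families mentioned above.
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    models = {
        "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "RF": RandomForestClassifier(n_estimators=500, random_state=0),
        "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    }
    for name, model in models.items():
        aucs = cross_val_score(model, X, y_unsurprising, cv=5, scoring="roc_auc")
        print(f"{name}: ROC-AUC {aucs.mean():.3f} (SD {aucs.std():.3f})")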

Results: Of all 30-day deaths (N=148), 76% (N=113) did not surprise the committee. Among these unsurprising cases, the most common diseases were cancer, multidisease/frailty and dementia. 20% of the patients were enrolled in specialist palliative care, while 69% had no health record note regarding terminal disease. Using LR and RF, the ROC-AUC for unsurprising mortality was 0.950 (SD 0.008) and 0.944 (SD 0.007), respectively, compared with 0.924 (SD 0.012) and 0.922 (SD 0.009) for all mortality.

Conclusions: In patients discharged to home from the ED, three quarters of all 30-day mortality did not surprise the adjudicating committee. When only unsurprising deaths were included, ML mortality prediction improved significantly.

10:55

Title: Supervised machine learning and population register data in studying predictors for long-term unemployment in early adulthood - Sanni Kuikka, Department of Sociology, Stockholm University & Institute for Analytical Sociology, Linköping University. 

Abstract

Background: Unemployment at a young age is a negative life event that has been found to have scarring effects on future life outcomes. Paths to unemployment are complex, and the mechanisms can involve individual and intergenerational factors as well as macro conditions. Understanding precursors of long-term unemployment in early adulthood is important in order to target policy interventions at critical junctures in the life course.

Methods: A data-driven, exploratory approach was used to study individual- and family-level factors during ages 0-24 that predict long-term unemployment at ages 25-30. Supervised machine learning algorithms were applied to understand associations in longitudinal, individual-level administrative data from the Finnish birth cohort of 1987. A set of predictors was used to train and test classification tree models (CART and random forest), aiming to predict unemployment/employment as well as to reveal higher-order interactions of variables that together predict the outcome.
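
As an illustration of this setup (a sketch assuming scikit-learn; X is a placeholder pandas DataFrame of register-based predictors and y_unemployed the long-term-unemployment indicator):

    # Hedged sketch: a single classification tree (CART) and a random forest, with class
    # weighting to handle the imbalance between unemployed and employed.
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    # A shallow, class-weighted tree keeps the splits (higher-order interactions) readable
    # while trading some misclassified employed for recall on the unemployed.
    cart = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
    cart.fit(X, y_unemployed)

    # Random forest: same weighting; impurity-based importances give a first view of which
    # predictors (e.g. GPA, parental education and income) drive the predictions.
    rf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
    rf.fit(X, y_unemployed)
    top10 = sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1])[:10]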

Results: The results revealed combinations of individual- and family-level predictors important in forming trajectories towards unemployment, e.g. GPA lower than ~7.5, low education level, late start of work history, and low parental education and income levels. Both models produced good prediction rates for the unemployed: CART did so at the cost of misclassifying many employed, while random forest balanced the prediction rates for both groups while keeping them rather high. The study shows promise for future life course research using machine learning approaches and population register data.

 11:10

Title: Serial ECG analysis using neural networks - Axel Nyström, Department of Laboratory Medicine, Lund University

 Abstract:

Background: Although machine learning has become a popular and effective tool for automating the analysis of ECG records, most existing research has focused on using a single ECG record to make a prediction. But because of the significant variability in the signal over time and between people, it is common practice for physicians to compare new ECG measurements with old ones from the same patient, whenever those are available. This study aims to explore and evaluate different machine learning models, in particular neural network architectures, that use such serial ECG records, with the hypothesis that the old ECG records can help improve relevant predictions.

Method: Our dataset (ESC Trop) contains serial 10 s 12-lead ECG records from 19,444 distinct, consecutive patients who presented with chest pain to one of six emergency departments in Skåne, Sweden between 2017 and 2018, and for whom at least one additional, older ECG record was available. The median number of available ECG records per patient is 10.

We create a baseline model that uses a single ECG record to predict a handful of patient outcomes, including acute myocardial infarction (AMI) during the visit and a superset of diagnoses, major adverse cardiac events (MACE), within 30 days. We then implement new models of increasing complexity that use serial ECGs, and compare them with the baseline.
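
One plausible way to combine a current ECG with older ones is sketched below, purely for illustration (assumption: PyTorch; the architectures actually under study may differ). A shared 1-D convolutional encoder embeds each 12-lead record, the embeddings of prior records are mean-pooled, and the concatenation is fed to a prediction head.

    import torch
    import torch.nn as nn

    class ECGEncoder(nn.Module):
        """Shared encoder mapping one 12-lead ECG of shape (batch, 12, samples) to an embedding."""
        def __init__(self, emb_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(12, 32, kernel_size=15, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, emb_dim),
            )

        def forward(self, x):
            return self.net(x)

    class SerialECGModel(nn.Module):
        """Combines the current ECG with mean-pooled embeddings of prior ECGs."""
        def __init__(self, emb_dim=64, n_outcomes=2):
            super().__init__()
            self.encoder = ECGEncoder(emb_dim)
            self.head = nn.Linear(2 * emb_dim, n_outcomes)

        def forward(self, current, priors):          # priors: (batch, n_prior, 12, samples)
            cur = self.encoder(current)
            b, n, c, t = priors.shape
            prior = self.encoder(priors.reshape(b * n, c, t)).reshape(b, n, -1).mean(dim=1)
            return self.head(torch.cat([cur, prior], dim=1))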

Results: This research is ongoing and we only have preliminary results for the baseline model predicting MACE. In this case we obtain an AUC of 0.78.

11:25 – 12:30  Lunch 

12:30 – 13:30 Parallel poster sessions 

Poster session 1

Title: Natural language processing for register-based research: novel descriptions and improved causal identification – Martin Arvidsson, Institute for Analytical Sociology, Linköping University. 

Abstract: 

Increased availability of large-scale digitized corpora has sparked a growing interest within the social sciences in natural language processing (NLP) methods. Two families of methods in particular have received special attention: topic models and word embedding models, which have enabled social scientists to extract novel and interpretable patterns from high-dimensional textual data. In this paper, we pose the question: could the successful application of these methods be replicated for other kinds of high-dimensional data that interest social scientists? Although developed specifically for textual data, the key assumptions underlying both topic models and word embedding models generalize far beyond it. By replacing ‘word’ with ‘individual’ in the foundational distributional hypothesis, for example, one arrives at something very reminiscent of structural equivalence, a central concept in social network theory. More generally, and in practice, these methods are of potential use for any data in which relational contexts can be defined. One example of a data source that often meets this criterion is register data, which usually contains information on many kinds of social contexts: individuals being embedded in schools, workplaces, neighborhoods, and so on. Conceptualizing individuals as words and social contexts as documents, we investigate how topic models and word embedding models can be used to infer latent structures for the purposes of (a) extracting novel descriptions of social change, and (b) proxying unobserved confounding to improve causal identification.
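
As a toy illustration of the individuals-as-words idea (a sketch assuming gensim's Word2Vec; workplace_members is a hypothetical mapping from each workplace-year to the list of person identifiers observed there):

    # Hedged sketch: embed individuals by the social contexts they share, analogous to
    # embedding words by the documents/windows they appear in.
    from gensim.models import Word2Vec

    contexts = [[str(pid) for pid in members] for members in workplace_members.values()]

    # Large window because a workplace is an unordered context rather than a sequence.
    model = Word2Vec(contexts, vector_size=100, window=50, min_count=1, sg=1, epochs=5)

    # model.wv[str(pid)] is now a latent position for an individual; individuals occupying
    # similar contexts (structural equivalence) end up close in the embedding space.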

Poster session 2

Title: Improved prognosis in gastric adenocarcinoma among metformin users in a population-based study – Jiaojiao Zheng, Upper Gastrointestinal Surgery, Department of Molecular Medicine and Surgery, Karolinska Institute. 

Abstract: 

Background: Metformin may improve the prognosis in gastric adenocarcinoma, but the existing literature is limited and contradictory. 

Methods: This was a Swedish population-based cohort study of diabetes patients who were diagnosed with gastric adenocarcinoma in 2005-2018, and followed up until December 2019. The data were retrieved from four national health data registries: Prescribed Drug Registry, Cancer Registry, Patient Registry and Cause of Death Registry. Associations between metformin use before the gastric adenocarcinoma diagnosis and the risk of disease-specific and all-cause mortality were assessed using multivariable Cox proportional hazard regression. The hazard ratios (HRs) and 95% confidence intervals (CIs) were adjusted for sex, age, calendar year, comorbidity, use of non-steroidal anti-inflammatory drugs or aspirin, and use of statins. 
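
A hedged sketch of such a model, assuming the Python lifelines package (df, the column names and the follow-up variables are placeholders, not the registry variables themselves):

    # Hedged sketch: multivariable Cox proportional hazards regression for metformin use.
    from lifelines import CoxPHFitter

    cph = CoxPHFitter()
    cph.fit(
        df,
        duration_col="followup_years",
        event_col="gastric_cancer_death",
        formula="metformin_use + sex + age + calendar_year + comorbidity "
                "+ nsaid_aspirin_use + statin_use",
    )
    cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals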

 Results: Compared with non-users, metformin users had a decreased risk of disease-specific mortality (HR 0.79, 95% CI 0.67-0.93) and all-cause mortality (HR 0.78, 95% CI 0.68-0.90). The associations were seemingly stronger among patients of female sex (HR 0.66, 95% CI 0.49-0.89), patients with tumour stage III or IV (HR 0.71, 95% CI 0.58-0.88), and those with the least comorbidity (HR 0.71, 95% CI 0.57-0.89).

 Conclusions: Metformin use may improve survival in gastric adenocarcinoma among diabetes patients. 

Poster session 3

Title: Mortality, reoperation and hospital stay within 90 days of primary and secondary antireflux surgery in a population-based multinational study – Manar Yanes, Upper Gastrointestinal Surgery, Department of Molecular Medicine and Surgery, Karolinska Institute. 

Abstract:

Background and aims: Absolute rates and risk factors of short-term outcomes after antireflux surgery remain largely unknown. We aimed to clarify absolute risks and risk factors for poor 90-day outcomes of primary laparoscopic and secondary antireflux surgery. 

Methods: This population-based cohort study included patients who had primary laparoscopic or secondary antireflux surgery in the five Nordic countries in 2000-2018. In addition to absolute rates, we analyzed age, sex, comorbidity, hospital volume, and calendar period in relation to all-cause 90-day mortality (main outcome), 90-day reoperation, and prolonged hospital stay (≥2 days over median stay). Multivariable logistic regression provided odds ratios (ORs) with 95% confidence intervals (95%CI), adjusted for confounders.

Results: Among 26,193 patients who underwent primary laparoscopic antireflux surgery, postoperative 90-day mortality and 90-day reoperation rates were 0.13% (n=35) and 3.0% (n=750), respectively. The corresponding rates after secondary antireflux surgery (n=1,618) were 0.19% (n=3) and 6.2% (n=94). Higher age (56-80 years vs 18-42 years: OR=2.66, 95%CI 1.03-6.85) and comorbidity (Charlson Comorbidity Index ≥2 vs 0: OR=6.25, 95%CI 2.42-16.14) increased risk of 90-day mortality after primary surgery, and higher hospital volume suggested a decreased risk (highest vs lowest tertile: OR 0.58, 95% CI 0.22-1.57). Comorbidity increased the risk of 90-day reoperation. Higher age and comorbidity increased risk of prolonged hospital stay after both primary and secondary surgery. Higher annual hospital volume decreased the risk of prolonged hospital stay after primary surgery (highest vs lowest tertile: OR=0.74, 95%CI 0.67-0.80).

Conclusion: These findings suggest that laparoscopic antireflux surgery has an overall favorable safety profile in the treatment of gastro-oesophageal reflux disease (GERD), particularly in younger patients without severe comorbidity who undergo surgery at high-volume centers.

Poster session 4

Title: Interaction of smoking and social status on respiratory outcomes in a Swedish adult population: an Epilung study – Muwada Bashir, Krefting Research Centre, Institute of Medicine, University of Gothenburg

 Abstract:

Rationale: The variations in prevalence of respiratory outcomes by social class and smoking status have been well reported in many studies. However, there are limited data on the interaction of social class and smoking in relation to risk of respiratory outcomes. 

Objectives: To examine whether and to what extent social class and smoking interact in relation to risk of respiratory outcomes in adults.

Design, setting and participants: The study data are derived from the ongoing population-based studies West Sweden Asthma Study (WSAS) and the Obstructive Lung Disease Study in Northern Sweden (OLIN). The study sample consists of randomly selected adults aged 20-75 years from the respective populations. Data were collected using the same validated questionnaire in both cohorts in 2016. In WSAS, 23 753 individuals participated (response rate 50%), while 6 519 participated in OLIN (response rate 56%).

Exposure and outcome measures: Social class was defined using three different classification systems: the socioeconomic classification (SEI), the Nordic Occupations Classification 1983 (NYK), and the Swedish Standard Classification of Occupations 2012 (SSYK). Smoking status was defined as whether subjects were current smokers, former smokers, or had never smoked. The outcomes included physician-diagnosed asthma and indicators of respiratory symptoms.

Methods and statistical analysis: Multiple imputation was used to impute missing data. A Bayesian network is being used to analyze the data; it allows inference of various dependencies between variables, including the interaction between smoking and social class in relation to the study outcomes. Inclusion of variables in the model was based on expert consensus.
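
A minimal sketch of the imputation step, assuming scikit-learn's IterativeImputer as a stand-in for the multiple-imputation procedure actually used (df is a placeholder DataFrame with categorical variables already encoded numerically):

    # Hedged sketch: generate several imputed datasets; analyses are run on each and pooled.
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    imputed_datasets = []
    for m in range(5):
        imputer = IterativeImputer(sample_posterior=True, random_state=m)
        imputed_datasets.append(pd.DataFrame(imputer.fit_transform(df), columns=df.columns))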

Results: Statistical analyses are not yet finalized, but preliminary results will be available for presentation at the meeting.

13:30  Coffee break 20 min

13:50 – 14:50 Parallel poster session

Poster session 5

Title: Menopausal Hormone Therapy and Women’s Health: An Umbrella Review of Systematic Reviews and Meta-Analyses of Randomized Controlled Trials and Observational Epidemiological Studies – Guo-Qiang Zhang, Krefting Research Centre, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg

Abstract:

 Background: There remains uncertainty about the impact of menopausal hormone therapy (MHT) on women’s health. We sought to comprehensively summarize evidence on the benefits and harms of MHT across diverse health outcomes.

 Methods: We searched MEDLINE, EMBASE, and ten other databases from inception to November 26, 2017, updated December 17, 2020, to identify systematic reviews or meta-analyses of randomized controlled trials (RCTs) and observational studies investigating effects of MHT, including estrogen-alone therapy (ET) and estrogen plus progestin therapy (EPT). All health outcomes in previous systematic reviews were included, including menopausal symptoms, surrogate endpoints, biomarkers, morbidity outcomes, and mortality. Random-effects robust variance estimation was used to combine effect estimates.
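
One standard way to write the random-effects model behind such pooling (a schematic, not necessarily the exact estimator used) is

    y_i = \theta + u_i + \varepsilon_i, \qquad u_i \sim N(0, \tau^2), \quad \varepsilon_i \sim N(0, v_i),
    \qquad \hat{\theta} = \frac{\sum_i w_i y_i}{\sum_i w_i}, \quad w_i = \frac{1}{v_i + \hat{\tau}^2},

where y_i are the study-level effect estimates and v_i their sampling variances; robust variance estimation then adjusts the standard error of \hat{\theta} for dependence among effect sizes contributed by the same study or review.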

 Results: Sixty systematic reviews were included, involving 102 meta-analyses of RCTs and 38 of observational studies, with 121 unique outcomes. In meta-analyses of RCTs, MHT was beneficial for vasomotor symptoms (risk ratio 0.29, 95% confidence interval 0.17 to 0.50), vaginal atrophy (intravaginal ET: 0.31, 0.12 to 0.81), sexual function, all fracture, vertebral and non-vertebral fracture, diabetes mellitus, cardiovascular mortality (ET), and colorectal cancer (EPT), but harmful for stroke (1.17, 1.05 to 1.29), venous thromboembolism (1.60, 0.99 to 2.58), cardiovascular disease, cerebrovascular disease, non-fatal stroke, deep vein thrombosis, gallbladder disease, and lung cancer mortality (EPT). ET and EPT had opposite effects on breast and endometrial cancer, endometrial hyperplasia, and Alzheimer’s disease.

 Conclusions: MHT has a complex balance of benefits and harms on multiple health outcomes. Some effects differ qualitatively between ET and EPT. The quality of available evidence is only moderate to poor.

 Poster session 6

Title: The Enduring Disadvantage of Residential Context: Investigating the Temporal Dimensions of Neighborhood Effects – Maël Lecoursonnais, Institute for Analytical Sociology, Linköping University

 Abstract:

Background: The neighborhood effect literature examines the effect of residential context on individual-level outcomes such as income or education. While most of the literature has focused on whether neighborhood effects exist, by targeting the average treatment effect, less is known about when, where, and for whom residential context matters most (Sharkey and Faber, 2014). This article explores the temporal dimensions of neighborhood effects, namely timing and duration of exposure and intergenerational effects.

 Methods: Following recent research on causal estimation in a longitudinal setting, we use a novel weighting-based method (Zhou and Wodtke, 2020) – residual balancing weighting – to compute the cumulative effect. Neighborhoods are constructed based on the 500 nearest neighbors. The database comprises longitudinal observations of the 1989 cohort and includes residential trajectories and socioeconomic and demographic variables.
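
Schematically, the cumulative effect can be read off a marginal structural model of the form (a simplified sketch; the actual specification may differ)

    \mathrm{E}\left[ Y(\bar{a}_T) \right] = \beta_0 + \beta_1 \sum_{t=1}^{T} a_t,

where \bar{a}_T = (a_1, \ldots, a_T) is the neighborhood-exposure history up to age T and \beta_1 is the effect of one additional year of exposure; the residual balancing weights of Zhou and Wodtke (2020) take the place of inverse-probability weights when fitting the weighted regression.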

Results: Only preliminary analyses have been carried out so far, but they already give promising results. When using the share of educated people in the bespoke neighborhoods as the treatment and (individual-level) education as the outcome, the marginal structural model’s cumulative effect is linear and negative, suggesting that neighborhood effects grow stronger as the number of years spent in the neighborhood increases. Results also suggest different patterns depending on the timing of exposure.

Conclusion: Using rich data and novel methods, this article gives new insights into neighborhood effects by examining how time-related factors affect their intensity.

 Poster session 7

 Title: Use of anti-androgenic 5α-reductase inhibitors and risk of oesophageal and gastric cancer by histological type and anatomical sub-site – Sirus Rabbani, Upper Gastrointestinal Surgery, Department of Molecular Medicine and Surgery, Karolinska Institute

 Abstract:

Aim: Oesophageal and gastric cancers, in particular oesophageal and cardia adenocarcinoma, are overrepresented in men, which may be explained by sex hormones. This study aimed to investigate the hypothesis that use of the anti-androgenic medications 5α-reductase inhibitors (5-ARIs) decreases the risk of developing these tumours, analysed separately by histological type and anatomical sub-site.

Methods: In this Swedish population-based cohort study, conducted in 2005-2018, men using 5-ARIs were considered exposed. For each exposed participant, 10 male age-matched non-users of 5-ARIs (non-exposed) were included. Multivariable Cox regression provided hazard ratios (HR) with 95% confidence intervals (CI), adjusted for age, calendar year, smoking status, use of non-steroidal anti-inflammatory drugs/aspirin, and use of statins. Depending on the tumour analysed, adjustment was also made for gastro-oesophageal reflux disease, obesity or diabetes, Helicobacter pylori treatment, and alcohol overconsumption.

Results: The cohort included 191,156 users of 5-ARIs and 1,911,560 non-users. Use of 5-ARIs indicated slightly decreased risks of oesophageal or cardia adenocarcinoma (HR 0.92, 95% CI 0.82-1.02), an association that was stronger among participants with obesity or diabetes (HR 0.55, 95% CI 0.39-0.80), and of gastric non-cardia adenocarcinoma (adjusted HR 0.90, 95% CI 0.80-1.02). Use of 5-ARIs was associated with a reduced risk of oesophageal squamous cell carcinoma (HR 0.49, 95% CI 0.37-0.65).

 Conclusion: Users of 5-ARIs may have a decreased risk of developing oesophageal or cardia adenocarcinoma, particularly in those with obesity or diabetes, as well as a decreased risk of oesophageal squamous cell carcinoma and possibly also of gastric non-cardia adenocarcinoma. 

 Poster session 8

 Title: Shrunken Pore Syndrome and Morbidity in the Malmö Diet and Cancer Cohort: A Generalized Propensity Score Approach – Anna Åkesson, Department of Laboratory Medicine, Lund University 

 Abstract:

Background: Shrunken pore syndrome (SPS) is defined by cystatin C-based estimation of glomerular filtration rate (eGFRCYS) being less than 60% or 70% of the creatinine-based GFR estimate (eGFRCR), in the absence of extrarenal influences on cystatin C or creatinine. SPS is associated with a marked increase in mortality and morbidity in all patient populations investigated so far. However, few studies have investigated SPS in a healthy population. Therefore, the aim of this study was to investigate the association between SPS and morbidity and mortality in a cohort of healthy volunteers. A secondary aim was to investigate sociodemographic and lifestyle risk factors associated with SPS.

Methods: We studied a subgroup of 5 061 individuals from the Malmö Diet and Cancer study (MDC), a population-based prospective cohort of healthy middle-aged volunteers, with a median follow-up of 25.3 years (IQR 5.7). The eGFRCYS/eGFRCR ratio at baseline was categorized into four groups (the lowest category indicating SPS) and used to estimate a generalized 4-level propensity score for SPS to adjust for confounding. We related the eGFRCYS/eGFRCR ratio to incident CVD, incident kidney disease, incident diabetes, incident cancer, and all-cause mortality using Cox regression adjusted for the propensity score.
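
A schematic of the propensity-score step, assuming scikit-learn and lifelines (df, confounders, the column names and the 0-3 coding of the exposure categories are all placeholders):

    # Hedged sketch: multinomial model for the 4-level eGFRCYS/eGFRCR category, then Cox
    # regression adjusted for the generalized propensity score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from lifelines import CoxPHFitter

    ps_model = LogisticRegression(multi_class="multinomial", max_iter=1000)
    ps_model.fit(df[confounders], df["ratio_group"])          # ratio_group coded 0-3, 0 = SPS
    ps = ps_model.predict_proba(df[confounders])               # one probability per category

    # Probability of the category actually observed, used as an adjustment covariate.
    df["gps"] = ps[np.arange(len(df)), df["ratio_group"].to_numpy()]

    cph = CoxPHFitter()
    cph.fit(df[["followup_years", "event", "ratio_group", "gps"]],
            duration_col="followup_years", event_col="event")  # ratio_group treated as numeric here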

Results: Preliminary, mainly descriptive, results will be presented. The overall prevalence of SPS in the cohort was 8%. At baseline, more individuals with SPS were unemployed (6.7%), lived alone (28.9%), and had a history of prevalent CVD (11.9%), compared with individuals with an eGFRCYS/eGFRCR ratio ≥ 1.00, for whom the corresponding figures were 3.4%, 20.8%, and 11.9%.

 Summary and Wrap-up 

14:50 – 15:00 Jonas Björk, Maria Brandén and Poorna Anand (moderator)