PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal
Updated: 12 hours 56 min ago

Correction: Dysregulation of multiple metabolic networks related to brain transmethylation and polyamine pathways in Alzheimer disease: A targeted metabolomic and transcriptomic study

Wed, 21/10/2020 - 23:00

by Uma V. Mahajan, Vijay R. Varma, Michael E. Griswold, Chad T. Blackshear, Yang An, Anup M. Oommen, Sudhir Varma, Juan C. Troncoso, Olga Pletnikova, Richard O’Brien, Timothy J. Hohman, Cristina Legido-Quigley, Madhav Thambisetty

Variation in racial/ethnic disparities in COVID-19 mortality by age in the United States: A cross-sectional study

Tue, 20/10/2020 - 23:00

by Mary T. Bassett, Jarvis T. Chen, Nancy Krieger

Background

In the United States, non-Hispanic Black (NHB), Hispanic, and non-Hispanic American Indian/Alaska Native (NHAIAN) populations experience excess COVID-19 mortality, compared to the non-Hispanic White (NHW) population, but racial/ethnic differences in age at death are not known. The release of national COVID-19 death data by racial/ethnic group now permits analysis of age-specific mortality rates for these groups and the non-Hispanic Asian or Pacific Islander (NHAPI) population. Our objectives were to examine variation in age-specific COVID-19 mortality rates by race/ethnicity and to calculate the impact of this mortality using years of potential life lost (YPLL).

Methods and findings

This cross-sectional study used the recently publicly available data on US COVID-19 deaths with reported race/ethnicity, for the time period February 1, 2020, to July 22, 2020. Population data were drawn from the US Census. As of July 22, 2020, the number of COVID-19 deaths equaled 68,377 for NHW, 29,476 for NHB, 23,256 for Hispanic, 1,143 for NHAIAN, and 6,468 for NHAPI populations; the corresponding population sizes were 186.4 million, 40.6 million, 57.7 million, 2.6 million, and 19.5 million. Age-standardized rate ratios relative to NHW were 3.6 (95% CI 3.5, 3.8; p < 0.001) for NHB, 2.8 (95% CI 2.7, 3.0; p < 0.001) for Hispanic, 2.2 (95% CI 1.8, 2.6; p < 0.001) for NHAIAN, and 1.6 (95% CI 1.4, 1.7; p < 0.001) for NHAPI populations. By contrast, NHB rate ratios relative to NHW were 7.1 (95% CI 5.8, 8.7; p < 0.001) for persons aged 25–34 years, 9.0 (95% CI 7.9, 10.2; p < 0.001) for persons aged 35–44 years, and 7.4 (95% CI 6.9, 7.9; p < 0.001) for persons aged 45–54 years. Even at older ages, NHB rate ratios were between 2.0 and 5.7. Similarly, rate ratios for the Hispanic versus NHW population were 7.0 (95% CI 5.8, 8.7; p < 0.001), 8.8 (95% CI 7.8, 9.9; p < 0.001), and 7.0 (95% CI 6.6, 7.5; p < 0.001) for the corresponding age strata above, with remaining rate ratios ranging from 1.4 to 5.0. Rate ratios for NHAIAN were similarly high through age 74 years. Among NHAPI persons, rate ratios ranged from 2.0 to 2.8 for persons aged 25–74 years and were 1.6 and 1.2 for persons aged 75–84 and 85+ years, respectively. As a consequence, the NHB and Hispanic populations experienced more YPLL before age 65 than the NHW population, despite the larger size of the NHW population, with ratios of 4.6:1 and 3.2:1 for NHB and Hispanic persons, respectively.
Study limitations include a likely lag in receipt of completed death certificates by the Centers for Disease Control and Prevention for transmission to the National Center for Health Statistics (NCHS), with a consequent lag in capturing the total number of deaths compared to data reported on state dashboards.
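The YPLL measure used in this abstract can be sketched in a few lines. The age strata and death counts below are illustrative placeholders, not the study's data, and the weighting (years remaining to age 65 from the stratum midpoint) is one common convention rather than necessarily the authors' exact method.

```python
# Years of potential life lost (YPLL) before age 65: deaths in each
# age stratum below 65 are weighted by the years remaining to 65
# from the stratum midpoint.
def ypll_before_65(deaths_by_stratum):
    """deaths_by_stratum: list of ((lo, hi), deaths) age strata."""
    total = 0.0
    for (lo, hi), deaths in deaths_by_stratum:
        midpoint = (lo + hi + 1) / 2          # e.g. 25-34 -> 30
        if midpoint < 65:
            total += deaths * (65 - midpoint)
    return total

# Illustrative strata (not the study's counts); 65+ strata contribute nothing:
strata = [((25, 34), 100), ((35, 44), 200), ((45, 54), 300), ((65, 74), 400)]
print(ypll_before_65(strata))
```

Comparing such totals across racial/ethnic groups yields ratios like the 4.6:1 reported above.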

Conclusions

In this study, we observed racial variation in age-specific mortality rates not fully captured by examination of age-standardized rates alone. These findings suggest the importance of examining age-specific mortality rates and underscore how age standardization can obscure extreme variations within age strata. To avoid overlooking such variation, data that permit age-specific analyses should be routinely made publicly available.

Differential association of air pollution exposure with neonatal and postneonatal mortality in England and Wales: A cohort study

Tue, 20/10/2020 - 23:00

by Sarah J. Kotecha, W. John Watkins, John Lowe, Jonathan Grigg, Sailesh Kotecha

Background

Many but not all studies suggest an association between air pollution exposure and infant mortality. We sought to investigate whether pollution exposure is differentially associated with all-cause neonatal or postneonatal mortality, or specific causes of infant mortality.

Methods and findings

We separately investigated the associations of exposure to particulate matter with aerodynamic diameter ≤ 10 μm (PM10), nitrogen dioxide (NO2), and sulphur dioxide (SO2) with all-cause infant, neonatal, and postneonatal mortality, and with specific causes of infant deaths in 7,984,366 live births between 2001 and 2012 in England and Wales. Overall, 51.3% of the live births were male, and there were 36,485 infant deaths (25,110 neonatal deaths and 11,375 postneonatal deaths). We adjusted for the following major confounders: deprivation, birthweight, maternal age, sex, and multiple birth. Adjusted odds ratios (95% CI; p-value) for infant deaths were significantly increased for NO2, PM10, and SO2 (1.066 [1.027, 1.107; p = 0.001], 1.044 [1.007, 1.082; p = 0.017], and 1.190 [1.146, 1.235; p < 0.001], respectively) when highest and lowest pollutant quintiles were compared; however, neonatal mortality was significantly associated with SO2 (1.207 [1.154, 1.262; p < 0.001]) but not significantly associated with NO2 and PM10 (1.044 [0.998, 1.092; p = 0.059] and 1.008 [0.966, 1.052; p = 0.702], respectively). Postneonatal mortality was significantly associated with all pollutants: NO2, 1.108 (1.038, 1.182; p < 0.001); PM10, 1.117 (1.050, 1.188; p < 0.001); and SO2, 1.147 (1.076, 1.224; p < 0.001). Whilst all were similarly associated with endocrine causes of infant deaths (NO2, 2.167 [1.539, 3.052; p < 0.001]; PM10, 1.433 [1.066, 1.926; p = 0.017]; and SO2, 1.558 [1.147, 2.116; p = 0.005]), they were differentially associated with other specific causes: NO2 and PM10 were associated with an increase in infant deaths from congenital malformations of the nervous (NO2, 1.525 [1.179, 1.974; p = 0.001]; PM10, 1.457 [1.150, 1.846; p = 0.002]) and gastrointestinal systems (NO2, 1.214 [1.006, 1.466; p = 0.043]; PM10, 1.312 [1.096, 1.571; p = 0.003]), and NO2 was also associated with deaths from malformations of the respiratory system (1.306 [1.019, 1.675; p = 0.035]). 
In contrast, SO2 was associated with an increase in infant deaths from perinatal causes (1.214 [1.156, 1.275; p < 0.001]) and from malformations of the circulatory system (1.172 [1.011, 1.358; p = 0.035]). A limitation of this study was that we were not able to study associations of air pollution exposure and infant mortality during the different trimesters of pregnancy. In addition, we were not able to control for all confounding factors such as maternal smoking.
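The quintile comparisons above reduce, in crude form, to a 2×2 odds ratio with a log-normal (Woolf) confidence interval. The counts below are invented for illustration, and the study's published ORs are adjusted for confounders, which this sketch does not attempt.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio comparing deaths (a) vs survivors (b) in the
    highest pollutant quintile against deaths (c) vs survivors (d) in
    the lowest, with a Woolf (log-normal) 95% CI."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts for illustration only:
or_, lo, hi = odds_ratio_ci(900, 1_500_000, 800, 1_600_000)
print(f"OR {or_:.3f} (95% CI {lo:.3f}, {hi:.3f})")
```

With infant deaths this rare, the odds ratio closely approximates the relative risk, which is why quintile ORs are readable as risk increases.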

Conclusions

In this study, we found that NO2, PM10, and SO2 were differentially associated with all-cause mortality and with specific causes of infant, neonatal, and postneonatal mortality.

Rapid Epidemiological Analysis of Comorbidities and Treatments as risk factors for COVID-19 in Scotland (REACT-SCOT): A population-based case-control study

Tue, 20/10/2020 - 23:00

by Paul M. McKeigue, Amanda Weir, Jen Bishop, Stuart J. McGurnaghan, Sharon Kennedy, David McAllister, Chris Robertson, Rachael Wood, Nazir Lone, Janet Murray, Thomas M. Caparrotta, Alison Smith-Palmer, David Goldberg, Jim McMenamin, Colin Ramsay, Sharon Hutchinson, Helen M. Colhoun, on behalf of Public Health Scotland COVID-19 Health Protection Study Group

Background

The objectives of this study were to identify risk factors for severe coronavirus disease 2019 (COVID-19) and to lay the basis for risk stratification based on demographic data and health records.

Methods and findings

The design was a matched case-control study. Severe COVID-19 was defined as either a positive nucleic acid test for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in the national database followed by entry to a critical care unit or death within 28 days, or a death certificate with COVID-19 as underlying cause. Up to 10 controls per case, matched for sex, age, and primary care practice, were selected from the national population register. For this analysis (based on ascertainment of positive test results up to 6 June 2020, entry to critical care up to 14 June 2020, and deaths registered up to 14 June 2020), there were 36,948 controls and 4,272 cases, of whom 1,894 (44%) were care home residents. All diagnostic codes from the past 5 years of hospitalisation records and all drug codes from prescriptions dispensed during the past 240 days were extracted. Rate ratios for severe COVID-19 were estimated by conditional logistic regression. In a logistic regression using the age-sex distribution of the national population, the odds ratios for severe disease were 2.87 for a 10-year increase in age and 1.63 for male sex. In the case-control analysis, the strongest risk factor was residence in a care home, with rate ratio 21.4 (95% CI 19.1–23.9, p = 8 × 10^−644). Univariate rate ratios for conditions listed by public health agencies as conferring high risk were 2.75 (95% CI 1.96–3.88, p = 6 × 10^−9) for type 1 diabetes, 1.60 (95% CI 1.48–1.74, p = 8 × 10^−30) for type 2 diabetes, 1.49 (95% CI 1.37–1.61, p = 3 × 10^−21) for ischemic heart disease, 2.23 (95% CI 2.08–2.39, p = 4 × 10^−109) for other heart disease, 1.96 (95% CI 1.83–2.10, p = 2 × 10^−78) for chronic lower respiratory tract disease, 4.06 (95% CI 3.15–5.23, p = 3 × 10^−27) for chronic kidney disease, 5.4 (95% CI 4.9–5.8, p = 1 × 10^−354) for neurological disease, 3.61 (95% CI 2.60–5.00, p = 2 × 10^−14) for chronic liver disease, and 2.66 (95% CI 1.86–3.79, p = 7 × 10^−8) for immune deficiency or suppression.
Seventy-eight percent of cases and 52% of controls had at least one listed condition (51% of cases and 11% of controls under age 40). Severe disease was associated with encashment of at least one prescription in the past 9 months and with at least one hospital admission in the past 5 years (rate ratios 3.10 [95% CI 2.59–3.71] and 2.75 [95% CI 2.53–2.99], respectively) even after adjusting for the listed conditions. In those without listed conditions, significant associations with severe disease were seen across many hospital diagnoses and drug categories. Age and sex provided 2.58 bits of information for discrimination. A model based on demographic variables, listed conditions, hospital diagnoses, and prescriptions provided an additional 1.07 bits (C-statistic 0.804). A limitation of this study is that records from primary care were not available.
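The "bits of information for discrimination" metric can be read as an expected weight of evidence: an average log2 likelihood ratio separating cases from controls. The symmetrised form below is an assumption for illustration (the paper's exact estimator may differ), and the predicted probabilities are invented.

```python
import math

def bits_of_discrimination(p_cases, p_controls):
    """Mean weight of evidence (log2 likelihood ratio) separating
    cases from controls, in bits; a symmetrised illustrative form.
    p_cases / p_controls: model-predicted case probabilities under
    a 50:50 case-control prior, so the likelihood ratio is p/(1-p)."""
    woe = lambda p: math.log2(p / (1 - p))
    mean = lambda xs: sum(xs) / len(xs)
    return 0.5 * (mean([woe(p) for p in p_cases])
                  - mean([woe(p) for p in p_controls]))

# Invented predicted probabilities:
print(round(bits_of_discrimination([0.8, 0.9], [0.2, 0.3]), 2))
```

On this reading, the full model's extra 1.07 bits over age and sex means each case's risk score carries, on average, roughly twice the evidential weight.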

Conclusions

We have shown that, along with older age and male sex, severe COVID-19 is strongly associated with past medical history across all age groups. Many comorbidities beyond the risk conditions designated by public health agencies contribute to this. A risk classifier that uses all the information available in health records, rather than only a limited set of conditions, will more accurately discriminate between low-risk and high-risk individuals who may require shielding until the epidemic is over.

The impact of delayed treatment of uncomplicated P. falciparum malaria on progression to severe malaria: A systematic review and a pooled multicentre individual-patient meta-analysis

Mon, 19/10/2020 - 23:00

by Andria Mousa, Abdullah Al-Taiar, Nicholas M. Anstey, Cyril Badaut, Bridget E. Barber, Quique Bassat, Joseph D. Challenger, Aubrey J. Cunnington, Dibyadyuti Datta, Chris Drakeley, Azra C. Ghani, Victor R. Gordeuk, Matthew J. Grigg, Pierre Hugo, Chandy C. John, Alfredo Mayor, Florence Migot-Nabias, Robert O. Opoka, Geoffrey Pasvol, Claire Rees, Hugh Reyburn, Eleanor M. Riley, Binal N. Shah, Antonio Sitoe, Colin J. Sutherland, Philip E. Thuma, Stefan A. Unger, Firmine Viwami, Michael Walther, Christopher J. M. Whitty, Timothy William, Lucy C. Okell

Background

Delay in receiving treatment for uncomplicated malaria (UM) is often reported to increase the risk of developing severe malaria (SM), but access to treatment remains low in most high-burden areas. Understanding the contribution of treatment delay on progression to severe disease is critical to determine how quickly patients need to receive treatment and to quantify the impact of widely implemented treatment interventions, such as ‘test-and-treat’ policies administered by community health workers (CHWs). We conducted a pooled individual-participant meta-analysis to estimate the association between treatment delay and presenting with SM.

Methods and findings

A search using Ovid MEDLINE and Embase was initially conducted to identify studies on severe Plasmodium falciparum malaria that included information on treatment delay, such as fever duration (inception to 22nd September 2017). Studies identified included 5 case–control and 8 other observational clinical studies of SM and UM cases. Risk of bias was assessed using the Newcastle–Ottawa scale, and all studies were ranked as ‘Good’, scoring ≥7/10. Individual-patient data (IPD) were pooled from 13 studies of 3,989 (94.1% aged <15 years) SM patients and 5,780 (79.6% aged <15 years) UM cases in Benin, Malaysia, Mozambique, Tanzania, The Gambia, Uganda, Yemen, and Zambia. Definitions of SM were standardised across studies to compare treatment delay in patients with UM and different SM phenotypes using age-adjusted mixed-effects regression. The odds of any SM phenotype were significantly higher in children with longer delays between initial symptoms and arrival at the health facility (odds ratio [OR] = 1.33, 95% CI: 1.07–1.64 for a delay of >24 hours versus ≤24 hours; p = 0.009). Reported illness duration was a strong predictor of presenting with severe malarial anaemia (SMA) in children, with an OR of 2.79 (95% CI: 1.92–4.06; p < 0.001) for a delay of 2–3 days and 5.46 (95% CI: 3.49–8.53; p < 0.001) for a delay of >7 days, compared with receiving treatment within 24 hours from symptom onset. We estimate that 42.8% of childhood SMA cases and 48.5% of adult SMA cases in the study areas would have been averted if all individuals were able to access treatment within the first day of symptom onset, if the association is fully causal. In studies specifically recording onset of nonsevere symptoms, long treatment delay was moderately associated with other SM phenotypes (OR [95% CI] >3 to ≤4 days versus ≤24 hours: cerebral malaria [CM] = 2.42 [1.24–4.72], p = 0.01; respiratory distress syndrome [RDS] = 4.09 [1.70–9.82], p = 0.002).
In addition to unmeasured confounding, which is commonly present in observational studies, a key limitation is that many severe cases and deaths occur outside healthcare facilities in endemic countries, where the effect of delayed or no treatment is difficult to quantify.
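The "cases averted" figures correspond, under a causal reading, to a population attributable fraction. The sketch below uses Levin's multi-category formula with an invented delay distribution, treating the reported ORs as approximate relative risks; the study's own estimate also accounts for age and study site, which this does not.

```python
def attributable_fraction(exposure):
    """Levin's population attributable fraction for several exposure
    categories: sum p_i*(RR_i - 1) / (1 + sum p_i*(RR_i - 1)),
    relative to a reference category (here, treatment within 24 h).
    exposure: list of (proportion_exposed, relative_risk) pairs."""
    excess = sum(p * (rr - 1) for p, rr in exposure)
    return excess / (1 + excess)

# Invented delay distribution; ORs from the abstract treated as RRs:
delays = [(0.30, 2.79), (0.10, 5.46)]  # delays of 2-3 days, >7 days
print(f"{attributable_fraction(delays):.1%}")
```

The fraction rises with both the prevalence of long delays and the strength of their association with SMA, which is why improved access translates into large averted-case estimates.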

Conclusions

Our results quantify the relationship between rapid access to treatment and reduced risk of severe disease, which was particularly strong for SMA. There was some evidence to suggest that progression to other severe phenotypes may also be prevented by prompt treatment, though the association was not as strong, which may be explained by potential selection bias, sample size issues, or a difference in underlying pathology. These findings may help assess the impact of interventions that improve access to treatment.

The association between circulating 25-hydroxyvitamin D metabolites and type 2 diabetes in European populations: A meta-analysis and Mendelian randomisation analysis

Fri, 16/10/2020 - 23:00

by Ju-Sheng Zheng, Jian’an Luan, Eleni Sofianopoulou, Stephen J. Sharp, Felix R. Day, Fumiaki Imamura, Thomas E. Gundersen, Luca A. Lotta, Ivonne Sluijs, Isobel D. Stewart, Rupal L. Shah, Yvonne T. van der Schouw, Eleanor Wheeler, Eva Ardanaz, Heiner Boeing, Miren Dorronsoro, Christina C. Dahm, Niki Dimou, Douae El-Fatouhi, Paul W. Franks, Guy Fagherazzi, Sara Grioni, José María Huerta, Alicia K. Heath, Louise Hansen, Mazda Jenab, Paula Jakszyn, Rudolf Kaaks, Tilman Kühn, Kay-Tee Khaw, Nasser Laouali, Giovanna Masala, Peter M. Nilsson, Kim Overvad, Anja Olsen, Salvatore Panico, J. Ramón Quirós, Olov Rolandsson, Miguel Rodríguez-Barranco, Carlotta Sacerdote, Annemieke M. W. Spijkerman, Tammy Y. N. Tong, Rosario Tumino, Konstantinos K. Tsilidis, John Danesh, Elio Riboli, Adam S. Butterworth, Claudia Langenberg, Nita G. Forouhi, Nicholas J. Wareham

Background

Prior research suggested a differential association of 25-hydroxyvitamin D (25(OH)D) metabolites with type 2 diabetes (T2D), with total 25(OH)D and 25(OH)D3 inversely associated with T2D, but the epimeric form (C3-epi-25(OH)D3) positively associated with T2D. Whether or not these observational associations are causal remains uncertain. We aimed to examine the potential causality of these associations using Mendelian randomisation (MR) analysis.

Methods and findings

We performed a meta-analysis of genome-wide association studies for total 25(OH)D (N = 120,618), 25(OH)D3 (N = 40,562), and C3-epi-25(OH)D3 (N = 40,562) in participants of European descent (European Prospective Investigation into Cancer and Nutrition [EPIC]–InterAct study, EPIC-Norfolk study, EPIC-CVD study, Ely study, and the SUNLIGHT consortium). We identified genetic variants for MR analysis to investigate the causal association of the 25(OH)D metabolites with T2D (including 80,983 T2D cases and 842,909 non-cases). We also estimated the observational association of 25(OH)D metabolites with T2D by performing random effects meta-analysis of results from previous studies and results from the EPIC-InterAct study. We identified 10 genetic loci associated with total 25(OH)D, 7 loci associated with 25(OH)D3, and 3 loci associated with C3-epi-25(OH)D3. Based on the meta-analysis of observational studies, each 1–standard deviation (SD) higher level of 25(OH)D was associated with a 20% lower risk of T2D (relative risk [RR]: 0.80; 95% CI 0.77, 0.84; p < 0.001), but a genetically predicted 1-SD increase in 25(OH)D was not significantly associated with T2D (odds ratio [OR]: 0.96; 95% CI 0.89, 1.03; p = 0.23); this result was consistent across sensitivity analyses. In EPIC-InterAct, 25(OH)D3 (per 1-SD) was associated with a lower risk of T2D (RR: 0.81; 95% CI 0.77, 0.86; p < 0.001), while C3-epi-25(OH)D3 (above versus below lower limit of quantification) was positively associated with T2D (RR: 1.12; 95% CI 1.03, 1.22; p = 0.006), but neither 25(OH)D3 (OR: 0.97; 95% CI 0.93, 1.01; p = 0.14) nor C3-epi-25(OH)D3 (OR: 0.98; 95% CI 0.93, 1.04; p = 0.53) was causally associated with T2D risk in the MR analysis. Main limitations include the lack of a non-linear MR analysis and uncertainty about the generalisability of the current findings from European populations to populations of other ethnicities.
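MR estimates of the kind reported above combine variant-level effects into a single causal estimate. A minimal inverse-variance-weighted estimator (per-variant Wald ratios with first-order standard errors, a standard textbook form rather than the paper's exact pipeline) can be sketched with made-up summary statistics:

```python
import math

def ivw_mr(snps):
    """Inverse-variance-weighted MR estimate from per-variant Wald
    ratios (beta_outcome / beta_exposure) with first-order SEs.
    snps: list of (beta_exposure, beta_outcome, se_outcome) tuples."""
    num = den = 0.0
    for bx, by, se_y in snps:
        ratio = by / bx            # causal effect implied by this variant
        se = abs(se_y / bx)        # first-order SE of the ratio
        w = 1.0 / se ** 2
        num += w * ratio
        den += w
    return num / den, math.sqrt(1.0 / den)  # estimate, SE

# Invented summary statistics (log-odds scale), not the study's:
est, se = ivw_mr([(0.10, -0.005, 0.01),
                  (0.08, -0.004, 0.01),
                  (0.12, -0.006, 0.01)])
print(f"log-OR per SD: {est:.3f} (SE {se:.3f})")
```

When the IVW confidence interval spans zero, as for total 25(OH)D here (OR 0.96, CI 0.89–1.03), the genetic evidence does not support a causal effect despite the observational association.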

Conclusions

Our study found discordant associations of biochemically measured and genetically predicted differences in blood 25(OH)D with T2D risk. The findings based on MR analysis in a large sample of European ancestry do not support a causal association of total 25(OH)D or 25(OH)D metabolites with T2D and argue against the use of vitamin D supplementation for the prevention of T2D.

Pulmonary vascular dysfunction among people aged over 65 years in the community in the Atherosclerosis Risk In Communities (ARIC) Study: A cross-sectional analysis

Thu, 15/10/2020 - 23:00

by Kanako Teramoto, Mário Santos, Brian Claggett, Jenine E. John, Scott D. Solomon, Dalane Kitzman, Aaron R. Folsom, Mary Cushman, Kunihiro Matsushita, Hicham Skali, Amil M. Shah

Background

Heart failure (HF) risk is highest in late life, and impaired pulmonary vascular function is a risk factor for HF development. However, data regarding the contributors to and prognostic importance of pulmonary vascular dysfunction among HF-free elders in the community are limited and largely restricted to pulmonary hypertension. Our objective was to define the prevalence and correlates of abnormal pulmonary pressure, resistance, and compliance and their association with incident HF and HF phenotype (left ventricular [LV] ejection fraction [LVEF] ≥ or < 50%) independent of LV structure and function.

Methods and findings

We performed cross-sectional and time-to-event analyses in a prospective epidemiologic cohort study, the Atherosclerosis Risk in Communities study. This is an ongoing, observational study that recruited 15,792 persons aged 45–64 years between 1987 and 1989 (visit 1) from four representative communities in the United States: Minneapolis, Minnesota; Jackson, Mississippi; Hagerstown, Maryland; and Forsyth County, North Carolina. The current analysis included 2,810 individuals aged 66–90 years, free of HF, who underwent echocardiography at the fifth study visit (June 8, 2011, to August 28, 2013) and had measurable tricuspid regurgitation by spectral Doppler. Echocardiography-derived pulmonary artery systolic pressure (PASP), pulmonary vascular resistance (PVR), and pulmonary arterial compliance (PAC) were measured. The main outcome was incident HF after visit 5, and key secondary end points were incident HF with preserved LVEF (HFpEF) and incident HF with reduced LVEF (HFrEF). The mean ± SD age was 76 ± 5 years, 66% were female, and 21% were black. Mean values of PASP, PVR, and PAC were 28 ± 5 mm Hg, 1.7 ± 0.4 Wood unit, and 3.4 ± 1.0 mL/mm Hg, respectively, and were abnormal in 18%, 12%, and 14%, respectively, using limits defined from the 10th and 90th percentile limits in 253 low-risk participants free of cardiovascular disease or risk factors. Left heart dysfunction was associated with abnormal PASP and PAC, whereas a restrictive ventilatory deficit was associated with abnormalities of PASP, PVR, and PAC. PASP, PVR, and PAC were each predictive of incident HF or death (hazard ratio per SD 1.3 [95% CI 1.1–1.4], p < 0.001; 1.1 [1.0–1.2], p = 0.04; 1.2 [1.1–1.4], p = 0.001, respectively) independent of LV measures. Elevated pulmonary pressure was predictive of incident HFpEF (HFpEF: 2.4 [1.4–4.0, p = 0.001]) but not HFrEF (1.4 [0.8–2.5, p = 0.31]). 
Abnormal PAC predicted HFrEF (HFpEF: 2.0 [1.0–4.0, p = 0.05], HFrEF: 2.8 [1.4–5.5, p = 0.003]), whereas abnormal PVR was not predictive of either (HFpEF: 0.9 [0.4–2.0, p = 0.85], HFrEF: 0.7 [0.3–1.4, p = 0.30]). A greater number of abnormal pulmonary vascular measures was associated with greater risk of incident HF. Major limitations include the use of echo Doppler to estimate pulmonary hemodynamic measures, which may lead to misclassification; inclusion bias related to detectable tricuspid regurgitation, which may limit generalizability of our findings; and survivor bias related to the cohort age, which may result in underestimation of the described associations.
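The echocardiographic PASP estimate rests on the simplified Bernoulli equation applied to the tricuspid regurgitation jet. The sketch below assumes a fixed estimated right atrial pressure of 5 mm Hg; in practice, RAP is graded from inferior vena cava size and collapsibility.

```python
def pasp_from_tr(trv_m_per_s, rap_mm_hg=5):
    """Pulmonary artery systolic pressure (mm Hg) from the peak
    tricuspid regurgitation velocity via the simplified Bernoulli
    equation: PASP ~= 4 * TRV^2 + estimated right atrial pressure."""
    return 4 * trv_m_per_s ** 2 + rap_mm_hg

# A TR jet of 2.4 m/s with RAP 5 mm Hg gives roughly 28 mm Hg,
# in line with the cohort's mean PASP of 28 +/- 5 mm Hg:
print(pasp_from_tr(2.4))
```

This dependence on a measurable TR jet is also why the analysis is restricted to participants with detectable tricuspid regurgitation, the inclusion bias noted above.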

Conclusions

In this study, we observed abnormalities of PASP, PVR, and PAC in 12%–18% of elders in the community. Higher PASP and lower PAC were independently predictive of incident HF. Abnormally high PASP predicted incident HFpEF but not HFrEF. These findings suggest that impairments in pulmonary vascular function may precede clinical HF and that a comprehensive pulmonary hemodynamic evaluation may identify pulmonary vascular phenotypes that differentially predict HF phenotypes.

Risk of disease and willingness to vaccinate in the United States: A population-based survey

Thu, 15/10/2020 - 23:00

by Bert Baumgaertner, Benjamin J. Ridenhour, Florian Justwan, Juliet E. Carlisle, Craig R. Miller

Background

Vaccination complacency occurs when perceived risks of vaccine-preventable diseases are sufficiently low that vaccination is no longer perceived as a necessary precaution. Disease outbreaks can once again increase perceptions of risk, thereby decreasing vaccine complacency and, in turn, vaccine hesitancy. It is not well understood, however, how a change in perceived risk translates into a change in vaccine hesitancy. We advance the concept of vaccine propensity, which relates a change in willingness to vaccinate to a change in perceived risk of infection, holding fixed other considerations such as vaccine confidence and convenience.

Methods and findings

We used an original survey instrument that presents 7 vaccine-preventable “new” diseases to gather demographically diverse sample data from the United States in 2018 (N = 2,411). Our survey was conducted online between January 25, 2018, and February 2, 2018, and was structured in 3 parts. First, we collected information concerning the places participants live and visit in a typical week. Second, participants were presented with one of 7 hypothetical disease outbreaks and asked how they would respond. Third, we collected sociodemographic information. The survey was designed to match population parameters in the US on 5 major dimensions: age, sex, income, race, and census region. We also were able to closely match education. The aggregate demographic details for study participants were a mean age of 43.80 years, 47% male and 53% female, 38.5% with a college degree, and 24% nonwhite. We found an overall change of at least 30% in the proportion willing to vaccinate as risk of infection increases. When considering morbidity information, the proportion willing to vaccinate went from 0.476 (0.449–0.503) at 0 local cases of disease to 0.871 (0.852–0.888) at 100 local cases (upper and lower 95% confidence intervals). When considering mortality information, the proportion went from 0.526 (0.494–0.557) at 0 local cases of disease to 0.916 (0.897–0.931) at 100 local cases. In addition, we found that the risk of mortality invokes a larger proportion willing to vaccinate than mere morbidity (p = 0.0002), that older populations are more willing than younger (p < 0.0001), that the highest income bracket (>$90,000) is more willing than all others (p = 0.0001), that men are more willing than women (p = 0.0011), and that the proportion willing to vaccinate is related to both ideology and the level of risk (p = 0.004).
Limitations of this study include that it does not consider how other factors (such as social influence) interact with local case counts in people’s vaccine decision-making, it cannot determine whether different degrees of severity in morbidity or mortality failed to be statistically significant because of survey design or because participants use heuristically driven decision-making that glosses over degrees, and the study does not capture the part of the US that is not online.
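Interval estimates for proportions like those above can be reproduced, in spirit, with a Wilson score interval. The count below (1,148 of 2,411, a proportion near 0.476) is back-calculated for illustration; the paper's exact interval method is not stated in the abstract.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# 1,148 of N = 2,411 willing to vaccinate (proportion ~0.476):
lo, hi = wilson_ci(1148, 2411)
print(f"{1148 / 2411:.3f} ({lo:.3f}-{hi:.3f})")
```

The Wilson form avoids the degenerate intervals the simpler Wald formula gives near 0 or 1, which matters at the extremes of the willingness scale.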

Conclusions

In this study, we found that different degrees of risk (in terms of local cases of disease) correspond with different proportions of populations willing to vaccinate. We also identified several sociodemographic aspects of vaccine propensity. Understanding how vaccine propensity is affected by sociodemographic factors is invaluable for predicting where outbreaks are more likely to occur and their expected size, even with the resulting cascade of changing vaccination rates and the respective feedback on potential outbreaks.

Time trends and prescribing patterns of opioid drugs in UK primary care patients with non-cancer pain: A retrospective cohort study

Thu, 15/10/2020 - 23:00

by Meghna Jani, Belay Birlie Yimer, Therese Sheppard, Mark Lunt, William G. Dixon

Background

The US opioid epidemic has led to similar concerns about prescribed opioids in the UK. In new users, initiation of or escalation to more potent and high dose opioids may contribute to long-term use. Additionally, physician prescribing behaviour has been described as a key driver of rising opioid prescriptions and long-term opioid use. No studies to our knowledge have investigated the extent to which regions, practices, and prescribers vary in opioid prescribing whilst accounting for case mix. This study sought to (i) describe prescribing trends between 2006 and 2017, (ii) evaluate the transition of opioid dose and potency in the first 2 years from initial prescription, (iii) quantify and identify risk factors for long-term opioid use, and (iv) quantify the variation of long-term use attributed to region, practice, and prescriber, accounting for case mix and chance variation.

Methods and findings

A retrospective cohort study using UK primary care electronic health records from the Clinical Practice Research Datalink was performed. Adult patients without cancer with a new prescription of an opioid were included; 1,968,742 new users of opioids were identified. Mean age was 51 ± 19 years, and 57% were female. Codeine was the most commonly prescribed opioid, with use increasing 5-fold from 2006 to 2017, reaching 2,456 prescriptions/10,000 people/year. Morphine, buprenorphine, and oxycodone prescribing rates continued to rise steadily throughout the study period. Of those who started on high dose (120–199 morphine milligram equivalents [MME]/day) or very high dose opioids (≥200 MME/day), 10.3% and 18.7% remained in the same MME/day category or higher at 2 years, respectively. Following opioid initiation, 14.6% became long-term opioid users in the first year. In the fully adjusted model, the following were associated with the highest adjusted odds ratios (aORs) for long-term use: older age (≥75 years, aOR 4.59, 95% CI 4.48–4.70, p < 0.001; 65–74 years, aOR 3.77, 95% CI 3.68–3.85, p < 0.001, compared to <35 years), social deprivation (Townsend score quintile 5/most deprived, aOR 1.56, 95% CI 1.52–1.59, p < 0.001, compared to quintile 1/least deprived), fibromyalgia (aOR 1.81, 95% CI 1.49–2.19, p < 0.001), substance abuse (aOR 1.72, 95% CI 1.65–1.79, p < 0.001), suicide/self-harm (aOR 1.56, 95% CI 1.52–1.61, p < 0.001), rheumatological conditions (aOR 1.53, 95% CI 1.48–1.58, p < 0.001), gabapentinoid use (aOR 2.52, 95% CI 2.43–2.61, p < 0.001), and MME/day at initiation (aOR 1.08, 95% CI 1.07–1.08, p < 0.001). After adjustment for case mix, 3 of the 10 UK regions (North West [16%], Yorkshire and the Humber [15%], and South West [15%]), 103 practices (25.6%), and 540 prescribers (3.5%) had a higher proportion of patients with long-term use compared to the population average. 
This study was limited to patients prescribed opioids in primary care and does not include opioids available over the counter or prescribed in hospitals or drug treatment centres.
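The MME/day thresholds above come from converting each drug's daily dose into oral morphine equivalents. The factors below follow widely used conversion tables (e.g. the CDC's) and may differ slightly from those this study applied.

```python
# Conversion factors to oral morphine milligram equivalents (MME);
# assumed here from common reference tables, not taken from the study.
MME_FACTOR = {"morphine": 1.0, "codeine": 0.15,
              "oxycodone": 1.5, "tramadol": 0.1}

def mme_per_day(drug, mg_per_day):
    """Daily dose of a single opioid expressed in MME/day."""
    return mg_per_day * MME_FACTOR[drug]

print(mme_per_day("codeine", 240))   # well below the 120 MME/day high-dose cutoff
print(mme_per_day("oxycodone", 80))  # reaches the high-dose band (120-199 MME/day)
```

The contrast shows why rising oxycodone and morphine prescribing matters more for dose escalation than the far more common codeine.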

Conclusions

Of patients commencing opioids at very high doses (≥200 MME/day), a high proportion remained in the same category for the subsequent 2 years. Age, deprivation, prescribing factors, comorbidities such as fibromyalgia and rheumatological conditions, recent major surgery, and history of substance abuse, alcohol abuse, and self-harm/suicide were associated with long-term opioid use. Despite adjustment for case mix, variation in high-risk prescribing was observed across regions and especially across practices and prescribers. Our findings support calls for action to reduce practice- and prescriber-level variation by promoting safe opioid prescribing.

Developing and validating subjective and objective risk-assessment measures for predicting mortality after major surgery: An international prospective cohort study

Thu, 15/10/2020 - 23:00

by Danny J. N. Wong, Steve Harris, Arun Sahni, James R. Bedford, Laura Cortes, Richard Shawyer, Andrew M. Wilson, Helen A. Lindsay, Doug Campbell, Scott Popham, Lisa M. Barneto, Paul S. Myles, SNAP-2: EPICCS collaborators, S. Ramani Moonesinghe

Background

Preoperative risk prediction is important for guiding clinical decision-making and resource allocation. Clinicians frequently rely solely on their own clinical judgement for risk prediction rather than objective measures. We aimed to compare the accuracy of freely available objective surgical risk tools with subjective clinical assessment in predicting 30-day mortality.

Methods and findings

We conducted a prospective observational study in 274 hospitals in the United Kingdom (UK), Australia, and New Zealand. For 1 week in 2017, prospective risk, surgical, and outcome data were collected on all adults aged 18 years and over undergoing surgery requiring at least a 1-night stay in hospital. Recruitment bias was avoided through an ethical waiver to patient consent; a mixture of rural, urban, district, and university hospitals participated. We compared subjective assessment with 3 previously published, open-access objective risk tools for predicting 30-day mortality: the Portsmouth-Physiology and Operative Severity Score for the enUmeration of Mortality (P-POSSUM), Surgical Risk Scale (SRS), and Surgical Outcome Risk Tool (SORT). We then developed a logistic regression model combining subjective assessment and the best objective tool and compared its performance to each constituent method alone. We included 22,631 patients in the study: 52.8% were female, median age was 62 years (interquartile range [IQR] 46 to 73 years), median postoperative length of stay was 3 days (IQR 1 to 6), and inpatient 30-day mortality was 1.4%. Clinicians used subjective assessment alone in 88.7% of cases. All methods overpredicted risk, but visual inspection of plots showed the SORT to have the best calibration. The SORT demonstrated the best discrimination of the objective tools (SORT Area Under Receiver Operating Characteristic curve [AUROC] = 0.90, 95% confidence interval [CI]: 0.88–0.92; P-POSSUM = 0.89, 95% CI 0.88–0.91; SRS = 0.85, 95% CI 0.82–0.87). Subjective assessment demonstrated good discrimination (AUROC = 0.89, 95% CI: 0.86–0.91) that was not different from the SORT (p = 0.309). Combining subjective assessment and the SORT improved discrimination (bootstrap optimism-corrected AUROC = 0.92, 95% CI: 0.90–0.94) and demonstrated continuous Net Reclassification Improvement (NRI = 0.13, 95% CI: 0.06–0.20, p < 0.001) compared with subjective assessment alone. 
Decision-curve analysis (DCA) confirmed the superiority of the SORT over other previously published models, and the SORT–clinical judgement model again performed best overall. Our study is limited by the low mortality rate, by the lack of blinding in the ‘subjective’ risk assessments, and because we only compared the performance of clinical risk scores as opposed to other prediction tools such as exercise testing or frailty assessment.
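The discrimination statistic used throughout this comparison, the AUROC, equals the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch of that calculation on synthetic data (nothing here uses the study's records):

```python
# AUROC via the Mann-Whitney interpretation: the fraction of (death, survivor)
# pairs in which the death received the higher predicted risk (ties count 0.5).
def auroc(risks, outcomes):
    pos = [r for r, y in zip(risks, outcomes) if y == 1]  # died
    neg = [r for r, y in zip(risks, outcomes) if y == 0]  # survived
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risks for 2 deaths and 3 survivors.
print(auroc([0.9, 0.8, 0.3, 0.2, 0.1], [1, 1, 0, 0, 0]))  # 1.0: perfect ranking
```

A combined model like the one the authors fitted would feed both the clinician's subjective estimate and the SORT prediction into a logistic regression and score the resulting predictions with this same statistic.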

Conclusions

In this study, we observed that the combination of subjective assessment with a parsimonious risk model improved perioperative risk estimation. This may be of value in helping clinicians allocate finite resources such as critical care and to support patient involvement in clinical decision-making.

Serially assessed bisphenol A and phthalate exposure and association with kidney function in children with chronic kidney disease in the US and Canada: A longitudinal cohort study

Wed, 14/10/2020 - 23:00

by Melanie H. Jacobson, Yinxiang Wu, Mengling Liu, Teresa M. Attina, Mrudula Naidu, Rajendiran Karthikraj, Kurunthachalam Kannan, Bradley A. Warady, Susan Furth, Suzanne Vento, Howard Trachtman, Leonardo Trasande

Background

Exposure to environmental chemicals may be a modifiable risk factor for progression of chronic kidney disease (CKD). The purpose of this study was to examine the impact of serially assessed exposure to bisphenol A (BPA) and phthalates on measures of kidney function, tubular injury, and oxidative stress over time in a cohort of children with CKD.

Methods and findings

Samples were collected between 2005 and 2015 from 618 children and adolescents enrolled in the Chronic Kidney Disease in Children study, an observational cohort study of pediatric CKD patients from the US and Canada. Most study participants were male (63.8%) and white (58.3%), and participants had a median age of 11.0 years (interquartile range 7.6 to 14.6) at the baseline visit. In urine samples collected serially over an average of 3.0 years (standard deviation [SD] 1.6), concentrations of BPA, phthalic acid (PA), and phthalate metabolites were measured as well as biomarkers of tubular injury (kidney injury molecule-1 [KIM-1] and neutrophil gelatinase-associated lipocalin [NGAL]) and oxidative stress (8-hydroxy-2′-deoxyguanosine [8-OHdG] and F2-isoprostane). Clinical renal function measures included estimated glomerular filtration rate (eGFR), proteinuria, and blood pressure. Linear mixed models were fit to estimate the associations between urinary concentrations of 6 chemical exposure measures (i.e., BPA, PA, and 4 phthalate metabolite groups) and clinical renal outcomes and urinary concentrations of KIM-1, NGAL, 8-OHdG, and F2-isoprostane controlling for sex, age, race/ethnicity, glomerular status, birth weight, premature birth, angiotensin-converting enzyme inhibitor use, angiotensin receptor blocker use, BMI z-score for age and sex, and urinary creatinine. Urinary concentrations of BPA, PA, and phthalate metabolites were positively associated with urinary KIM-1, NGAL, 8-OHdG, and F2-isoprostane levels over time. For example, a 1-SD increase in ∑di-n-octyl phthalate metabolites was associated with increases in NGAL (β = 0.13 [95% CI: 0.05, 0.21], p = 0.001), KIM-1 (β = 0.30 [95% CI: 0.21, 0.40], p < 0.001), 8-OHdG (β = 0.10 [95% CI: 0.06, 0.13], p < 0.001), and F2-isoprostane (β = 0.13 [95% CI: 0.01, 0.25], p = 0.04) over time. 
BPA and phthalate metabolites were not associated with eGFR, proteinuria, or blood pressure, but PA was associated with lower eGFR over time. For a 1-SD increase in ln-transformed PA, there was an average decrease in eGFR of 0.38 ml/min/1.73 m2 (95% CI: −0.75, −0.01; p = 0.04). Limitations of this study included utilization of spot urine samples for exposure assessment of non-persistent compounds and lack of specific information on potential sources of exposure.
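The "1-SD increase" effect sizes above refer to standardized ln-transformed urinary concentrations. A small sketch of that transformation, with made-up phthalic acid values (the numbers are illustrative, not from the cohort):

```python
import math

# Standardize ln-transformed concentrations so model coefficients read as
# "per 1-SD increase in ln(exposure)". Values below are hypothetical.
def ln_z_scores(concentrations):
    logs = [math.log(c) for c in concentrations]
    mean = sum(logs) / len(logs)
    sd = (sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5
    return [(x - mean) / sd for x in logs]

z = ln_z_scores([1.2, 3.4, 0.8, 5.6, 2.1])  # hypothetical PA levels
# A coefficient such as -0.38 on this scale then reads as: each 1-SD rise in
# ln(PA) predicts an eGFR 0.38 ml/min/1.73 m2 lower, on average, over time.
```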

Conclusions

Although BPA and phthalate metabolites were not associated with clinical renal endpoints such as eGFR or proteinuria, there was a consistent pattern of increased tubular injury and oxidative stress over time, which have been shown to affect renal function in the long term. This raises concerns about the potential for clinically significant changes in renal function in relation to exposure to common environmental toxicants at current levels.

Neurodevelopmental multimorbidity and educational outcomes of Scottish schoolchildren: A population-based record linkage cohort study

Tue, 13/10/2020 - 23:00

by Michael Fleming, Ehsan E. Salim, Daniel F. Mackay, Angela Henderson, Deborah Kinnear, David Clark, Albert King, James S. McLay, Sally-Ann Cooper, Jill P. Pell

Background

Neurodevelopmental conditions commonly coexist in children, but compared to adults, childhood multimorbidity attracts less attention in research and clinical practice. We previously reported that children treated for attention deficit hyperactivity disorder (ADHD) and depression have more school absences and exclusions, additional support needs, poorer attainment, and increased unemployment. They are also more likely to have coexisting conditions, including autism and intellectual disability. We investigated prevalence of neurodevelopmental multimorbidity (≥2 conditions) among Scottish schoolchildren and their educational outcomes compared to peers.

Methods and findings

We retrospectively linked 6 Scotland-wide databases to analyse 766,244 children (390,290 [50.9%] boys; 375,954 [49.1%] girls) aged 4 to 19 years (mean = 10.9) attending Scottish schools between 2009 and 2013. Children were distributed across all deprivation quintiles (most to least deprived: 22.7%, 20.1%, 19.3%, 19.5%, 18.4%). The majority (96.2%) were of white ethnicity. We ascertained autism spectrum disorder (ASD) and intellectual disabilities from records of additional support needs and ADHD and depression through relevant encashed prescriptions. We identified neurodevelopmental multimorbidity (≥2 of these conditions) in 4,789 (0.6%) children, with ASD and intellectual disability the most common combination. On adjusting for sociodemographic (sex, age, ethnicity, deprivation) and maternity (maternal age, maternal smoking, sex-gestation–specific birth weight centile, gestational age, 5-minute Apgar score, mode of delivery, parity) factors, multimorbidity was associated with increased school absenteeism and exclusion, unemployment, and poorer exam attainment. Significant dose relationships were evident between number of conditions (0, 1, ≥2) and the last 3 outcomes. Compared to children with no conditions, children with 1 condition and children with 2 or more conditions had more absenteeism (1 condition adjusted incidence rate ratio [IRR] 1.28, 95% CI 1.27–1.30, p < 0.001 and 2 or more conditions adjusted IRR 1.23, 95% CI 1.20–1.28, p < 0.001), greater exclusion (adjusted IRR 2.37, 95% CI 2.25–2.48, p < 0.001 and adjusted IRR 3.04, 95% CI 2.74–3.38, p < 0.001), poorer attainment (adjusted odds ratio [OR] 3.92, 95% CI 3.63–4.23, p < 0.001 and adjusted OR 12.07, 95% CI 9.15–15.94, p < 0.001), and increased unemployment (adjusted OR 1.57, 95% CI 1.49–1.66, p < 0.001 and adjusted OR 2.11, 95% CI 1.83–2.45, p < 0.001). Associations remained after further adjustment for comorbid physical conditions and additional support needs.
Coexisting depression was the strongest driver of absenteeism and coexisting ADHD the strongest driver of exclusion. Absence of formal primary care diagnoses was a limitation since ascertaining depression and ADHD from prescriptions omitted affected children receiving alternative or no treatment and some antidepressants can be prescribed for other indications.
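The adjusted incidence rate ratios above compare rates of events per unit of follow-up time between exposure groups. An unadjusted toy version (hypothetical counts; the study's estimates come from regression models with covariate adjustment):

```python
# Unadjusted incidence rate ratio: events per unit person-time in the exposed
# group over the same rate in the reference group. Counts are invented.
def incidence_rate_ratio(events_exp, time_exp, events_ref, time_ref):
    return (events_exp / time_exp) / (events_ref / time_ref)

# e.g. 640 absence days over 500 child-years vs 1,000 over 1,000 child-years
print(incidence_rate_ratio(640, 500, 1000, 1000))  # 1.28
```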

Conclusions

Structuring clinical practice and training around single conditions may disadvantage children with neurodevelopmental multimorbidity, who we observed had significantly poorer educational outcomes compared to children with 1 condition or no conditions.

Evaluation of a pharmacist-led actionable audit and feedback intervention for improving medication safety in UK primary care: An interrupted time series analysis

Tue, 13/10/2020 - 23:00

by Niels Peek, Wouter T. Gude, Richard N. Keers, Richard Williams, Evangelos Kontopantelis, Mark Jeffries, Denham L. Phipps, Benjamin Brown, Anthony J. Avery, Darren M. Ashcroft

Background

We evaluated the impact of the pharmacist-led Safety Medication dASHboard (SMASH) intervention on medication safety in primary care.

Methods and findings

SMASH comprised (1) training of clinical pharmacists to deliver the intervention; (2) a web-based dashboard providing actionable, patient-level feedback; and (3) pharmacists reviewing individual at-risk patients, and initiating remedial actions or advising general practitioners on doing so. It was implemented in 43 general practices covering a population of 235,595 people in Salford (Greater Manchester), UK. All practices started receiving the intervention between 18 April 2016 and 26 September 2017. We used an interrupted time series analysis of rates (prevalence) of potentially hazardous prescribing and inadequate blood-test monitoring, comparing observed rates post-intervention to extrapolations from a 24-month pre-intervention trend. The number of people registered to participating practices and having 1 or more risk factors for being exposed to hazardous prescribing or inadequate blood-test monitoring at the start of the intervention was 47,413 (males: 23,073 [48.7%]; mean age: 60 years [standard deviation: 21]). At baseline, 95% of practices had rates of potentially hazardous prescribing (composite of 10 indicators) between 0.88% and 6.19%. The prevalence of potentially hazardous prescribing reduced by 27.9% (95% CI 20.3% to 36.8%, p < 0.001) at 24 weeks and by 40.7% (95% CI 29.1% to 54.2%, p < 0.001) at 12 months after introduction of SMASH. The rate of inadequate blood-test monitoring (composite of 2 indicators) reduced by 22.0% (95% CI 0.2% to 50.7%, p = 0.046) at 24 weeks; the change at 12 months (23.5%) was no longer significant (95% CI −4.5% to 61.6%, p = 0.127). After 12 months, 95% of practices had rates of potentially hazardous prescribing between 0.74% and 3.02%. Study limitations include the fact that practices were not randomised, and therefore unmeasured confounding may have influenced our findings.
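The interrupted time series logic can be sketched as fitting the pre-intervention trend, extrapolating it forward as the counterfactual, and expressing observed rates relative to it. A simplified illustration with ordinary least squares on invented monthly rates (the study used segmented regression on practice-level data):

```python
# Fit a linear trend to 24 months of pre-intervention rates, extrapolate past
# the intervention, and report the observed rate as a relative reduction from
# the counterfactual. All rates below are hypothetical.
def ols_line(ts, ys):
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    slope = sum((t - tm) * (y - ym) for t, y in zip(ts, ys)) / \
        sum((t - tm) ** 2 for t in ts)
    return ym - slope * tm, slope  # intercept, slope

pre_t = list(range(24))
pre_rate = [3.0 - 0.01 * t for t in pre_t]   # hypothetical pre-trend (%)
b0, b1 = ols_line(pre_t, pre_rate)
expected_at_12m = b0 + b1 * 36               # counterfactual at month 36
observed_at_12m = 1.7                        # hypothetical observed rate (%)
reduction = 100 * (1 - observed_at_12m / expected_at_12m)
print(f"relative reduction: {reduction:.1f}%")
```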

Conclusions

The SMASH intervention was associated with reduced rates of potentially hazardous prescribing and inadequate blood-test monitoring in general practices. This reduction was sustained over 12 months after the start of the intervention for prescribing but not for monitoring of medication. There was a marked reduction in the variation in rates of hazardous prescribing between practices.

Health outcomes and cost-effectiveness of diversion programs for low-level drug offenders: A model-based analysis

Tue, 13/10/2020 - 23:00

by Cora L. Bernard, Isabelle J. Rao, Konner K. Robison, Margaret L. Brandeau

Background

Cycles of incarceration, drug abuse, and poverty undermine ongoing public health efforts to reduce overdose deaths and the spread of infectious disease in vulnerable populations. Jail diversion programs aim to divert low-level drug offenders toward community care resources, avoiding criminal justice costs and disruptions in treatment for HIV, hepatitis C virus (HCV), and drug abuse. We sought to assess the health benefits and cost-effectiveness of a jail diversion program for low-level drug offenders.

Methods and findings

We developed a microsimulation model, calibrated to King County, Washington, that captured the spread of HIV and HCV infections and incarceration and treatment systems as well as preexisting interventions such as needle and syringe programs and opiate agonist therapy. We considered an adult population of people who inject drugs (PWID), people who use drugs but do not inject (PWUD), men who have sex with men, and lower-risk heterosexuals. We projected discounted lifetime costs and quality-adjusted life years (QALYs) over a 10-year time horizon with and without a jail diversion program and calculated resulting incremental cost-effectiveness ratios (ICERs) from the health system and societal perspectives. We also tracked HIV and HCV infections, overdose deaths, and jail population size. Over 10 years, the program was estimated to reduce HIV and HCV incidence by 3.4% (95% CI 2.7%–4.0%) and 3.3% (95% CI 3.1%–3.4%), respectively, overdose deaths among PWID by 10.0% (95% CI 9.8%–10.8%), and jail population size by 6.3% (95% CI 5.9%–6.7%). When considering healthcare costs only, the program cost $25,500/QALY gained (95% CI $12,600–$48,600). Including savings from reduced incarceration (societal perspective) improved the ICER to $6,200/QALY gained (95% CI cost-saving to $24,300). Sensitivity analysis indicated that cost-effectiveness depends on diversion program participants accessing community programs such as needle and syringe programs, treatment for substance use disorder, and HIV and HCV treatment, as well as diversion program cost. A limitation of the analysis is data availability, as fewer data are available for diversion programs than for more established interventions aimed at people with substance use disorder.
Additionally, like any model of a complex system, our model relies on simplifying assumptions: For example, we simplified pathways in the healthcare and criminal justice systems, modeled an average efficacy for substance use disorder treatment, and did not include costs associated with homelessness, unemployment, and breakdown in family structure.
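The ICERs above follow the standard definition: incremental cost divided by incremental QALYs, with the societal perspective crediting incarceration savings against program costs. A toy calculation (all totals are invented round numbers chosen only to land near the reported ratios):

```python
# ICER = (cost with program - cost without) / (QALYs with - QALYs without).
# All dollar and QALY totals below are hypothetical.
def icer(cost_with, cost_without, qaly_with, qaly_without):
    return (cost_with - cost_without) / (qaly_with - qaly_without)

# Health-system perspective: healthcare costs only.
print(icer(105_000_000, 100_000_000, 20_200, 20_000))  # 25000.0 $/QALY
# Societal perspective: credit incarceration savings to the program arm.
print(icer(105_000_000 - 3_800_000, 100_000_000, 20_200, 20_000))  # 6000.0
```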

Conclusions

We found that diversion programs for low-level drug offenders are likely to be cost-effective, generating savings in the criminal justice system while only moderately increasing healthcare costs. Such programs can reduce incarceration and its associated costs, and also avert overdose deaths and improve quality of life for PWID, PWUD, and the broader population (through reduced HIV and HCV transmission).

The potential health impact of restricting less-healthy food and beverage advertising on UK television between 05.30 and 21.00 hours: A modelling study

Tue, 13/10/2020 - 23:00

by Oliver T. Mytton, Emma Boyland, Jean Adams, Brendan Collins, Martin O’Connell, Simon J. Russell, Kate Smith, Rebekah Stroud, Russell M. Viner, Linda J. Cobiac

Background

Restrictions on the advertising of less-healthy foods and beverages is seen as one measure to tackle childhood obesity and is under active consideration by the UK government. Whilst evidence increasingly links this advertising to excess calorie intake, understanding of the potential impact of advertising restrictions on population health is limited.

Methods and findings

We used a proportional multi-state life table model to estimate the health impact of prohibiting the advertising of food and beverages high in fat, sugar, and salt (HFSS) from 05.30 hours to 21.00 hours (5:30 AM to 9:00 PM) on television in the UK. We used the following data to parameterise the model: children’s exposure to HFSS advertising from AC Nielsen and Broadcasters’ Audience Research Board (2015); effect of less-healthy food advertising on acute caloric intake in children from a published meta-analysis; population numbers and all-cause mortality rates from the Human Mortality Database for the UK (2015); body mass index distribution from the Health Survey for England (2016); disability weights for estimating disability-adjusted life years (DALYs) from the Global Burden of Disease Study; and healthcare costs from NHS England programme budgeting data. The main outcome measures were change in the percentage of children (aged 5–17 years) with obesity defined using the International Obesity Task Force cut-points, and change in health status (DALYs). Monte Carlo analysis was used to estimate 95% uncertainty intervals (UIs). We estimate that if all HFSS advertising between 05.30 hours and 21.00 hours were withdrawn, UK children (n = 13,729,000) would see on average 1.5 fewer HFSS adverts per day and decrease caloric intake by 9.1 kcal (95% UI 0.5–17.7 kcal), which would reduce the number of children (aged 5–17 years) with obesity by 4.6% (95% UI 1.4%–9.5%) and with overweight (including obesity) by 3.6% (95% UI 1.1%–7.4%). This is equivalent to 40,000 (95% UI 12,000–81,000) fewer UK children with obesity, and 120,000 (95% UI 34,000–240,000) fewer with overweight. For children alive in 2015 (n = 13,729,000), this would avert 240,000 (95% UI 65,000–530,000) DALYs across their lifetime (i.e., followed from 2015 through to death), and result in a health-related net monetary benefit of £7.4 billion (95% UI £2.0 billion–£16 billion) to society.
Under a scenario where all HFSS advertising is displaced to after 21.00 hours, rather than withdrawn, we estimate that the benefits would be reduced by around two-thirds. This is a modelling study and subject to uncertainty; we cannot fully and accurately account for all of the factors that would affect the impact of this policy if implemented. Whilst randomised trials show that children exposed to less-healthy food advertising consume more calories, there is uncertainty about the nature of the dose–response relationship between HFSS advertising and calorie intake.
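The 95% uncertainty intervals come from Monte Carlo analysis: repeatedly sampling uncertain inputs and reading off percentiles of the simulated outcomes. A stripped-down illustration for the caloric-intake effect, assuming a normal distribution matched to the reported point estimate (the real model propagates many correlated inputs through the life table):

```python
import random

# Draw the per-child caloric effect many times from an assumed normal
# distribution (mean and SD chosen to echo the reported 9.1 kcal estimate),
# then take the 2.5th and 97.5th percentiles of the draws.
random.seed(1)
draws = sorted(random.gauss(9.1, 4.4) for _ in range(10_000))  # kcal/day
lo, hi = draws[249], draws[9749]
print(f"95% UI: {lo:.1f} to {hi:.1f} kcal/day")
```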

Conclusions

Our results show that HFSS television advertising restrictions between 05.30 hours and 21.00 hours in the UK could make a meaningful contribution to reducing childhood obesity. We estimate that the impact on childhood obesity of this policy may be reduced by around two-thirds if adverts are displaced to after 21.00 hours rather than being withdrawn.

Universal third-trimester ultrasonic screening using fetal macrosomia in the prediction of adverse perinatal outcome: A systematic review and meta-analysis of diagnostic test accuracy

Tue, 13/10/2020 - 23:00

by Alexandros A. Moraitis, Norman Shreeve, Ulla Sovio, Peter Brocklehurst, Alexander E. P. Heazell, Jim G. Thornton, Stephen C. Robson, Aris Papageorghiou, Gordon C. Smith

Background

The effectiveness of screening for macrosomia is not well established. One of the critical elements of an effective screening program is the diagnostic accuracy of a test at predicting the condition. The objective of this study is to investigate the diagnostic effectiveness of universal ultrasonic fetal biometry in predicting the delivery of a macrosomic infant, shoulder dystocia, and associated neonatal morbidity in low- and mixed-risk populations.

Methods and findings

We conducted a predefined literature search in Medline, Excerpta Medica database (EMBASE), the Cochrane Library, and ClinicalTrials.gov from inception to May 2020. No language restrictions were applied. We included studies where the ultrasound was performed as part of universal screening and those that included low- and mixed-risk pregnancies and excluded studies confined to high-risk pregnancies. We used the estimated fetal weight (EFW) (multiple formulas and thresholds) and the abdominal circumference (AC) to define suspected large for gestational age (LGA). Adverse perinatal outcomes included macrosomia (multiple thresholds), shoulder dystocia, and other markers of neonatal morbidity. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Meta-analysis was carried out using the hierarchical summary receiver operating characteristic (ROC) and the bivariate logit-normal (Reitsma) models. We identified 41 studies that met our inclusion criteria involving 112,034 patients in total. These included 11 prospective cohort studies (N = 9,986), one randomized controlled trial (RCT) (N = 367), and 29 retrospective cohort studies (N = 101,681). The quality of the studies was variable, and only 3 studies blinded the ultrasound findings to the clinicians. Both EFW >4,000 g (or 90th centile for the gestational age) and AC >36 cm (or 90th centile) had >50% sensitivity for predicting macrosomia (birthweight above 4,000 g or 90th centile) at birth with positive likelihood ratios (LRs) of 8.74 (95% confidence interval [CI] 6.84–11.17) and 7.56 (95% CI 5.85–9.77), respectively. There was significant heterogeneity in predicting macrosomia, which could reflect the different study designs, the characteristics of the included populations, and differences in the formulas used. An EFW >4,000 g (or 90th centile) had 22% sensitivity in predicting shoulder dystocia with a positive likelihood ratio of 2.12 (95% CI 1.34–3.35).
There were insufficient data to analyze other markers of neonatal morbidity.
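The positive likelihood ratios reported above combine a test's sensitivity and specificity: LR+ = sensitivity / (1 − specificity). A toy calculation (the specificity here is a hypothetical value chosen so the output echoes the EFW result, not a figure from the meta-analysis):

```python
# Positive likelihood ratio: how much more often the test is positive in
# affected than unaffected pregnancies. Inputs below are illustrative.
def positive_lr(sensitivity, specificity):
    return sensitivity / (1 - specificity)

print(round(positive_lr(0.56, 0.936), 2))  # 8.75 with these assumed inputs
```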

Conclusions

In this study, we found that suspected LGA is strongly predictive of the risk of delivering a large infant in low- and mixed-risk populations. However, it is only weakly (albeit statistically significantly) predictive of the risk of shoulder dystocia. There were insufficient data to analyze other markers of neonatal morbidity.

Trends in prevalence of acute stroke impairments: A population-based cohort study using the South London Stroke Register

Fri, 09/10/2020 - 23:00

by Amanda Clery, Ajay Bhalla, Anthony G. Rudd, Charles D. A. Wolfe, Yanzhong Wang

Background

Acute stroke impairments often result in poor long-term outcome for stroke survivors. The aim of this study was to estimate the trends over time in the prevalence of these acute stroke impairments.

Methods and findings

All first-ever stroke patients recorded in the South London Stroke Register (SLSR) between 2001 and 2018 were included in this cohort study. Multivariable Poisson regression models with robust error variance were used to estimate the adjusted prevalence of 8 acute impairments, across six 3-year time cohorts. Prevalence ratios comparing impairments over time were also calculated, stratified by age, sex, ethnicity, and aetiological classification (Trial of Org 10172 in Acute Stroke Treatment [TOAST]). A total of 4,683 patients had a stroke between 2001 and 2018. Mean age was 68.9 years, 48% were female, and 64% were White. After adjustment for demographic factors, pre-stroke risk factors, and stroke subtype, the prevalence of 3 out of the 8 acute impairments declined during the 18-year period, including limb motor deficit (from 77% [95% CI 74%–81%] to 62% [56%–68%], p < 0.001), dysphagia (37% [33%–41%] to 15% [12%–20%], p < 0.001), and urinary incontinence (43% [39%–47%] to 29% [24%–35%], p < 0.001). Declines in limb impairment over time were 2 times greater in men than women (prevalence ratio 0.73 [95% CI 0.64–0.84] and 0.87 [95% CI 0.77–0.98], respectively). Declines also tended to be greater in younger patients. Stratified by TOAST classification, the prevalence of all impairments was high for large artery atherosclerosis (LAA), cardioembolism (CE), and stroke of undetermined aetiology. Conversely, small vessel occlusions (SVOs) had low levels of all impairments except for limb motor impairment and dysarthria. While we have assessed 8 key acute stroke impairments, this study is limited by a focus on physical impairments, although cognitive impairments are equally important to understand. In addition, this is an inner-city cohort, which has unique characteristics compared to other populations.

Conclusions

In this study, we found that stroke patients in the SLSR had a complexity of acute impairments, of which limb motor deficit, dysphagia, and incontinence have declined between 2001 and 2018. These reductions have not been uniform across all patient groups, with women and the older population, in particular, seeing fewer reductions.

Impact of providing free HIV self-testing kits on frequency of testing among men who have sex with men and their sexual partners in China: A randomized controlled trial

Fri, 09/10/2020 - 23:00

by Ci Zhang, Deborah Koniak-Griffin, Han-Zhu Qian, Lloyd A. Goldsamt, Honghong Wang, Mary-Lynn Brecht, Xianhong Li

Background

The HIV epidemic is rapidly growing among men who have sex with men (MSM) in China, yet HIV testing remains suboptimal. We aimed to determine the impact of HIV self-testing (HIVST) interventions on frequency of HIV testing among Chinese MSM and their sexual partners.

Methods and findings

This randomized controlled trial was conducted in 4 cities in Hunan Province, China. Sexually active and HIV-negative MSM were recruited from communities and randomly assigned (1:1) to intervention or control arms. Participants in the control arm had access to site-based HIV testing (SBHT); those in the intervention arm were provided with 2 free finger-prick-based HIVST kits at enrollment and could receive 2 to 4 kits delivered through express mail every 3 months for 1 year in addition to SBHT. They were encouraged to distribute HIVST kits to their sexual partners. The primary outcome was the number of HIV tests taken by MSM participants, and the secondary outcome was the number of HIV tests taken by their sexual partners during 12 months of follow-up. The effect size for the primary and secondary outcomes was evaluated as the standardized mean difference (SMD) in testing frequency between intervention and control arms. Between April 14, 2018, and June 30, 2018, 230 MSM were recruited. Mean age was 29 years; 77% attended college; 75% were single. The analysis population who completed at least one follow-up questionnaire included 110 (93%, 110/118) in the intervention and 106 (95%, 106/112) in the control arm. The average frequency of HIV tests per participant in the intervention arm (3.75) was higher than that in the control arm (1.80; SMD 1.26; 95% CI 0.97–1.55; P < 0.001). This difference was mainly due to the difference in HIVST between the 2 arms (intervention 2.18 versus control 0.41; SMD 1.30; 95% CI 1.01–1.59; P < 0.001), whereas the average frequency of SBHT was comparable (1.57 versus 1.40, SMD 0.14; 95% CI −0.13 to 0.40; P = 0.519).
The average frequency of HIV tests among sexual partners of each participant was higher in intervention than control arm (2.65 versus 1.31; SMD 0.64; 95% CI 0.36–0.92; P < 0.001), and this difference was also due to the difference in HIVST between the 2 arms (intervention 1.41 versus control 0.36; SMD 0.75; 95% CI 0.47–1.04; P < 0.001) but not SBHT (1.24 versus 0.96; SMD 0.23; 95% CI −0.05 to 0.50; P = 0.055). Zero-inflated Poisson regression analyses showed that the likelihood of taking an HIV test among intervention participants was 2.1 times greater than that among control participants (adjusted rate ratio [RR] 2.10; 95% CI 1.75–2.53, P < 0.001), and their sexual partners were 1.55 times more likely to take HIV tests in the intervention arm compared with the control arm (adjusted RR 1.55; 95% CI 1.23–1.95, P < 0.001). During the study period, 3 participants in the intervention arm and none in the control arm tested HIV positive, and 8 sexual partners of intervention arm participants also tested positive. No other adverse events were reported. Limitations of this study included that the number of SBHTs was based solely on participant self-report (although the self-reported number of HIVSTs in the intervention arm was validated) and that partner HIV testing was reported indirectly by participants because of difficulties in accessing each of their partners.
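The standardized mean difference used as the effect size here is the between-arm difference in mean test counts divided by the pooled standard deviation. A toy version on hypothetical per-participant counts (not trial data):

```python
# SMD = (mean of arm A - mean of arm B) / pooled SD. Counts are invented.
def smd(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((v - mx) ** 2 for v in xs) / (len(xs) - 1)
    vy = sum((v - my) ** 2 for v in ys) / (len(ys) - 1)
    pooled = (((len(xs) - 1) * vx + (len(ys) - 1) * vy) /
              (len(xs) + len(ys) - 2)) ** 0.5
    return (mx - my) / pooled

effect = smd([4, 3, 5, 3], [2, 1, 2, 1])  # hypothetical tests per participant
print(round(effect, 2))  # 2.85 on this toy data
```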

Conclusions

In this study, we found that providing free HIVST kits significantly increased testing frequency among Chinese MSM and effectively enlarged HIV testing coverage by enhancing partner HIV testing through distribution of kits within their sexual networks.

Trial registration

Chinese Clinical Trial Registry ChiCTR1800015584.

Socioeconomic level and associations between heat exposure and all-cause and cause-specific hospitalization in 1,814 Brazilian cities: A nationwide case-crossover study

Thu, 08/10/2020 - 23:00

by Rongbin Xu, Qi Zhao, Micheline S. Z. S. Coelho, Paulo H. N. Saldiva, Michael J. Abramson, Shanshan Li, Yuming Guo

Background

Heat exposure, which will increase with global warming, has been linked to increased risk of a range of types of cause-specific hospitalizations. However, little is known about socioeconomic disparities in vulnerability to heat. We aimed to evaluate whether there were socioeconomic disparities in vulnerability to heat-related all-cause and cause-specific hospitalization among Brazilian cities.

Methods and findings

We collected daily hospitalization and weather data in the hot season (the city-specific 4 hottest consecutive months each year) during 2000–2015 from 1,814 Brazilian cities covering 78.4% of the Brazilian population. A time-stratified case-crossover design modeled by quasi-Poisson regression and a distributed lag model was used to estimate city-specific heat–hospitalization association. Then meta-analysis was used to synthesize city-specific estimates according to different socioeconomic quartiles or levels. We included 49 million hospitalizations (58.5% female; median [interquartile range] age: 33.3 [19.8–55.7] years). For cities of lower middle income (LMI), upper middle income (UMI), and high income (HI) according to the World Bank’s classification, every 5°C increase in daily mean temperature during the hot season was associated with a 5.1% (95% CI 4.4%–5.7%, P < 0.001), 3.7% (3.3%–4.0%, P < 0.001), and 2.6% (1.7%–3.4%, P < 0.001) increase in all-cause hospitalization, respectively. The inter-city socioeconomic disparities in the association were strongest for children and adolescents (0–19 years) (increased all-cause hospitalization risk with every 5°C increase [95% CI]: 9.9% [8.7%–11.1%], P < 0.001, in LMI cities versus 5.2% [4.1%–6.3%], P < 0.001, in HI cities).
The disparities were particularly evident for hospitalization due to certain diseases, including ischemic heart disease (increase in cause-specific hospitalization risk with every 5°C increase [95% CI]: 5.6% [−0.2% to 11.8%], P = 0.060, in LMI cities versus 0.5% [−2.1% to 3.1%], P = 0.717, in HI cities), asthma (3.7% [0.3%–7.1%], P = 0.031, versus −6.4% [−12.1% to −0.3%], P = 0.041), pneumonia (8.0% [5.6%–10.4%], P < 0.001, versus 3.8% [1.1%–6.5%], P = 0.005), renal diseases (9.6% [6.2%–13.1%], P < 0.001, versus 4.9% [1.8%–8.0%], P = 0.002), mental health conditions (17.2% [8.4%–26.8%], P < 0.001, versus 5.5% [−1.4% to 13.0%], P = 0.121), and neoplasms (3.1% [0.7%–5.5%], P = 0.011, versus −0.1% [−2.1% to 2.0%], P = 0.939). The disparities were similar when stratifying the cities by other socioeconomic indicators (urbanization rate, literacy rate, and household income). The main limitations were lack of data on personal exposure to temperature, and that our city-level analysis did not assess intra-city or individual-level socioeconomic disparities and could not exclude confounding effects of some unmeasured variables.
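Because the underlying quasi-Poisson model is log-linear in temperature, the reported "% increase per 5°C" estimates rescale multiplicatively to other temperature increments. For example, taking the lower-middle-income all-cause estimate (+5.1% per 5°C):

```python
import math

# Log-linear model: a rate ratio of 1.051 per 5C implies a constant
# log-rate-ratio per degree, which can be rescaled to other increments.
rr_5c = 1.051
beta = math.log(rr_5c) / 5                        # log-rate-ratio per 1C
print(round((math.exp(beta) - 1) * 100, 2))       # 1.0  (% increase per 1C)
print(round((math.exp(10 * beta) - 1) * 100, 1))  # 10.5 (% increase per 10C)
```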

Conclusions

Less developed cities displayed stronger associations between heat exposure and all-cause hospitalizations and certain types of cause-specific hospitalizations in Brazil. This may exacerbate the existing geographical health and socioeconomic inequalities under a changing climate.

Genetics of height and risk of atrial fibrillation: A Mendelian randomization study

Thu, 08/10/2020 - 23:00

by Michael G. Levin, Renae Judy, Dipender Gill, Marijana Vujkovic, Shefali S. Verma, Yuki Bradford, Regeneron Genetics Center, Marylyn D. Ritchie, Matthew C. Hyman, Saman Nazarian, Daniel J. Rader, Benjamin F. Voight, Scott M. Damrauer

Background

Observational studies have identified height as a strong risk factor for atrial fibrillation, but this finding may be limited by residual confounding. We aimed to examine genetic variation in height within the Mendelian randomization (MR) framework to determine whether height has a causal effect on risk of atrial fibrillation.

Methods and findings

In summary-level analyses, MR was performed using summary statistics from genome-wide association studies of height (GIANT/UK Biobank; 693,529 individuals) and atrial fibrillation (AFGen; 65,446 cases and 522,744 controls), finding that each 1-SD increase in genetically predicted height increased the odds of atrial fibrillation (odds ratio [OR] 1.34; 95% CI 1.29 to 1.40; p = 5 × 10⁻⁴²). This result remained consistent in sensitivity analyses with MR methods that make different assumptions about the presence of pleiotropy, and when accounting for the effects of traditional cardiovascular risk factors on atrial fibrillation. Individual-level phenome-wide association studies of height and a height genetic risk score were performed among 6,567 European-ancestry participants of the Penn Medicine Biobank (median age at enrollment 63 years, interquartile range 55–72; 38% female; recruitment 2008–2015), confirming prior observational associations between height and atrial fibrillation. Individual-level MR confirmed that each 1-SD increase in height increased the odds of atrial fibrillation, including adjustment for clinical and echocardiographic confounders (OR 1.89; 95% CI 1.50 to 2.40; p = 0.007). The main limitations of this study include potential bias from pleiotropic effects of genetic variants, and lack of generalizability of individual-level findings to non-European populations.
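The summary-level estimate is conceptually an inverse-variance-weighted average of per-variant Wald ratios (each SNP's effect on atrial fibrillation divided by its effect on height). A minimal sketch with invented summary statistics (not values from GIANT/UK Biobank or AFGen):

```python
# Inverse-variance-weighted MR: per-variant Wald ratios by / bx, combined
# with weights (bx / se_by)^2. All numbers below are hypothetical.
def ivw(bx, by, se_by):
    ratios = [b / a for a, b in zip(bx, by)]          # Wald ratio per variant
    weights = [(a / s) ** 2 for a, s in zip(bx, se_by)]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

bx = [0.10, 0.08, 0.12]        # SNP -> height effects (SD units), invented
by = [0.030, 0.023, 0.036]     # SNP -> log-odds of AF, invented
se_by = [0.010, 0.010, 0.012]  # standard errors of by, invented
print(round(ivw(bx, by, se_by), 3))  # 0.297: causal log-OR per SD of height
```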

Conclusions

In this study, we observed evidence that height is likely a positive causal risk factor for atrial fibrillation. Further study is needed to determine whether risk prediction tools including height or anthropometric risk factors can be used to improve screening and primary prevention of atrial fibrillation, and whether biological pathways involved in height may offer new targets for treatment of atrial fibrillation.