PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal
Updated: 18 hours 4 min ago

Evidence-informed policy for tackling adverse climate change effects on health: Linking regional and global assessments of science to catalyse action

Tue, 20/07/2021 - 16:00

by Robin Fears, Khairul Annuar B. Abdullah, Claudia Canales-Holzeis, Deoraj Caussy, Andy Haines, Sherilee L. Harper, Jeremy N. McNeil, Johanna Mogwitz, Volker ter Meulen

Robin Fears and co-authors discuss evidence-informed regional and global policy responses to health impacts of climate change.

Telmisartan use and risk of dementia in type 2 diabetes patients with hypertension: A population-based cohort study

Mon, 19/07/2021 - 16:00

by Chi-Hung Liu, Pi-Shan Sung, Yan-Rong Li, Wen-Kuan Huang, Tay-Wey Lee, Chin-Chang Huang, Tsong-Hai Lee, Tien-Hsing Chen, Yi-Chia Wei

Background

Angiotensin receptor blockers (ARBs) may have protective effects against dementia occurrence in patients with hypertension (HTN). However, whether telmisartan, an ARB with peroxisome proliferator-activated receptor γ (PPAR-γ)–modulating effects, has additional benefits compared to other ARBs remains unclear.

Methods and findings

Between 1997 and 2013, 2,166,944 type 2 diabetes mellitus (T2DM) patients were identified from the National Health Insurance Research Database of Taiwan. Patients with HTN using ARBs were included in the study. Patients with a history of stroke, traumatic brain injury, or dementia were excluded. Finally, 65,511 eligible patients were divided into 2 groups: the telmisartan group and the non-telmisartan ARB group. Propensity score matching (1:4) was used to balance the distribution of baseline characteristics and medications. The primary outcome was the diagnosis of dementia. The secondary outcomes included the diagnosis of Alzheimer disease and occurrence of symptomatic ischemic stroke (IS), any IS, and all-cause mortality. The risks between groups were compared using a Cox proportional hazards model. Statistical significance was set at p < 0.05. There were 2,280 and 9,120 patients in the matched telmisartan and non-telmisartan ARB groups, respectively. Patients in the telmisartan group had a lower risk of dementia diagnosis (telmisartan versus non-telmisartan ARBs: 2.19% versus 3.20%; HR, 0.72; 95% CI, 0.53 to 0.97; p = 0.030). They also had a lower risk of dementia diagnosis with IS as a competing risk (subdistribution HR, 0.70; 95% CI, 0.51 to 0.95; p = 0.022) and with all-cause mortality as a competing risk (subdistribution HR, 0.71; 95% CI, 0.53 to 0.97; p = 0.029). In addition, telmisartan users had a lower risk of any IS (6.84% versus 8.57%; HR, 0.79; 95% CI, 0.67 to 0.94; p = 0.008) during long-term follow-up. Study limitations included potential residual confounding by indication, the difficulty of drawing causal inferences from an observational study, and bias caused by using diagnostic and medication codes to represent real clinical data.
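
As an illustration of the analysis pattern described above (1:4 propensity score matching followed by a Cox proportional hazards comparison), here is a minimal Python sketch. The DataFrame and column names (telmisartan, time, dementia) are hypothetical stand-ins, not the authors' code or data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def match_and_fit(df: pd.DataFrame, covariates: list[str],
                  treat_col: str = "telmisartan", k: int = 4,
                  caliper: float = 0.05) -> CoxPHFitter:
    """1:k greedy propensity-score matching, then a Cox model on the match."""
    ps = (LogisticRegression(max_iter=1000)
          .fit(df[covariates], df[treat_col])
          .predict_proba(df[covariates])[:, 1])
    df = df.assign(ps=ps)
    treated = df[df[treat_col] == 1]
    pool = df[df[treat_col] == 0].copy()
    matched = [treated]
    for _, row in treated.iterrows():
        dist = (pool["ps"] - row["ps"]).abs()
        near = dist[dist <= caliper].nsmallest(k).index
        matched.append(pool.loc[near])
        pool = pool.drop(near)                 # match without replacement
    cohort = pd.concat(matched)
    cph = CoxPHFitter()                        # summary holds the HR and 95% CI
    cph.fit(cohort[["time", "dementia", treat_col] + covariates],
            duration_col="time", event_col="dementia")
    return cph
```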

Conclusions

The current study suggests that telmisartan use in hypertensive T2DM patients may be associated with a lower risk of dementia and any IS events in an East-Asian population.

Correlates of attendance at community engagement meetings held in advance of bio-behavioral research studies: A longitudinal, sociocentric social network study in rural Uganda

Fri, 16/07/2021 - 16:00

by Bernard Kakuhikire, Emily N. Satinsky, Charles Baguma, Justin D. Rasmussen, Jessica M. Perkins, Patrick Gumisiriza, Mercy Juliet, Patience Ayebare, Rumbidzai C. Mushavi, Bridget F. O. Burns, Claire Q. Evans, Mark J. Siedner, David R. Bangsberg, Alexander C. Tsai

Background

Community engagement is central to the conduct of health-related research studies as a way to determine priorities, inform study design and implementation, increase recruitment and retention, build relationships, and ensure that research meets the goals of the community. Community sensitization meetings, a form of community engagement, are often held prior to the initiation of research studies to provide information about upcoming study activities and resolve concerns in consultation with potential participants. This study estimated demographic, health, economic, and social network correlates of attendance at community sensitization meetings held in advance of a whole-population, combined behavioral and biomedical research study in rural Uganda.

Methods and findings

Research assistants collected survey data from 1,630 adults participating in an ongoing sociocentric social network cohort study conducted in a rural region of southwestern Uganda. These community survey data, collected between 2016 and 2018, were linked to attendance logs from community sensitization meetings held in 2018 and 2019 before the subsequent community survey and community health fair. Of all participants, 264 (16%) attended a community sensitization meeting before the community survey, 464 (28%) attended a meeting before the community health fair, 558 (34%) attended a meeting before either study activity (survey or health fair), and 170 (10%) attended a meeting before both study activities (survey and health fair). Using multivariable Poisson regression models, we estimated correlates of attendance at community sensitization meetings. Attendance was more likely among study participants who were female (health-fair adjusted relative risk [ARR] = 1.71, 95% confidence interval [CI] 1.32 to 2.21, p < 0.001), who were older (survey ARR = 1.02 per year, 95% CI 1.01 to 1.02, p < 0.001; health-fair ARR = 1.02 per year, 95% CI 1.01 to 1.02, p < 0.001), who were married (survey ARR = 1.74, 95% CI 1.29 to 2.35, p < 0.001; health-fair ARR = 1.41, 95% CI 1.13 to 1.76, p = 0.002), and who were members of more community groups (survey ARR = 1.26 per group, 95% CI 1.10 to 1.44, p = 0.001; health-fair ARR = 1.26 per group, 95% CI 1.12 to 1.43, p < 0.001). Attendance was less likely among study participants who lived farther from meeting locations (survey ARR = 0.54 per kilometer, 95% CI 0.30 to 0.97, p = 0.041; health-fair ARR = 0.57 per kilometer, 95% CI 0.38 to 0.86, p = 0.007). Leveraging the cohort's sociocentric design, social network analyses suggested that information conveyed during community sensitization meetings could reach a broader group of potential study participants through attendees' social network and household connections. Study limitations include a lack of detailed data on reasons for attendance or nonattendance at community sensitization meetings; a representative sample of community members was not an explicit aim of the study; and generalizability may not extend beyond this study setting.
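
Relative risks for a binary outcome such as meeting attendance are commonly estimated with a modified Poisson regression (a Poisson GLM with robust standard errors). A hedged sketch follows; the formula terms are hypothetical stand-ins for the study's covariates, not its actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def adjusted_relative_risks(df: pd.DataFrame) -> pd.DataFrame:
    """Modified Poisson regression: RRs for a 0/1 outcome via robust SEs."""
    fit = smf.glm(
        "attended ~ female + age + married + n_groups + km_to_meeting",
        data=df, family=sm.families.Poisson(),
    ).fit(cov_type="HC1")                 # sandwich variance keeps CIs valid
    out = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    out.columns = ["ARR", "2.5%", "97.5%"]
    return out
```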

Conclusions

In this longitudinal, sociocentric social network study conducted in rural Uganda, we observed that older age, female sex, being married, membership in more community groups, and geographical proximity to meeting locations were correlated with attendance at community sensitization meetings held in advance of bio-behavioral research activities. Information conveyed during meetings could have reached a broader portion of the population through attendees’ social network and household connections. To ensure broader input and potentially increase participation in health-related research studies, the dissemination of research-related information through community sensitization meetings may need to target members of underrepresented groups.

Obesity and revision surgery, mortality, and patient-reported outcomes after primary knee replacement surgery in the National Joint Registry: A UK cohort study

Fri, 16/07/2021 - 16:00

by Jonathan Thomas Evans, Sofia Mouchti, Ashley William Blom, Jeremy Mark Wilkinson, Michael Richard Whitehouse, Andrew Beswick, Andrew Judge

Background

One in 10 people in the United Kingdom will need a total knee replacement (TKR) during their lifetime. Access to this life-changing operation has recently been restricted on the basis of body mass index (BMI), due to the belief that high BMI may lead to poorer outcomes. We investigated the associations between BMI and revision surgery, mortality, and pain/function using what we believe to be the world's largest joint replacement registry.

Methods and findings

We analysed 493,710 TKRs in the National Joint Registry (NJR) for England, Wales, Northern Ireland, and the Isle of Man from 2005 to 2016 to investigate 90-day mortality and 10-year cumulative revision. Hospital Episode Statistics (HES) and Patient Reported Outcome Measures (PROMs) databases were linked to the NJR to investigate change in Oxford Knee Score (OKS) 6 months postoperatively. After adjustment for age, sex, American Society of Anaesthesiologists (ASA) grade, indication for operation, year of primary TKR, and fixation type, patients with high BMI were more likely to undergo revision surgery within 10 years compared to those with “normal” BMI (obese class II: hazard ratio [HR] 1.21, 95% CI 1.10 to 1.32, p < 0.001; obese class III: HR 1.13, 95% CI 1.02 to 1.26, p = 0.026). All BMI classes had revision estimates within the recognised 10-year benchmark of 5%. Overweight and obese class I patients had lower mortality than patients with “normal” BMI (HR 0.76, 95% CI 0.65 to 0.90, p = 0.001 and HR 0.69, 95% CI 0.58 to 0.82, p < 0.001, respectively). All BMI categories saw absolute increases in OKS after 6 months (range 18–20 points). The relative improvement in OKS was lower in overweight and obese patients than in those with “normal” BMI, but the difference was below the minimal detectable change (MDC; 4 points). The main limitations were missing BMI data, particularly in the early years of data collection, and potential selection bias from surgeons selecting the fitter patients with raised BMI for surgery.
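
For context, a 10-year cumulative revision estimate of the kind benchmarked above can be sketched as the Kaplan-Meier complement per BMI class. The DataFrame columns here are hypothetical, and the registry's actual estimator may differ (for instance, in handling death as a competing risk).

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def ten_year_revision(df: pd.DataFrame) -> pd.Series:
    """Kaplan-Meier complement 1 - S(10) per BMI class.

    Note: this treats death as censoring; a competing-risks estimator
    (e.g., Aalen-Johansen) would give slightly lower estimates."""
    out = {}
    for bmi_class, grp in df.groupby("bmi_class"):
        kmf = KaplanMeierFitter()
        kmf.fit(grp["years_to_revision"], event_observed=grp["revised"])
        out[bmi_class] = 1 - kmf.survival_function_at_times(10).iloc[0]
    return pd.Series(out, name="cumulative_revision_10y")
```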

Conclusions

Given revision estimates below the recognised threshold in all BMI groups, no evidence of increased mortality, and a difference in change in OKS below the MDC, this large national registry study shows no evidence of poorer outcomes in patients with high BMI. This study does not support rationing of TKR on the basis of increased BMI.

Estimating the effect of moving meat-free products to the meat aisle on sales of meat and meat-free products: A non-randomised controlled intervention study in a large UK supermarket chain

Thu, 15/07/2021 - 16:00

by Carmen Piernas, Brian Cook, Richard Stevens, Cristina Stewart, Jennifer Hollowell, Peter Scarborough, Susan A. Jebb

Background

Reducing meat consumption could bring health and environmental benefits, but there is little research to date on effective interventions to achieve this. A non-randomised controlled intervention study was used to evaluate whether prominent positioning of meat-free products in the meat aisle was associated with a change in weekly mean sales of meat and meat-free products.

Methods and findings

Weekly sales data were obtained from 108 stores: 20 intervention stores that moved a selection of 26 meat-free products into a newly created meat-free bay within the meat aisle and 88 matched control stores. The primary outcome analysis used a hierarchical negative binomial model to compare changes in weekly sales (units) of meat products sold in intervention versus control stores during the main intervention period (Phase I: February 2019 to April 2019). Interrupted time series analysis was also used to evaluate the effects of the Phase I intervention. Moreover, 8 of the 20 stores enhanced the intervention from August 2019 onwards (Phase II intervention) by adding a second bay of meat-free products into the meat aisle, which was evaluated following the same analytical methods. During the Phase I intervention, sales of meat products (units/store/week) decreased in intervention (approximately −6%) and control stores (−5%) without significant differences (incidence rate ratio [IRR] 1.01, 95% CI 0.95–1.07). Sales of meat-free products increased significantly more in the intervention stores (+31%) compared to the control stores (+6%; IRR 1.43, 95% CI 1.30–1.57), mostly due to increased sales of meat-free burgers, mince, and sausages. Consistent results were observed in interrupted time series analyses, where the effect of the Phase II intervention was significant in intervention versus control stores.
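
The count-model contrast described above can be sketched as a negative binomial GLM with an intervention-by-period interaction whose exponentiated coefficient is the incidence rate ratio (IRR). This simplifies the paper's hierarchical model to store fixed effects; the data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def sales_irr(sales: pd.DataFrame) -> float:
    """IRR for intervention stores in the intervention period."""
    fit = smf.glm(
        "units ~ intervention * post + C(store_id)",  # store fixed effects
        data=sales, family=sm.families.NegativeBinomial(),
    ).fit()
    return float(np.exp(fit.params["intervention:post"]))
```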

Conclusions

Prominent positioning of meat-free products in the meat aisle of a supermarket was not effective in reducing sales of meat products, but it successfully increased sales of meat-free alternatives in the longer term. A preregistered protocol (https://osf.io/qmz3a/) was completed and fully available before data analysis.

Mortality and concurrent use of opioids and hypnotics in older patients: A retrospective cohort study

Thu, 15/07/2021 - 16:00

by Wayne A. Ray, Cecilia P. Chung, Katherine T. Murray, Beth A. Malow, James R. Daugherty, C. Michael Stein

Background

Benzodiazepine hypnotics and the related nonbenzodiazepine hypnotics (z-drugs) are among the most frequently prescribed medications for older adults. Both can depress respiration, which could have fatal cardiorespiratory effects, particularly among patients with concurrent opioid use. Trazodone, frequently prescribed in low doses for insomnia, has minimal respiratory effects and, consequently, may be a safer hypnotic for older patients. Thus, for patients beginning treatment with benzodiazepine hypnotics or z-drugs, we compared deaths during periods of current hypnotic use, with or without concurrent opioids, to those for comparable patients receiving trazodone in doses up to 100 mg.

Methods and findings

This retrospective cohort study in the United States included 400,924 Medicare beneficiaries 65 years of age or older without severe illness or evidence of substance use disorder initiating study hypnotic therapy from January 2014 through September 2015. Study endpoints were out-of-hospital (primary) and total mortality. Hazard ratios (HRs) were adjusted for demographic characteristics, psychiatric and neurologic disorders, cardiovascular and renal conditions, respiratory diseases, pain-related diagnoses and medications, measures of frailty, and medical care utilization in a time-dependent propensity score–stratified analysis. Patients without concurrent opioids had 32,388 person-years of current use, 260 (8.0/1,000 person-years) out-of-hospital and 418 (12.9/1,000) total deaths for benzodiazepines; 26,497 person-years, 150 (5.7/1,000) out-of-hospital and 227 (8.6/1,000) total deaths for z-drugs; and 16,177 person-years, 156 (9.6/1,000) out-of-hospital and 256 (15.8/1,000) total deaths for trazodone. Out-of-hospital and total mortality for benzodiazepines (respective HRs: 0.99 [95% confidence interval, 0.81 to 1.22, p = 0.954] and 0.95 [0.82 to 1.14, p = 0.513]) and z-drugs (HRs: 0.96 [0.76 to 1.23], p = 0.767 and 0.87 [0.72 to 1.05], p = 0.153) did not differ significantly from that for trazodone. Patients with concurrent opioids had 4,278 person-years of current use, 90 (21.0/1,000) out-of-hospital and 127 (29.7/1,000) total deaths for benzodiazepines; 3,541 person-years, 40 (11.3/1,000) out-of-hospital and 64 (18.1/1,000) total deaths for z-drugs; and 2,347 person-years, 19 (8.1/1,000) out-of-hospital and 36 (15.3/1,000) total deaths for trazodone. Out-of-hospital and total mortality for benzodiazepines (HRs: 3.02 [1.83 to 4.97], p < 0.001 and 2.21 [1.52 to 3.20], p < 0.001) and z-drugs (HRs: 1.98 [1.14 to 3.44], p = 0.015 and 1.65 [1.09 to 2.49], p = 0.018) were significantly increased relative to trazodone; findings were similar with exclusion of overdose deaths or restriction to deaths with cardiovascular causes. Limitations included the composition of the study cohort and potential confounding by unmeasured variables.

Conclusions

In US Medicare beneficiaries 65 years of age or older without concurrent opioids who initiated treatment with benzodiazepine hypnotics, z-drugs, or low-dose trazodone, study hypnotics were not associated with mortality. With concurrent opioids, benzodiazepines and z-drugs were associated with increased out-of-hospital and total mortality. These findings indicate that the dangers of benzodiazepine–opioid coadministration go beyond the documented association with overdose death and suggest that in combination with opioids, the z-drugs may be more hazardous than previously thought.

Evaluating the impact of the nationwide public–private mix (PPM) program for tuberculosis under National Health Insurance in South Korea: A difference in differences analysis

Wed, 14/07/2021 - 16:00

by Sarah Yu, Hojoon Sohn, Hae-Young Kim, Hyunwoo Kim, Kyung-Hyun Oh, Hee-Jin Kim, Haejoo Chung, Hongjo Choi

Background

Public–private mix (PPM) programs on tuberculosis (TB) have a critical role in engaging and integrating the private sector into national TB control efforts in order to meet the End TB Strategy targets. South Korea's PPM program can provide important insights into the long-term impact and policy gaps in the development and expansion of PPM as a nationwide program.

Methods and findings

Healthcare is privatized in South Korea, and a majority (80.3% in 2009) of TB patients sought care in the private sector. Since 2009, South Korea has rapidly expanded its PPM program coverage under the National Health Insurance (NHI) scheme as a formal national program, with dedicated PPM nurses managing TB patients in both the private and public sectors. Using the difference in differences (DID) analytic framework, we compared relative changes in TB treatment outcomes—treatment success (TS) and loss to follow-up (LTFU)—in the private and public sectors between the 2009 and 2014 TB patient cohorts. Propensity score matching (PSM) using the kernel method was done to adjust for imbalances in the covariates between the 2 population cohorts. The 2009 cohort included 6,195 (63.0% male, 37.0% female; mean age: 42.1) and 27,396 (56.1% male, 43.9% female; mean age: 45.7) TB patients in the public and private sectors, respectively. The 2014 cohort included 2,803 (63.2% male, 36.8% female; mean age: 50.1) and 29,988 (56.5% male, 43.5% female; mean age: 54.7) patients. In both the private and public sectors, the proportion of patients with transfer history decreased (public: 23.8% to 21.7%; private: 20.8% to 17.6%), and bacteriologically confirmed disease increased (public: 48.9% to 62.3%; private: 48.8% to 58.1%) in 2014 compared to 2009. After nationwide expansion of PPM, absolute TS rates improved by 9.10% (87.5% to 93.4%) and by 13.6% (from 70.3% to 83.9%) in the public and private sectors, respectively. Relative to the public sector, the private sector showed a 4.1% (95% confidence interval [CI] 2.9% to 5.3%, p < 0.001) greater improvement in TS and an 8.7% (95% CI 7.7% to 9.7%, p < 0.001) greater reduction in LTFU. Treatment outcomes did not improve in patients who experienced at least 1 transfer during their TB treatment. Study limitations include the non-longitudinal nature of the original dataset and the inability to assess regional disparities or to verify the PPM program's impact on TB mortality.
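
The DID contrast here reduces to an interaction term between sector and period. A weighted least squares sketch is below; the variable names are hypothetical, and the paper's kernel propensity-score weighting is represented only as a per-observation weight column.

```python
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(tb: pd.DataFrame) -> float:
    """Weighted DID: sector x period interaction on treatment success."""
    fit = smf.wls(
        "treatment_success ~ private_sector * year_2014",
        data=tb, weights=tb["kernel_psm_weight"],  # all 1.0 if unweighted
    ).fit(cov_type="HC1")
    return float(fit.params["private_sector:year_2014"])
```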

Conclusions

We found that the nationwide scale-up of the PPM program was associated with improvements in TB treatment outcomes in the private sector in South Korea. Centralized financial governance and regulatory mechanisms were integral in facilitating the integration of the highly diverse South Korean private sector into the national TB control program and in scaling up the PPM intervention nationwide. However, TB care gaps continued to exist for patients who transferred at least once during their treatment. These programmatic gaps may be improved by reducing administrative hurdles and making programmatic amendments that facilitate the management of TB patients across institutions and healthcare sectors, as well as across administrative regions.

Preventing microalbuminuria with benazepril, valsartan, and benazepril–valsartan combination therapy in diabetic patients with high-normal albuminuria: A prospective, randomized, open-label, blinded endpoint (PROBE) study

Wed, 14/07/2021 - 16:00

by Piero Ruggenenti, Monica Cortinovis, Aneliya Parvanova, Matias Trillini, Ilian P. Iliev, Antonio C. Bossi, Antonio Belviso, Maria C. Aparicio, Roberto Trevisan, Stefano Rota, Annalisa Perna, Tobia Peracchi, Nadia Rubis, Davide Martinetti, Silvia Prandini, Flavio Gaspari, Fabiola Carrara, Salvatore De Cosmo, Giancarlo Tonolo, Ruggero Mangili, Giuseppe Remuzzi, on behalf of the VARIETY Study Organization

Background

Angiotensin converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs) prevent microalbuminuria in normoalbuminuric type 2 diabetic patients. We assessed whether combined therapy with the 2 medications may prevent microalbuminuria better than ACE inhibitor or ARB monotherapy.

Methods and findings

VARIETY was a prospective, randomized, open-label, blinded endpoint (PROBE) trial evaluating whether, at similar blood pressure (BP) control, combined therapy with benazepril (10 mg/day) and valsartan (160 mg/day) would prevent microalbuminuria more effectively than benazepril (20 mg/day) or valsartan (320 mg/day) monotherapy in 612 type 2 diabetic patients with high-normal albuminuria, enrolled between July 2007 and April 2013 by the Istituto di Ricerche Farmacologiche Mario Negri IRCCS and 8 diabetology or nephrology units in Italy. Time to progression to microalbuminuria was the primary outcome. Analyses were by intention to treat. Baseline characteristics were similar among groups. During a median [interquartile range, IQR] follow-up of 66 [42 to 83] months, 53 patients (27.0%) on combination therapy, 57 (28.1%) on benazepril, and 64 (31.8%) on valsartan reached microalbuminuria. Using an accelerated failure time model, the estimated acceleration factors were 1.410 (95% CI: 0.806 to 2.467, P = 0.229) for benazepril compared to combination therapy, 0.799 (95% CI: 0.422 to 1.514, P = 0.492) for benazepril compared to valsartan, and 1.665 (95% CI: 1.007 to 2.746, P = 0.047) for valsartan compared to combination therapy. Between-group differences in estimated acceleration factors were nonsignificant after adjustment for predefined confounders. BP control was similar across groups. All treatments were safe and well tolerated, with a slight excess of hyperkalemia and hypotension in the combination therapy group. The main study limitation was the lower than expected albuminuria at inclusion.
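
An accelerated failure time analysis like the one above can be sketched with a Weibull AFT model, where the exponentiated coefficient is the acceleration factor (values above 1 stretch, i.e., delay, time to event). The trial DataFrame and column names below are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

def acceleration_factors(trial: pd.DataFrame) -> pd.Series:
    """Weibull AFT model; exp(coef) is the acceleration factor per arm."""
    aft = WeibullAFTFitter()
    aft.fit(trial[["months", "microalbuminuria", "benazepril", "valsartan"]],
            duration_col="months", event_col="microalbuminuria")
    return np.exp(aft.params_["lambda_"])   # factors > 1 delay the event
```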

Conclusions

The risk/benefit profile of the study treatments was similar. Dual renin–angiotensin system (RAS) blockade is not recommended over benazepril or valsartan monotherapy for the prevention of microalbuminuria in normoalbuminuric type 2 diabetic patients.

Trial registration

EudraCT 2006-005954-62; ClinicalTrials.gov NCT00503152.

Investigating associations between COVID-19 mortality and population-level health and socioeconomic indicators in the United States: A modeling study

Tue, 13/07/2021 - 16:00

by Sasikiran Kandula, Jeffrey Shaman

Background

With the availability of multiple Coronavirus Disease 2019 (COVID-19) vaccines and the predicted shortages in supply for the near future, it is necessary to allocate vaccines in a manner that minimizes severe outcomes, particularly deaths. To date, vaccination strategies in the United States have focused on individual characteristics such as age and occupation. Here, we assess the utility of population-level health and socioeconomic indicators as additional criteria for geographical allocation of vaccines.

Methods and findings

County-level estimates of 14 indicators associated with COVID-19 mortality were extracted from public data sources. Effect estimates of the individual indicators were calculated with univariate models. Presence of spatial autocorrelation was established using Moran's I statistic. Spatial simultaneous autoregressive (SAR) models that account for spatial autocorrelation in response and predictors were used to assess (i) the proportion of variance in county-level COVID-19 mortality that can be explained by the identified health/socioeconomic indicators (R2); and (ii) effect estimates of each predictor. Adjusting for case rates, the selected indicators individually explain 24%–29% of the variability in mortality. Prevalence of chronic kidney disease and the proportion of the population residing in nursing homes have the highest R2. Mortality is estimated to increase by 43 deaths per thousand residents (95% CI: 37–49; p < 0.001) with a 1% increase in the prevalence of chronic kidney disease and by 39 deaths per thousand (95% CI: 34–44; p < 0.001) with a 1% increase in the population living in nursing homes. SAR models using multiple health/socioeconomic indicators explain 43% of the variability in COVID-19 mortality in US counties, adjusting for case rates. R2 was not sensitive to the choice of SAR model form. Study limitations include the use of mortality rates that are not age standardized, a spatial adjacency matrix that does not capture human flows among counties, and insufficient accounting for interaction among predictors.
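
Moran's I, the spatial autocorrelation statistic cited above, can be computed directly from its definition for a county mortality vector y and a spatial weights matrix W (both hypothetical inputs here):

```python
import numpy as np

def morans_i(y: np.ndarray, W: np.ndarray) -> float:
    """I = (n / sum(W)) * (z' W z) / (z' z), with z = y - mean(y).
    Values near 0 suggest spatial randomness; positive values mean
    neighbouring counties have similar mortality."""
    z = np.asarray(y, dtype=float) - np.mean(y)
    return float(len(z) / W.sum() * (z @ W @ z) / (z @ z))
```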

Conclusions

Significant spatial autocorrelation exists in COVID-19 mortality in the US, and population health/socioeconomic indicators account for considerable variability in county-level mortality. In the context of vaccine rollout in the US and globally, national and subnational estimates of the burden of disease could inform optimal geographical allocation of vaccines.

Changes in the calorie and nutrient content of purchased fast food meals after calorie menu labeling: A natural experiment

Mon, 12/07/2021 - 16:00

by Joshua Petimar, Fang Zhang, Eric B. Rimm, Denise Simon, Lauren P. Cleveland, Steven L. Gortmaker, Sara N. Bleich, Michele Polacsek, Christina A. Roberto, Jason P. Block

Background

Calorie menu labeling is a policy that requires food establishments to post the calories on menu offerings to encourage healthy food choice. Calorie labeling has been implemented in the United States since May 2018 per the Affordable Care Act, but to the best of our knowledge, no studies have evaluated the relationship between calorie labeling and meal purchases since nationwide implementation of this policy. Our objective was to investigate the relationship between calorie labeling and the calorie and nutrient content of purchased meals after a fast food franchise began labeling in April 2017, prior to the required nationwide implementation, and after nationwide implementation of labeling in May 2018, when all large US chain restaurants were required to label their menus.

Methods and findings

We obtained weekly aggregated sales data from 104 restaurants that are part of a fast food franchise for 3 national chains in 3 US states: Louisiana, Mississippi, and Texas. The franchise provided all sales data from April 2015 until April 2019. The franchise labeled menus in April 2017, 1 year prior to the required nationwide implementation date of May 2018 set by the US Food and Drug Administration. We obtained nutrition information for items sold (calories, fat, carbohydrates, protein, saturated fat, sugar, dietary fiber, and sodium) from Menustat, a publicly available database with nutrition information for items offered at the top revenue-generating US restaurant chains. We used an interrupted time series analysis to estimate level and trend changes in mean weekly calorie and nutrient content per transaction after franchise and nationwide labeling. The analytic sample represented 331,776,445 items purchased across 67,112,342 transactions. Franchise labeling was associated with a level change of −54 calories/transaction (95% confidence interval [CI]: −67, −42, p < 0.0001) and a subsequent increase of 3.3 calories/transaction per 4-week period (95% CI: 2.5, 4.1, p < 0.0001). Nationwide implementation was associated with a level change of −82 calories/transaction (95% CI: −88, −76, p < 0.0001) and a subsequent decrease of 2.1 calories/transaction per 4-week period (95% CI: −2.9, −1.3, p < 0.0001). At the end of the study, the model-based predicted mean calories/transaction was 4.7% lower (change = −73 calories/transaction, 95% CI: −81, −65), and nutrients/transaction ranged from 1.8% lower (saturated fat) to 7.0% lower (sugar) than what we would expect had labeling not been implemented. The main limitations were potential residual time-varying confounding and the lack of individual-level transaction data.
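
The segmented (interrupted time series) regression described above boils down to level and trend terms for each labeling phase. A sketch with hypothetical variable names follows, using Newey-West standard errors to allow for autocorrelation in the weekly series.

```python
import pandas as pd
import statsmodels.formula.api as smf

def its_fit(weekly: pd.DataFrame):
    """Segmented regression with level + trend terms per labeling phase."""
    return smf.ols(
        "calories_per_txn ~ time"
        " + franchise_label + t_since_franchise"   # April 2017 level/trend change
        " + national_label + t_since_national",    # May 2018 level/trend change
        data=weekly,
    ).fit(cov_type="HAC", cov_kwds={"maxlags": 4}) # Newey-West SEs
```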

Conclusions

In this study, we observed that calorie labeling was associated with small decreases in mean calorie and nutrient content of fast food meals 2 years after franchise labeling and nearly 1 year after implementation of labeling nationwide. These changes imply that calorie labeling was associated with small improvements in purchased meal quality in US chain restaurants.

Psychological distress, resettlement stress, and lower school engagement among Arabic-speaking refugee parents in Sydney, Australia: A cross-sectional cohort study

Mon, 12/07/2021 - 16:00

by Jess R. Baker, Derrick Silove, Deserae Horswood, Afaf Al-Shammari, Mohammed Mohsin, Susan Rees, Valsamma Eapen

Background

Schools play a key role in supporting the well-being and resettlement of refugee children, and parental engagement with the school may be a critical factor in the process. Many resettlement countries have policies in place to support refugee parents’ engagement with their children’s school. However, the impact of these programs lacks systematic evaluation. This study first aimed to validate self-report measures of parental school engagement developed specifically for the refugee context, and second, to identify parent characteristics associated with school engagement, so as to help tailor support to families most in need.

Methods and findings

The report utilises 2016 baseline data from a cohort study of 233 Arabic-speaking parents (77% response rate) of 10- to 12-year-old schoolchildren from refugee backgrounds across 5 schools in Sydney, Australia. Most participants were born in Iraq (81%) or Syria (11%), and only 25% spoke English well to very well. Participants' mean age was 40 years, and 83% were female. Confirmatory factor analyses were run on provisional item sets identified from a literature review and a separate qualitative study. The findings informed the development of 4 self-report tools assessing parent engagement with the school and school community, school belonging, and quality of the relationship with the schools' bilingual cultural broker. Cronbach's alpha and Pearson correlations with an established Teacher–Home Communication subscale demonstrated adequate reliability (α = 0.67 to 0.80) and construct and convergent validity of the measures (p < 0.01), respectively. Parent characteristics were entered into respective least absolute shrinkage and selection operator (LASSO) regression analyses. The degree of parents' psychological distress (as measured by the Kessler 10 self-report instrument) and post-migration living difficulties (PMLDs) were each associated with lower school engagement and belonging, whereas less time lived in Australia, lower education levels, and unemployed status were associated with higher ratings of relationship quality with the schools' cultural broker. Study limitations include the cross-sectional design and the modest amount of variance (8% to 22%) accounted for by the regression models.
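
A LASSO step of the kind described above can be sketched with scikit-learn's cross-validated L1-penalised regression; the feature and outcome names below are hypothetical stand-ins for the study's measures.

```python
import pandas as pd
from sklearn.linear_model import LassoCV

def lasso_correlates(parents: pd.DataFrame, features: list[str],
                     outcome: str = "school_engagement") -> pd.Series:
    """L1-penalised regression; coefficients shrunk to zero drop out."""
    fit = LassoCV(cv=5).fit(parents[features], parents[outcome])
    return pd.Series(fit.coef_, index=features)
```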

Conclusions

The study offers preliminary refugee-specific measures of parental school engagement. It is expected they will provide a resource for evaluating efforts to support the integration of refugee families into schools. The findings support the need for initiatives that identify and support parents with school-attending children from refugee backgrounds who are experiencing psychological distress or resettlement stressors. At the school level, the findings suggest that cultural brokers may be effective in targeting newly arrived families.

Adipose tissue biomarkers and type 2 diabetes incidence in normoglycemic participants in the MESArthritis Ancillary Study: A cohort study

Fri, 09/07/2021 - 16:00

by Farhad Pishgar, Mahsima Shabani, Thiago Quinaglia A. C. Silva, David A. Bluemke, Matthew Budoff, R Graham Barr, Matthew A. Allison, Alain G. Bertoni, Wendy S. Post, João A. C. Lima, Shadpour Demehri

Background

Given the central role of skeletal muscles in glucose homeostasis, deposition of adipose depots beneath the fascia of muscles (versus subcutaneous adipose tissue [SAT]) may precede insulin resistance and type 2 diabetes (T2D) incidence. This study aimed to investigate the associations between computed tomography (CT)–derived biomarkers for adipose tissue and T2D incidence in normoglycemic adults.

Methods and findings

This study was a population-based multiethnic retrospective cohort of 1,744 participants in the Multi-Ethnic Study of Atherosclerosis (MESA) with normoglycemia (baseline fasting plasma glucose [FPG] less than 100 mg/dL) from 6 United States of America communities. Participants were followed from baseline (April 2010 to January 2012) to December 2017, for a median of 7 years. The intermuscular adipose tissue (IMAT) and SAT areas were measured in baseline chest CT exams and were corrected by height squared (SAT and IMAT indices) using a predefined measurement protocol. T2D incidence, the main outcome, was based on follow-up FPG, review of hospital records, or self-reported physician diagnoses. Participants' mean age was 69 ± 9 years at baseline, and 977 (56.0%) were women. Over a median of 7 years, 103 (5.9%) participants were diagnosed with T2D, and 147 (8.4%) participants died. The IMAT index (hazard ratio [HR]: 1.27 [95% confidence interval [CI]: 1.15–1.41] per 1-standard deviation [SD] increment) and the SAT index (HR: 1.43 [95% CI: 1.16–1.77] per 1-SD increment) at baseline were associated with T2D incidence over the follow-up. The associations of the IMAT and SAT indices with T2D incidence were attenuated after adjustment for body mass index (BMI) and waist circumference, with HRs of 1.23 (95% CI: 1.09–1.38) and 1.29 (95% CI: 0.96–1.74) per 1-SD increment, respectively. The limitations of this study include unmeasured residual confounders and the one-time measurement of adipose tissue biomarkers.

Conclusions

In this study, we observed an association between IMAT at baseline and T2D incidence over the follow-up. This study suggests the potential role of intermuscular adipose depots in the pathophysiology of T2D.

Trial registration

ClinicalTrials.gov NCT00005487

Prediction of risk of prolonged post-concussion symptoms: Derivation and validation of the TRICORDRR (Toronto Rehabilitation Institute Concussion Outcome Determination and Rehab Recommendations) score

Thu, 08/07/2021 - 16:00

by Laura Kathleen Langer, Seyed Mohammad Alavinia, David Wyndham Lawrence, Sarah Elizabeth Patricia Munce, Alice Kam, Alan Tam, Lesley Ruttan, Paul Comper, Mark Theodore Bayley

Background

Approximately 10% to 20% of people with concussion experience prolonged post-concussion symptoms (PPCS). There is limited information identifying risk factors for PPCS in adult populations. This study aimed to derive a risk score for PPCS by determining which demographic factors, premorbid health conditions, and healthcare utilization patterns are associated with the need for prolonged concussion care among a large cohort of adults with concussion.

Methods and findings

Data from a cohort study (Ontario Concussion Cohort study, 2008 to 2016; n = 1,330,336) including all adults with a concussion diagnosis by either a primary care physician (ICD-9 code 850) or in an emergency department (ICD-10 code S06) and 2 years of healthcare tracking postinjury (2008 to 2014, n = 587,057) were used in a retrospective analysis. Approximately 42.4% of the cohort was female, and adults between 18 and 30 years were the largest age group (31.0%). PPCS was defined as 2 or more specialist visits for concussion-related symptoms more than 6 months after the injury index date. Approximately 13% (73,122) of the cohort had PPCS. The total cohort was divided into a Derivation Cohort (2009 to 2013, n = 417,335) and a Validation Cohort (2009 and 2014, n = 169,722) based upon injury index year. Variables selected a priori, such as psychiatric disorders, migraines, sleep disorders, demographic factors, and pre-injury healthcare patterns, were entered into multivariable logistic regression and CART models in the Derivation Cohort to calculate PPCS risk estimates, and into a forward selection logistic regression model in the Validation Cohort. Variables with the highest probability of PPCS in the Derivation Cohort were: age >61 years (p̂ = 0.54), bipolar disorder (p̂ = 0.52), high pre-injury primary care visits per year (p̂ = 0.46), personality disorders (p̂ = 0.45), and anxiety and depression (p̂ = 0.33). The area under the curve (AUC) was 0.79 for the derivation model, 0.79 for bootstrap internal validation of the Derivation Cohort, and 0.64 for the Validation model. A limitation of this study was that healthcare usage could be tracked only for healthcare providers that bill the Ontario Health Insurance Plan (OHIP); thus, some patients seeking treatment for prolonged symptoms may not be captured in this analysis.
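
The two modelling steps described above, CART to surface high-risk subgroups and logistic regression scored by AUC, can be sketched as follows; the predictor list and cohort DataFrames are hypothetical stand-ins for the administrative data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier, export_text

def derive_and_validate(dev: pd.DataFrame, val: pd.DataFrame,
                        predictors: list[str], outcome: str = "ppcs") -> float:
    """CART for subgroup risk rules plus a logistic model scored by AUC."""
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=500)
    tree.fit(dev[predictors], dev[outcome])
    print(export_text(tree, feature_names=predictors))  # printable risk strata
    logit = LogisticRegression(max_iter=1000).fit(dev[predictors], dev[outcome])
    return roc_auc_score(val[outcome],
                         logit.predict_proba(val[predictors])[:, 1])
```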

Conclusions

In this study, we observed that premorbid psychiatric conditions, pre-injury health system usage, and older age were associated with increased risk of a prolonged recovery from concussion. This risk score allows clinicians to calculate an individual’s risk of requiring treatment more than 6 months post-concussion.

Evaluating the use of the QUiPP app and its impact on the management of threatened preterm labour: A cluster randomised trial

Tue, 06/07/2021 - 16:00

by Helena A. Watson, Naomi Carlisle, Paul T. Seed, Jenny Carter, Katy Kuhrt, Rachel M. Tribe, Andrew H. Shennan

Background

Preterm delivery (before 37 weeks of gestation) is the single most important contributor to neonatal death and morbidity, with lifelong repercussions. However, the majority of women who present with preterm labour (PTL) symptoms do not deliver imminently. Accurate prediction of PTL is needed in order to ensure correct management of those most at risk of preterm birth (PTB) and to prevent the maternal and fetal risks incurred by unnecessary interventions given to the majority. The QUantitative Innovation in Predicting Preterm birth (QUiPP) app aims to support clinical decision-making about women in threatened preterm labour (TPTL) by combining quantitative fetal fibronectin (qfFN) values, cervical length (CL), and significant PTB risk factors to create an individualised percentage risk of delivery.

Methods and findings

EQUIPTT was a multi-centre cluster randomised controlled trial (RCT) involving 13 maternity units in South and Eastern England (United Kingdom) between March 2018 and February 2019. Pregnant women (n = 1,872) between 23+0 and 34+6 weeks' gestation with symptoms of PTL in the analysis period were assigned to either the intervention (n = 762) or control (n = 1,111) arm. The mean age of the study population was 30.2 years (SD 5.93). A total of 56.0% were white, 19.6% were black, 14.2% were Asian, and 10.2% were of other ethnicities. The intervention was the use of the QUiPP app, with admission, antenatal corticosteroids (ACSs), and transfer advised for women with a QUiPP risk of delivery >5% within 7 days. Control sites continued with their conventional management of TPTL. Unnecessary management for TPTL was a composite primary outcome defined by the sum of unnecessary admission decisions (admitted and delivery interval >7 days, or not admitted and delivery interval ≤7 days) and the number of unnecessary in utero transfer (IUT) decisions/actions (IUTs that occurred or were attempted >7 days prior to delivery) and ex utero transfers (EUTs) that should have been in utero (attempted and not attempted). Unnecessary management of TPTL was 11.3% (84/741) at the intervention sites versus 11.5% (126/1,094) at control sites (odds ratio [OR] 0.97, 95% confidence interval [CI] 0.66–1.42, p = 0.883). Control sites frequently used qfFN and did not follow UK national guidance, which recommends routine treatment below 30 weeks without testing. Unnecessary management largely consisted of unnecessary admissions, which were similar at intervention and control sites (10.7% versus 10.8% of all visits). In terms of adverse outcomes for women in TPTL <36 weeks, 4 women from the intervention sites and 12 from the control sites did not receive recommended management. Had the QUiPP percentage risk been used as per protocol, unnecessary management would have been 7.4% (43/578) versus 9.9% (134/1,351) (OR 0.72, 95% CI 0.45–1.16). Our external validation of the QUiPP app confirmed that it was highly predictive of delivery within 7 days; the area under the receiver operating characteristic curve was 0.90 (95% CI 0.85–0.95) for symptomatic women. Study limitations included a lack of compliance with national guidance at the control sites and difficulties in implementation of the QUiPP app.

Conclusions

This cluster randomised trial did not demonstrate that use of the QUiPP app reduced unnecessary management of TPTL compared to current management, but it suggests the app would safely improve on the management recommended by the National Institute for Health and Care Excellence (NICE). Interpretation of qfFN, with or without the QUiPP app, is a safe and accurate method for identifying women most likely to benefit from PTL interventions.

Trial registration

ISRCTN Registry ISRCTN17846337.

Development and validation of a risk prediction model of preterm birth for women with preterm labour symptoms (the QUIDS study): A prospective cohort study and individual participant data meta-analysis

Tue, 06/07/2021 - 16:00

by Sarah J. Stock, Margaret Horne, Merel Bruijn, Helen White, Kathleen A. Boyd, Robert Heggie, Lisa Wotherspoon, Lorna Aucott, Rachel K. Morris, Jon Dorling, Lesley Jackson, Manju Chandiramani, Anna L. David, Asma Khalil, Andrew Shennan, Gert-Jan van Baaren, Victoria Hodgetts-Morton, Tina Lavender, Ewoud Schuit, Susan Harper-Clarke, Ben W. Mol, Richard D. Riley, Jane E. Norman, John Norrie

Background

Timely interventions in women presenting with preterm labour can substantially improve health outcomes for preterm babies. However, establishing such a diagnosis is very challenging, as signs and symptoms of preterm labour are common and can be nonspecific. We aimed to develop and externally validate a risk prediction model using the concentration of vaginal fluid fetal fibronectin (quantitative fFN), in combination with clinical risk factors, for the prediction of spontaneous preterm birth, and to assess its cost-effectiveness.

Methods and findings

Pregnant women included in the analyses were at 22+0 to 34+6 weeks' gestation with signs and symptoms of preterm labour. The primary outcome was spontaneous preterm birth within 7 days of the quantitative fFN test. The risk prediction model was developed and internally validated in an individual participant data (IPD) meta-analysis of 5 European prospective cohort studies (2009 to 2016; 1,783 women; mean age 29.7 years; median BMI 24.8 kg/m2; 67.6% White; 11.7% smokers; 51.8% nulliparous; 10.4% with multiple pregnancy; 139 [7.8%] with spontaneous preterm birth within 7 days). The model was then externally validated in a prospective cohort study in 26 United Kingdom centres (2016 to 2018; 2,924 women; mean age 28.2 years; median BMI 25.4 kg/m2; 88.2% White; 21% smokers; 35.2% nulliparous; 3.5% with multiple pregnancy; 85 [2.9%] with spontaneous preterm birth within 7 days). The developed risk prediction model for spontaneous preterm birth within 7 days included quantitative fFN, current smoking, not White ethnicity, nulliparity, and multiple pregnancy. After internal validation, the optimism-adjusted area under the curve was 0.89 (95% CI 0.86 to 0.92), and the optimism-adjusted Nagelkerke R2 was 35% (95% CI 33% to 37%). On external validation in the prospective UK cohort population, the area under the curve was 0.89 (95% CI 0.84 to 0.94), and the Nagelkerke R2 was 36% (95% CI 34% to 38%). Recalibration of the model's intercept was required to ensure overall calibration-in-the-large. A calibration curve suggested close agreement between predicted and observed risks in the range of predictions 0% to 10%, but some miscalibration (underprediction) at higher risks (slope 1.24, 95% CI 1.23 to 1.26). Despite any miscalibration, the net benefit of the model was higher than “treat all” or “treat none” strategies for thresholds up to about 15% risk. The economic analysis found the prognostic model was cost-effective, compared to using qualitative fFN, at a threshold for hospital admission and treatment of ≥2% risk of preterm birth within 7 days. Study limitations include the limited number of participants who were not White and levels of missing data for certain variables in the development dataset.
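
The net benefit comparison against "treat all" and "treat none" follows the standard decision-curve definition, NB(t) = TP/n − FP/n × t/(1 − t) at risk threshold t; a direct implementation of that formula is below (inputs hypothetical).

```python
import numpy as np

def net_benefit(y: np.ndarray, risk: np.ndarray, t: float) -> float:
    """Net benefit of treating everyone whose predicted risk is >= t."""
    y = np.asarray(y)
    treat = np.asarray(risk) >= t
    n = len(y)
    tp = np.sum(treat & (y == 1))         # events correctly treated
    fp = np.sum(treat & (y == 0))         # non-events treated unnecessarily
    return tp / n - fp / n * t / (1 - t)

# "Treat all" is net_benefit(y, np.ones(len(y)), t); "treat none" is 0.
# The model adds value at thresholds where it beats both strategies.
```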

Conclusions

In this study, we found that a risk prediction model including vaginal fFN concentration and clinical risk factors showed promising performance in the prediction of spontaneous preterm birth within 7 days of the test and has potential to inform management decisions for women with threatened preterm labour. Further evaluation in clinical practice is required to determine whether the model improves clinical outcomes when used in practice.

Trial registration

The study was approved by the West of Scotland Research Ethics Committee (16/WS/0068). The study was registered with ISRCTN Registry (ISRCTN 41598423) and NIHR Portfolio (CPMS: 31277).

Conflict violence reduction and pregnancy outcomes: A regression discontinuity design in Colombia

Tue, 06/07/2021 - 16:00

by Giancarlo Buitrago, Rodrigo Moreno-Serra

Background

The relationship between exposure to conflict violence during pregnancy and the risks of miscarriage, stillbirth, and perinatal mortality has not been studied empirically using rigorous methods and appropriate data. We investigated the association between reduced exposure to conflict violence during pregnancy and the risks of adverse pregnancy outcomes in Colombia.

Methods and findings

We adopted a regression discontinuity (RD) design using the July 20, 2015 cease-fire declared during the Colombian peace process as an exogenous discontinuous change in exposure to conflict events during pregnancy, comparing women with conception dates before and after the cease-fire date. We constructed the cohorts of all pregnant women in Colombia for each day between January 1, 2013 and December 31, 2017 using birth and death certificates. A total of 3,254,696 women were followed until the end of pregnancy. We measured conflict exposure as the total number of conflict events that occurred in the municipality where a pregnant woman lived during her pregnancy. We first assessed whether the cease-fire induced a discontinuous fall in conflict exposure for women with conception dates after the cease-fire, and then estimated the association of this reduced exposure with the risks of miscarriage, stillbirth, and perinatal mortality. We found that the July 20, 2015 cease-fire was associated with a reduction in the average number of conflict events (from 2.64 to 2.40) to which women were exposed during pregnancy in their municipalities of residence (mean difference −0.24; 95% confidence interval [CI] −0.35 to −0.13; p < 0.001). This association was greater in municipalities where the Fuerzas Armadas Revolucionarias de Colombia (FARC) had a greater presence historically. The reduction in average exposure to conflict violence was, in turn, associated with a decrease of 9.53 stillbirths per 1,000 pregnancies (95% CI −16.13 to −2.93; p = 0.005) for municipalities with a total number of FARC-related violent events above the 90th percentile of the distribution of FARC-related conflict events, and a decrease of 7.57 stillbirths per 1,000 pregnancies (95% CI −13.14 to −2.00; p = 0.01) for municipalities above the 75th percentile of FARC-related events. For perinatal mortality, we found associated reductions of 10.69 (95% CI −18.32 to −3.05; p = 0.01) and 6.86 (95% CI −13.24 to −0.48; p = 0.04) deaths per 1,000 pregnancies for the 2 types of municipalities, respectively. We found no association with miscarriages. Formal tests support the validity of the key RD assumptions in our data, while a battery of sensitivity analyses and falsification tests confirms the robustness of our empirical results. The main limitations of the study are the retrospective nature of the information sources and the potential for misclassification of conflict exposure.
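
A local linear regression discontinuity estimate of the kind described above compares outcomes just either side of the cutoff within a bandwidth. The sketch below uses hypothetical column names and an arbitrary illustrative bandwidth, not the study's specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def rd_estimate(births: pd.DataFrame, bandwidth: float = 120.0) -> float:
    """Local linear RD: jump in outcome at the cease-fire conception date."""
    local = births[births["days_from_cutoff"].abs() <= bandwidth].copy()
    local["post"] = (local["days_from_cutoff"] >= 0).astype(int)
    fit = smf.ols("stillbirth ~ post * days_from_cutoff",  # separate slopes
                  data=local).fit(cov_type="HC1")
    return float(fit.params["post"])      # discontinuity at the cutoff
```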

Conclusions

Our study offers evidence that reduced exposure to conflict violence during pregnancy is associated with important (previously unmeasured) benefits in terms of reducing the risk of stillbirth and perinatal death. The findings are consistent with such beneficial associations manifesting themselves mainly through reduced violence exposure during the early stages of pregnancy. Beyond the relevance of this evidence for other countries beset by chronic armed conflicts, our results suggest that the fledgling Colombian peace process may already be contributing to better population health.

Detection of significant antiviral drug effects on COVID-19 with reasonable sample sizes in randomized controlled trials: A modeling study

Tue, 06/07/2021 - 16:00

by Shoya Iwanami, Keisuke Ejima, Kwang Su Kim, Koji Noshita, Yasuhisa Fujita, Taiga Miyazaki, Shigeru Kohno, Yoshitsugu Miyazaki, Shimpei Morimoto, Shinji Nakaoka, Yoshiki Koizumi, Yusuke Asai, Kazuyuki Aihara, Koichi Watashi, Robin N. Thompson, Kenji Shibuya, Katsuhito Fujiu, Alan S. Perelson, Shingo Iwami, Takaji Wakita

Background

Development of an effective antiviral drug for Coronavirus Disease 2019 (COVID-19) is a global health priority. Although several candidate drugs have been identified through in vitro and in vivo models, consistent and compelling evidence from clinical studies is limited. The lack of evidence from clinical trials may stem in part from the imperfect design of the trials. We investigated how clinical trials for antivirals need to be designed, especially focusing on the sample size in randomized controlled trials.

Methods and findings

A modeling study was conducted to help understand the reasons behind inconsistent clinical trial findings and to design better clinical trials. We first analyzed longitudinal viral load data for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) without antiviral treatment by use of a within-host virus dynamics model. The fitted viral loads were categorized into 3 different groups by a clustering approach. Comparison of the estimated parameters showed that the 3 distinct groups were characterized by different virus decay rates (p-value < 0.001). The mean decay rates were 1.17 d−1 (95% CI: 1.06 to 1.27 d−1), 0.777 d−1 (0.716 to 0.838 d−1), and 0.450 d−1 (0.378 to 0.522 d−1) for the 3 groups, respectively. Such heterogeneity in virus dynamics could be a confounding variable if it is associated with treatment allocation in compassionate use programs (i.e., observational studies). Subsequently, we mimicked randomized controlled trials of antivirals by simulation. An antiviral effect causing a 95% to 99% reduction in viral replication was added to the model. To be realistic, we assumed that randomization and treatment are initiated with some time lag after symptom onset. Using the duration of virus shedding as an outcome, the sample size needed to detect a statistically significant mean difference between the treatment and placebo groups (1:1 allocation) was 13,603 and 11,670 per group (when the antiviral effect was 95% and 99%, respectively) if all patients are enrolled regardless of timing of randomization. The sample size was reduced to 584 and 458 (when the antiviral effect was 95% and 99%, respectively) if only patients who are treated within 1 day of symptom onset are enrolled. We confirmed that the sample size was similarly reduced when using cumulative viral load on the log scale as an outcome. We used a conventional virus dynamics model, which may not fully reflect the detailed mechanisms of the viral dynamics of SARS-CoV-2. The model needs to be calibrated in terms of both parameter settings and model structure, which would yield more reliable sample size calculations.
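
The "conventional virus dynamics model" referenced here is typically a target-cell-limited ODE system; a runnable sketch is below, with an antiviral effect eps that blocks a fraction of virus production. All parameter values and the detection limit are illustrative assumptions, not the fitted estimates from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def shedding_duration(eps: float, limit: float = 1.0) -> float:
    """Days until viral load V falls below `limit` when a drug blocks a
    fraction `eps` of virus production (illustrative parameters only)."""
    def rhs(t, y, beta=1e-7, delta=0.6, p=10.0, c=5.0):
        T, I, V = y                       # target cells, infected cells, virus
        return [-beta * T * V,
                beta * T * V - delta * I,
                (1 - eps) * p * I - c * V]
    sol = solve_ivp(rhs, (0, 60), [1e7, 1.0, 10.0], dense_output=True)
    t = np.linspace(0.0, 60.0, 1200)
    below = np.nonzero(sol.sol(t)[2] < limit)[0]
    return float(t[below[0]]) if below.size else float("inf")

print(shedding_duration(0.0), shedding_duration(0.95))  # untreated vs 95% block
```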

Conclusions

In this study, we found that estimated associations in observational studies can be biased by large heterogeneity in viral dynamics among infected individuals, and that statistically significant effects in randomized controlled trials may be difficult to detect when sample sizes are small. The required sample size can be dramatically reduced by recruiting patients immediately after they develop symptoms. We believe this is the first study to investigate the design of clinical trials for antiviral treatment using a viral dynamics model.

SARS-CoV-2 neutralizing antibodies: Longevity, breadth, and evasion by emerging viral variants

Tue, 06/07/2021 - 16:00

by Fiona Tea, Alberto Ospina Stella, Anupriya Aggarwal, David Ross Darley, Deepti Pilli, Daniele Vitale, Vera Merheb, Fiona X. Z. Lee, Philip Cunningham, Gregory J. Walker, Christina Fichter, David A. Brown, William D. Rawlinson, Sonia R. Isaacs, Vennila Mathivanan, Markus Hoffmann, Stefan Pöhlman, Ohan Mazigi, Daniel Christ, Dominic E. Dwyer, Rebecca J. Rockett, Vitali Sintchenko, Veronica C. Hoad, David O. Irving, Gregory J. Dore, Iain B. Gosbell, Anthony D. Kelleher, Gail V. Matthews, Fabienne Brilot, Stuart G. Turville

The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) antibody neutralization response and its evasion by emerging viral variants and variants of concern (VOCs) are unknown, but critical to understanding reinfection risk and breakthrough infection following vaccination. Antibody immunoreactivity against SARS-CoV-2 antigens and Spike variants, inhibition of Spike-driven virus–cell fusion, and infectious SARS-CoV-2 neutralization were characterized in 807 serial samples from 233 individuals with reverse transcription polymerase chain reaction (RT-PCR)–confirmed Coronavirus Disease 2019 (COVID-19), with detailed demographics, followed for up to 7 months. A broad and sustained polyantigenic immunoreactivity against SARS-CoV-2 Spike, Membrane, and Nucleocapsid proteins, along with high viral neutralization, was associated with COVID-19 severity. A subgroup of “high responders” maintained high neutralizing responses over time, representing ideal convalescent plasma donors. Antibodies generated against SARS-CoV-2 during the first COVID-19 wave had reduced immunoreactivity and neutralization potency against emerging Spike variants and VOCs. Accurate monitoring of SARS-CoV-2 antibody responses would be essential for the selection of optimal responders and for vaccine monitoring and design.

Rotavirus vaccine efficacy up to 2 years of age and against diverse circulating rotavirus strains in Niger: Extended follow-up of a randomized controlled trial

Fri, 02/07/2021 - 16:00

by Sheila Isanaka, Céline Langendorf, Monica Malone McNeal, Nicole Meyer, Brian Plikaytis, Souna Garba, Nathan Sayinzoga-Makombe, Issaka Soumana, Ousmane Guindo, Rockyiath Makarimi, Marie Francoise Scherrer, Eric Adehossi, Iza Ciglenecki, Rebecca F. Grais

Background

Rotavirus vaccination is recommended in all countries to reduce the burden of diarrhea-related morbidity and mortality in children. In resource-limited settings, rotavirus vaccination in the national immunization program has important cost implications, and evidence for protection beyond the first year of life and against the evolving variety of rotavirus strains is important. We assessed the extended and strain-specific vaccine efficacy of a heat-stable, affordable oral rotavirus vaccine (Rotasiil, Serum Institute of India, Pune, India) against severe rotavirus gastroenteritis (SRVGE) among healthy infants in Niger.

Methods and findings

From August 2014 to November 2015, infants were randomized in a 1:1 ratio to receive 3 doses of Rotasiil or placebo at approximately 6, 10, and 14 weeks of age. Episodes of gastroenteritis were assessed through active and passive surveillance and graded using the Vesikari score. The primary endpoint was the vaccine efficacy of 3 doses versus placebo against a first episode of laboratory-confirmed SRVGE (Vesikari score ≥ 11) from 28 days after dose 3, as previously reported. At the time of the primary analysis, the median age was 9.8 months. In the present paper, analyses of extended efficacy were undertaken for 3 periods (28 days after dose 3 to 1 year of age, 1 to 2 years of age, and the combined period of 28 days after dose 3 to 2 years of age) and by individual rotavirus G type. Among the 3,508 infants included in the per-protocol efficacy analysis (mean age at first dose 6.5 weeks; 49% male), the vaccine provided significant protection against SRVGE through the first year of life (3.96 and 9.98 cases per 100 person-years for vaccine and placebo, respectively; vaccine efficacy 60.3%, 95% CI 43.6% to 72.1%) and over the entire efficacy follow-up period up to 2 years of age (2.13 and 4.69 cases per 100 person-years for vaccine and placebo, respectively; vaccine efficacy 54.7%, 95% CI 38.1% to 66.8%), but the difference was not statistically significant in the second year of life. Up to 2 years of age, rotavirus vaccination prevented 2.56 episodes of SRVGE per 100 child-years. Estimates of efficacy against SRVGE by individual rotavirus genotype were consistent with the overall protective efficacy. Study limitations include limited generalizability to settings with administration of oral polio vaccine, owing to low concomitant administration in this trial; limited power to assess vaccine efficacy in the second year of life owing to a low number of events among older children; potential bias due to censoring of placebo children at the time of study vaccine receipt; and a suboptimally adapted severity scoring system based on the Vesikari score, which was designed for use in settings with high parental literacy.
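
As a worked check of the quoted efficacy figures, vaccine efficacy is one minus the incidence rate ratio of the vaccine and placebo arms:

```python
# VE = 1 - (vaccine rate / placebo rate), rates in cases per 100 person-years.
for period, (rv, rp) in {"first year": (3.96, 9.98),
                         "up to age 2": (2.13, 4.69)}.items():
    print(f"{period}: VE = {1 - rv / rp:.1%}")
# -> 60.3% and 54.6% (the paper's 54.7% reflects unrounded underlying rates)
```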

Conclusions

Rotasiil provided protection against SRVGE in infants through an extended follow-up period of approximately 2 years. Protection was significant in the first year of life, when the disease burden and risk of death are highest, and against a changing pattern of rotavirus strains during the 2-year efficacy period. Rotavirus vaccines that are safe, effective, and protective against multiple strains represent the best hope for preventing the severe consequences of rotavirus infection, especially in resource-limited settings, where access to care may be limited. Studies such as this provide valuable information for the planning of national immunization programs and future vaccine development.

Trial registration

ClinicalTrials.gov NCT02145000.

Evaluation of a package of risk-based pharmaceutical and lifestyle interventions in patients with hypertension and/or diabetes in rural China: A pragmatic cluster randomised controlled trial

Thu, 01/07/2021 - 16:00

by Xiaolin Wei, Zhitong Zhang, Marc K. C. Chong, Joseph P. Hicks, Weiwei Gong, Guanyang Zou, Jieming Zhong, John D. Walley, Ross E. G. Upshur, Min Yu

Background

Primary prevention of cardiovascular disease (CVD) requires adequate control of hypertension and diabetes. We designed and implemented pharmaceutical and healthy lifestyle interventions for patients with diabetes and/or hypertension in rural primary care, and assessed their effectiveness at reducing severe CVD events.

Methods and findings

We used a pragmatic, parallel group, 2-arm, controlled, superiority, cluster trial design. We randomised 67 township hospitals in Zhejiang Province, China, to intervention (34) or control (33). A total of 31,326 participants were recruited, with 15,380 in the intervention arm and 15,946 in the control arm. Participants had no known CVD and were either patients with hypertension and a 10-year CVD risk of 20% or higher, or patients with type 2 diabetes regardless of their CVD risk. The intervention included prescription of a standardised package of medicines, individual advice on lifestyle change, and adherence support. Control was usual hypertension and diabetes care. In both arms, as usual in China, most outpatient drug costs were out of pocket. The primary outcome was severe CVD events, including coronary heart disease and stroke, during 36 months of follow-up, as recorded by the CVD surveillance system. The study was implemented between December 2013 and May 2017. A total of 13,385 (87%) and 14,745 (92%) participated in the intervention and control arms, respectively. Their mean age was 64 years, 51% were women, and 90% were farmers. Of all participants, 64% were diagnosed with hypertension with or without diabetes, and 36% were diagnosed with diabetes only. All township hospitals and participants completed the 36-month follow-up. At 36 months, there were 762 and 874 severe CVD events in the intervention and control arms, respectively, yielding a non-significant effect on CVD incidence rate (1.92 and 2.01 per 100 person-years, respectively; crude incidence rate ratio = 0.90 [95% CI: 0.74, 1.08; P = 0.259]). We observed significant, but small, differences in the change from baseline to follow-up for systolic blood pressure (−1.44 mm Hg [95% CI: −2.26, −0.62; P < 0.001]) and diastolic blood pressure (−1.29 mm Hg [95% CI: −1.77, −0.80; P < 0.001]) in the intervention arm compared to the control arm. Self-reported adherence to recommended medicines was significantly higher in the intervention arm compared with the control arm at 36 months. No safety concerns were identified. Main study limitations include all participants being informed about their high CVD risk at baseline, non-blinding of participants, and the relatively short follow-up period available for judging potential changes in rates of CVD events.

Conclusions

The comprehensive package of pharmaceutical and healthy lifestyle interventions did not reduce severe CVD events over 36 months. Improving health system factors such as universal coverage for the cost of essential medicines is required for successful risk-based CVD prevention programmes.

Trial registration

ISRCTN registry ISRCTN58988083.