PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal
Updated: 1 week 4 days ago

Awareness, treatment, and control of hypertension in adults aged 45 years and over and their spouses in India: A nationally representative cross-sectional study

Tue, 24/08/2021 - 16:00

by Sanjay K. Mohanty, Sarang P. Pedgaonkar, Ashish Kumar Upadhyay, Fabrice Kämpfen, Prashant Shekhar, Radhe Shyam Mishra, Jürgen Maurer, Owen O’Donnell

Background

Lack of nationwide evidence on awareness, treatment, and control (ATC) of hypertension among older adults in India has impeded targeted management of this condition. We aimed to estimate rates of hypertension ATC in the older population and to assess differences in these rates across sociodemographic groups and states in India.

Methods and findings

We used a nationally representative survey of individuals aged 45 years and over and their spouses in all Indian states (except one) in 2017 to 2018. We identified hypertension by blood pressure (BP) measurement ≥140/90 mm Hg or self-reported diagnosis if also taking medication or observing salt/diet restriction to control BP. We distinguished those who (i) reported diagnosis (“aware”); (ii) reported taking medication or being under salt/diet restriction to control BP (“treated”); and (iii) had measured systolic BP <140 and diastolic BP <90 (“controlled”). We estimated age–sex adjusted hypertension prevalence and rates of ATC by consumption quintile, education, age, sex, urban–rural, caste, religion, marital status, living arrangement, employment status, health insurance, and state. We used concentration indices to measure socioeconomic inequalities and multivariable logistic regression to estimate fully adjusted differences in these outcomes. Study limitations included reliance on BP measurement on a single occasion, missing measurements of BP for some participants, and lack of data on nonadherence to medication.

The 64,427 participants in the analysis sample had a median age of 57 years: 58% were female, and 70% were rural dwellers. We estimated hypertension prevalence to be 41.9% (95% CI 41.0 to 42.9). Among those with hypertension, we estimated that 54.4% (95% CI 53.1 to 55.7), 50.8% (95% CI 49.5 to 52.0), and 28.8% (95% CI 27.4 to 30.1) were aware, treated, and controlled, respectively. Across states, adjusted rates of ATC ranged from 27.5% (95% CI 22.2 to 32.8) to 75.9% (95% CI 70.8 to 81.1), from 23.8% (95% CI 17.6 to 30.1) to 74.9% (95% CI 69.8 to 79.9), and from 4.6% (95% CI 1.1 to 8.1) to 41.9% (95% CI 36.8 to 46.9), respectively. Age–sex adjusted rates were lower (p < 0.001) in poorer, less educated, and socially disadvantaged groups, as well as for males, rural residents, and the employed.
Among individuals with hypertension, the richest fifth were 8.5 percentage points (pp) (95% CI 5.3 to 11.7; p < 0.001), 8.9 pp (95% CI 5.7 to 12.0; p < 0.001), and 7.1 pp (95% CI 4.2 to 10.1; p < 0.001) more likely to be aware, treated, and controlled, respectively, than the poorest fifth.
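The socioeconomic gradients above were quantified with concentration indices. As a rough illustration (not the authors' code), the "convenient covariance" form of the index, CI = 2·cov(h, r)/mean(h), can be computed as follows, using hypothetical awareness data ranked from poorest to richest:

```python
# Illustrative sketch of a concentration index (not the study's code).
# CI = 2 * cov(h, r) / mean(h), where h is the health indicator (e.g., awareness)
# and r is each person's fractional rank in the living-standards distribution.
# A positive CI means the outcome is concentrated among the better-off.

def concentration_index(h, ranks):
    n = len(h)
    mean_h = sum(h) / n
    mean_r = sum(ranks) / n
    cov = sum((hi - mean_h) * (ri - mean_r) for hi, ri in zip(h, ranks)) / n
    return 2 * cov / mean_h

# Hypothetical example: awareness rises with consumption rank.
ranks = [(i + 0.5) / 10 for i in range(10)]   # fractional ranks, poorest to richest
aware = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]        # 1 = aware of own hypertension
ci = concentration_index(aware, ranks)
print(round(ci, 3))  # positive, i.e., pro-rich inequality
```

A value near zero would indicate no socioeconomic gradient; the study's positive indices correspond to awareness, treatment, and control being concentrated among the better-off.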

Conclusions

Hypertension prevalence was high, and rates of awareness, treatment, and control were low among older adults in India. Inequalities in these indicators point to opportunities to target hypertension management more effectively and equitably toward socially disadvantaged groups.

The prevalence of mental disorders among homeless people in high-income countries: An updated systematic review and meta-regression analysis

Mon, 23/08/2021 - 16:00

by Stefan Gutwinski, Stefanie Schreiter, Karl Deutscher, Seena Fazel

Background

Homelessness continues to be a pressing public health concern in many countries, and mental disorders in homeless persons contribute to their high rates of morbidity and mortality. Many primary studies have estimated prevalence rates for mental disorders in homeless individuals. We conducted a systematic review and meta-analysis of studies on the prevalence of any mental disorder and major psychiatric diagnoses in clearly defined homeless populations in any high-income country.

Methods and findings

We systematically searched for observational studies that estimated prevalence rates of mental disorders in samples of homeless individuals, using Medline, Embase, PsycInfo, and Google Scholar. We updated a previous systematic review and meta-analysis conducted in 2007, and searched until 1 April 2021. Studies were included if they sampled exclusively homeless persons, diagnosed mental disorders by standardized criteria using validated methods, provided point or up to 12-month prevalence rates, and were conducted in high-income countries. We identified 39 publications with a total of 8,049 participants. Study quality was assessed using the JBI critical appraisal tool for prevalence studies and a risk of bias tool. Random effects meta-analyses of prevalence rates were conducted, and heterogeneity was assessed by meta-regression analyses. The mean prevalence of any current mental disorder was estimated at 76.2% (95% CI 64.0% to 86.6%). The most common diagnostic categories were alcohol use disorders, at 36.7% (95% CI 27.7% to 46.2%), and drug use disorders, at 21.7% (95% CI 13.1% to 31.7%), followed by schizophrenia spectrum disorders (12.4% [95% CI 9.5% to 15.7%]) and major depression (12.6% [95% CI 8.0% to 18.2%]). We found substantial heterogeneity in prevalence rates between studies, which was partially explained by sampling method, study location, and the sex distribution of participants. Limitations included lack of information on certain subpopulations (e.g., women and immigrants) and unmet healthcare needs.
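For readers unfamiliar with random-effects pooling of prevalences, a minimal DerSimonian–Laird sketch on logit-transformed proportions is shown below. The data are hypothetical and the review's actual models were likely more elaborate:

```python
import math

# Simplified DerSimonian-Laird random-effects pooling of study prevalences
# (a sketch on hypothetical data, not the review's analysis).

def dl_pool(p_list, n_list):
    # Logit-transform each prevalence; approximate variance = 1/(n*p*(1-p)).
    y = [math.log(p / (1 - p)) for p in p_list]
    v = [1 / (n * p * (1 - p)) for p, n in zip(p_list, n_list)]
    w = [1 / vi for vi in v]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]         # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return 1 / (1 + math.exp(-y_re))               # back-transform to prevalence

# Hypothetical prevalences of any mental disorder from 4 studies
pooled = dl_pool([0.60, 0.75, 0.85, 0.70], [120, 200, 90, 150])
```

The between-study variance tau² is what allows the pooled estimate to reflect heterogeneity across samples rather than assuming one true prevalence.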

Conclusions

Public health and policy interventions to improve the health of homeless persons should consider the pattern and extent of psychiatric morbidity. Our findings suggest that the burden of psychiatric morbidity in homeless persons is substantial and should prompt regular reviews of how healthcare services assess, treat, and follow up homeless people. The high burden of substance use disorders and schizophrenia spectrum disorders needs particular attention in service development. This systematic review and meta-analysis has been registered with PROSPERO (CRD42018085216).

Trial registration

PROSPERO CRD42018085216.

Organized primary human papillomavirus–based cervical screening: A randomized healthcare policy trial

Mon, 23/08/2021 - 16:00

by K. Miriam Elfström, Carina Eklund, Helena Lamin, Daniel Öhman, Maria Hortlund, Kristina Elfgren, Karin Sundström, Joakim Dillner

Background

Clinical trials in the research setting have demonstrated that primary human papillomavirus (HPV)-based screening results in greater protection against cervical cancer compared with cytology, but evidence from real-life implementation has been missing. To evaluate the effectiveness of HPV-based cervical screening within a real-life screening program, the organized, population-based cervical screening program of the capital region of Sweden offered either HPV- or cytology-based screening through a randomized healthcare policy (RHP).

Methods and findings

A total of 395,725 women aged 30 to 64 years who were invited for their routine cervical screening visit were randomized without blinding to either cytology-based screening with HPV triage (n = 183,309) or HPV-based screening with cytology triage (n = 212,416) between September 1, 2014, and September 30, 2016, with follow-up through June 30, 2017. The main outcome was non-inferior detection rate of cervical intraepithelial neoplasia grade 2 or worse (CIN2+). Secondary outcomes included superiority in CIN2+ detection, screening attendance, and referral to histology. In total, 120,240 women in the HPV arm and 99,340 in the cytology arm had a cervical screening sample on record in the study period and were followed for the outcomes of interest. In per-protocol (PP) analyses, the detection rate of CIN2+ was 1.03% (95% confidence interval (CI) 0.98 to 1.10) in the HPV arm and 0.93% (0.87 to 0.99) in the cytology arm (p for non-inferiority <0.0001; odds ratio (OR) 1.11 (95% CI 1.02 to 1.22)). There were 46 cervical cancers detected in the HPV arm (0.04% (0.03 to 0.06)) and 48 detected in the cytology arm (0.05% (0.04 to 0.07)) (p for non-inferiority <0.0001; OR 0.79 (0.53 to 1.18)). Intention-to-screen (ITS) analyses found few differences. In the HPV arm, there was modestly increased attendance after new invitations (68.56% (68.31 to 68.80) vs. 67.71% (67.43 to 67.98); OR 1.02 (1.00 to 1.03)) and an increased rate of referral with completed biopsy (3.89% (3.79 to 4.00) vs. 3.53% (3.42 to 3.65); OR 1.10 (1.05 to 1.15)). The main limitations of this analysis are that only the baseline results are presented, and there was an imbalance in invitations between the study arms.
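The per-protocol odds ratio can be approximated from the reported detection rates and denominators. The sketch below reconstructs approximate event counts, so the Wald confidence interval will only roughly match the published, exactly computed one:

```python
import math

# Rough sketch: odds ratio for CIN2+ detection, HPV vs. cytology arm,
# reconstructed from reported rates (approximate counts, illustration only).

def odds_ratio_ci(a, n1, b, n2, z=1.96):
    # a events out of n1 in arm 1; b events out of n2 in arm 2
    or_ = (a / (n1 - a)) / (b / (n2 - b))
    se = math.sqrt(1 / a + 1 / (n1 - a) + 1 / b + 1 / (n2 - b))
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

a = round(0.0103 * 120240)   # approx. CIN2+ cases in the HPV arm
b = round(0.0093 * 99340)    # approx. CIN2+ cases in the cytology arm
or_, lo, hi = odds_ratio_ci(a, 120240, b, 99340)
print(round(or_, 2))  # ≈ 1.11, matching the reported PP odds ratio
```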

Conclusions

In this study, we observed that a real-life RHP of primary HPV-based screening was acceptable and effective when evaluated against cytology-based screening, as indicated by comparable participation, referral, and detection rates.

Trial registration

ClinicalTrials.gov NCT01511328

Global surgery, obstetric, and anaesthesia indicator definitions and reporting: An Utstein consensus report

Fri, 20/08/2021 - 16:00

by Justine I. Davies, Adrian W. Gelb, Julian Gore-Booth, Janet Martin, Jannicke Mellin-Olsen, Christina Åkerman, Emmanuel A. Ameh, Bruce M. Biccard, Geir Sverre Braut, Kathryn M. Chu, Miliard Derbew, Hege Langli Ersdal, Jose Miguel Guzman, Lars Hagander, Carolina Haylock-Loor, Hampus Holmer, Walter Johnson, Sabrina Juran, Nicolas J. Kassebaum, Tore Laerdal, Andrew J. M. Leather, Michael S. Lipnick, David Ljungman, Emmanuel M. Makasa, John G. Meara, Mark W. Newton, Doris Østergaard, Teri Reynolds, Lauri J. Romanzi, Vatshalan Santhirapala, Mark G. Shrime, Kjetil Søreide, Margit Steinholt, Emi Suzuki, John E. Varallo, Gerard H. A. Visser, David Watters, Thomas G. Weiser

Background

Indicators to evaluate progress towards timely access to safe surgical, anaesthesia, and obstetric (SAO) care were proposed in 2015 by the Lancet Commission on Global Surgery. These aimed to capture access to surgery, surgical workforce, surgical volume, perioperative mortality rate, and catastrophic and impoverishing financial consequences of surgery. Although the indicators were rapidly taken up by practitioners, the data points from which to derive them were not defined, limiting comparability across time or settings. We convened global experts to evaluate and explicitly define—for the first time—the indicators to improve comparability and support achievement of 2030 goals to improve access to safe, affordable surgical and anaesthesia care globally.

Methods and findings

The Utstein process for developing and reporting guidelines through a consensus-building process was followed. In-person discussions at a 2-day meeting were followed by an iterative process conducted by email and virtual group meetings until consensus was reached. The meeting was held June 16 to 18, 2019; discussions continued until August 2020. Participants consisted of experts in surgery, anaesthesia, and obstetric care, data science, and health indicators from high-, middle-, and low-income countries. Considering each of the 6 indicators in turn, we refined overarching descriptions and agreed upon the data points needed to construct each indicator at the current time (basic data points), and as each evolves over 2- to 5-year (intermediate) and >5-year (full) time frames. We removed one of the original 6 indicators (one of the 2 financial risk protection indicators) and refined descriptions and defined the data points required to construct the 5 remaining indicators: geospatial access, workforce, surgical volume, perioperative mortality, and catastrophic expenditure. A strength of the process was the number of people involved from global institutes and multilateral agencies engaged in the collection and reporting of global health metrics; a limitation was the small number of participants from low- or middle-income countries, who made up only 21% of attendees.

Conclusions

To track global progress towards timely access to quality SAO care, these indicators—at the basic level—should be implemented universally as soon as possible. Intermediate and full indicator sets should be achieved by all countries over time. In the interim, the evolving indicator sets can assist in developing national surgical plans and in collecting more detailed data for research studies.

Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: A living systematic review and meta-analysis

Thu, 12/08/2021 - 16:00

by Lukas E. Brümmer, Stephan Katzenschlager, Mary Gaeddert, Christian Erdmann, Stephani Schmitz, Marc Bota, Maurizio Grilli, Jan Larmann, Markus A. Weigand, Nira R. Pollock, Aurélien Macé, Sergio Carmona, Stefano Ongarello, Jilian A. Sacks, Claudia M. Denkinger

Background

SARS-CoV-2 antigen rapid diagnostic tests (Ag-RDTs) are increasingly being integrated in testing strategies around the world. Studies of the Ag-RDTs have shown variable performance. In this systematic review and meta-analysis, we assessed the clinical accuracy (sensitivity and specificity) of commercially available Ag-RDTs.

Methods and findings

We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched multiple databases (PubMed, Web of Science Core Collection, medRxiv, bioRxiv, and FIND) for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 up until 30 April 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity in comparison to reverse transcription polymerase chain reaction (RT-PCR) testing. We assessed heterogeneity by subgroup analyses, and rated study quality and risk of bias using the QUADAS-2 assessment tool. From a total of 14,254 articles, we included 133 analytical and clinical studies resulting in 214 clinical accuracy datasets with 112,323 samples. Across all meta-analyzed samples, the pooled Ag-RDT sensitivity and specificity were 71.2% (95% CI 68.2% to 74.0%) and 98.9% (95% CI 98.6% to 99.1%), respectively. Sensitivity increased to 76.3% (95% CI 73.1% to 79.2%) if analysis was restricted to studies that followed the Ag-RDT manufacturers’ instructions. LumiraDx showed the highest sensitivity, with 88.2% (95% CI 59.0% to 97.5%). Of instrument-free Ag-RDTs, Standard Q nasal performed best, with 80.2% sensitivity (95% CI 70.3% to 87.4%). Across all Ag-RDTs, sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values, i.e., <20 (96.5%, 95% CI 92.6% to 98.4%) and <25 (95.8%, 95% CI 92.3% to 97.8%), in comparison to those with Ct ≥ 25 (50.7%, 95% CI 35.6% to 65.8%) and ≥30 (20.9%, 95% CI 12.5% to 32.8%). Testing in the first week from symptom onset resulted in substantially higher sensitivity (83.8%, 95% CI 76.3% to 89.2%) compared to testing after 1 week (61.5%, 95% CI 52.2% to 70.0%).
The best Ag-RDT sensitivity was found with anterior nasal sampling (75.5%, 95% CI 70.4% to 79.9%), in comparison to other sample types (e.g., nasopharyngeal, 71.6%, 95% CI 68.1% to 74.9%), although CIs were overlapping. Concerns of bias were raised across all datasets, and financial support from the manufacturer was reported in 24.1% of datasets. Our analysis was limited by the included studies’ heterogeneity in design and reporting.
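Each clinical accuracy dataset reduces to a 2×2 table against the RT-PCR reference. A minimal sketch follows, with hypothetical counts chosen to echo the pooled estimates, and simple Wald intervals rather than the bivariate meta-analysis model actually used:

```python
import math

# Minimal sketch of clinical accuracy vs. RT-PCR from a 2x2 table
# (hypothetical counts; Wald CIs for brevity, not the review's model).

def accuracy(tp, fn, tn, fp, z=1.96):
    sens = tp / (tp + fn)   # true positives among RT-PCR positives
    spec = tn / (tn + fp)   # true negatives among RT-PCR negatives

    def wald(p, n):
        half = z * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - half), min(1.0, p + half)

    return sens, wald(sens, tp + fn), spec, wald(spec, tn + fp)

# Hypothetical Ag-RDT evaluation: 500 RT-PCR positives, 1,000 negatives
sens, sens_ci, spec, spec_ci = accuracy(tp=356, fn=144, tn=989, fp=11)
print(round(sens, 3), round(spec, 3))  # 0.712 0.989, echoing the pooled values
```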

Conclusions

In this study we found that Ag-RDTs detect the vast majority of SARS-CoV-2-infected persons within the first week of symptom onset and those with high viral load. Thus, they can have high utility for diagnostic purposes in the early phase of disease, making them a valuable tool to fight the spread of SARS-CoV-2. Standardization in conduct and reporting of clinical accuracy studies would improve comparability and use of data.

Investigating barriers to the protective efficacy provided by rotavirus vaccines in African infants

Tue, 10/08/2021 - 16:00

by Julie E. Bines

Julie Bines discusses an accompanying study by Sheila Isanaka and colleagues on nutrient supplementation and immune responses to rotavirus vaccination.

Immunogenicity of an oral rotavirus vaccine administered with prenatal nutritional support in Niger: A cluster randomized clinical trial

Tue, 10/08/2021 - 16:00

by Sheila Isanaka, Souna Garba, Brian Plikaytis, Monica Malone McNeal, Ousmane Guindo, Céline Langendorf, Eric Adehossi, Iza Ciglenecki, Rebecca F. Grais

Background

Nutritional status may play a role in infant immune development. To identify potential boosters of immunogenicity in low-income countries where oral vaccine efficacy is low, we tested the effect of prenatal nutritional supplementation on immune response to 3 doses of a live oral rotavirus vaccine.

Methods and findings

We nested a cluster randomized trial within a double-blind, placebo-controlled randomized efficacy trial to assess the effect of 3 prenatal nutritional supplements (lipid-based nutrient supplement [LNS], multiple micronutrient supplement [MMS], or iron–folic acid [IFA]) on infant immune response (n = 53 villages and 1,525 infants with valid serology results: 794 in the vaccine group and 731 in the placebo group). From September 2015 to February 2017, participating women received a prenatal nutrient supplement during pregnancy. Eligible infants were then randomized to receive 3 doses of an oral rotavirus vaccine or placebo at 6–8 weeks of age (mean age: 6.3 weeks, 50% female). Infant sera (pre-Dose 1 and 28 days post-Dose 3) were analyzed for anti-rotavirus immunoglobulin A (IgA) using enzyme-linked immunosorbent assay (ELISA). The primary immunogenicity end point, seroconversion defined as a ≥3-fold increase in IgA, was compared in vaccinated infants among the 3 supplement groups and between vaccine/placebo groups using mixed model analysis of variance procedures. Seroconversion did not differ by supplementation group (41.1% [94/229] with LNS vs. 39.1% [102/261] with MMS vs. 38.8% [118/304] with IFA; p = 0.91). Overall, 39.6% (n = 314/794) of infants who received vaccine seroconverted, compared to 29.0% (n = 212/731) of infants who received placebo (relative risk [RR]: 1.36; 95% confidence interval [CI]: 1.18, 1.57; p < 0.001). This study was conducted in a high rotavirus transmission setting. Study limitations include the absence of an established immune correlate of protection for rotavirus vaccines, which leaves the implications of using serum anti-rotavirus IgA to assess immunogenicity and efficacy in low-income countries unclear.
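The headline relative risk can be reproduced from the counts reported above (vaccine 314/794 vs. placebo 212/731). The sketch below uses a simple unclustered log-RR Wald interval, which happens to match the published (design-adjusted) interval closely:

```python
import math

# Reproducing the reported seroconversion relative risk from the abstract's
# counts; a plain log-RR Wald CI sketch, ignoring the cluster design.

def relative_risk(a, n1, b, n2, z=1.96):
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(314, 794, 212, 731)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 1.36 1.18 1.57
```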

Conclusions

This study showed no effect of the type of prenatal nutrient supplementation on immune response in this setting. Immune response varied depending on previous exposure to rotavirus, suggesting that alternative delivery modalities and schedules may be considered to improve vaccine performance in high transmission settings.

Trial registration

ClinicalTrials.gov NCT02145000.

An open science pathway for drug marketing authorization—Registered drug approval

Mon, 09/08/2021 - 16:00

by Florian Naudet, Maximilian Siebert, Rémy Boussageon, Ioana A. Cristea, Erick H. Turner

Florian Naudet and co-authors propose a pathway involving registered criteria for evaluation and approval of new drugs.

Development and validation of the Durham Risk Score for estimating suicide attempt risk: A prospective cohort analysis

Thu, 05/08/2021 - 16:00

by Nathan A. Kimbrel, Jean C. Beckham, Patrick S. Calhoun, Bryann B. DeBeer, Terence M. Keane, Daniel J. Lee, Brian P. Marx, Eric C. Meyer, Sandra B. Morissette, Eric B. Elbogen

Background

Worldwide, nearly 800,000 individuals die by suicide each year; however, longitudinal prediction of suicide attempts remains a major challenge within the field of psychiatry. The objective of the present research was to develop and evaluate an evidence-based suicide attempt risk checklist [i.e., the Durham Risk Score (DRS)] to aid clinicians in the identification of individuals at risk for attempting suicide in the future.

Methods and findings

Three prospective cohort studies, including a population-based study from the United States [i.e., the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) study] as well as 2 smaller US veteran cohorts [i.e., the Assessing and Reducing Post-Deployment Violence Risk (REHAB) and the Veterans After-Discharge Longitudinal Registry (VALOR) studies], were used to develop and validate the DRS. From a total sample size of 35,654 participants, 17,630 participants were selected to develop the checklist, whereas the remaining participants (N = 18,024) were used to validate it. The main outcome measure was future suicide attempts (i.e., actual suicide attempts that occurred after the baseline assessment during the 1- to 3-year follow-up period). Measure development began with a review of the extant literature to identify potential variables that had substantial empirical support as longitudinal predictors of suicide attempts and deaths. Next, receiver operating characteristic (ROC) curve analysis was utilized to identify variables from the literature review that uniquely contributed to the longitudinal prediction of suicide attempts in the development cohorts. We observed that the DRS was a robust prospective predictor of future suicide attempts in both the combined development (area under the curve [AUC] = 0.91) and validation (AUC = 0.92) cohorts. A concentration of risk analysis found that across all 35,654 participants, 82% of prospective suicide attempts occurred among individuals in the top 15% of DRS scores, whereas 27% occurred in the top 1%. The DRS also performed well among important subgroups, including women (AUC = 0.91), men (AUC = 0.93), Black (AUC = 0.92), White (AUC = 0.93), Hispanic (AUC = 0.89), veterans (AUC = 0.91), lower-income individuals (AUC = 0.90), younger adults (AUC = 0.88), and lesbian, gay, bisexual, transgender, and queer or questioning (LGBTQ) individuals (AUC = 0.88). 
The primary limitation of the present study was its reliance on secondary data analyses to develop and validate the risk score.
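The "concentration of risk" analysis above asks what share of observed attempts falls among the highest-scoring individuals. A toy sketch of that calculation on hypothetical scores (not DRS data):

```python
# Illustrative "concentration of risk" calculation: what fraction of observed
# suicide attempts occurs among the top X% of risk scores?
# Scores and outcomes below are hypothetical, not DRS data.

def concentration_of_risk(scores, events, top_frac):
    ranked = sorted(zip(scores, events), key=lambda se: -se[0])  # high to low
    k = max(1, int(round(top_frac * len(ranked))))
    top_events = sum(e for _, e in ranked[:k])
    return top_events / sum(events)

scores = [9, 8, 8, 7, 5, 4, 3, 2, 2, 1]   # hypothetical risk scores
events = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]   # 1 = later suicide attempt
share = concentration_of_risk(scores, events, top_frac=0.30)
print(share)  # fraction of attempts among the top 30% of scores
```

In the study's terms, the same computation with `top_frac=0.15` on the full cohort yields the reported 82% figure.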

Conclusions

In this study, we observed that the DRS was a strong predictor of future suicide attempts in both the combined development (AUC = 0.91) and validation (AUC = 0.92) cohorts. It also demonstrated good utility in many important subgroups, including women, men, Black, White, Hispanic, veterans, lower-income individuals, younger adults, and LGBTQ individuals. We further observed that 82% of prospective suicide attempts occurred among individuals in the top 15% of DRS scores, whereas 27% occurred in the top 1%. Taken together, these findings suggest that the DRS represents a significant advancement in suicide risk prediction over traditional clinical assessment approaches. While more work is needed to independently validate the DRS in prospective studies and to identify the optimal methods to assess the constructs used to calculate the score, our findings suggest that the DRS is a promising new tool that has the potential to significantly enhance clinicians’ ability to identify individuals at risk for attempting suicide in the future.

Conflict-related intentional injuries in Baghdad, Iraq, 2003–2014: A modeling study and proposed method for calculating burden of injury in conflict

Thu, 05/08/2021 - 16:00

by Guy W. Jensen, Riyadh Lafta, Gilbert Burnham, Amy Hagopian, Noah Simon, Abraham D. Flaxman

Background

Previous research has focused on the mortality associated with armed conflict as the primary measure of the population health effects of war. However, mortality only demonstrates part of the burden placed on a population by conflict. Injuries and resultant disabilities also have long-term effects on a population and are not accounted for in estimates that focus solely on mortality. Our aim was to demonstrate a new method to describe the effects of both lives lost and years of disability generated by a given conflict, with data from the US-led 2003 invasion and subsequent occupation of Iraq.

Methods and findings

Our data come from interviews conducted in 2014 in 900 Baghdad households containing 5,148 persons. The average household size was 5.72 persons. The majority of the population (55.8%) were between the ages of 19 and 60. Household composition was evenly divided between males and females. Household sample collection was based on methodology previously designed for surveying households in war zones. Survey questions were answered by the head of household or senior adult present. The questions included the year the injury occurred, the mechanism of injury, the body parts injured, whether the injury resulted in disability and, if so, the length of disability.

We present this modeling study to offer an innovative methodology for measuring “years lived with disability” (YLDs) and “years of life lost” (YLLs) attributable to conflict-related intentional injuries, using the Global Burden of Disease (GBD) approach. YLDs were calculated with disability weights, and YLLs were calculated by comparing the age at death to the GBD standard life table to obtain remaining life expectancy. Calculations were also performed using Iraq-specific life expectancy for comparison. We calculated a burden of injury of 5.6 million disability-adjusted life years (DALYs) lost due to conflict-related injuries in Baghdad from 2003 to 2014. The majority of DALYs lost were attributable to YLLs rather than YLDs: 4.99 million YLLs lost (95% uncertainty interval (UI) 3.87 million to 6.13 million) versus 616,000 YLDs lost (95% UI 399,000 to 894,000). Cause-based analysis demonstrated that more DALYs were lost due to gunshot wounds (57%) than any other cause.

Our study has several limitations. Recall bias regarding the reporting and attribution of injuries is possible. Second, we have no data past the time of the interview, so we assumed individuals with ongoing disability at the end of data collection would not recover, possibly counting more disability for injuries occurring later.
Additionally, incomplete data could have led to misclassification of deaths, resulting in an underestimation of the total burden of injury.
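The DALY arithmetic described above can be sketched as follows; the disability weights, durations, and remaining life expectancies here are hypothetical placeholders, not the study's inputs:

```python
# Sketch of the GBD-style burden calculation (hypothetical inputs):
#   YLL  = for each death, remaining life expectancy at the age of death
#          (taken from a standard life table)
#   YLD  = for each injury, disability weight x years lived with disability
#   DALY = YLL + YLD

def ylls(deaths):
    # deaths: list of (age_at_death, remaining_life_expectancy_at_that_age)
    return sum(remaining for _, remaining in deaths)

def ylds(injuries):
    # injuries: list of (disability_weight, duration_years)
    return sum(weight * duration for weight, duration in injuries)

deaths = [(25, 55.2), (40, 41.1), (60, 23.5)]          # hypothetical
injuries = [(0.20, 10.0), (0.05, 2.0), (0.40, 30.0)]   # hypothetical
daly = ylls(deaths) + ylds(injuries)
print(round(daly, 1))
```

Because each death contributes its full remaining life expectancy while each injury contributes only a weighted fraction of its duration, YLLs dominate the total, mirroring the study's finding.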

Conclusions

In this study, we propose a methodology to perform burden of disease calculations for conflict-related injuries (expressed in DALYs) in Baghdad from 2003 to 2014. We go beyond previous reports of simple mortality to assess the long-term population health effects of conflict-related intentional injuries. Viewed in cross section, ongoing disability accounts for a relatively small 10% of the total burden. Yet this small proportion creates years of demands on the health system, persistent limitations in earning capacity, and continuing burdens of care provision on family members.

Levels of pneumococcal conjugate vaccine coverage and indirect protection against invasive pneumococcal disease and pneumonia hospitalisations in Australia: An observational study

Tue, 03/08/2021 - 16:00

by Jocelyn Chan, Heather F. Gidding, Christopher C. Blyth, Parveen Fathima, Sanjay Jayasinghe, Peter B. McIntyre, Hannah C. Moore, Kim Mulholland, Cattram D. Nguyen, Ross Andrews, Fiona M. Russell

Background

There is limited empiric evidence on the coverage of pneumococcal conjugate vaccines (PCVs) required to generate substantial indirect protection. We investigate the association between population PCV coverage and indirect protection against invasive pneumococcal disease (IPD) and pneumonia hospitalisations among undervaccinated Australian children.

Methods and findings

Birth and vaccination records, IPD notifications, and hospitalisations were individually linked for children aged <5 years, born between 2001 and 2012 in 2 Australian states (New South Wales and Western Australia; 1.37 million children). Using Poisson regression models, we examined the association between PCV coverage, in small geographical units, and the incidence of (1) 7-valent PCV (PCV7)-type IPD; (2) all-cause pneumonia; and (3) pneumococcal and lobar pneumonia hospitalisation in undervaccinated children. Undervaccinated children received <2 doses of PCV at <12 months of age and no doses at ≥12 months of age. Potential confounding variables were selected for adjustment a priori with the assistance of a directed acyclic graph. There were strong inverse associations between PCV coverage and the incidence of PCV7-type IPD (adjusted incidence rate ratio [aIRR] 0.967, 95% confidence interval [CI] 0.958 to 0.975, p-value < 0.001) and pneumonia hospitalisations (all-cause pneumonia: aIRR 0.991, 95% CI 0.990 to 0.994, p-value < 0.001) among undervaccinated children. Subgroup analyses for children <4 months old, urban, rural, and Indigenous populations showed similar trends, although effects were smaller for rural and Indigenous populations. Approximately 50% coverage of PCV7 among children <5 years of age was estimated to prevent up to 72.5% (95% CI 51.6 to 84.4) of PCV7-type IPD among undervaccinated children, while 90% coverage was estimated to prevent 95.2% (95% CI 89.4 to 97.8). The main limitations of this study include the potential for differential loss to follow-up, geographical misclassification of children (based on residential address at birth only), and unmeasured confounders.
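If the aIRR of 0.967 is read as a per-percentage-point effect of coverage, a naive log-linear extrapolation gives a ballpark prevented fraction at any coverage level. This simplification ignores the model's adjustments and does not reproduce the published point estimates exactly; it only illustrates the shape of the dose-response relationship:

```python
# Naive illustration (an assumption, not the study's model): treat the aIRR
# as a multiplicative reduction per percentage point of PCV coverage and
# extrapolate log-linearly.

def fraction_prevented(airr_per_pp, coverage_pp):
    return 1 - airr_per_pp ** coverage_pp

prevented_50 = fraction_prevented(0.967, 50)
prevented_90 = fraction_prevented(0.967, 90)
```

The extrapolation rises steeply and then flattens, which is consistent with the study's conclusion that substantial indirect protection appears well below 90% coverage.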

Conclusions

In this study, we observed substantial indirect protection at lower levels of PCV coverage than previously described—challenging assumptions that high levels of PCV coverage (i.e., greater than 90%) are required. Understanding the association between PCV coverage and indirect protection is a priority since the control of vaccine-type pneumococcal disease is a prerequisite for reducing the number of PCV doses (from 3 to 2). Reduced dose schedules have the potential to substantially reduce program costs while maintaining vaccine impact.

Cardiac risk stratification in cancer patients: A longitudinal patient–patient network analysis

Mon, 02/08/2021 - 16:00

by Yuan Hou, Yadi Zhou, Muzna Hussain, G. Thomas Budd, Wai Hong Wilson Tang, James Abraham, Bo Xu, Chirag Shah, Rohit Moudgil, Zoran Popovic, Chris Watson, Leslie Cho, Mina Chung, Mohamed Kanj, Samir Kapadia, Brian Griffin, Lars Svensson, Patrick Collier, Feixiong Cheng

Background

Cardiovascular disease is a leading cause of death in the general population and the second leading cause of mortality and morbidity in cancer survivors after recurrent malignancy in the United States. The growing awareness of cancer therapy–related cardiac dysfunction (CTRCD) has led to an emerging field of cardio-oncology; yet, there is limited knowledge on how to predict which patients will experience adverse cardiac outcomes. We aimed to perform unbiased cardiac risk stratification for cancer patients using our large-scale, institutional electronic medical records.

Methods and findings

We built a large longitudinal (up to 22 years’ follow-up, from March 1997 to January 2019) cardio-oncology cohort of 4,632 cancer patients at Cleveland Clinic with 5 diagnosed cardiac outcomes: atrial fibrillation, coronary artery disease, heart failure, myocardial infarction, and stroke. The population was 84% white American and 11% black American, and 59% female versus 41% male, with a median age of 63 (interquartile range [IQR]: 54 to 71) years. We utilized a topology-based K-means clustering approach for unbiased patient–patient network analyses of data from general demographics, echocardiograms (over 25,000), lab testing, and cardiac factors. We performed hazard ratio (HR) and Kaplan–Meier analyses to identify clinically actionable variables. All confounding factors were adjusted for by Cox regression models. We performed random-split and time-split training-test validation for our model. We identified 4 clinically relevant subgroups that were significantly correlated with incidence of cardiac outcomes and mortality. Among the 4 subgroups, subgroup I (n = 625) had the highest risk of de novo CTRCD (28%), with an HR of 3.05 (95% confidence interval (CI) 2.51 to 3.72). Patients in subgroup IV (n = 1,250) had the worst survival probability (HR 4.32, 95% CI 3.82 to 4.88). From longitudinal patient–patient network analyses, patients in subgroup I had a higher percentage of de novo CTRCD and worse mortality within 5 years after the initiation of cancer therapies compared to those with longer exposure (6 to 20 years). Using clinical variable network analyses, we identified that serum levels of NT-proB-type Natriuretic Peptide (NT-proBNP) and Troponin T were significantly correlated with patients’ mortality (NT-proBNP > 900 pg/mL versus NT-proBNP = 0 to 125 pg/mL, HR = 2.95, 95% CI 2.28 to 3.82, p < 0.001; Troponin T > 0.05 μg/L versus Troponin T ≤ 0.01 μg/L, HR = 2.08, 95% CI 1.83 to 2.34, p < 0.001).
Study limitations include the lack of independent cardio-oncology cohorts from different healthcare systems with which to evaluate the generalizability of the models; in addition, confounding factors, such as concurrent use of multiple medications, may influence the findings.
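The abstract does not publish its clustering code, and the topology-based patient–patient network construction is beyond a short sketch. As a rough illustration of the K-means step alone, a minimal, self-contained version on hypothetical two-dimensional patient feature vectors might look like this (deterministic initialisation chosen only for reproducibility of the sketch):

```python
def kmeans(points, k, iters=50):
    """Plain K-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat until stable."""
    centroids = points[:k]  # deterministic initialisation for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        new = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Two well-separated synthetic "patient" feature groups (hypothetical data).
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents, cls = kmeans(pts, 2)
```

In the study itself, each patient would be a high-dimensional vector of demographic, echocardiographic, laboratory, and cardiac variables rather than a 2D point, and clustering would operate on the derived network topology.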

Conclusions

In this study, we demonstrated that the patient–patient network clustering methodology is clinically intuitive and allows more rapid identification of cancer survivors who are at greater risk of cardiac dysfunction. We believe that this approach holds great promise for identifying novel cardiac risk subgroups and clinically actionable variables for the development of precision cardio-oncology.

Menopausal hormone therapy and women’s health: An umbrella review

Mon, 02/08/2021 - 16:00

by Guo-Qiang Zhang, Jin-Liang Chen, Ying Luo, Maya B. Mathur, Panagiotis Anagnostis, Ulugbek Nurmatov, Madar Talibov, Jing Zhang, Catherine M. Hawrylowicz, Mary Ann Lumsden, Hilary Critchley, Aziz Sheikh, Bo Lundbäck, Cecilia Lässer, Hannu Kankaanranta, Siew Hwa Lee, Bright I. Nwaru

Background

There remains uncertainty about the impact of menopausal hormone therapy (MHT) on women’s health. A systematic, comprehensive assessment of the effects on multiple outcomes is lacking. We conducted an umbrella review to comprehensively summarize evidence on the benefits and harms of MHT across diverse health outcomes.

Methods and findings

We searched MEDLINE, EMBASE, and 10 other databases from inception to November 26, 2017, updated on December 17, 2020, to identify systematic reviews or meta-analyses of randomized controlled trials (RCTs) and observational studies investigating effects of MHT, including estrogen-alone therapy (ET) and estrogen plus progestin therapy (EPT), in perimenopausal or postmenopausal women in all countries and settings. All health outcomes in previous systematic reviews were included, including menopausal symptoms, surrogate endpoints, biomarkers, various morbidity outcomes, and mortality. Two investigators independently extracted data and assessed methodological quality of systematic reviews using the updated 16-item AMSTAR 2 instrument. Random-effects robust variance estimation was used to combine effect estimates, and 95% prediction intervals (PIs) were calculated whenever possible. We used the term MHT to encompass ET and EPT, and results are presented for MHT for each outcome, unless otherwise indicated. Sixty systematic reviews were included, involving 102 meta-analyses of RCTs and 38 of observational studies, with 102 unique outcomes. The overall quality of included systematic reviews was moderate to poor. 
In meta-analyses of RCTs, MHT was beneficial for vasomotor symptoms (frequency: 9 trials, 1,104 women, risk ratio [RR] 0.43, 95% CI 0.33 to 0.57, p < 0.001; severity: 7 trials, 503 women, RR 0.29, 95% CI 0.17 to 0.50, p = 0.002) and all fracture (30 trials, 43,188 women, RR 0.72, 95% CI 0.62 to 0.84, p = 0.002, 95% PI 0.58 to 0.87), as well as vaginal atrophy (intravaginal ET), sexual function, vertebral and nonvertebral fracture, diabetes mellitus, cardiovascular mortality (ET), and colorectal cancer (EPT), but harmful for stroke (17 trials, 37,272 women, RR 1.17, 95% CI 1.05 to 1.29, p = 0.027) and venous thromboembolism (23 trials, 42,292 women, RR 1.60, 95% CI 0.99 to 2.58, p = 0.052, 95% PI 1.03 to 2.99), as well as cardiovascular disease incidence and recurrence, cerebrovascular disease, nonfatal stroke, deep vein thrombosis, gallbladder disease requiring surgery, and lung cancer mortality (EPT). In meta-analyses of observational studies, MHT was associated with decreased risks of cataract, glioma, and esophageal, gastric, and colorectal cancer, but increased risks of pulmonary embolism, cholelithiasis, asthma, meningioma, and thyroid, breast, and ovarian cancer. ET and EPT had opposite effects for endometrial cancer, endometrial hyperplasia, and Alzheimer disease. The major limitations include the inability to address the varying effects of MHT by type, dose, formulation, duration of use, route of administration, and age of initiation and to take into account the quality of individual studies included in the systematic reviews. The study protocol is publicly available on PROSPERO (CRD42017083412).
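The review pooled effect estimates with random-effects robust variance estimation; as a simpler illustration of random-effects pooling of trial-level risk ratios, the classic DerSimonian–Laird estimator (not the authors' exact method) can be sketched as follows, using hypothetical log risk ratios and standard errors:

```python
import math

def dersimonian_laird(log_rr, se):
    """Classic DerSimonian-Laird random-effects pooling of log risk ratios.
    Returns the pooled RR and its 95% CI (both back-transformed)."""
    w = [1 / s ** 2 for s in se]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Hypothetical trial-level risk ratios and standard errors on the log scale.
rr, ci = dersimonian_laird([math.log(0.4), math.log(0.5), math.log(0.45)],
                           [0.15, 0.20, 0.18])
```

Robust variance estimation, as used in the review, additionally accounts for statistical dependence among multiple effect estimates drawn from the same study.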

Conclusions

MHT has a complex balance of benefits and harms on multiple health outcomes. Some effects differ qualitatively between ET and EPT. The quality of available evidence is only moderate to poor.

Predictive values for different cancers and inflammatory bowel disease of 6 common abdominal symptoms among more than 1.9 million primary care patients in the UK: A cohort study

Mon, 02/08/2021 - 16:00

by Annie Herbert, Meena Rafiq, Tra My Pham, Cristina Renzi, Gary A. Abel, Sarah Price, Willie Hamilton, Irene Petersen, Georgios Lyratzopoulos

Background

The diagnostic assessment of abdominal symptoms in primary care presents a challenge. Evidence is needed about the positive predictive values (PPVs) of abdominal symptoms for different cancers and inflammatory bowel disease (IBD).

Methods and findings

Using data from The Health Improvement Network (THIN) in the United Kingdom (2000–2017), we estimated the PPVs for diagnosis of (i) cancer (overall and for different cancer sites); (ii) IBD; and (iii) either cancer or IBD in the year post-consultation with each of 6 abdominal symptoms: dysphagia (n = 86,193 patients), abdominal bloating/distension (n = 100,856), change in bowel habit (n = 106,715), rectal bleeding (n = 235,094), dyspepsia (n = 517,326), and abdominal pain (n = 890,490). The median age ranged from 54 (abdominal pain) to 63 years (dysphagia and change in bowel habit); the ratio of women/men ranged from 50%:50% (rectal bleeding) to 73%:27% (abdominal bloating/distension). Across all studied symptoms, the risk of diagnosis of cancer and the risk of diagnosis of IBD were of similar magnitude, particularly in women and in younger men. Estimated PPVs were greatest for change in bowel habit in men (4.64% cancer and 2.82% IBD) and for rectal bleeding in women (2.39% cancer and 2.57% IBD) and lowest for dyspepsia (for cancer: 1.41% men and 1.03% women; for IBD: 0.89% men and 1.00% women). Considering PPVs for specific cancers, change in bowel habit and rectal bleeding had the highest PPVs for colon and rectal cancer; dysphagia for esophageal cancer; and abdominal bloating/distension (in women) for ovarian cancer. The highest PPVs of abdominal pain (either sex) and abdominal bloating/distension (men only) were for non-abdominal cancer sites. For the composite outcome of diagnosis of either cancer or IBD, PPVs of rectal bleeding exceeded the National Institute for Health and Care Excellence (NICE)-recommended specialist referral threshold of 3% in all age–sex strata, as did PPVs of abdominal pain, change in bowel habit, and dyspepsia in those aged 60 years and over. Study limitations include reliance on accuracy and completeness of coding of symptoms and disease outcomes.
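A PPV in this design is simply the proportion of symptomatic patients diagnosed within the follow-up year; a minimal sketch with a Wilson 95% confidence interval, using hypothetical counts loosely mirroring the ~4.64% figure above, might be:

```python
import math

def ppv_wilson(events, n, z=1.96):
    """PPV = diagnoses / symptomatic patients, with a Wilson 95% CI."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, (centre - half, centre + half)

# Hypothetical: 464 cancer diagnoses within a year among 10,000 men
# consulting in primary care with change in bowel habit.
p, (lo, hi) = ppv_wilson(464, 10_000)
```

A PPV above a referral threshold (e.g., NICE's 3%) would then support urgent specialist referral for that symptom/age/sex stratum.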

Conclusions

Based on evidence from more than 1.9 million patients presenting in primary care, the findings provide estimated PPVs that could guide specialist referral decisions, considering the PPVs of common abdominal symptoms for cancer alongside those for IBD and their composite outcome (cancer or IBD), and taking into account the variable PPVs of different abdominal symptoms for different cancer sites. Jointly assessing the risk of cancer or IBD can better support decision-making and prompt diagnosis of both conditions, optimising specialist referrals or investigations, particularly in women.

Long-term solid fuel use and risks of major eye diseases in China: A population-based cohort study of 486,532 adults

Thu, 29/07/2021 - 16:00

by Ka Hung Chan, Mingshu Yan, Derrick A. Bennett, Yu Guo, Yiping Chen, Ling Yang, Jun Lv, Canqing Yu, Pei Pei, Yan Lu, Liming Li, Huaidong Du, Kin Bong Hubert Lam, Zhengming Chen, on behalf of the China Kadoorie Biobank Study group

Background

Over 3.5 billion individuals worldwide are exposed to household air pollution from solid fuel use. There is limited evidence from cohort studies on associations of solid fuel use with risks of major eye diseases, which cause substantial disease and economic burden globally.

Methods and findings

The China Kadoorie Biobank recruited 512,715 adults aged 30 to 79 years from 10 areas across China during 2004 to 2008. Cooking frequency and primary fuel types in the 3 most recent residences were assessed by a questionnaire. During a median (IQR) 10.1 (9.2 to 11.1) years of follow-up, electronic linkages to national health insurance databases identified 4,877 incident conjunctiva disorders, 13,408 cataracts, 1,583 disorders of sclera, cornea, iris, and ciliary body (DSCIC), and 1,534 cases of glaucoma. Logistic regression yielded odds ratios (ORs) for each disease associated with long-term use of solid fuels (i.e., coal or wood) compared to clean fuels (i.e., gas or electricity) for cooking, with adjustment for age at baseline, birth cohort, sex, study area, education, occupation, alcohol intake, smoking, environmental tobacco smoke, cookstove ventilation, heating fuel exposure, body mass index, prevalent diabetes, self-reported general health, and length of recall period.

After excluding participants with missing or unreliable exposure data, 486,532 participants (mean baseline age 52.0 [SD 10.7] years; 59.1% women) were analysed. Overall, 71% of participants cooked regularly throughout the recall period, of whom 48% used solid fuels consistently. Compared with clean fuel users, solid fuel users had adjusted ORs of 1.32 (1.07 to 1.37, p < 0.001) for conjunctiva disorders, 1.17 (1.08 to 1.26, p < 0.001) for cataracts, 1.35 (1.10 to 1.66, p = 0.0046) for DSCIC, and 0.95 (0.76 to 1.18, p = 0.62) for glaucoma. Switching from solid to clean fuels was associated with smaller elevated risks (relative to long-term clean fuel users) than continued solid fuel use, with adjusted ORs of 1.21 (1.07 to 1.37, p < 0.001), 1.05 (0.98 to 1.12, p = 0.17), and 1.21 (0.97 to 1.50, p = 0.088) for conjunctiva disorders, cataracts, and DSCIC, respectively. The adjusted ORs for the eye diseases were broadly similar in solid fuel users regardless of ventilation status.
The main limitations of this study include the lack of baseline eye disease assessment, the use of self-reported cooking frequency and fuel types for exposure assessment, the risk of bias from delayed diagnosis (particularly for cataracts), and potential residual confounding from unmeasured factors (e.g., sunlight exposure).
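The study's ORs come from multivariable logistic regression with the long covariate list above; as a much simpler illustration of what an *unadjusted* odds ratio and its Woolf 95% CI look like, computed from a hypothetical 2×2 table of exposure by disease status:

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio from a 2x2 table with a Woolf 95% CI:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: cataract cases/non-cases among solid vs clean fuel users.
or_, (lo, hi) = odds_ratio(1200, 98800, 1030, 98970)
```

The regression-adjusted ORs reported above differ from this crude quantity precisely because they control for confounders such as age, smoking, and study area.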

Conclusions

Among Chinese adults, long-term solid fuel use for cooking was associated with higher risks of not only conjunctiva disorders but also cataracts and other more severe eye diseases. Switching to clean fuels appeared to mitigate the risks, underscoring the global health importance of promoting universal access to clean fuels.

Body size and composition and risk of site-specific cancers in the UK Biobank and large international consortia: A mendelian randomisation study

Thu, 29/07/2021 - 16:00

by Mathew Vithayathil, Paul Carter, Siddhartha Kar, Amy M. Mason, Stephen Burgess, Susanna C. Larsson

Background

Evidence for the impact of body size and composition on cancer risk is limited. This mendelian randomisation (MR) study investigates evidence supporting causal relationships of body mass index (BMI), fat mass index (FMI), fat-free mass index (FFMI), and height with cancer risk.

Methods and findings

Single nucleotide polymorphisms (SNPs) were used as instrumental variables for BMI (312 SNPs), FMI (577 SNPs), FFMI (577 SNPs), and height (293 SNPs). Associations of the genetic variants with 22 site-specific cancers and overall cancer were estimated in 367,561 individuals from the UK Biobank (UKBB) and with lung, breast, ovarian, uterine, and prostate cancer in large international consortia. In the UKBB, genetically predicted BMI was positively associated with overall cancer (odds ratio [OR] per 1 kg/m2 increase 1.01, 95% confidence interval [CI] 1.00–1.02; p = 0.043); several digestive system cancers: stomach (OR 1.13, 95% CI 1.06–1.21; p < 0.001), esophagus (OR 1.10, 95% CI 1.03–1.17; p = 0.003), liver (OR 1.13, 95% CI 1.03–1.25; p = 0.012), and pancreas (OR 1.06, 95% CI 1.01–1.12; p = 0.016); and lung cancer (OR 1.08, 95% CI 1.04–1.12; p < 0.001). For sex-specific cancers, genetically predicted elevated BMI was associated with an increased risk of uterine cancer (OR 1.10, 95% CI 1.05–1.15; p < 0.001) and with a lower risk of prostate cancer (OR 0.97, 95% CI 0.94–0.99; p = 0.009). When dividing cancers into digestive system versus non-digestive system, genetically predicted BMI was positively associated with digestive system cancers (OR 1.04, 95% CI 1.02–1.06; p < 0.001) but not with non-digestive system cancers (OR 1.01, 95% CI 0.99–1.02; p = 0.369). Genetically predicted FMI was positively associated with liver, pancreatic, and lung cancer and inversely associated with melanoma and prostate cancer. Genetically predicted FFMI was positively associated with non-Hodgkin lymphoma and melanoma. Genetically predicted height was associated with increased risk of overall cancer (OR per 1 standard deviation increase 1.09; 95% CI 1.05–1.12; p < 0.001) and multiple site-specific cancers. Similar results were observed in analyses using the weighted median and MR–Egger methods.
Results based on consortium data confirmed the positive associations between BMI and lung and uterine cancer risk as well as the inverse association between BMI and prostate cancer, and, additionally, showed an inverse association between genetically predicted BMI and breast cancer. The main limitations are the assumption that genetic associations with cancer outcomes are mediated via the proposed risk factors and that estimates for some lower frequency cancer types are subject to low precision.
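The primary MR estimator in studies of this kind is typically the inverse-variance weighted (IVW) method: per-SNP causal ratios (SNP–outcome effect over SNP–exposure effect) are combined by regressing the outcome effects on the exposure effects through the origin, weighted by the precision of the outcome effects. A minimal sketch on hypothetical per-SNP summary statistics:

```python
def ivw_mr(beta_exp, beta_out, se_out):
    """Inverse-variance weighted MR estimate: weighted regression of
    SNP-outcome effects on SNP-exposure effects through the origin."""
    w = [1 / s ** 2 for s in se_out]
    num = sum(wi * bx * by for wi, bx, by in zip(w, beta_exp, beta_out))
    den = sum(wi * bx * bx for wi, bx in zip(w, beta_exp))
    est = num / den                        # causal effect per unit of exposure
    se = (1 / den) ** 0.5
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-SNP effects: on BMI (exposure) and on cancer log-odds (outcome).
est, ci = ivw_mr([0.10, 0.08, 0.12],
                 [0.012, 0.009, 0.015],
                 [0.004, 0.004, 0.005])
```

The weighted median and MR–Egger analyses mentioned above are sensitivity methods that relax the IVW assumption that every SNP is a valid instrument.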

Conclusions

Our results show that the evidence for BMI as a causal risk factor for cancer is mixed. We find that BMI has a consistent causal role in increasing risk of digestive system cancers and a role for sex-specific cancers with inconsistent directions of effect. In contrast, increased height appears to have a consistent risk-increasing effect on overall and site-specific cancers.

Impact of assessment and intervention by a health and social care professional team in the emergency department on the quality, safety, and clinical effectiveness of care for older adults: A randomised controlled trial

Wed, 28/07/2021 - 16:00

by Marica Cassarino, Katie Robinson, Dominic Trépel, Íde O’Shaughnessy, Eimear Smalle, Stephen White, Collette Devlin, Rosie Quinn, Fiona Boland, Marie E. Ward, Rosa McNamara, Fiona Steed, Margaret O’Connor, Andrew O’Regan, Gerard McCarthy, Damien Ryan, Rose Galvin

Background

Older adults frequently attend the emergency department (ED) and experience high rates of adverse events following ED presentation. This randomised controlled trial evaluated the impact of early assessment and intervention by a dedicated team of health and social care professionals (HSCPs) in the ED on the quality, safety, and clinical effectiveness of care of older adults in the ED.

Methods and findings

This single-site randomised controlled trial included a sample of 353 patients aged ≥65 years (mean age = 79.6, SD = 7.01; 59.2% female) who presented with lower urgency complaints to the ED of a university hospital in the Mid-West region of Ireland, during HSCP operational hours. The intervention consisted of early assessment and intervention carried out by an HSCP team comprising a senior medical social worker, senior occupational therapist, and senior physiotherapist. The primary outcome was ED length of stay. Secondary outcomes included rates of hospital admissions from the ED; hospital length of stay for admitted patients; patient satisfaction with index visit; ED revisits, mortality, nursing home admission, and unscheduled hospital admission at 30-day and 6-month follow-up; and patient functional status and quality of life (at index visit and follow-up). Demographic information included the patient's gender, age, marital status, residential status, mode of arrival to the ED, source of referral, index complaint, triage category, falls, and hospitalisation history. Participants in the intervention group (n = 176) experienced a significantly shorter ED stay than the control group (n = 177) (median 6.4 versus 12.1 hours, p < 0.001). Other significant differences (intervention versus control) included lower rates of hospital admissions from the ED (19.3% versus 55.9%, p < 0.001), higher levels of satisfaction with the ED visit (p = 0.008), better function at 30-day (p = 0.01) and 6-month follow-up (p = 0.03), better mobility (p = 0.02 at 30 days), and better self-care (p = 0.03 at 30 days; p = 0.009 at 6 months). No differences at follow-up were observed in terms of ED re-presentation or hospital admission. Study limitations include the inability to blind patients or ED staff to allocation due to the nature of the intervention, and a focus on early assessment and intervention in the ED rather than care integration following discharge.

Conclusions

Early assessment and intervention by a dedicated ED-based HSCP team reduced ED length of stay and the risk of hospital admissions among older adults, as well as improving patient satisfaction. Our findings support the effectiveness of an interdisciplinary model of care for key ED outcomes.

Trial registration

ClinicalTrials.gov NCT03739515; registered on 12 November 2018.

Transdisciplinary research and clinical priorities for better health

Tue, 27/07/2021 - 16:00

by Luigi Fontana, Alessio Fasano, Yap Seng Chong, Paolo Vineis, Walter C. Willett

Modern medicine makes it possible for many people to live with multiple chronic diseases for decades, but this has enormous social, financial, and environmental consequences. Preclinical, epidemiological, and clinical trial data have shown that many of the most common chronic diseases are largely preventable with nutritional and lifestyle interventions targeting well-characterized signaling pathways and the symbiotic relationship with our microbiome. Most research priorities and health spending are focused on finding new molecular targets for the development of biotech and pharmaceutical products; very little is invested in mechanism-based preventive science, medicine, and education. We believe that overly enthusiastic expectations regarding the benefits of pharmacological research for disease treatment have the potential to distort not only medical research and practice but also environmental health and sustainable economic growth. Transitioning from a primarily disease-centered medical system to a balanced preventive and personalized treatment healthcare system is key to reducing social disparities in health and achieving financially sustainable, universal health coverage for all. In this Perspective article, we discuss a range of science-based strategies, policies, and structural reforms to design an entirely new disease prevention–centered science, educational, and healthcare system that maximizes both human and environmental health.

Evidence-informed policy for tackling adverse climate change effects on health: Linking regional and global assessments of science to catalyse action

Tue, 20/07/2021 - 16:00

by Robin Fears, Khairul Annuar B. Abdullah, Claudia Canales-Holzeis, Deoraj Caussy, Andy Haines, Sherilee L. Harper, Jeremy N. McNeil, Johanna Mogwitz, Volker ter Meulen

Robin Fears and co-authors discuss evidence-informed regional and global policy responses to health impacts of climate change.

Telmisartan use and risk of dementia in type 2 diabetes patients with hypertension: A population-based cohort study

Mon, 19/07/2021 - 16:00

by Chi-Hung Liu, Pi-Shan Sung, Yan-Rong Li, Wen-Kuan Huang, Tay-Wey Lee, Chin-Chang Huang, Tsong-Hai Lee, Tien-Hsing Chen, Yi-Chia Wei

Background

Angiotensin receptor blockers (ARBs) may have protective effects against dementia occurrence in patients with hypertension (HTN). However, whether telmisartan, an ARB with peroxisome proliferator-activated receptor γ (PPAR-γ)–modulating effects, has additional benefits compared to other ARBs remains unclear.

Methods and findings

Between 1997 and 2013, 2,166,944 type 2 diabetes mellitus (T2DM) patients were identified from the National Health Insurance Research Database of Taiwan. Patients with HTN using ARBs were included in the study. Patients with a history of stroke, traumatic brain injury, or dementia were excluded. Finally, 65,511 eligible patients were divided into 2 groups: the telmisartan group and the non-telmisartan ARB group. Propensity score matching (1:4) was used to balance the distribution of baseline characteristics and medications. The primary outcome was the diagnosis of dementia. The secondary outcomes included the diagnosis of Alzheimer disease and occurrence of symptomatic ischemic stroke (IS), any IS, and all-cause mortality. The risks between groups were compared using a Cox proportional hazard model. Statistical significance was set at p < 0.05. There were 2,280 and 9,120 patients in the telmisartan and non-telmisartan ARB groups, respectively. Patients in the telmisartan group had a lower risk of dementia diagnosis (telmisartan versus non-telmisartan ARBs: 2.19% versus 3.20%; HR, 0.72; 95% CI, 0.53 to 0.97; p = 0.030). They also had a lower risk of dementia diagnosis with IS as a competing risk (subdistribution HR, 0.70; 95% CI, 0.51 to 0.95; p = 0.022) and with all-cause mortality as a competing risk (subdistribution HR, 0.71; 95% CI, 0.53 to 0.97; p = 0.029). In addition, telmisartan users had a lower risk of any IS (6.84% versus 8.57%; HR, 0.79; 95% CI, 0.67 to 0.94; p = 0.008) during long-term follow-up. Study limitations included potential residual confounding by indication, the limits of causal interpretation in an observational study, and bias caused by using diagnostic and medication codes in place of detailed clinical data.
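The abstract does not specify the matching algorithm or caliper used for the 1:4 propensity score matching; as a simplified stand-in, greedy 1:4 nearest-neighbour matching on the propensity score can be sketched as follows (all scores and the 0.05 caliper are hypothetical):

```python
def match_1_to_4(treated, controls, caliper=0.05):
    """Greedy 1:4 nearest-neighbour matching without replacement:
    each treated patient takes up to 4 unmatched controls whose
    propensity scores fall within the caliper."""
    pool = dict(enumerate(controls))      # control id -> propensity score
    matched = {}
    for t_id, t_ps in treated.items():
        picks = []
        for _ in range(4):
            # Nearest remaining control within the caliper, if any.
            best = min(pool, key=lambda c: abs(pool[c] - t_ps), default=None)
            if best is None or abs(pool[best] - t_ps) > caliper:
                break
            picks.append(best)
            del pool[best]                # match without replacement
        matched[t_id] = picks
    return matched

# Hypothetical propensity scores for 2 treated patients and 9 controls.
treated = {"t1": 0.30, "t2": 0.60}
controls = [0.29, 0.31, 0.30, 0.32, 0.59, 0.61, 0.60, 0.62, 0.95]
m = match_1_to_4(treated, controls)
```

Production analyses usually use dedicated matching software with optimal (rather than greedy) matching and diagnostics for covariate balance after matching.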

Conclusions

The current study suggests that telmisartan use in hypertensive T2DM patients may be associated with a lower risk of dementia and any IS events in an East Asian population.