"The limits of my language are the limits of my mind. All I know is what I have words for" (Wittgenstein). When learning something completely new, we connect the unknown term to an already existing part of our knowledge. We can only build new ideas and insights upon an existing conceptual foundation. In the field of statistics, we educators frequently find ourselves met with great confusion when teaching novices. These students, entirely unfamiliar with even basic statistics, must connect the introduced statistical terms within their personal existing networks of largely non-statistical knowledge. Lecturers, on the other hand, who are well versed in statistics, have deeply internalized the content to be taught and its relevant context. The juxtaposition of the two roles may produce amusement in a lecturer upon gaining insight into the word associations made by the statistical novices. For example, a ‘logistic regression’ does not involve the ‘shipping of goods in economically difficult times,’ though this might seem entirely reasonable and intuitive to the statistics learner. Other times, these different perspectives can lead to headaches and frustration for both learners and their lecturers. In this article, we illustrate how simple statistical terms are initially connected to a student’s pre-existing knowledge and how these associations change after completing an introductory course in applied statistics. Furthermore, we emphasize the important difference between “term”, “approach”, and “context”. Understanding this fundamental distinction may help improve the communication between the lecturer and the learner. We offer a collection of practical tools for instructors to help promote students’ conceptual understanding in a supportive, mutually beneficial learning environment.
Background and Objectives
Despite the long-standing consensus on the importance of tumor size, tumor number and carcinoembryonic antigen (CEA) levels as predictors of long-term outcomes among patients with colorectal liver metastases (CRLM), optimal prognostic cut-offs for these variables have not been established.
Methods
Patients who underwent curative-intent resection of CRLM and had available data on at least one of the three variables of interest above were selected from a multi-institutional dataset of patients with known KRAS mutational status. The resulting cohort was randomly split into training and testing datasets, and recursive partitioning analysis was employed to determine optimal cut-offs. The concordance probability estimates (CPEs) for these optimal cut-offs were calculated and compared to the CPEs for the most widely used cut-offs in the surgical literature.
Results
A total of 1643 patients who met eligibility criteria were identified. Following recursive partitioning analysis in the training dataset, the following cut-offs were identified: 2.95 cm for tumor size, 1.5 for tumor number and 6.15 ng/ml for CEA levels. In the entire dataset, the calculated CPEs for the new tumor size (0.52), tumor number (0.56) and CEA (0.53) cut-offs exceeded the CPEs for other commonly employed cut-offs.
Conclusion
The current study was able to identify optimal cut-offs for the three most commonly employed prognostic factors in CRLM. While the per variable gains in discriminatory power are modest, these novel cut-offs may help produce appreciable increases in prognostic performance when combined in the context of future risk scores.
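The split search at the heart of recursive partitioning can be sketched in a few lines. The following is an illustrative depth-1, CART-style split on an invented toy cohort, not the authors' analysis code: the cut-off is the candidate threshold minimising weighted Gini impurity, placed at the midpoint between adjacent values.

```python
# Hypothetical sketch: one split of a recursive partitioning (CART) tree.

def gini(labels):
    """Gini impurity of a list of 0/1 outcome labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_cutoff(values, labels):
    """Threshold minimising weighted Gini impurity of the two child nodes."""
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best_score:
            # midpoint between adjacent observed values, as in CART
            best = (pairs[i - 1][0] + pairs[i][0]) / 2
            best_score = score
    return best

# Toy cohort: tumor sizes (cm) and a binary poor-outcome indicator.
sizes = [1.0, 1.5, 2.0, 2.5, 3.4, 4.0, 5.0, 6.0]
events = [0, 0, 0, 0, 1, 1, 1, 1]
cutoff = best_cutoff(sizes, events)
print(cutoff)  # → 2.95 (midpoint cut-offs like this arise exactly this way)
```

In the study itself the partitioning would be applied recursively in the training set, and the resulting cut-offs evaluated by concordance probability in the test set.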
Intensive care units (ICUs) are often flooded with alarms from monitoring devices, which constitutes a hazard to both staff and patients. To date, the suggested solutions to excessive monitoring alarms have remained at the research level. We aimed to identify patient characteristics that affect the ICU alarm rate, with the goal of proposing a straightforward solution that can easily be implemented in ICUs. Alarm logs from eight adult ICUs of a tertiary care university hospital in Berlin, Germany were retrospectively collected between September 2019 and March 2021. Adult patients admitted to the ICU with at least 24 h of continuous alarm logs were included in the study. A total of 26,890 observations from 3205 patients were included. The sum of alarms per patient per day was calculated; the median was 119. Twenty-three variables were extracted from patients' electronic health records (EHR), and a multivariable logistic regression was performed to evaluate the association between patient characteristics and alarm rates. Invasive blood pressure monitoring (adjusted odds ratio (aOR) 4.68, 95% CI 4.15–5.29, p < 0.001), invasive mechanical ventilation (aOR 1.24, 95% CI 1.16–1.32, p < 0.001), heart failure (aOR 1.26, 95% CI 1.19–1.35, p < 0.001), chronic renal failure (aOR 1.18, 95% CI 1.10–1.27, p < 0.001), hypertension (aOR 1.19, 95% CI 1.13–1.26, p < 0.001), high RASS (aOR 1.22, 95% CI 1.18–1.25, p < 0.001) and scheduled surgical admission (aOR 1.22, 95% CI 1.13–1.32, p < 0.001) were significantly associated with a high alarm rate. Our study suggests that patient-specific alarm management should be integrated into the clinical routine of ICUs.
To reduce the overall alarm load, particular attention regarding alarm management should be paid to patients with invasive blood pressure monitoring, invasive mechanical ventilation, heart failure, chronic renal failure, hypertension, high RASS or scheduled surgical admission since they are more likely to have a high contribution to noise pollution, alarm fatigue and hence compromised patient safety in ICUs.
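An adjusted odds ratio of the kind reported above is the exponentiated coefficient of a multivariable logistic regression, with the covariates held in the model. A minimal sketch with simulated data standing in for the EHR variables (variable names and effect sizes are invented for the example, not taken from the study):

```python
import math
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression fitted by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return beta

rng = np.random.default_rng(0)
n = 2000
invasive_bp = rng.integers(0, 2, n).astype(float)  # exposure of interest
ventilated = rng.integers(0, 2, n).astype(float)   # covariate adjusted for
logit = -1.0 + 1.5 * invasive_bp + 0.2 * ventilated
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), invasive_bp, ventilated])
beta = fit_logistic(X, y)
aOR = math.exp(beta[1])  # adjusted OR for the exposure
print(aOR)  # close to exp(1.5) ≈ 4.48 by construction
```

Exponentiating the coefficient converts the additive log-odds effect into the multiplicative odds-ratio scale used throughout the abstract.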
Objective:
Cervical mucus plugs are enriched with proteins of known immunological function. We aimed to characterize the anti-HIV-1 activity of cervical mucus plugs against a panel of different HIV-1 strains in the contexts of cell-free and cell-associated virus.
Design:
A cohort of consenting HIV-1-negative and HIV-1-positive pregnant women in labour was recruited from Mthatha General Hospital in the Eastern Cape province of South Africa, from whom the cervical mucus plugs were collected in 6 M guanidinium chloride with protease inhibitors and transported to our laboratories at −80 °C.
Methods:
Samples were centrifuged to remove insoluble material and dialysed before freeze-drying and cell viability assays. The antiviral activities of the samples were studied using luminometric reporter assays and flow cytometry. Time-of-addition and BlaM-Vpr virus–cell fusion assays were used to pinpoint the antiviral mechanisms of the cervical mucus plugs, before proteomic profiling using liquid chromatography–tandem mass spectrometry.
Results:
The proteinaceous fraction of the cervical mucus plugs exhibited anti-HIV-1 activity with inter-individual variations and some degree of specificity among different HIV-1 strains. Cell-associated HIV-1 was less susceptible to inhibition by the potent samples when compared with cell-free HIV-1. The samples with high antiviral potency exhibited a distinct proteomic profile when compared with the less potent samples.
Conclusion:
The crude cervical mucus plugs exhibit anti-HIV-1 activity, which is defined by a specific proteomic profile.
BACKGROUND: Postoperative delirium (POD) is an acute and common complication after surgery that can increase morbidity and mortality. The few previous studies examining the association between preoperative pain and POD have reported inconsistent findings. Our purpose was to investigate the association between preoperative chronic pain and POD.
METHODS: This prospective observational cohort study included 200 patients ≥ 18 years scheduled for elective surgery under general anaesthesia in a tertiary care hospital. POD was defined as meeting diagnostic criteria during the study visits (according to delirium screening tests and the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition), or by diagnosis of the responsible physicians. Chronic pain was defined as pain lasting six months or longer. Features of chronic pain were assessed with the German Pain Questionnaire, including the Depression Anxiety and Stress Scale-21 (DASS-21). Associations with POD were assessed using logistic regression analysis adjusting for confounding factors.
RESULTS: Thirty-nine (22%) of 176 patients developed POD. Chronic pain was not associated with POD after adjustment for ASA physical status, duration of anaesthesia and DASS-21 anxiety score (odds ratio [OR] 2.216, 95% confidence interval [CI] 0.968–5.070, P = 0.060). A subgroup analysis of chronic pain patients revealed that current pain intensity was higher in patients with POD.
CONCLUSIONS: Preoperative chronic pain was not an independent predictor of POD. Current pain intensity was higher in chronic pain patients with POD, indicating that certain features of pain might be influential. Further research is needed to examine different forms of preoperative pain and their possible influence on POD.
Background
A pre-existing neurocognitive disorder (NCD) is a relevant factor for the outcome of surgical patients. To improve understanding of these conditions, we investigated the association between parameters of the cholinergic system and NCD.
Method
This investigation is part of the BioCog project (www.biocog.eu), a prospective multicenter observational study including patients aged 65 years and older scheduled for elective surgery. Patients with a Mini-Mental State Examination (MMSE) score ≤23 points were excluded. Neurocognitive disorder was assessed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. The basal forebrain cholinergic system volume (BFCSV) was assessed with magnetic resonance imaging, the peripheral cholinesterase (ChE) activities with point-of-care measurements, and the anticholinergic load by analyzing long-term medication with anticholinergic scales (Anticholinergic Drug Scale [ADS], Anticholinergic Risk Scale [ARS], Anticholinergic Cognitive Burden Scale [ACBS]). The associations of BFCSV, ChE activities, and anticholinergic scales with NCD were studied with logistic regression analysis, adjusting for confounding factors.
Results
A total of 797 participants (mean age 72 years, 42% female) were included. One hundred and eleven patients (13.9%) fulfilled the criteria for mild NCD and 82 patients (10.3%) for major NCD. We found that AcetylChE activity (odds ratio [95% confidence interval] 1.061 [1.010, 1.115] per U/gHb), ADS score (1.353 [1.063, 1.723] per point) and ARS score (1.623 [1.100, 2.397] per point) were associated with major NCD. However, we found no association of BFCSV or ButyrylChE activity with mild or major NCD.
Conclusions
AcetylChE activity and anticholinergic load were associated with major NCD. Future research should focus on the association of the cholinergic system and the development of postoperative delirium and postoperative NCD.
Introduction Postoperative delirium (POD) is seen in approximately 15% of elderly patients and is related to poorer outcomes. In 2017, the Federal Joint Committee (Gemeinsamer Bundesausschuss) introduced a ‘quality contract’ (QC) as a new instrument to improve healthcare in Germany. One of the four areas for improvement of in-patient care is the ‘Prevention of POD in the care of elderly patients’ (QC-POD), as a means to reduce the risk of developing POD and its complications.
The Institute for Quality Assurance and Transparency in Health Care identified gaps in the in-patient care of elderly patients related to the prevention, screening and treatment of POD, as required by consensus-based and evidence-based delirium guidelines. This paper introduces the QC-POD protocol, which aims to implement these guidelines into the clinical routine. There is an urgent need for well-structured, standardised and interdisciplinary pathways that enable the reliable screening and treatment of POD. Along with effective preventive measures, these concepts have a considerable potential to improve the care of elderly patients.
Methods and analysis The QC-POD study is a non-randomised, pre–post, monocentric, prospective trial with an interventional concept following a baseline control period. The QC-POD trial was initiated on 1 April 2020 between Charité-Universitätsmedizin Berlin and the German health insurance company BARMER and will end on 30 June 2023. Inclusion criteria: patients 70 years of age or older who are scheduled for a surgical procedure requiring anaesthesia and are insured with the QC partner (BARMER). Exclusion criteria included a language barrier, moribund patients and those unwilling or unable to provide informed consent. The QC-POD protocol provides perioperative intervention at least twice daily, with delirium screening and non-pharmacological preventive measures.
Ethics and dissemination This protocol was approved by the ethics committee of the Charité-Universitätsmedizin, Berlin, Germany (EA1/054/20). The results will be published in a peer-reviewed scientific journal and presented at national and international conferences.
Preoperative medication use and development of postoperative delirium and cognitive dysfunction
(2021)
Postoperative delirium (POD) and postoperative (neuro-)cognitive disorder (POCD) are frequent and serious complications after operations. We aimed to investigate the association between pre-operative polypharmacy and potentially inappropriate medications and the development of POD/POCD in elderly patients. This investigation is part of the European BioCog project (www.biocog.eu), a prospective multicenter observational study with elderly surgical patients. Patients with a Mini-Mental State Examination score less than or equal to 23 points were excluded. POD was assessed up to 7 days after surgery using the Nursing Delirium Screening Scale, the Confusion Assessment Method (for the intensive care unit [ICU]), and a patient chart review. POCD was assessed 3 months after surgery with a neuropsychological test battery. Pre-operative long-term medication was evaluated in terms of polypharmacy (≥5 agents) and potentially inappropriate medication (defined by the PRISCUS and European list of potentially inappropriate medications [EU(7)-PIM] lists), and associations with POD and POCD were analyzed using logistic regression analysis. Eight hundred thirty-seven participants were included in the analysis of POD and 562 in the analysis of POCD. Of these, 165 patients (19.7%) fulfilled the criteria for POD and 60 (10.7%) for POCD. After adjusting for confounders, neither pre-operative polypharmacy nor the intake of potentially inappropriate medications was shown to be associated with the development of POD or POCD. Future studies should focus on the evaluation of drug interactions to determine whether patients benefit from a pre-operative medication adjustment.
Background
Postoperative delirium (POD) is a frequent and serious complication after surgery. Evidence of a relationship between anticholinergic medication and the development of delirium is inconclusive, but studies on POD are rare.
Objectives
The objective of this study was to evaluate the anticholinergic load of preoperative medication in older adult patients and its association with the development of POD.
Methods
This investigation was part of the European BioCog project (http://www.biocog.eu), a prospective multicenter observational study in older adult surgical patients (ClinicalTrials.gov identifier: NCT02265263, 15 October 2014). Patients with a Mini–Mental State Examination score ≤ 23 points were excluded. POD was assessed up to 7 days after surgery using the Nursing Delirium Screening Scale, Confusion Assessment Method and a patient chart review. The preoperative anticholinergic load was calculated using the Anticholinergic Drug Scale (ADS), the Anticholinergic Risk Scale (ARS) and the Anticholinergic Cognitive Burden Scale (ACBS), and associations with POD were analyzed using logistic regression analysis adjusting for age, comorbidities, duration of anesthesia and number of drugs used.
Results
In total, 837 participants were included for analysis, and 165 patients (19.7%) fulfilled the criteria of POD. After adjusting for confounders, we found no association between preoperative anticholinergic load and the development of POD (ADS [points] odds ratio [OR] 0.928; 95% confidence interval [CI] 0.749–1.150; ARS [points] OR 0.832; 95% CI 0.564–1.227; ACBS [points] OR 1.045; 95% CI 0.842–1.296).
Conclusion
This study found no association between the anticholinergic load of drugs used preoperatively and the development of POD in older adult patients without severe preexisting cognitive impairment. Future analyses should examine the influence of intra- and postoperative administration of anticholinergic drugs as well as dosages of and interactions between medications.
BACKGROUND:
Intraoperative electroencephalography (EEG) signatures related to the development of postoperative delirium (POD) in older patients are frequently studied. However, a broad analysis of EEG dynamics covering the preoperative, postinduction, intraoperative and postoperative periods and their correlation with POD development is still lacking. We explored the relationship between perioperative EEG spectra-derived parameters and POD development, aiming to ascertain the diagnostic utility of these parameters for detecting patients who develop POD.
METHODS:
Patients aged ≥65 years undergoing elective surgeries that were expected to last more than 60 minutes were included in this prospective, observational single center study (Biomarker Development for Postoperative Cognitive Impairment [BioCog] study). Frontal EEGs were recorded, starting before induction of anesthesia and lasting until recovery of consciousness. EEG data were analyzed based on raw EEG files and downloaded Excel data files. We performed multitaper spectral analyses of relevant EEG epochs and used the multitaper spectral estimates to calculate corresponding spectral parameters. POD assessments were performed twice daily up to the seventh postoperative day. Our primary aim was to analyze the relation between the perioperative spectral edge frequency (SEF) and the development of POD.
RESULTS:
Of the 237 included patients, 41 (17%) developed POD. The preoperative EEG in POD patients was associated with lower values in both SEF (POD 13.1 ± 4.6 Hz versus no postoperative delirium [NoPOD] 17.4 ± 6.9 Hz; P = .002) and corresponding γ-band power (POD −24.33 ± 2.8 dB versus NoPOD −17.9 ± 4.81 dB), as well as reduced postinduction absolute α-band power (POD −7.37 ± 4.52 dB versus NoPOD −5 ± 5.03 dB). The ratio of SEF from the preoperative to postinduction state (SEF ratio) was ~1 in POD patients, whereas NoPOD patients showed a SEF ratio >1, thus indicating a slowing of the EEG with loss of consciousness. Preoperative SEF, preoperative γ-band power, and SEF ratio were independently associated with POD (P = .025; odds ratio [OR] = 0.892, 95% confidence interval [CI], 0.808–0.986; P = .029; OR = 0.568, 95% CI, 0.342–0.944; and P = .009; OR = 0.108, 95% CI, 0.021–0.568, respectively).
CONCLUSIONS:
Lower preoperative SEF, absence of EEG slowing during the transition from the preoperative to the unconscious state, and lower EEG power in relevant frequency bands in both states are related to POD development. These findings may point to an underlying pathophysiology and might be used as EEG-based markers for early identification of patients at risk of developing POD.
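The central EEG parameter, the spectral edge frequency, is simply the frequency below which a fixed share of total spectral power lies (the common 95% definition is assumed here). A sketch on a synthetic alpha-dominant signal using a plain periodogram, not the study's multitaper pipeline:

```python
import numpy as np

def sef(freqs, power, edge=0.95):
    """Spectral edge frequency: lowest f below which `edge` of total power lies."""
    cum = np.cumsum(power)
    return freqs[np.searchsorted(cum, edge * cum[-1])]

fs = 250                              # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Alpha-dominant toy "EEG": a 10 Hz oscillation plus weak broadband noise.
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(x)) ** 2   # simple periodogram
freqs = np.fft.rfftfreq(t.size, 1 / fs)
sef95 = sef(freqs, power)
print(sef95)  # sits at the dominant 10 Hz peak for this signal
```

The SEF ratio in the abstract is the preoperative value divided by the postinduction value: anesthetic-induced slowing shifts power toward low frequencies, lowering the postinduction SEF and pushing the ratio above 1.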
The benzodiazepine midazolam is one of the most frequently used sedatives in intensive care medicine, but it has an unfavorable pharmacokinetic profile when continuously applied. As a consequence, patients are frequently sedated for longer and more deeply than intended. Due to its distinct pharmacological features, including cytochrome P450-independent metabolism, intravenous lormetazepam might be clinically advantageous compared to midazolam. In this retrospective cohort study, we compared patients who received either intravenous lormetazepam or midazolam with respect to their survival and sedation characteristics. The cohort included 3314 mechanically ventilated, critically ill patients who received one of the two drugs in a tertiary medical center in Germany between 2006 and 2018. A Cox proportional hazards model with mortality as outcome and APACHE II, age, gender, and admission mode as covariates revealed a hazard ratio of 1.75 [95% CI 1.46–2.09; p < 0.001] for in-hospital mortality associated with the use of midazolam. After additionally adjusting for sedation intensity, the HR became 1.04 [95% CI 0.83–1.31; p = 0.97]. We therefore concluded that excessive sedation occurs more frequently in critically ill patients treated with midazolam than in patients treated with lormetazepam. These findings require further investigation in prospective trials to assess whether lormetazepam, due to its ability to maintain light sedation, may be preferable to other benzodiazepines for sedation in the ICU.
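The hazard ratio quoted above is exp(β) from a Cox proportional hazards model. As a hedged illustration, here is a one-covariate Cox fit via Newton's method on the Breslow partial likelihood, with simulated data whose true hazard ratio is set to 1.75 (the study data and its multivariate adjustment are not reproduced here):

```python
import numpy as np

def cox_fit(time, event, x, iters=30):
    """One-covariate Cox PH model: Breslow partial likelihood, Newton's method."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    beta = 0.0
    for _ in range(iters):
        eta = np.exp(beta * x)
        # With times sorted ascending, reversed cumulative sums give the
        # risk-set sums over {j : t_j >= t_i} for each subject i.
        s0 = np.cumsum(eta[::-1])[::-1]
        s1 = np.cumsum((eta * x)[::-1])[::-1]
        s2 = np.cumsum((eta * x * x)[::-1])[::-1]
        grad = np.sum(event * (x - s1 / s0))
        hess = -np.sum(event * (s2 / s0 - (s1 / s0) ** 2))
        beta -= grad / hess
    return beta

rng = np.random.default_rng(2)
n = 1000
drug = rng.integers(0, 2, n).astype(float)         # 1 = midazolam-like exposure
time = rng.exponential(1.0 / np.exp(0.56 * drug))  # true HR = exp(0.56) ≈ 1.75
event = np.ones(n)                                 # no censoring in this toy
hr = np.exp(cox_fit(time, event, drug))
print(hr)  # recovers roughly the simulated hazard ratio
```

The study's model additionally adjusts for APACHE II, age, gender, and admission mode, which requires the multivariate form of the same Newton step.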
Background
There is no consensus on the instruments for diagnosis of post-intensive care syndrome (PICS). We present a proposal for a set of outcome measurement instruments of PICS in outpatient care.
Methods
We conducted a three-round, semi-structured consensus-seeking process with medical experts, each round followed by an exploratory feasibility investigation with intensive care unit survivors (n1 = 5; n2 = 5; n3 = 7). Fourteen participants from nine stakeholder groups participated in the first and second consensus meetings. In the third consensus meeting, a core group of six clinical researchers refined the final outcome measurement instrument set proposal.
Results
We suggest an outcome measurement instrument set used in a two-step process. First step: Screening with brief tests covering PICS domains of (1) mental health (Patient Health Questionnaire-4 (PHQ-4)), (2) cognition (MiniCog, Animal Naming), (3) physical function (Timed Up-and-Go (TUG), handgrip strength), and (4) health-related quality of life (HRQoL) (EQ-5D-5L). Single items measure subjective health before and after the intensive care unit stay. If patients report new or worsened health problems after intensive care unit discharge and show relevant impairment in at least one of the screening tests, a second extended assessment follows: (1) Mental health (Patient Health Questionnaire-8 (PHQ-8), Generalized Anxiety Disorder Scale-7 (GAD-7), Impact of Event Scale – revised (IES-R)); (2) cognition (Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), Trail Making Test (TMT) A and B); (3) physical function (2-Minute Walk Test (2-MWT), handgrip strength, Short Physical Performance Battery (SPPB)); and (4) HRQoL (EQ-5D-5L, 12-Item WHO Disability Assessment Schedule (WHODAS 2.0)).
Conclusions
We propose an outcome measurement instrument set to be used in a two-step measurement of PICS, combining performance-based and patient-reported outcome measures. First-step screening is brief, free of charge, and easily applicable by healthcare professionals across different sectors. If indicated, specialized healthcare providers can perform the extended, second-step assessment. Usage of the first-step screening of our suggested outcome measurement instrument set in outpatient clinics, with subsequent transfer to specialists, is recommended for all intensive care unit survivors. This may increase awareness and reduce the burden of PICS.
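The two-step gate described above reduces to a small decision rule: the extended assessment follows only if the patient reports new or worsened health problems and at least one screening test is impaired. A sketch with placeholder test names; the validated impairment cut-offs of the actual instruments are deliberately not encoded here:

```python
# Hypothetical decision rule for triggering the extended second-step assessment.

def needs_extended_assessment(new_or_worse_problems, screening_impaired):
    """screening_impaired: dict mapping screening test name -> impaired (bool)."""
    return bool(new_or_worse_problems) and any(screening_impaired.values())

screen = {"PHQ-4": False, "MiniCog": True, "TUG": False, "EQ-5D-5L": False}
step2 = needs_extended_assessment(True, screen)
print(step2)  # → True: one screening test impaired and problems reported

no_flags = needs_extended_assessment(True, {k: False for k in screen})
print(no_flags)  # → False: no screening test impaired
```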
Background
Beta-blocker (BB) therapy plays a central role in the treatment of cardiovascular diseases. An increasing number of patients with cardiovascular diseases undergo noncardiac surgery, where opioids are an integral part of anesthesiological management. There is evidence to suggest that short-term intravenous BB therapy may influence perioperative opioid requirements due to an assumed cross-talk between G-protein-coupled beta-adrenergic and opioid receptors. Whether chronic BB therapy could also influence perioperative opioid requirements is unclear.
Methods
A post hoc analysis of prospectively collected data from a multicenter observational study (BioCog) was performed. Inclusion criteria were elderly patients (≥ 65 years) undergoing elective noncardiac surgery under total intravenous general anesthesia without the use of regional anesthesia, with a duration of anesthesia ≥ 60 min. Two groups were defined: patients with and without BB in their regular preoperative medication. The administered opioids were converted to their respective morphine equivalent doses. Multiple regression analysis was performed on the morphine index to identify independent predictors.
Results
A total of 747 patients were included in the BioCog study at the Berlin study center, of whom 106 fulfilled the inclusion criteria. Of these, 37 were on chronic BB therapy. These patients were significantly more likely to have preoperative arterial hypertension (94.6%), chronic renal failure (27%) and hyperlipoproteinemia (51.4%) than patients without BB. The two groups did not differ in cumulative perioperative morphine equivalent dose (230.9 mg (BB group) vs. 214.8 mg (non-BB group)). Predictive factors for an increased morphine index were older age, male sex, longer duration of anesthesia and surgery of the trunk. In a model with log-transformed morphine index, only sex (female) and duration of anesthesia remained predictive factors.
Conclusions
Chronic BB therapy was not associated with reduced perioperative opioid consumption.
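The morphine index above is built by converting each administered opioid to morphine equivalents and summing. As an illustration only: the potency factors below are rough textbook approximations invented for the example, not the conversion table used in the study.

```python
# Illustrative conversion of perioperative opioid doses to morphine equivalents.
# ASSUMED potency factors (i.v., relative to morphine) -- placeholders only.
MORPHINE_EQ_FACTOR = {
    "morphine": 1.0,
    "fentanyl": 100.0,     # assumed ~100x the potency of morphine
    "sufentanil": 1000.0,  # assumed ~1000x
}

def morphine_index(opioids):
    """opioids: list of (drug, i.v. dose in mg) given perioperatively."""
    return sum(dose * MORPHINE_EQ_FACTOR[drug] for drug, dose in opioids)

case = [("fentanyl", 0.5), ("morphine", 10.0)]
mi = morphine_index(case)
print(mi)  # → 60.0 (0.5 * 100 + 10 * 1) mg morphine equivalents
```

The resulting per-patient index is then the dependent variable of the multiple regression described in the methods.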
Background:
Etomidate is typically used as an induction agent in cardiac surgery because it has little impact on hemodynamics. It is a known suppressor of adrenocortical function and may increase the risk of postoperative infections, sepsis, and mortality. The aim of this study was to evaluate whether etomidate increases the risk of postoperative sepsis (primary outcome) and infections (secondary outcome) compared to propofol.
Methods:
This was a retrospective before–after trial (IRB EA1/143/20) performed at a tertiary medical center in Berlin, Germany, between 10/2012 and 01/2015. Patients undergoing cardiac surgery were investigated within two observation intervals, during which etomidate and propofol were the sole induction agents.
Results:
A total of 1462 patients were included, yielding 622 matched pairs after caliper propensity-score matching for the final analysis. Sepsis rates did not differ in the matched cohort (etomidate: 11.5% vs. propofol: 8.2%, p = 0.052). Patients in the etomidate interval were more likely to develop hospital-acquired pneumonia (etomidate: 18.6% vs. propofol: 14.0%, p = 0.031).
Conclusion:
Our study showed that a single dose of etomidate is not statistically associated with higher postoperative sepsis rates after cardiac surgery, but is associated with a higher incidence of hospital-acquired pneumonia. However, there is a notable trend towards a higher sepsis rate.
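Caliper propensity-score matching, named in the results, greedily pairs each treated patient with the nearest unmatched control whose score lies within the caliper. An illustrative sketch on random scores (the caliper width and data are invented; the study's matching software and settings are not reproduced):

```python
import numpy as np

def caliper_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching within a caliper."""
    used = set()
    pairs = []
    for i, p in enumerate(ps_treated):
        d = np.abs(ps_control - p)
        d[list(used)] = np.inf        # each control may be matched only once
        j = int(np.argmin(d))
        if d[j] <= caliper:
            pairs.append((i, j))
            used.add(j)
    return pairs

rng = np.random.default_rng(3)
ps_t = rng.uniform(0.3, 0.7, 50)      # propensity scores, treated (etomidate-like)
ps_c = rng.uniform(0.2, 0.8, 200)     # propensity scores, controls (propofol-like)
pairs = caliper_match(ps_t, ps_c)
print(len(pairs))  # number of matched pairs found within the caliper
```

Outcomes such as sepsis or pneumonia rates are then compared only within the matched pairs, which is what makes the two intervals comparable despite nonrandom assignment.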
Background:
Cardiac surgery patients represent a high-risk cohort in intensive care units (ICUs). Central venous pressure (CVP) measurement seems to remain an integral part of hemodynamic monitoring, especially in cardio-surgical ICUs. However, its value as a prognostic marker for organ failure is still unclear. We therefore analyzed postoperative CVP values after adult cardiac surgery in a large cohort with regard to their prognostic value for morbidity and mortality.
Methods:
All adult patients admitted to our ICUs between 2006 and 2019 after cardiac surgery were eligible for inclusion in the study (n = 11,198). We calculated the median initial CVP (miCVP) after admission to the ICU, which yielded valid values for 9802 patients. An ROC curve analysis for the optimal miCVP cut-off to predict ICU mortality was conducted, with subsequent allocation of patients into (a) a low miCVP (LCVP) group (≤11 mmHg) and (b) a high miCVP (HCVP) group (>11 mmHg). We analyzed the impact of high miCVP on morbidity and mortality by propensity score matching (PSM) and logistic regression.
Results:
ICU mortality was increased in HCVP patients. In addition, patients in the HCVP group required longer mechanical ventilation, had a higher incidence of acute kidney injury, were more frequently treated with renal replacement therapy, and showed a higher risk of postoperative liver dysfunction, parametrized by a postoperative rise of ≥10 in the MELD score. Multiple regression analysis confirmed an apparently independent effect of HCVP on postoperative ICU mortality and in-hospital mortality.
Conclusions:
A high initial CVP in the early postoperative ICU course after cardiac surgery is associated with worse patient outcomes. Whether CVP, as a readily and continuously available hemodynamic parameter, should prompt clinical efforts regarding diagnostics and/or treatment warrants further investigation.
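A common way to derive such an ROC-based cut-off is to maximise Youden's J (sensitivity + specificity − 1) over all candidate thresholds; the abstract does not name its criterion, so this is an assumed, illustrative choice on toy values:

```python
import numpy as np

def youden_cutoff(values, outcome):
    """Threshold maximising Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in np.unique(values):
        pred = values > t
        tp = np.sum(pred & (outcome == 1))
        fn = np.sum(~pred & (outcome == 1))
        tn = np.sum(~pred & (outcome == 0))
        fp = np.sum(pred & (outcome == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Toy data: median initial CVP (mmHg) and ICU mortality indicator.
cvp = np.array([6, 8, 9, 10, 11, 12, 13, 15, 16, 18])
died = np.array([0, 0, 0, 0, 0, 1, 1, 0, 1, 1])
cut = youden_cutoff(cvp, died)
print(cut)  # → 11, i.e. grouping at "> 11 mmHg" as in the abstract
```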
Background
In DNA methylation analyses such as epigenome-wide association studies, effects at differentially methylated CpG sites are assessed. Two kinds of outcomes can be used for statistical analysis: Beta-values and M-values. M-values follow a normal distribution and help to detect differentially methylated CpG sites. As biological effect measures, however, differences in M-values lack a direct interpretation. Beta-values are of more interest since they can be interpreted directly as differences in the percentage of DNA methylation at a given CpG site, but they have poor statistical properties. Different frameworks have been proposed for reporting estimands in DNA methylation analysis, relying on Beta-values, M-values, or both.
Results
We present and discuss four possible approaches to obtaining estimands in DNA methylation analysis. In addition, we discuss the usage of M-values or Beta-values in the context of bioinformatic pipelines, which often demand a predefined outcome. We show the dependency between differences in M-values and differences in Beta-values in two data simulations: an analysis with and one without a confounder effect. Without confounder effects, M-values can be used for the statistical analysis and Beta-value statistics for the reporting. If confounder effects exist, we demonstrate the deviations and correct the effects using the intercept method. Finally, we examine the problem on two large human genome-wide DNA methylation datasets to verify the results.
Conclusions
The usage of M-values in the analysis of DNA methylation data produces effect estimates that cannot be interpreted biologically. The parallel usage of Beta-value statistics ignores possible confounder effects and can therefore not be recommended. Hence, if differences in Beta-values are the focus of the study, the intercept method is recommended; hyper- or hypomethylated CpG sites must then be carefully evaluated. If an exploratory analysis of candidate CpG sites is the aim of the study, M-values can be used for inference.
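The two scales are linked by the standard logit-type transformation M = log2(Beta / (1 − Beta)). The short sketch below shows why a fixed M-value difference has no single Beta-scale meaning: the same shift of one M-unit corresponds to very different methylation-percentage changes depending on the baseline.

```python
import math

# Standard relation between the two methylation scales (assumed here):
# M = log2(Beta / (1 - Beta)), with Beta in (0, 1), and its inverse.
def beta_to_m(beta):
    return math.log2(beta / (1 - beta))

def m_to_beta(m):
    return 2 ** m / (2 ** m + 1)

# The same +1 shift on the M scale, applied at two different baselines:
d50 = m_to_beta(beta_to_m(0.5) + 1) - 0.5   # at 50% methylation: ~0.167
d90 = m_to_beta(beta_to_m(0.9) + 1) - 0.9   # at 90% methylation: ~0.047
print(d50, d90)
```

This nonlinearity is exactly the interpretability problem the intercept method addresses: effects estimated on the well-behaved M scale must be mapped back to the Beta scale at the relevant baseline methylation level.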
Background
At mucosal barrier interfaces, flexible responses of gene expression to long-term environmental changes allow adaptation and fine-tuning of the balance between host defense and uncontrolled, non-resolving inflammation. Epigenetic modifications of the chromatin confer plasticity to the genetic information and give insight into how tissues use the genetic information to adapt to environmental factors. The oral mucosa is particularly exposed to environmental stressors such as a variable microbiota. Likewise, persistent oral inflammation is the most important intrinsic risk factor for the oral inflammatory disease periodontitis and has strong potential to alter DNA methylation patterns. The aim of the current study was to identify epigenetic changes of the oral masticatory mucosa in response to the long-term inflammation that results in periodontitis.
Methods and results
Genome-wide CpG methylation of both inflamed and clinically uninflamed solid gingival tissue biopsies of 60 periodontitis cases was analyzed using the Infinium MethylationEPIC BeadChip. We validated and performed cell-type deconvolution for infiltrated immune cells using the EpiDish algorithm. Effect sizes of differentially methylated positions (DMPs) in gingival epithelial and fibroblast cells were estimated and adjusted for confounding factors using our recently developed "intercept method". In the current EWAS, we identified various genes that showed significantly different methylation between periodontitis-inflamed and uninflamed oral mucosa in periodontitis patients. The strongest differences were observed for genes with roles in wound healing (ROBO2, PTP4A3), cell adhesion (LPXN) and innate immune response (CCL26, DNAJC1, BPI). Enrichment analyses implied a role of epigenetic changes for vesicle trafficking gene sets.
Conclusions
Our results imply specific adaptations of the oral mucosa to a persistent inflammatory environment that involve wound repair, barrier integrity, and innate immune defense.
Objectives: To measure and assess the economic impact of adherence to a single quality indicator (QI) regarding weaning from invasive ventilation.
Design: Retrospective observational single-centre study, based on electronic medical and administrative records.
Setting: Intensive care unit (ICU) of a German university hospital, reference centre for acute respiratory distress syndrome.
Participants: Records of 3063 consecutive mechanically ventilated patients admitted to the ICU between 2012 and 2017 were extracted, of whom 583 adults were eligible for further analysis. Patients' weaning protocols were evaluated for daily adherence to quality standards until ICU discharge. Patients with <65% compliance were assigned to the low adherence group (LAG), patients with ≥65% to the high adherence group (HAG).
Primary and secondary outcome measures: Economic healthcare costs, clinical outcomes and patients’ characteristics.
Results: The LAG consisted of 378 patients with a median economic result of −€3969; the HAG of 205 patients with a median of −€1030 (p<0.001). Median duration of ventilation was 476 (248; 769) hours in the LAG and 389 (247; 608) hours in the HAG (p<0.001). Length of stay (LOS) in the ICU was 21 (12; 35) days in the LAG and 16 (11; 25) days in the HAG (p<0.001). Hospital LOS was 36 (22; 61) days in the LAG and 26 (18; 48) days in the HAG (p=0.001).
Conclusions: High adherence to this single QI is associated with better clinical outcomes and improved economic results. The results therefore support adherence to the QI. However, the examined QI is not the decisive factor influencing the economic outcome.
Background: In longitudinal studies, observations are made over time; the single observations at each time point are therefore dependent, making them repeated measurements. In this work, we explore a different, counterintuitive setting: at each developmental time point, a lethal observation is performed on the pregnant or nursing mother, so the single time points are independent. However, the observations on the offspring at each time point are correlated with each other, because each litter consists of several (genetically linked) littermates. In addition, the observed time series is short from a statistical perspective, as animal ethics prevent killing more mother mice than absolutely necessary, and murine development is short anyway. We address these challenges by using multiple contrast tests and visualizing the change point with confidence intervals.
Results: We used linear mixed models to model the variability of the mother. The estimates from the linear mixed model are then used in multiple contrast tests. There is a variety of contrasts, and intuitively we would use the Changepoint method; however, it does not deliver satisfying results. Interestingly, we found two other contrasts, each capable of answering a different research question in change point detection: i) should a single point with a direction of change be found, or ii) should the overall progression be determined? The Sequen contrast answers the first, the McDermott contrast the second. Confidence intervals deliver effect estimates for the strength of the potential change point. Therefore, the scientist can define a biologically relevant limit of change depending on the research question.
Conclusion: We present a solution with effect estimates for short independent time series with observations nested at a given time point. Multiple contrast tests produce confidence intervals, which allow the position of change points to be determined or the expression course over time to be visualized. We suggest using McDermott's method to determine whether there is an overall significant change within the time frame, while Sequen is better at determining specific change points. In addition, we offer a short formula for the estimation of the maximal length of the time series.
Background
To detect changes in biological processes, samples are often studied at several time points. We examined expression data measured at different developmental stages or, more broadly, historical data. Hence, the main assumption of our proposed methodology was independence between the examined samples over time. In addition, however, the examinations were clustered at each time point by measuring littermates from relatively few mother mice at each developmental stage. As each examination was lethal, we had an independent data structure over the entire history, but a dependent data structure at a particular time point. Over the course of these historical data, we wanted to identify abrupt changes in the parameter of interest: change points.
Results
In this study, we demonstrated the application of generalized hypothesis testing using a linear mixed effects model as a possible method to detect change points. The coefficients from the linear mixed model were used in multiple contrast tests, and the effect estimates were visualized with their respective simultaneous confidence intervals. The latter were used to determine the change point(s). In small simulation studies, we modelled different courses with abrupt changes and compared the influence of different contrast matrices. We found two contrasts, each capable of answering a different research question in change point detection: the Sequen contrast to detect individual change points and the McDermott contrast to find change points due to overall progression. We provide the R code for direct use with worked examples. The applicability of these tests to real experimental data was shown with in-vivo data from a preclinical study.
Conclusion
Simultaneous confidence intervals estimated by multiple contrast tests, using the model fit from a linear mixed model, were able to determine change points in clustered expression data. The confidence intervals directly deliver interpretable effect estimates representing the strength of the potential change point. Hence, scientists can define a biologically relevant threshold of effect strength depending on their research question. We found two rarely used contrasts best suited for detecting a possible change point: the Sequen and McDermott contrasts.
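The Sequen contrast compares each time point with its immediate predecessor (in R, such matrices are available via multcomp::contrMat with type "Sequen"). As a hypothetical sketch of how such a contrast matrix maps estimated group means to consecutive differences (illustrative values, not the authors' R code):

```python
import numpy as np

def sequen_contrast(k):
    # Sequential ("Sequen") contrast matrix for k time points:
    # row i encodes the comparison mean[i+1] - mean[i].
    C = np.zeros((k - 1, k))
    for i in range(k - 1):
        C[i, i] = -1.0
        C[i, i + 1] = 1.0
    return C

# hypothetical model-based means with an abrupt jump between time points 2 and 3
means = np.array([1.0, 1.1, 3.0, 3.1])
effects = sequen_contrast(4) @ means   # consecutive differences
# the largest estimated effect flags the candidate change point
change_point = int(np.argmax(np.abs(effects))) + 1
```

In the actual procedure, each of these contrast estimates would carry a simultaneous confidence interval from the multiple contrast test; an interval excluding the biologically relevant limit marks the change point.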
Background: New ischaemic brain lesions on magnetic resonance imaging (MRI) are reported in up to 86% of patients after transcatheter edge-to-edge repair of the mitral valve (TEER-MV). Knowledge of the exact procedural step(s) that carry the highest risk for cerebral embolisation may help to further improve the procedure.
Aims: The aim of this study was to identify the procedural step(s) that are associated with an increased risk of cerebral embolisation during TEER-MV with the MitraClip system. Furthermore, the risk of overt stroke and silent brain ischaemia after TEER-MV was assessed.
Methods: In this prospective, pre-specified observational study, all patients underwent continuous transcranial Doppler examination during TEER-MV to detect microembolic signals (MES). MES were assigned to specific procedural steps: (1) transseptal puncture and placement of the guide, (2) advancing and adjustment of the clip in the left atrium, (3) device interaction with the MV, and (4) removal of the clip delivery system and the guide. Neurological examination using the National Institutes of Health Stroke Scale (NIHSS) and cerebral MRI were performed before and after TEER-MV.
Results: Fifty-four patients were included. The number of MES differed significantly between the procedural steps with the highest numbers observed during device interaction with the MV. Mild neurological deterioration (NIHSS ≤3) occurred in 9/54 patients. New ischaemic lesions were detected in 21/24 patients who underwent MRI. Larger infarct volume was significantly associated with neurological deterioration.
Conclusions: Cerebral embolisation is inherent to TEER-MV and predominantly occurs during device interaction with the MV. Improvements to the procedure may focus on this procedural step.
Background
Postoperative delirium (POD) and postoperative cognitive dysfunction (POCD) are frequent and serious complications after surgery. We aimed to investigate the association between genetic variants in cholinergic candidate genes, according to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway for cholinergic neurotransmission, and the development of POD or POCD in elderly patients.
Methods
This analysis is part of the European BioCog project (www.biocog.eu), a prospective multicenter observational study of elderly surgical patients. Patients with a Mini-Mental State Examination score ≤ 23 points were excluded. POD was assessed up to seven days after surgery using the Nursing Delirium Screening Scale, the Confusion Assessment Method and a patient chart review. POCD was assessed three months after surgery with a neuropsychological test battery. Genotyping was performed on the Illumina Infinium Global Screening Array. Associations with POD and POCD were analyzed using logistic regression, adjusted for age, comorbidities and duration of anesthesia (the POCD analysis additionally for education). Odds ratios (OR) refer to minor allele counts (0, 1, 2).
Results
745 patients could be included in the POD analysis and 452 in the POCD analysis. The rate of POD in this group was 20.8% (155 patients), and the rate of POCD was 10.2% (46 patients). In a candidate gene approach, three genetic variants of the cholinergic genes CHRM2 and CHRM4 were associated with POD (OR [95% confidence interval], rs8191992: 0.61 [0.46; 0.80]; rs8191992: 1.60 [1.22; 2.09]; rs2067482: 1.64 [1.10; 2.44]). No associations were found for POCD.
Conclusions
We found an association between genetic variants of CHRM2 and CHRM4 and POD. Further studies are needed to investigate whether disturbances in acetylcholine release and synaptic plasticity are involved in the development of POD.
Introduction: Patients undergoing revision total hip surgery (RTHS) have a high prevalence of mild and moderate preoperative anemia, which is associated with adverse outcomes. The aim of this study was to investigate the association of perioperative allogeneic blood transfusions (ABT) and postoperative complications in preoperatively mild compared to moderate anemic patients undergoing RTHS who did not receive a diagnostic anemia workup and treatment before surgery. Methods: We included 1,765 patients between 2007 and 2019 at a university hospital. Patients were categorized by severity of anemia using the WHO criteria for mild, moderate, and severe anemia, based on the first Hb level of the case. Patients were grouped as having received no ABT, 1–2 units of ABT, or more than 2 units of ABT. The need for intraoperative ABT was assessed in accordance with institutional standards. The primary endpoint was the compound incidence of postoperative complications. Secondary outcomes included major/minor complications and length of hospital and ICU stay. Results: Of the 1,765 patients, 31.0% were anemic of any cause before surgery. Transfusion rates were 81% in anemic patients and 41.2% in nonanemic patients. The adjusted risk for compound postoperative complications was significantly higher in patients with moderate anemia (OR 4.88, 95% CI: 1.54–13.15, p = 0.003) but not in patients with mild anemia (OR 1.93, 95% CI: 0.85–3.94, p = 0.090). Perioperative ABT was associated with significantly higher risks for complications in nonanemic patients and showed an increased risk for complications in all anemic patients. In RTHS, perioperative ABT as a treatment for moderate preoperative anemia of any cause was associated with a negative compound effect on postoperative complications, compared to anemia or ABT alone. Discussion: ABT is associated with adverse outcomes in patients with moderate preoperative anemia before RTHS. For this reason, medical treatment of moderate preoperative anemia may be considered.
During gestation, the most drastic change in oxygen supply occurs with the onset of ventilation after birth. As exposing premature infants too early to high arterial oxygen pressure leads to characteristic diseases, we studied the adaptation of the oxygen sensing system and its targets, the hypoxia-inducible factor- (HIF-) regulated genes (HRGs), in the developing lung. We draw a detailed picture of the oxygen sensing system by integrating information from qPCR, immunoblotting, in situ hybridization, and single-cell RNA sequencing data in ex vivo and in vivo models. HIF1α protein was completely destabilized with the onset of pulmonary ventilation, but this did not coincide with expression changes in bona fide HRGs. We observed a modified composition of the HIF-PHD system from intrauterine to neonatal phases: Phd3 was significantly decreased, while Hif2a showed a strong increase and the Hif3a isoform Ipas exclusively peaked at P0. Colocalization studies point to the Hif1a-Phd1 axis as the main regulator of the HIF-PHD system in mouse lung development, complemented by the Hif3a-Phd3 axis during gestation. Hif3a isoform expression showed a stepwise adaptation during the periods of saccular and alveolar differentiation. With a strong hypoxic stimulus, lung ex vivo organ cultures displayed a functioning HIF system at every developmental stage. Approaches with systemic hypoxia or roxadustat treatment revealed only a limited in vivo response of HRGs. Understanding the interplay of the oxygen sensing system components during the transition from saccular to alveolar phases of lung development might help to counteract prematurity-associated diseases like bronchopulmonary dysplasia.
Acute post-operative delirium (POD) and long-term post-operative cognitive dysfunction (POCD) are frequent and associated with increased mortality, dependency on caregiving, and institutionalization rates. The POCD-related cost burden on the German long-term care insurance provides an indication of the savings potential from risk-adapted treatment schemes. Comprehensive estimates have not been assessed or published so far.
A model-based cost-analysis was designed to estimate POCD-related costs in the long-term care insurance. Comprehensive analysis of inpatient operations and procedures (OPS-codes) served as the base for case number calculations, which were then used as input to the actual cost model. POCD-incidence rates were obtained from the BioCog study. Various sensitivity analyses were performed to assess uncertainty of the model results.
Total POCD-related annual costs in the German long-term care insurance amount to approximately 1.6 billion EUR according to the base case of our analysis. Total annual costs for all POCD cases depend on surgery numbers, incidence rates, other assumptions, and uncertain input parameters.
The financial burden to the long-term care insurance is substantial, even in a conservative scenario of the cost model. Variability of results stems from uncertain assumptions, POCD-incidence rates and from uncertain patient numbers who are undergoing surgery and are therefore at risk to develop POCD.
A comparison study on modeling of clustered and overdispersed count data for multiple comparisons
(2021)
Data collected in various scientific fields are often count data. One way to analyze such data is to compare the individual levels of the treatment factor using multiple comparisons. However, the measured individuals are often clustered, e.g. according to litter or rearing, which must be considered when estimating the parameters by a repeated measurement model. In addition, ignoring the overdispersion to which count data are prone leads to an increased type I error rate. We carry out simulation studies using several different data settings and compare different multiple contrast tests with parameter estimates from generalized estimating equations and generalized linear mixed models in order to observe coverage and rejection probabilities. We generate overdispersed, clustered count data in small samples, as can be observed in many biological settings. We found that generalized estimating equations outperform generalized linear mixed models if the variance sandwich estimator is correctly specified. Furthermore, generalized linear mixed models show problems with the convergence rate under certain data settings, although model implementations with fewer convergence problems exist. Finally, we use an example of genetic data to demonstrate the application of the multiple contrast test and the problems of ignoring strong overdispersion.
Background
A peripheral venous catheter (PVC) is the most widely used device for obtaining vascular access, allowing the administration of fluids and medication. Up to 25% of adult patients and 50% of pediatric patients experience a first-attempt cannulation failure. In addition to patient and clinician characteristics, device features might affect handling and success rates. The objective of the study was to compare the first-attempt cannulation success rate between PVCs with wings and a port access (Vasofix® Safety, B. Braun, hereafter abbreviated as VS) and those without (Introcan® Safety, B. Braun, hereafter abbreviated as IS) in an anesthesiological cohort.
Methods
An open label, multi-center, randomized trial was performed. First-attempt cannulation success rates were examined, along with relevant patient, clinician, and device characteristics with univariate and multivariate analyses. Information on handling and adherence to use instructions was gathered, and available catheters were assessed for damage.
Results
Two thousand three hundred four patients were included in the intention-to-treat analysis. The first-attempt success rate was significantly higher with winged and ported catheters (VS) than with the non-winged, non-ported design (IS) (87.5% with VS vs. 78.2% with IS; PChi < .001). Operators rated the handling of VS as superior (rating of "good" or "very good": 86.1% VS vs. 20.8% IS, PChi < .001). Reinsertion of the needle into the catheter after partial withdrawal (prior to or during the catheterization attempt) was associated with an increased risk of cannulation failure (OR 7.909, CI 5.989–10.443, P < .001 and OR 23.023, CI 10.372–51.105, P < .001, respectively) and a twofold risk of catheter damage (OR 1.999, CI 1.347–2.967, P = .001).
Conclusions
First-attempt cannulation success with peripheral ported, winged catheters was higher than with non-ported, non-winged devices. The handling of the winged and ported design was rated better by the clinicians. Needle reinsertions are related to increased rates of catheter damage and cannulation failure.
A brief questionnaire for measuring alarm fatigue in nurses and physicians in intensive care units
(2023)
When exposed to hundreds of medical device alarms per day, intensive care unit (ICU) staff can develop "alarm fatigue" (i.e., desensitisation to alarms). However, no standardised way of quantifying alarm fatigue exists. We aimed to develop a brief questionnaire for measuring alarm fatigue in nurses and physicians. After developing a list of initial items based on a literature review, we conducted 15 cognitive interviews with the target group (13 nurses and two physicians) to ensure that the items were face valid and comprehensible. We then asked 32 experts on alarm fatigue to judge whether the items were suited for measuring alarm fatigue. The resulting 27 items were sent to nurses and physicians from 15 ICUs of a large German hospital. We used exploratory factor analysis to further reduce the number of items and to identify scales. A total of 585 submissions from 707 participants could be analysed (14% from physicians and 64% from nurses). The simple structure of a two-factor model was achieved within three rounds. The final questionnaire (called the Charité Alarm Fatigue Questionnaire, CAFQa) consists of nine items along two scales (the "alarm stress scale" and the "alarm coping scale"). The CAFQa is a brief questionnaire that allows clinical alarm researchers to quantify the alarm fatigue of nurses and physicians. It should not take more than five minutes to administer.