Nowadays, physicians have at their disposal a huge amount of data produced by a large set of diagnostic and instrumental tests, integrated with data obtained by high-throughput technologies. If such data were properly linked and analysed, they could be used to strengthen predictions, so as to improve prevention, shorten time-to-diagnosis, reduce the costs of the health system, and bring out hidden knowledge. Machine learning is currently the principal technique used to leverage data and extract useful information. However, its adoption raises several challenges, such as improving the interpretability and explainability of the employed predictive models and integrating expert knowledge into the final system. Solving those challenges is of paramount importance to enhance the trust of both clinicians and patients in the system predictions. To address these issues, in this paper we propose a software workflow that copes with the trustworthiness aspects of machine learning models while handling a multitude of heterogeneous data and models.
One of the first steps in RNA-Sequencing (RNA-Seq) data analysis consists of aligning Next Generation Sequencing (NGS) reads to a reference genome. In the literature, several tools have been implemented by practitioners and researchers for the alignment step. However, two tools are the de facto standard used by bioinformatics researchers in their pipelines: HISAT (version 2) and STAR (version 2). The aim of this study is to determine the impact of the alignment tool on the RNA-Seq analysis in terms of biological relevance of the results and computational time. The two implemented pipelines return different results on the biological side. This is due to the assumptions made by the tools and to the specific characteristics of the underlying (statistical) models. The study provides valuable insights for researchers interested in optimizing their RNA-Seq pipelines and making informed decisions about which pipeline to use. As a lesson learned, we suggest that bioinformatics researchers use more than one pipeline in their experiments, to reduce the prediction errors induced by the assumptions of a specific tool or method.
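For readers unfamiliar with the two aligners, the following minimal Python sketch shows how the same paired-end sample could be routed through both HISAT2 and STAR before comparing downstream results. The index locations, read files, and thread count are illustrative assumptions, not the configurations used in the study.

```python
# Minimal sketch (not the study's actual pipeline): aligning the same
# paired-end sample with both HISAT2 and STAR so downstream results can
# be compared. Paths and thread count are illustrative placeholders.
import subprocess

THREADS = "8"
R1, R2 = "sample_R1.fastq", "sample_R2.fastq"  # hypothetical input reads

def align_hisat2(index_prefix: str, out_sam: str) -> None:
    """Align paired-end reads with HISAT2 and write a SAM file."""
    subprocess.run(
        ["hisat2", "-p", THREADS, "-x", index_prefix,
         "-1", R1, "-2", R2, "-S", out_sam],
        check=True,
    )

def align_star(genome_dir: str, out_prefix: str) -> None:
    """Align the same reads with STAR, producing a coordinate-sorted BAM."""
    subprocess.run(
        ["STAR", "--runThreadN", THREADS, "--genomeDir", genome_dir,
         "--readFilesIn", R1, R2,
         "--outSAMtype", "BAM", "SortedByCoordinate",
         "--outFileNamePrefix", out_prefix],
        check=True,
    )

if __name__ == "__main__":
    align_hisat2("hisat2_index/genome", "sample_hisat2.sam")
    align_star("star_index", "sample_star_")
```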
The paper discusses a novel system for medical diagnostics that integrates patient data from various sources to address the fragmentation of healthcare information. By generating and merging knowledge graphs from raw medical texts focused on key biomedical entities (Gene, Disease, Chemical, Species, Mutation, Cell Type), the system facilitates a comprehensive understanding of a patient’s medical history. It accurately extracts and connects critical entities, creating individual and combined knowledge graphs that elucidate a patient’s medical journey. This approach helps bridge diagnostic gaps, offering a visual tool for practitioners to detect patterns and discrepancies in patient data. Despite limitations such as language dependency and validation scope, this system sets the stage for future enhancements toward a more universally accessible and clinically useful healthcare system.
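As an illustration of the graph-merging idea, the sketch below (in Python with networkx) builds one small knowledge graph per document from already-extracted entities and composes them into a patient-level graph. The entities and relations are invented examples, and the actual extraction and merging pipeline of the system may differ.

```python
# Illustrative sketch, assuming biomedical entities have already been
# extracted from two medical texts; the entity tuples below are invented.
import networkx as nx

def build_graph(entities, relations):
    """One knowledge graph per document: nodes are typed biomedical entities."""
    g = nx.Graph()
    for name, etype in entities:
        g.add_node(name, type=etype)
    g.add_edges_from(relations)
    return g

# Hypothetical entities from two reports of the same patient.
g_report_a = build_graph([("BRCA1", "Gene"), ("breast carcinoma", "Disease")],
                         [("BRCA1", "breast carcinoma")])
g_report_b = build_graph([("breast carcinoma", "Disease"), ("tamoxifen", "Chemical")],
                         [("tamoxifen", "breast carcinoma")])

# nx.compose merges the two graphs; shared entities ("breast carcinoma")
# become junction points linking findings across the patient's history.
patient_graph = nx.compose(g_report_a, g_report_b)
print(list(patient_graph.nodes(data=True)))
print(list(patient_graph.edges()))
```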
e-Health applications, as a cornerstone of modern distributed systems, must synergize with advanced analysis methodologies, incorporating image processing, statistical, and predictive techniques to expedite diagnosis and optimize therapeutic strategies. Cardiovascular disease (CVD) presents a formidable health challenge, claiming 18 million lives annually, with projections set to worsen due to population aging, the rise of metabolic diseases, and gaps in effective prevention and precise risk stratification. A pivotal indicator of cardiovascular health, epicardial adipose tissue (EAT) thickness, is traditionally estimated by medical professionals without a standardized and precise procedure. This paper chronicles our endeavor to automate the delineation of EAT from echocardiogram videos, a fundamental precursor to its thickness quantification. We confronted the intricate task of interpreting echocardiographic data and trialed a variety of image processing methods aimed at clarifying the EAT's representation amidst the heart's dynamic activity and inherent imaging noise. Our study contributes to the pervasive computing domain, envisaging the deployment of such medical applications as on-demand cloud services for medical experts and institutions, thus fostering collaborative, efficient, and accurate real-time assessment of cardiovascular health. Unfortunately, our study failed, and in this paper we analyse the reasons and report the lessons learned.
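To give a flavor of the classical image-processing attempts mentioned above, the sketch below (Python with OpenCV) denoises echocardiogram frames and extracts candidate contours. It is a simplified assumption of one trialed approach, not the study's actual code; the video path and filter parameters are placeholders.

```python
# Simplified sketch of one classical attempt: per-frame denoising followed by
# Otsu thresholding and contour extraction on an echocardiogram video.
# File name and parameters are placeholders, not the study's actual values.
import cv2

cap = cv2.VideoCapture("echocardiogram.avi")  # hypothetical input video
frame_contours = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Median blur attenuates the speckle noise typical of ultrasound imaging.
    denoised = cv2.medianBlur(gray, 5)
    # Otsu's method picks a global threshold separating tissue from background.
    _, mask = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only large regions as candidate tissue boundaries near the epicardium.
    frame_contours.append([c for c in contours if cv2.contourArea(c) > 500.0])

cap.release()
print(f"processed {len(frame_contours)} frames")
```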