What is our mission?

We want to make the lives of children with asthma better and safer. We want to help young patients, their parents and their doctors. Our dream is for every sick child to have, at home, a modern tool that enables effective and comprehensive day-to-day monitoring of the disease. We are convinced that the best medical care can begin at home.

We want to be a pioneer and leader in creating advanced, reliable, safe and intuitive medical solutions. We created StethoMe® to give people around the world easy access to high-quality diagnostics backed by state-of-the-art AI technologies.

Who are we?

Wojciech Radomski

CEO & Co-Founder

A strategist and project manager with more than 13 years of experience in the IT sector. He founded and successfully ran the software house Programa.pl and three technology startups. He is also the co-creator of iTraff Technology, an image-recognition technology used by companies such as Coca-Cola, EA and PepsiCo. Outside work he is a sports enthusiast: ATV Polska vice-champion, marathon runner, triathlete and enduro fan.

Paweł Elbanowski

COO & Co-Founder

A project manager with 13 years of experience in the IT sector and a specialist in business development and project management. He has worked on projects for global private-sector brands.

Marcin Szajek

CTO/CSO & Co-Founder

More than 10 years of experience in the IT sector. A machine-learning specialist with a master's degree in computer science, specializing in data-processing technologies. He holds PRINCE2, Cisco CCNA and IBM DB2 certifications, and was a finalist of the "Innovators Under 35" competition in 2018.

Dr Honorata Hafke-Dys

VP Product & Co-Founder

The initiator of StethoMe®, she has more than 15 years of experience leading interdisciplinary scientific projects. An innovator with an interdisciplinary profile: a PhD in biophysics, an acoustician and a psychophysicist with more than 30 scientific publications. TEDx speaker, winner of the Young Scientists award of the European Acoustics Association (EAA) and of the Rector's award of Adam Mickiewicz University in Poznań for the discovery of a new mechanism of sound-information processing.

Prof. Jędrzej Kociński

VP Regulatory & Co-Founder

An ISO 13485 auditor, responsible for the company's quality management system. He has conducted scientific research for more than 15 years and also handles legal matters on the medical-device market. A PhD in physics with a habilitation in biophysics, he is a lecturer, researcher and head of scientific projects at Adam Mickiewicz University in Poznań.

Scientific publications and research

At StethoMe®, we attach great importance to the science behind our solutions. We share our knowledge by publishing our research results in recognized journals and cooperate closely with the scientific community. We have the evidence to prove it!

PLoS ONE

The accuracy of lung auscultation in the practice of physicians and medical students

Background

Auscultation is one of the first examinations a patient undergoes in a GP's office, especially for diseases of the respiratory system. However, it is a highly subjective process that depends on the physician's ability to interpret the sounds, as determined by his/her psychoacoustic characteristics.
Here, we present a cross-sectional assessment of the skills of physicians of different specializations and medical students in the classification of respiratory sounds in children.

Methods and findings

185 participants representing different medical specializations took part in the experiment. The experiment comprised 24 respiratory system auscultation sounds. The participants were tasked with listening to the sounds and matching them with provided descriptions of specific sound classes. The results revealed difficulties in both the recognition and the description of respiratory sounds. The pulmonologist group performed significantly better than the other groups in terms of the number of correct answers. We also found that performance improved significantly when similar sound classes were grouped together into wider, more general classes.

Conclusions

These results confirm that ambiguous identification and interpretation of sounds in auscultation is a generic issue which should not be neglected as it can potentially lead to inaccurate diagnosis and mistreatment. Our results lend further support to the already widespread acknowledgment of the need to standardize the nomenclature of auscultation sounds (according to European Respiratory Society, International Lung Sounds Association and American Thoracic Society). In particular, our findings point towards important educational challenges in both theory (nomenclature) and practice (training).

Honorata Hafke-Dys, Anna Bręborowicz, Paweł Kleka, Jędrzej Kociński, Adam Biniakowski
European Journal of Pediatrics

Practical implementation of artificial intelligence algorithms in pulmonary auscultation examination

Lung auscultation is an important part of a physical examination. However, its biggest drawback is its subjectivity. The results depend on the experience and ability of the doctor to perceive and distinguish pathologies in sounds heard via a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), which has been implemented in a system that uses an electronic stethoscope for capturing respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In a blind test, a group of 522 auscultatory sounds from 50 pediatric patients was presented, and the results provided by a group of doctors and an artificial intelligence (AI) algorithm developed by the authors were compared. The gathered data show that machine learning (ML)–based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score.

Conclusions: The obtained results suggest that the implementation of automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, leading to a minimization of the number of errors made in the interpretation of auscultation sounds.
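
The recall and F1-score values the paper relies on come from standard binary-classification counts. As an illustration only (not the paper's evaluation code), a minimal sketch of how these metrics are computed from per-sound ground-truth and predicted labels:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall (sensitivity) and F1-score for one
    sound class, given binary labels (1 = phenomenon present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 6 recordings scored for one phenomenon class
p, r, f = precision_recall_f1([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
# p = 0.75, r = 0.75, f = 0.75
```

The F1-score is the harmonic mean of precision and recall, which is why the later abstracts favour it: it penalizes a detector that achieves high recall only by over-reporting phenomena.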

Tomasz Grzywalski, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Honorata Hafke-Dys, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo
ERS International Congress

Respiratory system auscultation using machine learning - a big step towards objectivisation?

A stethoscope, introduced more than two centuries ago, is still a tool providing potentially valuable information during one of the most common examinations. However, the biggest drawback of auscultation is its subjectivity: it depends mainly on the experience and ability of the doctor to perceive and distinguish pathological signals. Much research has shown that doctors' efficiency in this area is very low.

Moreover, most physicians are aware of this problem and need a supporting device. We have therefore developed Artificial Intelligence (AI) algorithms that recognise pathological sounds (wheezes, rhonchi, fine and coarse crackles). Here we present a comparison of the performance of physicians and AI in detecting those sounds.

A database of more than 10,000 recordings described by a consilium of specialists (pulmonologists and acousticians) was used for AI training. Another set of more than 500 real auscultatory sounds was then used to compare the efficiency of the AI with that of a group of doctors. The standard F1-score was used for evaluation because it considers both precision and recall. For each phenomenon, the AI scored higher than the doctors, with an average advantage of 8.4 percentage points, reaching 13.5 p.p. for fine crackles.

The results suggest that implementing AI can significantly improve the efficiency of auscultation in everyday practice, making it more objective and minimizing errors. The solution is now being tested with a group of hospitals and medical providers and is proving its efficiency and usability in everyday practice, making this examination faster and more reliable.

Tomasz Grzywalski, Marcin Szajek, Honorata Hafke-Dys, Anna Bręborowicz, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo
Artificial Intelligence in Medicine

Fully Interactive Lungs Auscultation with AI Enabled Digital Stethoscope

Performing an auscultation of the respiratory system normally requires the presence of an experienced doctor, but the most recent advances in artificial intelligence (AI) open up the possibility for laypeople to perform this procedure by themselves in a home environment. To make this feasible, however, the system needs two main components: an algorithm for fast and accurate detection of breath phenomena in stethoscope recordings, and an AI agent that interactively guides the end user through the auscultation process. In this work we present a system that solves both of these problems using state-of-the-art machine learning algorithms. Our breath phenomena detection model was trained on 5,000 stethoscope recordings of both sick (hospitalized) and healthy children. All recordings were labeled by a pulmonologist and acousticians. The trained model shows nearly optimal performance in terms of both sensitivity and specificity when tested on unseen recordings. The agent is able to accurately assess a patient's lung health status by auscultating only 3 out of 12 locations on average. The decision about each next auscultation location, or the end of the examination, is made dynamically after each recording, based on the breath phenomena detected so far. This allows the agent to make the best prediction even if the auscultation is time-constrained.
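
The dynamic stop-or-continue logic described above can be illustrated with a deliberately simplified sketch. This is not StethoMe's actual agent: `detect`, the point names, the thresholds and the independence assumption are all hypothetical stand-ins, chosen only to show how an examination can end after a few locations once the evidence is conclusive.

```python
AUSCULTATION_POINTS = [f"P{i}" for i in range(1, 13)]  # 12 standard chest locations

def detect(location, patient):
    """Hypothetical stand-in for the NN sound classifier: probability of
    pathological sounds at one location (low baseline for a clear lung)."""
    return patient.get(location, 0.02)

def examine(patient, threshold=0.9, min_points=3):
    """Auscultate point by point; stop as soon as pathology is strongly
    suspected, or after `min_points` consistently clear recordings."""
    visited = []
    p_clear = 1.0  # probability no location examined so far is pathological
    for loc in AUSCULTATION_POINTS:
        visited.append(loc)
        p_clear *= 1.0 - detect(loc, patient)  # naive independence assumption
        if 1.0 - p_clear >= threshold:
            return "pathology suspected", visited
        if len(visited) >= min_points and p_clear >= threshold:
            return "likely healthy", visited
    return "inconclusive", visited

# A clearly sick simulated patient is flagged after 2 points;
# a clear-lunged one is released after the 3-point minimum.
print(examine({"P2": 0.95}))
print(examine({}))
```

The point of the sketch is the early-exit structure: each new recording updates the running evidence, and the loop terminates as soon as either verdict is supported, which is what lets an interactive agent average far fewer than 12 locations.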

Tomasz Grzywalski, Riccardo Belluzzo, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Anna Pastusiak, Honorata Hafke-Dys, Jędrzej Kociński
Conference on Agents and Artificial Intelligence - ICAART

Interactive Lungs Auscultation with Reinforcement Learning Agent

Performing a precise auscultation of the respiratory system normally requires the presence of an experienced doctor. With the most recent advances in machine learning and artificial intelligence, automatic detection of pathological breath phenomena in sounds recorded with a stethoscope is becoming a reality. But performing a full auscultation in a home environment as a layperson is another matter, especially if the patient is a child. In this paper we propose a unique application of Reinforcement Learning to train an agent that interactively guides the end user throughout the auscultation procedure. We show that intelligent selection of auscultation points by the agent reduces the examination time fourfold without a significant decrease in diagnosis accuracy compared to exhaustive auscultation.

Tomasz Grzywalski, Riccardo Belluzzo, Szymon Drgas, Agnieszka Cwalińska, Honorata Hafke-Dys
IEEE International Conference on Big Data

Parameterization of Sequence of MFCCs for DNN-based voice disorder detection

In this article, a DNN-based system for the detection of three common voice disorders (vocal nodules, polyps and cysts; laryngeal neoplasm; unilateral vocal paralysis) is presented. The input to the algorithm is an audio recording (at least 3 seconds long) of the sustained vowel sound /a:/. The algorithm was developed as part of the "2018 FEMH Voice Data Challenge" organized by Far Eastern Memorial Hospital and obtained a score value (defined in the challenge specification) of 77.44. This was the second-best result before the final submission; the final challenge results were not yet known at the time of writing. The article also reports the changes made for the final submission, which improved the score value in cross-validation by 0.6 percentage points.
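
The paper's title refers to a sequence of MFCCs (mel-frequency cepstral coefficients) as the DNN input. As a rough illustration of that front end, here is a minimal numpy sketch: frame the waveform, take the power spectrum, apply a triangular mel filterbank, and decorrelate the log energies with a DCT. The parameter choices (512-sample frames, 26 mel bands, 13 coefficients) are common textbook defaults, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_sequence(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Return an (n_frames, n_ceps) array of MFCCs for a mono signal."""
    # Frame the signal with a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank, equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    log_mel = np.log(power @ fb.T + 1e-10)
    # DCT-II over the mel bands yields the cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T

# A 3-second 16 kHz tone yields 186 frames of 13 coefficients each
sr = 16000
t = np.arange(sr * 3) / sr
feats = mfcc_sequence(np.sin(2 * np.pi * 440 * t), sr=sr)
# feats.shape == (186, 13)
```

The resulting sequence of low-dimensional frames is the kind of input a DNN classifier can consume directly, one feature vector per short time slice of the sustained vowel.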

Tomasz Grzywalski, Adam Maciaszek, Adam Biniakowski, Jan Orwat, Szymon Drgas, Mateusz Piecuch, Riccardo Belluzzo, Krzysztof Joachimiak, Dawid Niemiec, Jakub Ptaszyński, Krzysztof Szarzyński
Biochemistry, Molecular Biology & Allergy

Opportunities for domestic monitoring of children with an electronic stethoscope with automatic auscultation sound analysis system

In the case of children suffering from chronic diseases of the respiratory system, including asthma, it is very important to track any changes in the condition of the respiratory system. Domestic patient monitoring is becoming more and more popular. It is much more comfortable for patients, who are less stressed, relieved of the need to visit doctors' offices, and not exposed to pathogens present in medical facilities. Furthermore, it is also important for the attending physician, who is provided with documented data. Until now, any aggravation of the disease has been reported by children's parents during medical appointments. This way of providing information entails potential miscommunication, misjudgement and highly biased evaluation. A solution might be an electronic stethoscope that provides an easy way to examine children at home and to record auscultation results. Currently, it is possible to record auscultation sounds, provide a doctor with remote access to such records, and report any appearance of specific sounds and their intensity. In collaboration with scientific centres, such a solution is being developed: StethoMe®, a smart stethoscope designed to provide patients with a method for domestic auscultation. The system enables recording of auscultation sounds, submitting them to a physician, and automatic classification of the recorded sounds into four classes: wheezes, fine crackles, coarse crackles and rhonchi, according to [1]. A physician sees a panel providing access to the sounds, their spectrograms (visualisations of the sounds that facilitate interpretation), and an algorithm report on the potential appearance of specific pathologies. This solution is currently under development and in a testing phase in Europe.

Honorata Hafke-Dys, Anna Zelent

Investors

Become a StethoMe® investor

Partners

Reviews of StethoMe®

Opinions about StethoMe®

The media about StethoMe®

StethoMe® successes

Awards

Subscribe to the newsletter

The controller of the personal data you provide is StethoMe sp. z o.o., with its registered office in Poznań, ul. Winogrady 18A, 61-663, entered in the register of entrepreneurs kept by the District Court Poznań – Nowe Miasto i Wilda in Poznań (Sąd Rejonowy Poznań – Nowe Miasto i Wilda), 8th Commercial Division of the National Court Register, under KRS number 0000558650, NIP (tax identification number) 7831726542, REGON (statistical number) 361535342. Please read the Information Obligation (Article 13 of the GDPR).

Privacy policy