StethoMe®. My smart stethoscope.

You examine.
StethoMe® analyzes.
The doctor decides.

A cough, a runny nose, a fever… Is it time to see a doctor? You know this question. With StethoMe®, your doubts will be quickly resolved. Examine yourself without leaving home and send the data to a doctor, who will decide on the next steps.

Clinically validated

Learn more about StethoMe®

What is StethoMe®?

A smart way
to monitor your health.

StethoMe® is the first system that detects abnormalities in the respiratory system. StethoMe® is built on medical AI algorithms (CE 2274) working with a wireless stethoscope and a dedicated app.

Thanks to unique technologies that ensure examination quality control, StethoMe® can be used by patients at home.

StethoMe® is designed for telemedicine applications, enabling integration with HIS, EDM, and telemedicine systems, among others.

A faster and more precise examination result

StethoMe® AI is the first certified medical sound-analysis system supporting the diagnostic process. The system is based on artificial-intelligence algorithms built on 1,015,866 sound tags and 38,530 detailed medical descriptions.

29% more accurate than a specialist's ear

StethoMe® AI algorithms are 29% more effective than pulmonologists at detecting and classifying abnormal sounds that occur, among others, in the course of infections, pneumonia, and asthma.

Quality confirmed by a medical certificate

The StethoMe® AI algorithms and the stethoscope hold a class 2a medical device certificate (CE 2274). This is the first certification of its kind in the world.

The StethoMe® examination

How does it work? It's simple!

1

Examination

Place the stethoscope at the indicated points and start the examination.

2

Analysis

Once the recording is done, the StethoMe® AI system begins analyzing the sounds.

3

Diagnosis

Once the analysis is complete, the StethoMe® system informs you if abnormal sounds have appeared and sends the examination result to a doctor, who will decide on the next steps.

StethoMe® successes

Awards

StethoMe® in a nutshell

Key features

Artificial intelligence

Supports the doctor in the diagnostic process.

Medical history

Tracks the course of treatment and monitors chronic diseases.

Wireless stethoscope

The stethoscope connects to a smartphone via Bluetooth.

Medical certificates

The stethoscope and the AI algorithms hold a class 2a certificate.

Lung examination

Detects abnormal sounds in the respiratory system.

Noise control

Warns when the room is too noisy.

Heart examination

Enables remote heart examination and precisely determines BPM.

Recording quality control

AI algorithms verify that the examination was performed correctly.

Automatic start/stop

Records sounds only when the stethoscope is applied correctly.

Telemedicine

Full flexibility in integration, plus an API for the physician's analytics panel.

Opinions about StethoMe®

What people say about StethoMe®

StethoMe® and science

Scientific publications / Clinical trials

At StethoMe®, we place enormous value on the science behind our solutions. We share our knowledge with the public by publishing our research results in leading scientific journals, and we work closely with the scientific community. We have launched international clinical trials in the EU and the USA, which will conclude in 2020.

PLoS ONE

The accuracy of lung auscultation in the practice of physicians and medical students

Background

Auscultation is one of the first examinations that a patient is subjected to in a GP's office, especially in relation to diseases of the respiratory system. However, it is a highly subjective process that depends on the physician's ability to interpret the sounds, as determined by his/her psychoacoustical characteristics.
Here, we present a cross-sectional assessment of the skills of physicians of different specializations and medical students in the classification of respiratory sounds in children.

Methods and findings

185 participants representing different medical specializations took part in the experiment. The experiment comprised 24 respiratory system auscultation sounds. The participants were tasked with listening to, and matching the sounds with provided descriptions of specific sound classes. The results revealed difficulties in both the recognition and description of respiratory sounds. The pulmonologist group was found to perform significantly better than other groups in terms of number of correct answers. We also found that performance significantly improved when similar sound classes were grouped together into wider, more general classes.

Conclusions

These results confirm that ambiguous identification and interpretation of sounds in auscultation is a generic issue which should not be neglected as it can potentially lead to inaccurate diagnosis and mistreatment. Our results lend further support to the already widespread acknowledgment of the need to standardize the nomenclature of auscultation sounds (according to European Respiratory Society, International Lung Sounds Association and American Thoracic Society). In particular, our findings point towards important educational challenges in both theory (nomenclature) and practice (training).

Honorata Hafke-Dys, Anna Bręborowicz, Paweł Kleka, Jędrzej Kociński, Adam Biniakowski
European Journal of Pediatrics

Practical implementation of artificial intelligence algorithms in pulmonary auscultation examination

Lung auscultation is an important part of a physical examination. However, its biggest drawback is its subjectivity. The results depend on the experience and ability of the doctor to perceive and distinguish pathologies in sounds heard via a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), which has been implemented in a system that uses an electronic stethoscope for capturing respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In a blind test, a group of 522 auscultatory sounds from 50 pediatric patients was presented, and the results provided by a group of doctors and an artificial intelligence (AI) algorithm developed by the authors were compared. The gathered data show that machine learning (ML)-based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score.

Conclusions: The obtained results suggest that the implementation of automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, leading to a minimization of the number of errors made in the interpretation of auscultation sounds.

Tomasz Grzywalski, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Honorata Hafke-Dys, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo
ERS International Congress

Respiratory system auscultation using machine learning - a big step towards objectivisation?

A stethoscope, introduced more than two centuries ago, is still a tool that provides potentially valuable information during one of the most common examinations. However, the biggest drawback of auscultation is its subjectivity: it depends mainly on the experience and ability of the doctor to perceive and distinguish pathological signals. A great deal of research has shown very low physician efficiency in this area.

Moreover, most physicians are aware of this problem and need a supporting device. We have therefore developed Artificial Intelligence (AI) algorithms that recognise pathological sounds (wheezes, rhonchi, fine and coarse crackles). Here we present a comparison of the performance of physicians and of AI in detecting those sounds.

A database of more than 10 000 recordings described by a consilium of specialists (pulmonologists and acousticians) was used for AI learning. Another set of more than 500 real auscultatory sounds was then used to investigate the efficiency of the AI in comparison to a group of doctors. The standard F1-score was used for evaluation, because it considers both precision and recall. For each phenomenon, the AI's result is higher than the doctors', with an average advantage of 8.4 percentage points, reaching 13.5 p.p. for fine crackles.

The results suggest that the implementation of AI can significantly improve the efficiency of auscultation in everyday practice, making it more objective and minimizing errors. The solution is now being tested with a group of hospitals and medical providers and is proving its efficiency and usability in everyday practice, making this examination faster and more reliable.

Tomasz Grzywalski, Marcin Szajek, Honorata Hafke-Dys, Anna Bręborowicz, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo
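The F1-score used in the study above combines precision and recall into a single number (their harmonic mean). A minimal sketch of how it is computed from raw detection counts; the counts in the example are hypothetical, not taken from the study:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1-score: harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)  # share of detections that are correct
    recall = tp / (tp + fn)     # share of true events that are detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 80 true positives, 20 false positives, 20 false negatives
# gives precision = 0.8 and recall = 0.8, so F1 = 0.8.
print(round(f1_score(80, 20, 20), 3))  # prints 0.8
```

Because the harmonic mean penalizes imbalance, a detector cannot reach a high F1-score by maximizing recall alone (flagging everything) or precision alone (flagging almost nothing), which is why the metric suits this comparison.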
Artificial Intelligence in Medicine

Fully Interactive Lungs Auscultation with AI Enabled Digital Stethoscope

Performing an auscultation of the respiratory system normally requires the presence of an experienced doctor, but the most recent advances in artificial intelligence (AI) open up the possibility for a layman to perform this procedure by himself in a home environment. However, to make it feasible, the system needs to include two main components: an algorithm for fast and accurate detection of breath phenomena in stethoscope recordings, and an AI agent that interactively guides the end user through the auscultation process. In this work we present a system that solves both of these problems using state-of-the-art machine learning algorithms. Our breath phenomena detection model was trained on 5000 stethoscope recordings of both sick (hospitalized) and healthy children. All recordings were labeled by a pulmonologist and acousticians. The trained model shows nearly optimal performance in terms of both sensitivity and specificity when tested on unseen recordings. The agent is able to accurately assess a patient's lung health status by auscultating only 3 out of 12 locations on average. The decision about each next auscultation location, or the end of the examination, is made dynamically after each recording, based on the breath phenomena detected so far. This allows the agent to make the best prediction even if the auscultation is time-constrained.

Tomasz Grzywalski, Riccardo Belluzzo, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Anna Pastusiak, Honorata Hafke-Dys, Jędrzej Kociński
Conference on Agents and Artificial Intelligence - ICAART

Interactive Lungs Auscultation with Reinforcement Learning Agent

Performing a precise auscultation of the respiratory system normally requires the presence of an experienced doctor. With the most recent advances in machine learning and artificial intelligence, automatic detection of pathological breath phenomena in sounds recorded with a stethoscope becomes a reality. But performing a full auscultation in a home environment as a layman is another matter, especially if the patient is a child. In this paper we propose a unique application of Reinforcement Learning for training an agent that interactively guides the end user throughout the auscultation procedure. We show that intelligent selection of auscultation points by the agent reduces the examination time fourfold, without a significant decrease in diagnosis accuracy compared to exhaustive auscultation.

Tomasz Grzywalski, Riccardo Belluzzo, Szymon Drgas, Agnieszka Cwalińska, Honorata Hafke-Dys
IEEE International Conference on Big Data

Parameterization of Sequence of MFCCs for DNN-based voice disorder detection

In this article a DNN-based system for the detection of three common voice disorders (vocal nodules, polyps and cysts; laryngeal neoplasm; unilateral vocal paralysis) is presented. The input to the algorithm is an (at least 3-second-long) audio recording of the sustained vowel sound /a:/. The algorithm was developed as part of the "2018 FEMH Voice Data Challenge" organized by Far Eastern Memorial Hospital and obtained a score value (defined in the challenge specification) of 77.44. This was the second-best result before final submission; the final challenge results were not yet known at the time of writing this document. The document also reports changes that were made for the final submission, which improved the cross-validation score by 0.6 percentage points.

Tomasz Grzywalski, Adam Maciaszek, Adam Biniakowski, Jan Orwat, Szymon Drgas, Mateusz Piecuch, Riccardo Belluzzo, Krzysztof Joachimiak, Dawid Niemiec, Jakub Ptaszyński, Krzysztof Szarzyński
Biochemistry, Molecular Biology & Allergy

Opportunities for domestic monitoring of children with an electronic stethoscope with automatic auscultation sound analysis system

In the case of children suffering from chronic diseases of the respiratory system, including asthma, it is very important to track any changes in the condition of the respiratory system. Domestic patient monitoring is becoming more and more popular. It is much more comfortable for patients, who are less stressed, relieved of the need to attend a doctor's office, and not exposed to pathogens present in medical facilities. It is also valuable for the attending physician, who is provided with documented data. Until now, any aggravation of a past disease has been reported by children's parents during medical appointments. Such a method of providing information entails potential miscommunication, misjudgement and highly biased evaluation. A solution might be an electronic stethoscope, providing an easy way to examine children in domestic conditions and to record auscultation results. Currently, it is possible to record auscultation sounds, provide a doctor with remote access to such records, and also to report the appearance of specific sounds and their intensity. In collaboration with scientific centres, such a solution is being developed: StethoMe®, a smart stethoscope designed to provide patients with a method of domestic auscultation. The system enables the recording of auscultation sounds, their submission to a physician, and the automatic classification of the recorded sounds into four classes: wheezes, fine crackles, coarse crackles and rhonchi, according to [1]. A physician has access to a panel with the sounds, their spectrograms (visualisations of the sounds that facilitate their interpretation), and an algorithm report on the potential appearance of specific pathologies. This solution is currently under development and in a testing phase in Europe.

Honorata Hafke-Dys, Anna Zelent

Partners

Media about StethoMe®

Join the newsletter

and stay up to date with StethoMe® news

The controller of the personal data you provide is StethoMe sp. z o.o., with its registered office in Poznań at ul. Winogrady 18 A, 61-663 Poznań, entered in the register of entrepreneurs kept by the District Court Poznań - Nowe Miasto i Wilda in Poznań, 8th Commercial Division of the National Court Register, under KRS number 0000558650, NIP 7831726542, REGON 361535342. Please read the Information notice (Art. 13 GDPR).

Let's get to know each other!

Together we can achieve a great deal!

Tomir Kosowski

Sales Director
kosowski@StethoMe.com

Sławomir Kmak

Business Development Director
kmak@StethoMe.com

Scientific cooperation / Clinical trials

research@StethoMe.com
Privacy policy

This website uses cookies.

If you use this site without changing your browser's cookie settings, cookies will be stored in your device's memory.