StethoMe®. My smart stethoscope.

You examine.
StethoMe® analyses.
Doctor diagnoses.

Coughing, a runny nose, a fever… Should you call the doctor immediately? Or should you wait a while and see? StethoMe® quickly dispels these doubts. Examine yourself without leaving your home and send the results to your doctor, who then decides what to do next.

Clinically validated

More about StethoMe®

What is StethoMe®?

A smart way to keep your health in check.

StethoMe® is the first system of its kind that detects abnormalities in the lungs and heart. StethoMe® relies on medical AI algorithms (CE 2274) working together with a wireless stethoscope and a dedicated application.

Thanks to unique technologies that ensure control over examination quality, StethoMe® can be used both by healthcare professionals and by patients at home.

StethoMe® is intended for telemedicine applications and can be integrated with, among others, HIS, EDM, and other telemedicine systems.

Quick and accurate medical examination

StethoMe® AI is the first certified medical system intended for sound analysis supporting the diagnostic process. The system relies on artificial intelligence algorithms created on the basis of 1,015,866 sound tags and 38,530 detailed medical descriptions.

29% more accurate than a specialist’s ear

StethoMe® AI algorithms are 29% more effective than pulmonologists at detecting and classifying abnormal sounds that appear in the course of, among others, infections, pneumonia, and asthma.

Quality confirmed by a medical certificate

StethoMe® AI algorithms and the stethoscope hold a class 2a medical device certificate (CE 2274). It is the first such certification in the world.

Examination with StethoMe®

How does it work? It’s simple!

1

Examination

Place the StethoMe® stethoscope on the points indicated in the application; recording starts automatically.

2

Analysis in the cloud

The recorded sounds are received and analysed by the StethoMe® AI system in the cloud.

3

Diagnosis

Once the analysis is complete, the StethoMe® system notifies you of any abnormal sounds and sends the examination results to a doctor, who decides what to do next.

StethoMe® successes

Our achievements

Baillie Gifford Discovery Competition
Winner
EC2VC Competition
Winner
IOT/WT Innovation World Cup
Healthcare
Winner
Future X Healthcare Start-up Award
3rd Place Winner
Audience Award
Startup World Cup & Summit
European Winner
BioTech Award
Philips Innovation Challenge
Champion Award

StethoMe® in a nutshell

The most important functions

Artificial intelligence

Supports a doctor in the diagnostic process.

Medical history

Tracking of the treatment process and monitoring of chronic diseases.

Wireless stethoscope

The stethoscope connects to the smartphone via Bluetooth.

Medical certificates

The stethoscope and the AI algorithms have a class 2a certificate.

Lung examination

Detects and classifies abnormal sounds in the respiratory system.

Noise control

Informs of excessive noise in the room.

Heart examination

Detects heart murmurs and determines BPM accurately.

Recording quality control

AI algorithms verify the examination performance correctness.

Automatic start/stop

Records sounds only when the stethoscope is put correctly to the body.

Telemedicine

Full flexibility in terms of integration and API for the doctor’s analytical panel.

About StethoMe®

Opinions about StethoMe®

StethoMe® and science

Scientific publications / Clinical studies

At StethoMe® we attach a great deal of weight to the science behind our solutions. We share our knowledge with the public by publishing the results of our research in leading scientific journals, and we collaborate intensely with the scientific community. We have also commenced international clinical studies in the EU and the USA, which will conclude in 2020.

PLoS ONE

The accuracy of lung auscultation in the practice of physicians and medical students

Background

Auscultation is one of the first examinations that a patient is subjected to in a GP’s office, especially in relation to diseases of the respiratory system. However, it is a highly subjective process that depends on the physician’s ability to interpret the sounds, as determined by his/her psychoacoustical characteristics.
Here, we present a cross-sectional assessment of the skills of physicians of different specializations and medical students in the classification of respiratory sounds in children.

Methods and findings

185 participants representing different medical specializations took part in the experiment. The experiment comprised 24 respiratory system auscultation sounds. The participants were tasked with listening to the sounds and matching them with provided descriptions of specific sound classes. The results revealed difficulties in both the recognition and description of respiratory sounds. The pulmonologist group was found to perform significantly better than the other groups in terms of the number of correct answers. We also found that performance significantly improved when similar sound classes were grouped together into wider, more general classes.

Conclusions

These results confirm that ambiguous identification and interpretation of sounds in auscultation is a generic issue which should not be neglected as it can potentially lead to inaccurate diagnosis and mistreatment. Our results lend further support to the already widespread acknowledgment of the need to standardize the nomenclature of auscultation sounds (according to European Respiratory Society, International Lung Sounds Association and American Thoracic Society). In particular, our findings point towards important educational challenges in both theory (nomenclature) and practice (training).

Honorata Hafke-Dys, Anna Bręborowicz, Paweł Kleka, Jędrzej Kociński, Adam Biniakowski
European Journal of Pediatrics

Practical implementation of artificial intelligence algorithms in pulmonary auscultation examination

Lung auscultation is an important part of a physical examination. However, its biggest drawback is its subjectivity. The results depend on the experience and ability of the doctor to perceive and distinguish pathologies in sounds heard via a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), which has been implemented in a system that uses an electronic stethoscope for capturing respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In the blind test, a group of 522 auscultatory sounds from 50 pediatric patients was presented, and the results provided by a group of doctors and an artificial intelligence (AI) algorithm developed by the authors were compared. The gathered data show that machine learning (ML)–based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score.

Conclusions: The obtained results suggest that the implementation of automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, leading to a minimization of the number of errors made in the interpretation of auscultation sounds.

Tomasz Grzywalski, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Honorata Hafke-Dys, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo
ERS International Congress

Respiratory system auscultation using machine learning - a big step towards objectivisation?

A stethoscope, introduced more than two centuries ago, is still a tool providing potentially valuable information gained during one of the most common examinations. However, the biggest drawback of auscultation is its subjectivity. It depends mainly on the experience and ability of the doctor to perceive and distinguish pathological signals. Much research has shown the very low efficiency of doctors in this area.

Moreover, most physicians are aware of this problem and need a supporting device. Therefore, we have developed Artificial Intelligence (AI) algorithms that recognise pathological sounds (wheezes, rhonchi, fine and coarse crackles). Here we present a comparison of the performance of physicians and AI in the detection of those sounds.

A database of more than 10,000 recordings described by a consilium of specialists (pulmonologists and acousticians) was used for AI learning. Then another set of more than 500 real auscultatory sounds was used to investigate the efficiency of the AI in comparison to a group of doctors. The standard F1-score was used for evaluation, because it considers both the precision and the recall. For each phenomenon, the results for the AI are higher than for the doctors, with an average advantage of 8.4 percentage points, reaching 13.5 p.p. for fine crackles.
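The precision, recall and F1-score mentioned above are standard detection metrics; a minimal sketch of how they are computed for a single sound class (illustrative only, not the authors' evaluation code):

```python
def precision_recall_f1(true_labels, predicted_labels):
    """Compute precision, recall and F1 for a binary detection task.

    true_labels / predicted_labels: lists of 0/1 flags, one per recording,
    marking whether a given phenomenon (e.g. a wheeze) is present/detected.
    """
    tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # recall is also called sensitivity
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, which is why it rewards a detector that balances the two rather than maximising only one.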

The results suggest that the implementation of AI can significantly improve the efficiency of auscultation in everyday practice, making it more objective and minimising errors. The solution is now being tested with a group of hospitals and medical providers, where it is proving its efficiency and usability in everyday practice, making this examination faster and more reliable.

Tomasz Grzywalski, Marcin Szajek, Honorata Hafke-Dys, Anna Bręborowicz, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo
Artificial Intelligence in Medicine

Fully Interactive Lungs Auscultation with AI Enabled Digital Stethoscope

Performing an auscultation of the respiratory system normally requires the presence of an experienced doctor, but the most recent advances in artificial intelligence (AI) open up the possibility for a layperson to perform this procedure at home. However, to make this feasible, the system needs to include two main components: an algorithm for fast and accurate detection of breath phenomena in stethoscope recordings and an AI agent that interactively guides the end user through the auscultation process. In this work we present a system that solves both of these problems using state-of-the-art machine learning algorithms. Our breath phenomena detection model was trained on 5000 stethoscope recordings of both sick (hospitalized) and healthy children. All recordings were labeled by a pulmonologist and acousticians. The trained model shows nearly optimal performance in terms of both sensitivity and specificity when tested on unseen recordings. The agent is able to accurately assess a patient’s lung health status by auscultating only 3 out of 12 locations on average. The decision about each next auscultation location or the end of the examination is made dynamically, after each recording, based on the breath phenomena detected so far. This allows the agent to make the best prediction even if the auscultation is time-constrained.
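The idea of an agent that decides, after each recording, whether to continue and where to listen next can be illustrated with a simple early-stopping loop. This is a hypothetical sketch only: the point names, the `detect_phenomena` callback and the confidence threshold are placeholders, not the published agent.

```python
# Illustrative sketch: auscultate chest locations one by one and stop
# as soon as the accumulated evidence is decisive, mirroring the idea
# of auscultating only ~3 of 12 locations on average.
# All names and thresholds here are hypothetical placeholders.

LOCATIONS = [f"point_{i}" for i in range(1, 13)]  # 12 standard chest points

def examine(detect_phenomena, confidence_threshold=0.9, max_points=12):
    """Record locations in turn until the model is confident enough.

    detect_phenomena(location) -> (found_abnormal: bool, confidence: float)
    Returns a dict mapping each visited location to its finding.
    """
    findings = {}
    for location in LOCATIONS[:max_points]:
        found, confidence = detect_phenomena(location)
        findings[location] = found
        if confidence >= confidence_threshold:  # decisive evidence: stop early
            break
    return findings
```

In the published system the next location is chosen adaptively rather than in a fixed order, but the time saving comes from the same principle: stop recording once further points are unlikely to change the assessment.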

Tomasz Grzywalski, Riccardo Belluzzo, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Anna Pastusiak, Honorata Hafke-Dys, Jędrzej Kociński
Conference on Agents and Artificial Intelligence - ICAART

Interactive Lungs Auscultation with Reinforcement Learning Agent

Performing a precise auscultation of the respiratory system normally requires the presence of an experienced doctor. With the most recent advances in machine learning and artificial intelligence, automatic detection of pathological breath phenomena in sounds recorded with a stethoscope is becoming a reality. But performing a full auscultation at home by a layperson is another matter, especially if the patient is a child. In this paper we propose a unique application of Reinforcement Learning for training an agent that interactively guides the end user throughout the auscultation procedure. We show that intelligent selection of auscultation points by the agent reduces the examination time fourfold without a significant decrease in diagnosis accuracy compared to exhaustive auscultation.

Tomasz Grzywalski, Riccardo Belluzzo, Szymon Drgas, Agnieszka Cwalińska, Honorata Hafke-Dys
IEEE International Conference on Big Data

Parameterization of Sequence of MFCCs for DNN-based voice disorder detection

In this article a DNN-based system for the detection of three common voice disorders (vocal nodules, polyps and cysts; laryngeal neoplasm; unilateral vocal paralysis) is presented. The input to the algorithm is an (at least 3-second-long) audio recording of a sustained vowel sound /a:/. The algorithm was developed as part of the “2018 FEMH Voice Data Challenge” organized by Far Eastern Memorial Hospital and obtained a score value (defined in the challenge specification) of 77.44. This was the second-best result before the final submission. The final challenge results were not yet known at the time of writing. The document also reports changes made for the final submission, which improved the score value in cross-validation by 0.6 percentage points.
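The paper's exact parameterization of the MFCC sequence is not detailed here. A common, minimal way to turn a variable-length sequence of MFCC frames into a fixed-size DNN input is statistics pooling: per-coefficient mean and standard deviation over time. The sketch below illustrates that generic idea as an assumption, not the authors' method.

```python
import math

def pool_mfcc_sequence(frames):
    """Collapse a variable-length sequence of MFCC frames into one
    fixed-length vector: per-coefficient means followed by per-coefficient
    standard deviations (a generic statistics-pooling sketch).

    frames: list of equal-length MFCC vectors, e.g. 13 coefficients each.
    """
    n = len(frames)
    dim = len(frames[0])
    means = [sum(f[i] for f in frames) / n for i in range(dim)]
    stds = [
        math.sqrt(sum((f[i] - means[i]) ** 2 for f in frames) / n)
        for i in range(dim)
    ]
    return means + stds  # fixed size: 2 * dim, independent of n
```

The benefit of such pooling is that recordings of any duration (here, at least 3 seconds) map to the same input dimensionality, so a single DNN can score them all.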

Tomasz Grzywalski, Adam Maciaszek, Adam Biniakowski, Jan Orwat, Szymon Drgas, Mateusz Piecuch, Riccardo Belluzzo, Krzysztof Joachimiak, Dawid Niemiec, Jakub Ptaszyński, Krzysztof Szarzyński
Biochemistry, Molecular Biology & Allergy

Opportunities for domestic monitoring of children with an electronic stethoscope with automatic auscultation sound analysis system

In the case of children suffering from chronic diseases of the respiratory system, including asthma, it is very important to track any changes in the condition of the respiratory system. Domestic patient monitoring is becoming more and more popular. It is much more comfortable for patients, who are less stressed, relieved of any necessity to attend doctors’ offices, and not exposed to pathogens present in medical facilities. Furthermore, it is also important for the attending physician, who is provided with documented data. Until now, any aggravation of a past disease has been reported by children’s parents during medical appointments. Such a method of providing information entails potential miscommunication, misjudgement and highly biased evaluation. A solution might be an electronic stethoscope, providing an easy way to examine children in domestic conditions and to record auscultation results. Currently, it is possible to record auscultation sounds, provide a doctor with remote access to such records, and also report the appearance of specific sounds and their intensity. Based on collaboration with scientific centres, a solution is being developed: StethoMe®, a smart stethoscope designed to provide a patient with a method for domestic auscultation. This system enables recording of auscultation sounds, submitting them to a physician, and automatic classification of the recorded sounds into four classes: wheezes, fine crackles, coarse crackles and rhonchi, according to [1]. A physician is given access to a panel with the sounds, their spectrograms (visualisations of the sounds that facilitate their interpretation), and an algorithm report on the potential appearance of specific pathologies. This solution is currently under development and in a testing phase in Europe.

Honorata Hafke-Dys, Anna Zelent

Partners

Media about StethoMe®

Subscribe to the newsletter

and stay up to date with information regarding StethoMe®

The controller of the personal data you provide is StethoMe sp. z o.o. with its registered office at Winogrady 18A, 61-663 Poznań, Poland, NIP: 7831726542, REGON: 361535342, registered in the Regional Court Poznań - Nowe Miasto i Wilda in Poznań, VIII Commercial Division of the National Court Register, entered in the commercial register (KRS) under no. 0000558650. Please familiarize yourself with our Obligation to provide information (Art. 13 of the GDPR).

Let us get to know each other!

Together we can make a great difference!

Tomir Kosowski

Sales Director
kosowski@StethoMe.com

Sławomir Kmak

Business Development Director
kmak@StethoMe.com

Honorata Hafke-Dys

Scientific collaboration / Clinical studies
hafke@StethoMe.com
Privacy policy
