Created on 11th February 2024
Clinicians typically spend an average of 16 minutes per patient encounter, with a significant portion of that time going to chart review (33%), documentation (24%), and ordering (17%). Much of this involves combing through extensive past medical reports to form conclusions, consuming time that could otherwise go toward direct patient care or seeing other patients. A pre-summarized version of this information saves time for both patients and doctors, ensures essential questions are addressed beforehand, and lets healthcare professionals focus more efficiently on treatment.
Novel techniques in generative AI can accelerate clinically accurate, semantic understanding of prior records and generate new documentation from an encounter. They can streamline information exchange, improve continuity of care, increase efficiency in healthcare delivery, and reduce the administrative burden on clinicians, allowing for more impactful interaction with the patient.
When an individual’s health record is stitched together from the various sources where they receive care or generate signals about their health, it enhances communication and decision-making among healthcare providers and ensures a more holistic view of the patient’s medical history. We have developed a solution that intelligently summarizes and renders an individual’s complete medical history for streamlined, accelerated access and understanding.
We have faced and tackled multiple issues in each phase of the project:
1.) Data extraction: Because real patient data is restricted by confidentiality requirements, we created synthetic data to simulate EHR records for model development. Our approach generates fictitious records across diverse health parameters and integrates them seamlessly with the RAG (Retrieval-Augmented Generation) model.
2.) Model training & implementation: We used a LLaMA model to summarize the patient's medical history and provide reasoning over it, implemented via Hugging Face for this project. Hugging Face was down for most of the event, throwing 503 errors, which made training and running the model difficult. To work around this, we tried running the open-source LLaMA weights on our local machines, but such a large model exceeded the memory available locally. We then tried the cloud, but exposing APIs from a cloud environment proved challenging given our limited cloud credits and our inexperience with newer environments like Intel's. When Hugging Face came back up late Saturday night, we switched back to it, and it is serving our use case for now. With more resources at our disposal, we could build, train, and run our own model with considerably better accuracy than what we have today.
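To make the synthetic-data step concrete, here is a minimal sketch of the kind of fictitious EHR record generator described in point 1. The field names, condition list, and condition-to-medication mapping are purely illustrative assumptions, not our actual schema; the generated records would feed the RAG index in place of real patient data.

```python
import random

# Illustrative condition/medication pairs -- assumptions for the sketch,
# not a clinical reference.
CONDITIONS = {
    "hypertension": "lisinopril",
    "type 2 diabetes": "metformin",
    "asthma": "albuterol",
    "hyperlipidemia": "atorvastatin",
}

def generate_synthetic_patient(patient_id):
    """Generate one fictitious EHR-style record with plausible ranges."""
    diagnosis = random.choice(list(CONDITIONS))
    return {
        "patient_id": patient_id,
        "age": random.randint(18, 90),
        "diagnosis": diagnosis,
        "medication": CONDITIONS[diagnosis],
        "vitals": {
            "systolic_bp": random.randint(100, 170),
            "heart_rate": random.randint(55, 110),
        },
    }

# A small corpus of synthetic records to index for retrieval.
records = [generate_synthetic_patient(i) for i in range(100)]
```

In the real pipeline each record would be rendered to text and embedded for retrieval; the dict form above just shows the simulation idea.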
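Given the 503 errors described in point 2, one pragmatic mitigation is retrying inference calls with exponential backoff, since the Hugging Face Inference API returns 503 while a model is loading or the service is unavailable. The sketch below assumes the hosted-inference HTTP endpoint; the model id, timeout, and backoff values are assumptions, not our exact configuration.

```python
import time
import requests

# Assumed model id for illustration; substitute the LLaMA variant actually deployed.
API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf"

def summarize_with_retry(text, token, retries=5, backoff=2.0):
    """POST a generation request, retrying with exponential backoff on HTTP 503."""
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": text}
    for attempt in range(retries):
        resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
        if resp.status_code == 503:  # model loading / service temporarily unavailable
            time.sleep(backoff * (2 ** attempt))
            continue
        resp.raise_for_status()  # surface other HTTP errors immediately
        return resp.json()
    raise RuntimeError(f"Inference API still unavailable after {retries} attempts")
```

A wrapper like this would not have helped during a multi-hour outage, but it smooths over the transient 503s the API emits while a large model spins up.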
Tracks Applied (3)
Carelon