ScribeEMR ML Case Study
|Case Study Title||ScribeEMR Enables Doctors and Patients with Machine Learning Insights|
|Case Study Short Description||Triumphtech helped move ScribeEMR from human-based medical transcription to a scalable, ML-based transcription solution.|
|Customer Challenge||With the volume of data ScribeEMR handles daily, human transcription cannot remain profitable while meeting customer demand. ScribeEMR needed to move to an ML-based service to continue to grow. As ScribeEMR had no data engineers or scientists on staff, TriumphTech would supply the know-how to deliver ScribeEMR a cost-effective solution.|
|Proposed Solution and Architecture||TriumphTech proposed a combination of Amazon Transcribe Medical and Amazon Comprehend Medical to handle ScribeEMR’s workload. Transcribe Medical transcribes audio conversations between doctors and patients; the resulting text is then sent to Comprehend Medical for processing, where insights can be derived.
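The two-stage pipeline above can be sketched as the request parameters for each service. This is a minimal illustration, not the production code: the job name, bucket names, and specialty are hypothetical placeholders, and the actual boto3 calls are shown only as comments.

```python
# Sketch of the two-stage pipeline: parameters for a Transcribe Medical job,
# whose text output would then be sent to Comprehend Medical.
# All names and URIs below are hypothetical placeholders.

def build_transcription_job_params(job_name: str, audio_uri: str, output_bucket: str) -> dict:
    """Parameters for TranscribeService.StartMedicalTranscriptionJob."""
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",            # Transcribe Medical supports US English
        "Media": {"MediaFileUri": audio_uri},
        "OutputBucketName": output_bucket,
        "Specialty": "PRIMARYCARE",
        "Type": "CONVERSATION",             # doctor-patient dialogue rather than dictation
    }

params = build_transcription_job_params(
    "visit-0001",
    "s3://scribeemr-raw-audio/visit-0001.wav",  # hypothetical bucket
    "scribeemr-transcripts",                    # hypothetical bucket
)

# With boto3, the job would be submitted as:
#   boto3.client("transcribe").start_medical_transcription_job(**params)
# and the finished transcript text would be passed to:
#   boto3.client("comprehendmedical").detect_entities_v2(Text=transcript_text)
```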
Relying on AWS ML services allows ScribeEMR to come to market without on-staff data scientists and engineers; as such, pre-trained ML models were used. When low-confidence words are detected, they are tracked and stored in a library, where they are reviewed to determine whether a custom language model is needed.
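The low-confidence tracking above works because Transcribe reports a per-word confidence score in its output JSON. A minimal sketch of collecting flagged words for the review library, assuming a threshold of 0.85 (the threshold and sample items are illustrative; the item shape mirrors Transcribe's output format):

```python
# Collect words whose transcription confidence falls below a threshold,
# so they can be stored in a review library as described above.

def low_confidence_words(items: list, threshold: float = 0.85) -> list:
    """Return (word, confidence) pairs below the threshold."""
    flagged = []
    for item in items:
        if item.get("type") != "pronunciation":  # skip punctuation items
            continue
        best = item["alternatives"][0]           # highest-ranked alternative
        conf = float(best["confidence"])         # confidence is a string in the JSON
        if conf < threshold:
            flagged.append((best["content"], conf))
    return flagged

# Illustrative slice of a transcription result:
items = [
    {"type": "pronunciation", "alternatives": [{"content": "patient", "confidence": "0.99"}]},
    {"type": "pronunciation", "alternatives": [{"content": "metoprolol", "confidence": "0.61"}]},
    {"type": "punctuation", "alternatives": [{"content": ".", "confidence": "0.0"}]},
]
print(low_confidence_words(items))  # [('metoprolol', 0.61)]
```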
The data flow leverages S3 as the main data store for raw audio files. Lambda calls connect S3 to AWS Step Functions, where transcription jobs are created and checked for completion. On completion, results are stored in S3 and move to colder storage according to client-specified lifecycle policies. Once transcriptions are complete, the files containing the text are sent to Comprehend Medical for processing and understanding.
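The create-and-poll flow above maps to a standard Step Functions pattern: a Task state starts the job, a Wait state pauses, and a Choice state loops until the job reports completion. A minimal Amazon States Language sketch of that loop follows; the state names and Lambda ARNs are illustrative placeholders, not taken from the actual deployment.

```python
import json

# Illustrative Step Functions definition: start a transcription job,
# then wait/check in a loop until the job status is COMPLETED.
state_machine = {
    "Comment": "Create a Transcribe Medical job and poll until it completes",
    "StartAt": "StartTranscriptionJob",
    "States": {
        "StartTranscriptionJob": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:start-job",  # placeholder ARN
            "Next": "Wait",
        },
        "Wait": {"Type": "Wait", "Seconds": 30, "Next": "CheckJobStatus"},
        "CheckJobStatus": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:check-job",  # placeholder ARN
            "Next": "JobComplete?",
        },
        "JobComplete?": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "COMPLETED", "Next": "Done"}
            ],
            "Default": "Wait",   # not done yet: loop back and wait again
        },
        "Done": {"Type": "Succeed"},
    },
}

print(json.dumps(state_machine, indent=2))
```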
|Outcomes of Project & Success Metrics||A small proof of concept was conducted to confirm that audio files could be converted to text. The PoC succeeded, showing that accurate transcriptions could be provided.
By leveraging AWS ML services, ScribeEMR was able to come to market without on-staff data scientists and engineers. Money could instead be invested in building the product, allowing for a much faster launch. Also, by not “reinventing the wheel” and spending energy re-training models that already existed, ScribeEMR reduced the carbon footprint of its product development lifecycle.
|Date Entered into Production||July 2022|
|Lessons Learned||Using pre-trained models can both speed up delivery in the development process and create a scalable, sustainable product for customers.|
|Summary Of Customer Environment||The environment is cloud-native: the entire stack runs on Amazon Web Services, deployed in the us-east-1 region.|