AI-Powered Telemedicine Platform
A full-stack, intelligent healthcare platform designed to bridge the gap between patients and doctors, especially in low-connectivity environments. This project leverages a microservices architecture and advanced AI to provide accessible, real-time, and asynchronous medical consultations. Demo accounts: Patient (tester@example.com, 1234567890) and Doctor (testing@example.com, 1234567890).
Built a dual-mode system: real-time video calls with doctors, with AI-powered report generation from each call, plus an asynchronous "MedReach" feature that lets patients send video/audio messages when doctors are unavailable.
Integrated a client-side Speech-to-Text model (Whisper) that transcribes patient recordings directly in the browser, ensuring functionality even in poor internet conditions.
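The platform runs Whisper client-side in the browser; as a minimal, language-consistent sketch, the equivalent transcription step with the openai-whisper Python package looks like the following (the model size and file name are illustrative, not the project's actual setup):

```python
# Sketch only: the real platform runs Whisper in the browser; this shows
# the equivalent transcribe-locally, ship-text-only step in Python.
import whisper

model = whisper.load_model("tiny")  # small checkpoint suited to weak hardware
result = model.transcribe("patient_recording.webm")  # illustrative file name
transcript = result["text"]

# Only the transcript (a few KB) needs to cross the network,
# not the multi-megabyte recording itself.
print(transcript)
```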
Developed an intelligent triage system that analyzes patient transcripts for severity. It automatically routes high-severity cases to doctors and mobile clinics, and medium-severity cases to community health workers (ASHA).
Engineered a critical safety net that alerts public sector units (police/fire) for transport or oxygen support if no medical staff are available to respond to an emergency.
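A minimal sketch of the routing logic described in the last two points, covering both the severity tiers and the public-sector fallback; the tier names and helper signature are assumptions for illustration, not the project's actual API:

```python
# Illustrative triage routing; severity labels and responder names are assumed.
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def route_case(severity: Severity, doctors_available: bool) -> str:
    """Route a triaged case to the appropriate responder tier."""
    if severity is Severity.HIGH:
        # High severity goes to doctors and mobile clinics; if no medical
        # staff can respond, escalate to public sector units (police/fire)
        # for transport or oxygen support.
        return "doctor_and_mobile_clinic" if doctors_available else "public_sector_units"
    if severity is Severity.MEDIUM:
        return "asha_worker"  # community health worker (ASHA)
    return "medreach_queue"   # low severity: asynchronous follow-up

print(route_case(Severity.HIGH, doctors_available=False))  # public_sector_units
```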
Created a RAG-based medical chatbot using Google Gemini for real-time translation and accurate responses grounded in retrieved medical context. Also built a k-NN model to predict potential ailments from user-reported symptoms.
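A hedged sketch of the generation step using the google-generativeai SDK; the retrieval store, prompt wording, and model name are illustrative assumptions, not the project's exact code:

```python
# Sketch of the RAG answer step; prompt and model choice are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

def answer(question: str, retrieved_passages: list[str]) -> str:
    """Ground the model's reply in passages retrieved for this question."""
    context = "\n".join(retrieved_passages)
    prompt = (
        "Answer in the patient's own language, using only this context:\n"
        f"{context}\n\nQuestion: {question}"
    )
    return model.generate_content(prompt).text
```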
Designed a centralized system for patients to access and download their health records from both online consultations and physical hospital visits.
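As a hypothetical shape for the record-download API (the route, storage path, and absence of auth checks here are assumptions for brevity):

```python
# Hypothetical FastAPI endpoint for downloading a health record.
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/patients/{patient_id}/records/{record_id}")
def download_record(patient_id: str, record_id: str) -> FileResponse:
    # The real system would verify the patient's identity and pull from the
    # centralized store covering both online and in-person visits.
    path = f"/data/records/{patient_id}/{record_id}.pdf"
    return FileResponse(path, media_type="application/pdf")
```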
Problem: traditional telemedicine assumes a stable connection and fails in rural, low-connectivity areas.
Solution: built the system with an offline-first approach. Because transcription runs on the patient's device, the platform can send a lightweight text file instead of a large video file over a weak network, ensuring no patient is left behind.
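A small illustrative sketch of that fallback decision; the bandwidth threshold is an assumption, not a measured cutoff from the project:

```python
# Illustrative payload fallback; the 512 kbps threshold is an assumption.
def choose_payload(bandwidth_kbps: float, transcript: str, video_path: str) -> tuple[str, str]:
    """Prefer the full recording on a good link, else send only the transcript."""
    if bandwidth_kbps >= 512:
        return ("video", video_path)  # rich context when the network allows
    return ("text", transcript)       # a few KB instead of many MB

print(choose_payload(64.0, "fever for three days", "recording.webm"))
```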
Problem: patients often cannot judge how serious their condition is.
Solution: the AI triage system instantly analyzes symptoms and flags emergencies, bypassing queues and routing alerts directly to the nearest available medical help, significantly cutting response times.
Problem: language barriers kept patients from communicating their symptoms effectively.
Solution: the multilingual AI chatbot lets users describe their symptoms in their native language and receive clear guidance.
Challenge: integrating multiple distinct AI models (speech-to-text, NLP, generative, classical ML) into a seamless, responsive user experience.
Solution: I designed and implemented a microservices architecture. A FastAPI AI backend handled all intelligent processing, while the core application logic was managed by a Spring Boot backend built by teammates. This separation kept the system scalable and maintainable.
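A minimal sketch of that service boundary, assuming a hypothetical /triage endpoint and payload shape; the Spring Boot core would call this FastAPI service over HTTP:

```python
# Sketch of the AI microservice boundary; endpoint and payload are assumed.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TranscriptIn(BaseModel):
    patient_id: str
    text: str

@app.post("/triage")
def triage(body: TranscriptIn) -> dict:
    # Placeholder severity scoring; the real service runs the NLP models.
    severity = "high" if "chest pain" in body.text.lower() else "medium"
    return {"patient_id": body.patient_id, "severity": severity}
```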
Challenge: making the symptom checker accurate and available offline.
Solution: I trained a lightweight k-NN model small enough to be bundled with the mobile application, allowing offline predictions without contacting a server.
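A sketch of how such a model might be trained and exported with scikit-learn; the toy data, feature encoding, and file name are assumptions:

```python
# Train and export a small k-NN symptom classifier (toy data, assumed encoding).
import joblib
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy data: rows are binary symptom vectors, labels are ailments.
X = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1]])
y = np.array(["flu", "migraine", "pneumonia"])

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
joblib.dump(clf, "symptom_knn.joblib")  # small artifact, shippable with the app

print(clf.predict([[1, 0, 0]]))  # offline prediction, no server round-trip
```

An artifact this small can be bundled with the app build, which is what makes the offline prediction path possible.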