InterpretWebility

Explaining your AI

Created on 21st March 2021

The problem InterpretWebility solves

Imagine you are a doctor who uses AI to assess a patient's health (for example, detecting pneumonia from a chest X-ray). How do you know the AI is helping you ethically? How do you know whether it is capturing the "right" features for your task? Enter InterpretWebility: a website that lets you dynamically select a model and a corresponding interpretability/explainable-AI technique for computer vision (for example, SmoothGrad, Grad-CAM, or Grad-CAM++) simply by uploading an image. It supports inference on the edge, which helps preserve the user's privacy, and it gives users an accessible way to build intuition for how a neural network handles computer-vision tasks.
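To make the intuition concrete, here is a minimal NumPy sketch of the core Grad-CAM computation: the gradients of the target class score are global-average-pooled into per-channel weights, which form a ReLU-ed weighted sum over the convolutional feature maps. This is only an illustration of the weighting step; it assumes the activations and gradients have already been extracted from a trained network (not shown here), and the function name `grad_cam` is our own.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the
    gradients of the target class score w.r.t. those activations.

    feature_maps: (K, H, W) array of activations A^k
    gradients:    (K, H, W) array of dY_c / dA^k
    returns:      (H, W) heatmap normalized to [0, 1]
    """
    # alpha_k: global-average-pool each channel's gradients
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # weighted combination of feature maps, summed over channels
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In the real app the heatmap would be upsampled to the input-image size and overlaid on the X-ray so the doctor can see which regions drove the prediction.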

Challenges we ran into

The most challenging part was building a web app around the model. We used Flask for this, and ran into many difficulties since it was our first time working with it.
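The upload-and-explain flow we struggled with can be sketched as a single Flask endpoint. This is a hypothetical, simplified version (the route name `/explain` and form fields `model` and `technique` are our own illustration, not the project's actual API); the real model and interpretability code would run where the comment indicates.

```python
# Minimal sketch of an image-upload endpoint for model explanation.
import io
from flask import Flask, request, jsonify

app = Flask(__name__)

ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg"}

def allowed_file(filename):
    # Accept only common image extensions
    return "." in filename and \
        filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route("/explain", methods=["POST"])
def explain():
    f = request.files.get("image")
    if f is None or not allowed_file(f.filename):
        return jsonify(error="upload a PNG or JPEG image"), 400
    # Here the real app would run the selected model and the chosen
    # interpretability technique (e.g. Grad-CAM) on the image bytes.
    return jsonify(model=request.form.get("model", "default"),
                   technique=request.form.get("technique", "grad-cam"))

if __name__ == "__main__":
    app.run(debug=True)
```

A first-time Flask pitfall this shape avoids: reading the upload from `request.files` (multipart form data) rather than the raw request body, and validating the file before handing it to the model.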
