The purpose of this hack is to aid rapid, real-time disaster assessment by promptly identifying the degree of damage to buildings and areas from remotely sensed (satellite) images, and by providing online updates on earthquakes and forest fires based on the current situation in the field.
Currently, disaster assessment is done manually, either by humans inspecting satellite images or by visiting the damaged site in person, which is slow, inefficient, and dangerous. Our model automates this process and helps rescue workers provide timely assistance even in hard-to-reach areas.
During a disaster, rescue authorities need to prioritise the most severely affected areas. By acquiring images from satellites or network-connected drones, our application combines deep learning with a web user interface to give the end user enough detail, and up-to-date information, to prioritise the areas requiring immediate attention. This urgency matters because people trapped under collapsed buildings have very little time to survive, so it is imperative to reach them as soon as possible.
In general, buildings come in different scales and sizes, so traditional deep learning models were not able to effectively extract building footprints. We therefore used multiresolution analysis (MRA) techniques, which provide a multilevel representation of the data and allow the image to be analyzed adaptively, capturing both local and global information to extract building footprints effectively.
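To illustrate the multiresolution idea (this is a minimal sketch, not the exact decomposition our model uses), a Laplacian pyramid in NumPy splits an image into per-scale detail bands plus a coarse residual, so that both small local structures and large global context are represented:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling: a simple low-pass filter plus decimation.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def laplacian_pyramid(img, levels=3):
    """Multiresolution decomposition: each entry holds the detail
    (high-frequency) content at one scale; the last entry is the
    coarse residual capturing global context."""
    pyramid = []
    current = img.astype(float)
    for _ in range(levels):
        low = downsample(current)
        # Nearest-neighbour upsample back to the current resolution.
        up = np.kron(low, np.ones((2, 2)))
        pyramid.append(current - up[:current.shape[0], :current.shape[1]])
        current = low
    pyramid.append(current)
    return pyramid
```

Because each level stores only what the coarser level misses, the original image can be rebuilt exactly (for even-sized inputs) by upsampling the residual and adding the detail bands back level by level.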
While analyzing the data we found a severe class imbalance: there were many examples of undamaged buildings but very few of completely destroyed ones. We therefore used focal loss to mitigate this problem, since focal loss down-weights easy, well-classified examples and pushes the model to focus on hard-to-classify ones.
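A minimal NumPy sketch of binary focal loss shows the mechanism (the `gamma=2.0` and `alpha=0.25` values here are the commonly cited defaults, not necessarily the ones we trained with):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss.

    p: predicted probability of the positive (damaged) class
    y: ground-truth label, 0 or 1
    The (1 - p_t)**gamma factor shrinks the loss on confident,
    correct predictions, so rare hard examples dominate training.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

With `gamma=0` this reduces to class-weighted cross entropy; increasing `gamma` progressively suppresses the contribution of easy examples.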
Technologies used
Discussion