Eyes in the Sky

High-Level Project Summary

Land cover data documents how much of a region is covered by forests, wetlands, impervious surfaces, agriculture, and other land and water types. These different types of land cover are managed and used quite differently, and they can be identified by analyzing satellite and drone imagery, which is exactly what we have done. We have built an accessible and robust website using ReactJS that performs land cover segmentation and classification on satellite and drone images at the click of a button, powered by deep learning models served through FastAPI. We have also shown how drastically land cover can change after environmental calamities such as thunderstorms and floods.

Detailed Project Description

Our project "Eyes in the Sky" takes satellite and drone images uploaded by the user and applies deep learning to produce a land-annotated, segmented image that is returned to the user. The front end was developed with HTML5, CSS3, React JS, Redux state management, and the Material-UI library for a robust and smooth user experience. Our APIs were developed in Python around convolutional U-Net and SegNet networks and deployed with FastAPI, which processes the uploaded image and returns an output image that is then displayed in the front end and made available for download. In our prototype, the API segments flooded plains to assist in disaster-management scenarios.
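The step that turns the model's per-pixel class predictions into the annotated image sent back to the user can be sketched in a few lines. This is a minimal illustration only: the class names and colors in the palette below are assumptions for the sketch, not our actual label set.

```python
import numpy as np

# Hypothetical palette: class index -> RGB color (illustrative only).
PALETTE = np.array([
    [0, 0, 0],        # 0: background
    [34, 139, 34],    # 1: forest
    [0, 0, 255],      # 2: water / flooded area
    [128, 128, 128],  # 3: impervious surface
    [210, 180, 140],  # 4: agriculture
], dtype=np.uint8)

def colorize_mask(class_mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
    return PALETTE[class_mask]

# Example: a tiny 2x2 prediction.
mask = np.array([[1, 2], [0, 3]])
rgb = colorize_mask(mask)
```

NumPy's fancy indexing does the per-pixel lookup in one vectorized step, so no explicit loop over pixels is needed.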

Space Agency Data

For Satellite Images:- https://sedac.ciesin.columbia.edu/data/set/ssp-ssp-literature-db-v1/data-download

For Drone Images :- http://dronedataset.icg.tugraz.at

For Prototype data we used the Sentinel-2 Satellite:- https://www.kaggle.com/franciscoescobar/satellite-images-of-water-bodies

We used this data primarily to train our deep learning models. Each use case has its own model trained on its own dataset. The datasets we found were mostly cleaned and kept up to date by the organizations and people of the community, so a big thank you to them. With only a minimal amount of cleaning and analysis left to do, we focused mostly on model training and deployment.
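Even the "minimal cleaning" step amounts to pairing each image with its annotation mask and holding out a validation split before training a use-case-specific model. A small standard-library sketch of that pairing, assuming a layout where masks share the image's filename (the directory names and file extensions here are assumptions, not the datasets' actual layout):

```python
import random
from pathlib import Path

def make_split(image_dir, mask_dir, val_fraction=0.2, seed=42):
    """Pair images with same-named masks, then split into train/val lists."""
    pairs = []
    for img in sorted(Path(image_dir).glob("*.jpg")):
        mask = Path(mask_dir) / (img.stem + ".png")
        if mask.exists():                 # drop images with no annotation
            pairs.append((img, mask))
    random.Random(seed).shuffle(pairs)    # deterministic shuffle
    n_val = int(len(pairs) * val_fraction)
    return pairs[n_val:], pairs[:n_val]   # (train, val)
```

Seeding the shuffle keeps the split reproducible across training runs, which matters when comparing models.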

Hackathon Journey

It has been a great learning experience for all the team members, and we gained exposure to many different tools and methods. We encountered some hiccups and issues in the integration of the web application, but we managed to pull through almost every hardship we faced and put together an accessible, robust, interactive platform for our use case. One of our team members was unfortunately suffering from Covid-19, and hence our cloud deployment stands delayed. Training many models and serving them with FastAPI proved challenging given our limited local resources, but cloud kernels such as Kaggle and Google Colab came in clutch, giving us access to free and powerful cloud resources. Figuring out relevant datasets and use cases was also a fun challenge, and we learned a lot about urban development, remote sensing, and deep learning.

References

We used VSCode and GitHub extensively for development and deployment of this project, and Kaggle and Google Colab for model training and data handling.


Also, a big thank you to our friends on Stack Overflow, who are omnipresent with their apt solutions :D.


For UI/UX:-

Adobe XD for front-end prototyping.


For Frontend Development:-

Axios for API calls.

A React image-uploading package is used for image uploads.

Redux is used for state management.

"@emotion/react": "^11.4.1",
"@material-ui",
"@testing-library/jest-dom": "^5.14.1",
"@testing-library/react": "^11.2.7",
"@testing-library/user-event": "^12.8.3",
"axios": "^0.21.4",
"bootstrap": "^5.1.1",
"material-ui-popup-state": "^1.9.3",
"react": "^17.0.2",
"react-bootstrap": "^2.0.0-rc.0",
"react-dom": "^17.0.2",
"react-images-upload": "^1.2.8",
"react-images-uploading": "^3.1.3",
"react-redux": "^7.2.5",
"react-router-dom": "^5.3.0",
"react-scripts": "4.0.3",
"redux": "^4.1.1",
"redux-promise": "^0.6.0",
"styled-components": "^5.3.1",
"web-vitals": "^1.1.2"


For Model Development and serving APIs:-

Kaggle cloud kernel

FastAPI Swagger UI

matplotlib==3.3.2
numpy==1.19.5
keras_segmentation==0.3.0
uvicorn==0.14.0
starlette==0.14.2
fastapi==0.68.1
Pillow==8.3.2
pydantic==1.8.2
tensorflow-cpu
Keras
gunicorn==19.9.0
h5py
OpenCV
LibTIFF


Data:-

For Satellite Images:- https://sedac.ciesin.columbia.edu/data/set/ssp-ssp-literature-db-v1/data-download

For Drone Images :- http://dronedataset.icg.tugraz.at

For Prototype data:- We used data collected by the Sentinel-2 Satellite - https://www.kaggle.com/franciscoescobar/satellite-images-of-water-bodies

Tags

#UrbanDevelopment #Satellites #Drones #Disaster_Management #Remote_Sensing #Deep_Learning

Global Judging

This project has been submitted for consideration during the Judging process.