Awards & Nominations

Team Oceanic has received the following awards and nominations. Way to go!

Global Nominee

Saving our oceans with AI

High-Level Project Summary

We built a web app that uses GIS to allow a scientist to locate a region of the ocean on the map, take a screenshot of it, and send it to an AI model that highlights the areas containing plastic. Our solution leverages AI to quantify and classify the debris into three classes:

  1. Plastic (a single piece of plastic)
  2. Nylon (large or small nylon sheets floating on the surface)
  3. Group_Plastic (a large cluster containing anywhere from 10 to more than 100 plastics together in one place)

The model returns an image showing the affected areas and their location. This is very important because it can be used to monitor garbage dumping, locate highly polluted areas so they can be cleared, and save the lives of sea animals.

Detailed Project Description

What does the web app do?


It uses a GIS system and leverages AI to classify and quantify plastics on the ocean surface. All you need to do is locate a region on the map, take a snapshot of it, and click Predict. It then returns the map image, a second image highlighting the affected areas, and the number and size of the plastics.



How does it work?


The web app consists of three parts

  1. GIS API - the Mapbox API, accessed from Python, which lets us locate regions of the sea and take snapshots of the map (a minimal snapshot sketch follows this list)
  2. AI model - built with TensorFlow, Keras, and Python. It is an image segmentation model that marks and classifies the plastic debris visible in the snapshot.
  3. Streamlit app - the web interface that ties the map and the model together
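
As one possible way to capture such a snapshot, the sketch below pulls a satellite image of a chosen point from the Mapbox Static Images API. The coordinates, zoom level, map style, and image size here are illustrative placeholders, not the exact values used in the app.

```python
import requests

# Placeholder token and coordinates; the real app reads these from its own config.
MAPBOX_TOKEN = "YOUR_MAPBOX_ACCESS_TOKEN"
lon, lat, zoom = -87.6, 24.5, 14  # an example point at sea, for illustration

# Mapbox Static Images API: returns a rendered map image for the given
# style, centre point, zoom level, and pixel size.
url = (
    "https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/"
    f"{lon},{lat},{zoom}/512x512"
    f"?access_token={MAPBOX_TOKEN}"
)

response = requests.get(url, timeout=30)
response.raise_for_status()

# Save the snapshot so it can be fed to the segmentation model.
with open("snapshot.png", "wb") as f:
    f.write(response.content)
```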


AI Model


There are five stages involved in building this model

  1. Data gathering. 65 aerial-view images of the ocean containing plastic debris in small and large quantities were sampled from Google to be used for training the model.
  2. Data annotation. Visible plastics in these images were manually annotated using MakeSense.ai and exported in COCO dataset format.
  3. Data preprocessing and augmentation. Masks of the annotated images were generated with a Python function and stored in a folder to be used as targets for the model (a possible mask-generation sketch follows this list). The images were also augmented to show the model varied views of each scene and reduce bias.
  4. Model training. A UNet model was built with TensorFlow using CNNs (convolutional neural networks). It has 8 layers and uses sparse categorical cross-entropy as its loss function. It was trained on Google Colaboratory CPUs for 15 epochs with a runtime of about 3 minutes (a minimal model sketch also follows the list).
  5. Model evaluation. It achieved an accuracy of 67%, largely because of the limited data available for training.
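
One possible way to turn the exported COCO annotations into per-pixel target masks is sketched below. It assumes pycocotools is installed; the file name annotations.json and the masks/ folder are placeholders rather than the project's actual paths.

```python
from pathlib import Path

import numpy as np
from PIL import Image
from pycocotools.coco import COCO

# Placeholder paths for the exported COCO annotations and the mask output folder.
coco = COCO("annotations.json")
Path("masks").mkdir(exist_ok=True)

for img_info in coco.loadImgs(coco.getImgIds()):
    # Start from an all-background mask the size of the source image.
    mask = np.zeros((img_info["height"], img_info["width"]), dtype=np.uint8)

    # Burn each annotated region into the mask, using its category id
    # (Plastic, Nylon, or Group_Plastic) as the pixel value.
    for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"])):
        mask = np.maximum(mask, coco.annToMask(ann) * ann["category_id"])

    out_name = Path(img_info["file_name"]).stem + ".png"
    Image.fromarray(mask).save(Path("masks") / out_name)
```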

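Below is a minimal sketch of the kind of UNet-style segmentation model described in step 4, written with TensorFlow/Keras. The input resolution, layer count, and filter sizes are assumptions for illustration; only the overall shape (an encoder-decoder with skip connections, a per-pixel softmax output, and sparse categorical cross-entropy loss) follows the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4  # background + the three debris classes

def build_unet(img_size=(128, 128)):
    inputs = layers.Input(shape=img_size + (3,))

    # Encoder: two downsampling blocks
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate the skip connections
    u1 = layers.UpSampling2D()(b)
    u1 = layers.Concatenate()([u1, c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u1)
    u2 = layers.UpSampling2D()(c3)
    u2 = layers.Concatenate()([u2, c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)

    # Per-pixel class probabilities, trained against integer masks
    # with sparse categorical cross-entropy as described above.
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(c4)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_masks, epochs=15)
```
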

Streamlit App


The web app is built using Streamlit, an open-source Python library that makes it easy to create and share custom web apps for machine learning and data science. Streamlit allows for fast, easy, and scalable deployment of machine learning models, as in this case.

We make use of the Mapbox API to locate regions of the sea, with the functionality to take screenshots to be used by the model. The map displayed is responsive and detailed. A dashboard helps users decide how many pictures to take and automatically feeds the screenshots to the model.
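
A hypothetical sketch of how the Streamlit front end could tie the map snapshot and the model together is shown below. The widget layout, the saved model path unet_plastic.h5, and the example coordinates are illustrative placeholders, not the team's actual code.

```python
from io import BytesIO

import numpy as np
import requests
import streamlit as st
import tensorflow as tf
from PIL import Image

MAPBOX_TOKEN = st.secrets["MAPBOX_TOKEN"]  # assumes the token is kept in Streamlit secrets

st.title("Saving our oceans with AI")

# Let the user pick a point on the ocean and a zoom level.
lon = st.number_input("Longitude", value=-87.6)
lat = st.number_input("Latitude", value=24.5)
zoom = st.slider("Zoom level", 10, 18, 14)

if st.button("Predict"):
    # Grab a satellite snapshot of the chosen region from the Mapbox Static Images API.
    url = (
        "https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/"
        f"{lon},{lat},{zoom}/512x512?access_token={MAPBOX_TOKEN}"
    )
    snapshot = Image.open(BytesIO(requests.get(url, timeout=30).content)).convert("RGB")
    st.image(snapshot, caption="Map snapshot")

    # Run the trained UNet on the snapshot (placeholder model path).
    model = tf.keras.models.load_model("unet_plastic.h5")
    batch = tf.image.resize(np.array(snapshot)[None, ...] / 255.0, (128, 128))
    classes = tf.argmax(model.predict(batch), axis=-1)[0].numpy().astype(np.uint8)

    # Scale the class ids so the mask is visible as a greyscale image.
    st.image(classes * 80, caption="Predicted plastic regions", clamp=True)
```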


What are its benefits?



  1. It will help scientists monitor the rate of garbage dumping in the oceans.
  2. It will help scientists identify regions and areas with heavy plastic dumping and send them to environmentalists to clean up.
  3. It will save the lives of sea creatures, as highly polluted areas will be known and cleaned immediately.
  4. Since plastics take many years to decompose, locating polluted areas can speed up cleaning, thereby improving the lives of sea creatures.
  5. Sea creatures will benefit from clean oceans again, as they won't ingest harmful plastics.


What do we hope to achieve?


We hope to achieve a cleaner and safer environment where sea creatures are healthy and safe for humans to eat.


Tools, coding languages, and software used


AI Model


Tensorflow, Google Colab, UNet


Data Gathering and Annotation


MakeSense.ai


Model deployment


Streamlit, Github


GIS System


MapBox API

Space Agency Data

Marine debris data from the U.S. Department of the Interior was used to guide our search for the right image sources.

Hackathon Journey

Day 1


The Space Apps experience has been full of fun, learning, debugging, communicating and brainstorming, sleepless nights, rewriting code over and over again, meeting new people, and so much more. I learnt about tools including ArcGIS, the Mapbox API, and the UNet model for image segmentation, along with other segmentation models in Python; tried image annotation for the first time; learnt more about GIS in AI; and learnt about past research on plastic debris (not computer vision - they could detect plastics based on ocean currents and other parameters!). It was so much more than I can put down.


Some months ago, a friend participated in a global cleanup challenge where they went to pick up plastics in the community and recycled them. This really inspired me, and seeing this challenge on the list spurred me to solve it.


I started by brainstorming with a new person who later became my teammate, and from there we did a lot of research, since we both knew almost nothing about geographical systems. We then drew up a plan with the help of our mentor, split the tasks, and each took a part to work on.


Our first setback was a lack of data. It was so hard for us to find an open-source dataset that fit the kind of problem we wanted to solve, so we had to manually source a dataset from Google.

Another major setback we faced was the model underperforming, as well as a power blackout a few hours before the submission deadline.

References

Keras implementation of UNet - https://keras.io/examples/vision/oxford_pets_image_segmentation/

Canva - For creating the project demo.

MakeSense - For annotating images

TensorFlow, Python, Streamlit, MapBox

Tags

#DeepLearning #GIS #ComputerVision

Global Judging

This project has been submitted for consideration during the Judging process.